Complete Reference Guide

Docker — The Full Picture

Everything you need to go from zero to production — containers, images, volumes, networking, Compose, CI/CD, and deployment patterns. Written for fullstack developers.


Core Concepts

Docker packages applications and their dependencies into containers — portable, isolated units that run identically everywhere. Understanding a few key primitives unlocks everything else.

📦 Image: Read-only blueprint of your app — OS layers, runtime, code, and config. Built from a Dockerfile.

🏃 Container: A running instance of an image. An isolated process with its own filesystem, network, and PID namespace.

📋 Dockerfile: The recipe that defines how an image is built — base image, commands, files copied, ports exposed.

🗄️ Volume: Persistent storage that survives container restarts. Lives outside the container's layered filesystem.

🌐 Network: A virtual network that lets containers communicate. Multiple drivers: bridge, host, overlay.

🗂️ Registry: A remote store for images. Docker Hub is the default; private options include ECR, GCR, and GHCR.

How It All Fits Together

Dockerfile ──(docker build)──▶ Image ──(docker run)──▶ Container
                               Image ──(docker push)──▶ Registry
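As a concrete starting point, a minimal Dockerfile for the first step of that flow might look like this (the `./site` directory is an assumed local folder of static files):

```dockerfile
# Minimal sketch: serve a static site with nginx
FROM nginx:1.25-alpine
COPY ./site /usr/share/nginx/html
EXPOSE 80
```

`docker build -t mysite .` turns this into an image; `docker run -d -p 8080:80 mysite` starts a container from it.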

Installation & Setup

Install Docker Desktop (macOS / Windows) or Docker Engine (Linux). Desktop includes the daemon, CLI, Compose, BuildKit, and a GUI.

verify installation
docker --version
docker info
docker run hello-world          # pulls & runs a test container

Docker Context — Switch Between Environments

Contexts let you target different Docker daemons (local, remote SSH host, Kubernetes) from the same CLI.

docker context
docker context ls                           # list contexts
docker context create remote \
  --docker "host=ssh://user@remote-host"      # create SSH context
docker context use remote                   # switch to remote
docker context use default                  # back to local

Images

Images are immutable, layered filesystems. Each Dockerfile instruction adds a new layer. Layers are cached and shared across images — making builds fast and pulls efficient.

image commands
docker image ls                            # list local images
docker image ls -a                         # include intermediate layers
docker pull node:20-alpine                 # pull a specific tag
docker image inspect node:20-alpine        # full metadata as JSON
docker image rm my-app:1.0                 # remove image
docker image prune                         # remove dangling (untagged) images
docker image prune -a                      # remove ALL unused images
docker history my-app                      # show image layer history
docker tag my-app:latest my-app:1.0.2      # re-tag an image
docker save -o app.tar my-app              # export image to tar
docker load -i app.tar                     # import image from tar
ℹ️
Image Naming Format: [registry/][namespace/]name[:tag][@digest]
Examples: nginx:1.25-alpine, ghcr.io/myorg/api:sha-a3b4c5d

Containers

Containers are running (or stopped) instances of images. They are ephemeral by design — data written inside is lost when removed unless persisted via volumes.

Running Containers

docker run flags
docker run nginx                            # run in foreground (blocking)
docker run -d nginx                        # detached (background)
docker run -d --name web nginx             # named container
docker run -d -p 8080:80 nginx            # map host:container port
docker run -d -p 8080:80 -v $(pwd)/html:/usr/share/nginx/html nginx  # bind mount (-v needs an absolute host path)
docker run -e NODE_ENV=production node-app  # pass env var
docker run --env-file .env node-app        # env vars from file
docker run --rm node-app                   # auto-remove when stopped
docker run -it ubuntu bash                 # interactive terminal
docker run --cpus 0.5 --memory 256m app   # limit resources
docker run --network mynet app             # join a named network
docker run --restart unless-stopped app    # auto-restart policy

Lifecycle Management

container lifecycle
docker ps                                  # running containers
docker ps -a                              # all containers
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"

docker stop web                           # graceful stop (SIGTERM → SIGKILL)
docker start web                          # restart a stopped container
docker restart web                        # stop + start
docker kill web                           # immediate SIGKILL
docker rm web                             # remove stopped container
docker rm -f web                          # force remove (even if running)
docker container prune                   # remove all stopped containers

Interacting with Running Containers

exec / logs / inspect
docker exec -it web bash                  # open shell inside container
docker exec web cat /etc/nginx/nginx.conf  # run one-off command

docker logs web                           # stdout / stderr
docker logs -f web                        # follow (tail -f)
docker logs --tail 50 -f web            # last 50 lines + follow
docker logs --since "10m" web            # last 10 minutes

docker inspect web                        # full JSON metadata
docker inspect --format '{{.NetworkSettings.IPAddress}}' web

docker stats                              # live CPU / memory usage
docker top web                            # processes inside container
docker diff web                           # filesystem changes
docker cp web:/app/output.log ./          # copy file out
docker cp ./config.json web:/app/         # copy file in

Dockerfile Deep-Dive

The Dockerfile defines every layer of your image. Every instruction that changes the filesystem creates a new cached layer. Order matters — put infrequently-changing instructions first.

All Core Instructions

Dockerfile
# syntax=docker/dockerfile:1

FROM       node:20-alpine               # base image (always first)
FROM       node:20-alpine AS builder     # named stage for multi-stage

ARG        NODE_ENV=production            # build-time variable
ENV        PORT=3000                      # runtime environment variable
ENV        PORT=3000 HOST=0.0.0.0        # multiple at once

WORKDIR    /app                           # set working dir (creates if missing)

COPY       package*.json ./              # copy specific files first (cache trick)
COPY       . .                           # copy everything
COPY       --from=builder /app/dist ./dist  # copy from another stage
COPY       --chown=node:node . .        # set file ownership

ADD        app.tar.gz /app               # like COPY but auto-extracts archives
ADD        https://example.com/file ./   # can fetch URLs (prefer COPY otherwise)

RUN        npm ci --omit=dev             # run command during build (prod deps only; --only=production is deprecated)
RUN        apt-get update \
           && apt-get install -y curl \
           && rm -rf /var/lib/apt/lists/* # chain to reduce layers

EXPOSE     3000                          # document which port the app uses

VOLUME     ["/data"]                     # declare mount point

USER       node                          # switch to non-root user

HEALTHCHECK --interval=30s --timeout=5s \
  CMD wget -qO- http://localhost:3000/health || exit 1

CMD        ["node", "server.js"]          # default command (exec form)
ENTRYPOINT ["node", "server.js"]         # always runs; CMD provides default args
LABEL      maintainer="[email protected]"  # metadata labels

ONBUILD    COPY . .                      # trigger for child images
STOPSIGNAL SIGTERM                       # signal used to stop container
SHELL      ["/bin/bash", "-c"]           # change default shell

CMD vs ENTRYPOINT

Scenario                                Use
Default command, overridable by user    CMD ["node", "server.js"]
Always run this binary, allow args      ENTRYPOINT ["node"] + CMD ["server.js"]
Fixed wrapper script                    ENTRYPOINT ["./entrypoint.sh"]
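The wrapper-script pattern hinges on `exec "$@"`: it replaces the shell with whatever command CMD (or the `docker run` arguments) supplied, so the real process runs as PID 1 and receives signals directly. A standalone sketch (file and message names are illustrative):

```shell
# Write a minimal entrypoint script, then invoke it the way Docker would:
# ENTRYPOINT ["./entrypoint.sh"] with CMD ["echo", "hello from CMD"].
cat > entrypoint.sh <<'EOF'
#!/bin/sh
set -e
echo "entrypoint: setup done"   # one-time setup (migrations, config templating, ...)
exec "$@"                       # hand off: the CMD becomes PID 1
EOF
chmod +x entrypoint.sh
./entrypoint.sh echo "hello from CMD"
# prints "entrypoint: setup done", then "hello from CMD"
```

Without `exec`, the shell would stay as PID 1 and your app would miss the SIGTERM sent by `docker stop`.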

.dockerignore

Always add a .dockerignore to exclude files from the build context — keeps builds fast and images lean.

.dockerignore
node_modules
.git
.gitignore
*.log
dist
coverage
.env
.env.local
**/*.test.ts
README.md
.DS_Store

Build Optimization

Layer Caching — Order Your Instructions Wisely

Bad vs Good
# ❌ Bad — copies ALL files first, busts cache on every code change
COPY . .
RUN  npm ci

# ✅ Good — installs deps only when package.json changes
COPY package*.json ./
RUN  npm ci
COPY . .

BuildKit — Enable for Faster Builds

buildkit
# Enable for single build
DOCKER_BUILDKIT=1 docker build -t my-app .

# Enable permanently (set in /etc/docker/daemon.json; BuildKit is the default builder in Docker Engine 23+)
{ "features": { "buildkit": true } }

# Mount cache for package managers (Dockerfile)
# syntax=docker/dockerfile:1
RUN --mount=type=cache,target=/root/.npm \
    npm ci --prefer-offline

# Secret injection at build time (never stored in layer)
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc \
    npm ci
# Build with:  docker build --secret id=npmrc,src=.npmrc .

Multi-Stage Builds

Multi-stage builds let you use heavyweight build tools (compilers, bundlers) without including them in the final image. Production images stay small and secure.

Node.js Example

Dockerfile (Node + React)
# syntax=docker/dockerfile:1

# ── Stage 1: build frontend ──────────────────────────────
FROM node:20-alpine AS frontend-builder
WORKDIR /app/packages/frontend
COPY packages/frontend/package*.json ./
RUN npm ci
COPY packages/frontend .
RUN npm run build

# ── Stage 2: build backend ───────────────────────────────
FROM node:20-alpine AS backend-builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY src ./src

# ── Stage 3: final runtime image ─────────────────────────
FROM node:20-alpine AS production
WORKDIR /app
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
COPY --from=backend-builder /app/node_modules ./node_modules
COPY --from=backend-builder /app/src ./src
COPY --from=frontend-builder /app/packages/frontend/dist ./public
USER appuser
EXPOSE 3000
HEALTHCHECK --interval=30s CMD wget -qO- http://localhost:3000/health || exit 1
CMD ["node", "src/server.js"]
build specific stage
docker build --target backend-builder -t my-app:dev .   # stop at a stage
docker build --target production -t my-app:prod .        # final prod image

Volumes & Storage

Containers are ephemeral — their writable layer is destroyed on docker rm. Use volumes or bind mounts for persistence.

Type          Use Case                            Host Path
Named Volume  Databases, persistent app data      Managed by Docker (/var/lib/docker/volumes/…)
Bind Mount    Local dev, source code hot-reload   Any host path you specify
tmpfs         Temporary in-memory data, secrets   RAM only, never written to disk
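All three storage types can also be declared in a Compose file; a sketch (service and volume names are illustrative):

```yaml
services:
  app:
    image: myapp:latest
    volumes:
      - appdata:/var/lib/app          # named volume, managed by Docker
      - ./src:/app/src:ro             # bind mount (relative paths are fine in Compose)
    tmpfs:
      - /tmp                          # in-memory only, discarded on stop

volumes:
  appdata: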
volumes
# ── Named Volumes ────────────────────────────────────────
docker volume create pgdata
docker volume ls
docker volume inspect pgdata
docker volume rm pgdata
docker volume prune               # remove all unused volumes

# Attach named volume to container
docker run -d -v pgdata:/var/lib/postgresql/data postgres:16

# ── Bind Mounts ──────────────────────────────────────────
docker run -v $(pwd)/src:/app/src:ro node-app   # read-only bind (-v needs an absolute host path)
docker run --mount type=bind,source=$(pwd)/src,target=/app/src node-app

# ── tmpfs ────────────────────────────────────────────────
docker run --tmpfs /tmp:rw,size=64m app
docker run --mount type=tmpfs,destination=/tmp app
💡
Prefer --mount over -v. The --mount syntax is verbose but explicit and error-safe: -v with a nonexistent host path will silently create an empty directory, while --mount will error.

Networking

Docker containers communicate over virtual networks. By default each container gets a virtual NIC on the bridge network. Containers on the same user-defined network can reach each other by container name (DNS auto-discovery).

Driver    When to Use
bridge    Default. Isolated network on a single host. Containers talk by name.
host      Container shares the host network stack. Max performance, no isolation.
overlay   Multi-host networking. Required for Docker Swarm.
none      No networking at all.
macvlan   Gives the container its own MAC address so it appears as a physical device on the network.
network commands
docker network ls
docker network create appnet
docker network create --driver overlay swarmnet   # for Swarm
docker network inspect appnet
docker network rm appnet
docker network prune

# Connect containers to a network
docker run -d --name db --network appnet postgres
docker run -d --name api --network appnet my-api

# api can now reach db via hostname "db"
# e.g. DATABASE_URL=postgres://db:5432/mydb

# Connect/disconnect running container
docker network connect appnet web
docker network disconnect appnet web

# Port publishing
docker run -p 3000:3000 app          # bind all interfaces
docker run -p 127.0.0.1:3000:3000 app # localhost only (more secure)
docker run -P app                    # auto-assign host ports

Environment & Config

env patterns
# Single var at runtime
docker run -e DATABASE_URL=postgres://... app

# From .env file
docker run --env-file .env.production app

# Inspect current env inside container
docker exec myapp env

# Dockerfile: build-time ARG vs runtime ENV
ARG vs ENV
ARG  COMMIT_SHA                        # available during build only
ENV  COMMIT_SHA=$COMMIT_SHA            # expose ARG as runtime ENV
ENV  NODE_ENV=production               # always set at runtime

# Build: docker build --build-arg COMMIT_SHA=abc123 .
⚠️
Never bake secrets into images. Use Docker secrets, BuildKit secret mounts, or inject at runtime via environment. Anything in a Dockerfile RUN command or ENV is visible in docker history.

Docker Compose

Compose lets you define and run multi-container applications from a single YAML file. One command starts your entire stack.

docker-compose.yml — full example
name: myapp

services:

  api:
    build:
      context: .
      dockerfile: Dockerfile
      target: production
      args:
        NODE_ENV: production
    image: myapp/api:latest
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgres://user:pass@db:5432/mydb
      - REDIS_URL=redis://cache:6379
    env_file:
      - .env.local
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started
    volumes:
      - ./src:/app/src                 # hot-reload in dev
    networks:
      - backend
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:3000/health"]
      interval: 30s
      timeout: 5s
      retries: 3

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: mydb
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./db/init.sql:/docker-entrypoint-initdb.d/init.sql
    networks:
      - backend
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d mydb"]
      interval: 10s
      retries: 5

  cache:
    image: redis:7-alpine
    command: redis-server --appendonly yes
    volumes:
      - redisdata:/data
    networks:
      - backend

  nginx:
    image: nginx:1.25-alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./certs:/etc/ssl/certs:ro
    depends_on:
      - api
    networks:
      - backend

volumes:
  pgdata:
  redisdata:

networks:
  backend:
    driver: bridge
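The nginx.conf mounted into the nginx service above is not shown in this guide; a minimal reverse-proxy sketch might look like this (the upstream hostname matches the api service name, resolved by Docker's embedded DNS):

```nginx
events {}

http {
  server {
    listen 80;

    location / {
      proxy_pass http://api:3000;                # service name doubles as DNS hostname
      proxy_set_header Host $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
  }
}
```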

Essential Compose Commands

docker compose
docker compose up                         # start all services (foreground)
docker compose up -d                     # detached
docker compose up --build               # rebuild images first
docker compose up -d --scale api=3     # run 3 replicas of api (fixed host-port mappings will conflict)
docker compose down                       # stop & remove containers + networks
docker compose down -v                  # also remove volumes
docker compose stop                       # stop without removing
docker compose start                      # start stopped services
docker compose restart api               # restart one service
docker compose ps                         # status of all services
docker compose logs -f api              # follow logs of one service
docker compose logs -f --tail 100      # all services, last 100 lines
docker compose exec api sh               # shell into running service
docker compose run --rm api npm test    # one-off command
docker compose build --no-cache        # full rebuild
docker compose config                     # validate & view merged YAML
docker compose pull                       # pull latest base images
docker compose top                        # processes in each service

Compose Patterns

Override Files (dev vs prod)

Use docker-compose.override.yml for dev-specific settings — it's merged automatically.

docker-compose.override.yml (dev only)
services:
  api:
    build:
      target: development          # use dev stage
    volumes:
      - .:/app                       # hot-reload
      - /app/node_modules            # exclude node_modules from bind
    command: npm run dev
    environment:
      - NODE_ENV=development
explicit file selection
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d

Profiles — Conditional Services

profiles
services:
  adminer:
    image: adminer
    profiles: ["tools"]              # only starts with --profile tools

# Start only tool-flagged services
docker compose --profile tools up -d

Registries & Pushing Images

push to Docker Hub
docker login                                      # Docker Hub
echo "$GHCR_TOKEN" | docker login ghcr.io -u USERNAME --password-stdin  # GHCR (token via stdin)
docker login 123456789.dkr.ecr.us-east-1.amazonaws.com  # ECR (use aws ecr get-login-password)

docker build -t myuser/myapp:1.0.0 .
docker push myuser/myapp:1.0.0
docker push myuser/myapp:latest

# Multi-platform build & push in one step (buildx)
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t myuser/myapp:1.0.0 \
  --push .

CI/CD Integration

GitHub Actions — Build, Test & Push

.github/workflows/docker.yml
name: Build & Push

on:
  push:
    branches: [main]
    tags: ['v*']

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Log in to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata (tags, labels)
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ghcr.io/${{ github.repository }}

      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha          # GitHub Actions cache
          cache-to: type=gha,mode=max

CI Commands Reference

CI/CD shell commands
# Run tests inside a container (no local deps needed)
docker run --rm -v $(pwd):/app -w /app node:20 npm test

# Integration tests with Compose
docker compose -f docker-compose.test.yml up --abort-on-container-exit
docker compose -f docker-compose.test.yml down -v

# Scan image for vulnerabilities (Docker Scout)
docker scout cves myapp:latest
docker scout quickview myapp:latest
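The docker-compose.test.yml referenced above is not defined in this guide; a plausible sketch (service names and the npm test command are assumptions):

```yaml
services:
  sut:                                  # "system under test"
    build:
      context: .
    command: npm test
    environment:
      DATABASE_URL: postgres://user:pass@db:5432/test
    depends_on:
      db:
        condition: service_healthy

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: test
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d test"]
      interval: 5s
      retries: 10
```

Adding `--exit-code-from sut` to the `docker compose up` invocation makes the command exit with the test container's status, which is what the CI job's pass/fail hinges on.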

Deployment

Single Host (SSH + Compose)

deploy to remote host
# Create SSH context pointing to prod server
docker context create prod \
  --docker "host=ssh://[email protected]"

# Run any docker/compose command against prod
docker --context prod compose up -d
docker --context prod ps

# Or set context for the whole session
export DOCKER_CONTEXT=prod
docker compose pull && docker compose up -d

Docker Swarm (Multi-Host)

swarm
# Initialize a Swarm
docker swarm init --advertise-addr <MANAGER-IP>
docker swarm join --token <TOKEN> <MANAGER-IP>:2377   # on worker nodes
docker node ls                                    # list nodes

# Deploy a stack (compose file format)
docker stack deploy -c docker-compose.yml myapp
docker stack ls
docker stack ps myapp                             # tasks / containers
docker stack rm myapp                             # teardown

# Scale a service
docker service scale myapp_api=5
docker service ls
docker service update --image myapp/api:2.0 myapp_api  # rolling update
ℹ️
Kubernetes vs Swarm: For larger production workloads, Kubernetes (k8s) is the industry standard. Docker Swarm is simpler to set up and great for smaller teams or self-hosted deployments.
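Stack files use the same Compose format plus a deploy key that only Swarm honours; a sketch of rolling-update settings (values are illustrative):

```yaml
services:
  api:
    image: myapp/api:1.0.0
    deploy:
      replicas: 3
      update_config:
        parallelism: 1        # replace one task at a time
        delay: 10s
        order: start-first    # start the new task before stopping the old one
      restart_policy:
        condition: on-failure
```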

Security

security practices
# 1. Run as non-root user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser

# 2. Read-only root filesystem
docker run --read-only --tmpfs /tmp app

# 3. Drop capabilities
docker run --cap-drop ALL --cap-add NET_BIND_SERVICE app

# 4. Prevent privilege escalation
docker run --security-opt no-new-privileges app

# 5. Use Docker secrets (Swarm)
echo "mysecretpassword" | docker secret create db_password -
# In compose: secrets: [db_password], then mount at /run/secrets/db_password

# 6. Scan for CVEs
docker scout cves myapp:latest

# 7. Minimal base images
# Use alpine, distroless, or scratch where possible
FROM gcr.io/distroless/nodejs20-debian12
🚨
Never use --privileged in production. It gives the container near-full host access. If you need it for a tool, sandbox it carefully and never expose it to the internet.
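The secrets pattern from item 5 looks like this in a stack file (names assumed; the postgres image reads *_FILE variables natively):

```yaml
services:
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password   # read the secret from its mounted file
    secrets:
      - db_password

secrets:
  db_password:
    external: true    # created earlier with `docker secret create db_password -`
```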

Debugging

debug toolkit
# Shell into a running container
docker exec -it myapp sh

# Shell into an image, even if it has no shell installed
docker debug myapp:latest                   # Docker Desktop feature; requires a paid subscription

# Inspect why a container exited
docker inspect --format '{{json .State}}' myapp   # exit code, OOM-killed, error
docker logs --tail 50 myapp

# Live resource stats
docker stats
docker stats myapp --no-stream              # one snapshot

# Inspect filesystem diffs
docker diff myapp

# Check what a container sees on the network
docker exec myapp nslookup db               # DNS resolution
docker exec myapp nc -zv db 5432            # TCP connectivity test (if nc is installed)

# Validate Compose config
docker compose config

# System-wide cleanup
docker system df                            # disk usage
docker system prune                         # remove stopped containers, unused images, networks
docker system prune -a --volumes           # nuclear option

Quick Reference Cheatsheet

Images

Command                              Description
docker build -t name:tag .           Build image from Dockerfile
docker pull image:tag                Pull image from registry
docker push image:tag                Push image to registry
docker image ls                      List local images
docker image rm name                 Remove image
docker image prune -a                Remove all unused images
docker history image                 Show layer history
docker tag src dst                   Tag/alias an image

Containers

Command                                     Description
docker run -d -p 8080:80 --name web nginx   Run detached with port and name
docker run -it ubuntu bash                  Interactive shell
docker run --rm image                       Auto-remove on exit
docker ps / docker ps -a                    Running / all containers
docker stop / start / restart name          Lifecycle control
docker rm -f name                           Force remove container
docker exec -it name sh                     Shell into running container
docker logs -f name                         Follow logs
docker stats                                Live resource usage
docker inspect name                         Full metadata JSON
docker cp src container:/dst                Copy files in/out
docker container prune                      Remove all stopped containers

Volumes & Networks

Command                                Description
docker volume create name              Create named volume
docker volume ls / rm / prune          Manage volumes
docker network create name             Create network
docker network connect net container   Attach container to network
docker network ls / rm / prune         Manage networks

Compose

Command                           Description
docker compose up -d --build      Build & start in background
docker compose down -v            Stop, remove containers + volumes
docker compose logs -f service    Follow service logs
docker compose exec svc sh        Shell into service
docker compose run --rm svc cmd   One-off command
docker compose ps                 Service status
docker compose config             Validate merged YAML

System

Command                           Description
docker system df                  Disk usage breakdown
docker system prune               Clean unused resources
docker system prune -a --volumes  Full nuclear cleanup
docker info                       Docker daemon info
docker version                    CLI + daemon versions
docker context ls / use name      Switch environment