Joshua's Docs - Docker Cheatsheet, notes, and more

Resources

| What & Link | Type |
| --- | --- |
| Docs: Base Commands - Quick Ref | CLI Reference |
| Docs: Compose - getting started | Tutorial |
| Compose Guide (Heroku) | Guide |
| FreeCodeCamp: Practical Introduction to Docker Compose | Tutorial |
| Fireship: Docker in 100 Seconds | YouTube Video |
| Just Enough Docker | Cheatsheet / Quick Guide |

Docker CLI - Basic Commands

| What | Command | Helpful Flags |
| --- | --- | --- |
| Build an image | docker build {path} ({path} is the build context, e.g. . for the current directory) | --tag / -t: specify image name (+ optional tag). Format is name:tag, but :tag is optional. Example: -t joshua/custom-image:3.2. Can be used multiple times, for multiple tags. --build-arg: specify environment variable values only for build-time. |
| List active containers | docker ps | --all: include stopped containers |
| Restart a container | docker restart {containerName} | |
| Stop a container | docker kill {containerName} | |
| Inspect something | docker inspect {thingIdOrName} | |
| View logs | docker logs {?:containerName} | -f: follow. --tail: limit output to the last # lines. --details: show extra detail |
| Execute a command inside a container | docker exec {containerName} {command} | -i: interactive. -t: allocate TTY. -d: detach |
| Create volume | docker volume create {options} {volumeName} | --driver (see notes on volume types), --label, --name, --opt |
| Copy file(s) into container | docker cp {srcPath} {containerName}:{containerPath} | |
| List volumes | docker volume ls | |
| Drop volume | docker volume rm {volumeName} | -f or --force |
| Print version | docker version | |
| List images | docker image ls | |
| Pull an image | docker pull {imageRef} | |
| Run an image | docker run {imageRef} (example: docker run hello-world) | --name: give the container a short name |
| Run an image temporarily, with a shell | docker run --rm -it {imageRef} /bin/bash | |
| List downloaded images | docker images OR docker image ls | --filter |
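For example, a build that combines a couple of the flags above might look like this (the image names and build arg below are just placeholders for illustration):

# Build with the current directory as the context, tag the image twice, and pass a build-time variable
docker build -t my-app:1.2.3 -t my-app:latest --build-arg SOME_BUILD_ARG=some-value .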

Note: For most of the commands that take an image name, you can optionally request a specific tag or digest version of an image by using a special suffix. For example, {imageName}:{tag} or {imageName}@sha256:{digestSha}
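For example (the image and tag below are just illustrative, and the digest is left as a placeholder):

# Pull a specific tag
docker pull node:20-alpine

# Pull an exact, immutable version by digest
docker pull node@sha256:{digestSha}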

docker-compose is listed below, in its own section

Docker Compose (aka docker-compose)

Docker Compose - Commands

There is an online official compose command overview, or use docker-compose --help

Many of these commands have direct equivalents in compose-less environments; just use docker instead of docker-compose

There are common flags that can be applied with any subcommand. For example:

-f or --file lets you specify a docker .yml file other than the default of docker-compose.yml.

If using -f / --file, it should come before the command, like docker-compose -f ./my-compose-file.yml up -d

| What | Command | Helpful Flags |
| --- | --- | --- |
| Test out a config | docker-compose config | |
| Stop containers, without removing | docker-compose stop | |
| Stop and remove containers | docker-compose down | -v: remove named volumes |
| Launch containers | docker-compose up | -d: start in detached mode. --force-recreate. --build. --renew-anon-volumes |
| Build an image for a service | docker-compose build {optArgs} {optServiceName} | --no-cache |
| Start a specific service, and run a specific command against it | docker-compose run {serviceName} {OPT_command} {OPT_args} | |
| Remove stopped containers | docker-compose rm | If you don't pass a name, it removes all (like rm *). -v: remove the volumes as well; useful if something is persisting when you don't want it to and you need a hard reset. -f: force |
| Validate a config without running it | docker-compose config -q | -q: quiet mode; only validates true/false, without verbose output |
| Show logs | docker-compose logs | -f: follow. --tail={#}: tail # lines. You can also pass a container name: docker-compose logs {containerName} |
| Run a command inside the container | docker-compose exec {serviceName} {command} | -it (interactive, tty) |
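As a rough sketch of how these commands fit together in a typical workflow (the service name below is a placeholder):

# Validate the compose file, then build and start everything in the background
docker-compose config -q
docker-compose up -d --build

# Follow the logs for one service
docker-compose logs -f --tail=100 my-service

# Tear everything down, including named volumes
docker-compose down -v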

Docker-Compose File syntax and options

Docker Tag vs Docker Label

The primary difference between Docker tags and Docker labels is that tags are just values attached to an image, without keys, whereas labels are key-value pairs, stored as metadata.

Another way to think of it is that tags are like nicknames or aliases ("fred", "freddie"), whereas labels are like properties ("Fred has a blue shirt").

With this in mind, tags are usually best reserved for version strings, release names, or target platforms, so you can easily pull an image by such, whereas labels are for any other metadata that might be useful to later refer to.

| | Docker Tags | Docker Labels |
| --- | --- | --- |
| Type | Value | Key-value pair |
| Set in/at | Build CLI (--tag) | Dockerfile (LABEL key="value") |
| Can be changed post-build | Yes (e.g. docker tag) | No |
| Easily enforced as part of build | No | Yes |
| Retrieved by | docker images, registry info | docker inspect |
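A quick sketch of that difference in practice (the image name, tag, and label below are made up for illustration):

# Tag at build time; labels can also be added from the CLI (or via LABEL in the Dockerfile)
docker build -t my-app:1.2.3 --label "org.example.team=backend" .

# Tags can be added or moved after the build...
docker tag my-app:1.2.3 my-app:latest

# ...whereas labels are baked into the image metadata and read back with inspect
docker inspect --format '{{ json .Config.Labels }}' my-app:1.2.3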

Networking

Main resource: Docker - Networking Overview

Ports vs Expose

📄 Helpful StackOverflow: "What is the difference between docker-compose ports vs expose"

In newer versions of Docker, EXPOSE doesn't actually do anything on its own; its purpose is informational - a way to explicitly advertise (to readers of the code) that the image (or service) works with the referenced port(s), and that other services should be able to talk to it through those ports. Expose does not publish the ports to the host machine.

The reason why this is only informational is that cross-service communication is enabled by Docker out-of-the-box, regardless of whether or not you use expose.

Port settings (e.g. docker-compose {service}.ports or --publish) actually do expose container ports to the host.

The syntax for port settings is host_port:container_port, or just container_port (which publishes to a random host port)
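For example, in a compose file (the service names, images, and ports below are arbitrary):

services:
  api:
    image: my-api:latest
    ports:
      - "8080:80"   # published: reachable from the host at localhost:8080
  db:
    image: postgres:16
    expose:
      - "5432"      # informational only: reachable by other services, not by the host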

Cross-Container Networking / Cross-Container Port Sharing

By default, if the containers are on the same docker network instance, they can talk to each other directly via service name. E.g., if you have a db service and an api service, the API service should be able to reach any port on the DB service by using db:${PORT}.

This works for multiple traffic types, including regular HTTP (e.g. http://backend:80).
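For example, a compose setup might wire a connection string straight to the service name (the names and URL below are purely illustrative):

services:
  db:
    image: postgres:16
  api:
    image: my-api:latest
    environment:
      # "db" resolves to the db service's container on the shared network
      - DATABASE_URL=postgres://db:5432/mydb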

However, there are a few cases where this is problematic - namely with programs that expect an address that strictly conforms to an IP address format (e.g. IPv4, 127.0.0.1).

Static IPs for Internal Cross-Container Networking

If you want to set up cross-container communication where one container can essentially see the other's port as being on the local network (rather than via the service name), you can do something like this:

services:
  my_app:
    networks:
      my_custom_network:
        # Pin this container to a fixed address within the custom network's subnet
        ipv4_address: 172.20.0.2

networks:
  my_custom_network:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: "172.20.0.0/16"

Volumes

Volume type options / drivers

See: Docs: Docker storage drivers

| Type | Description |
| --- | --- |
| tmpfs | Temp data, stored in memory, not persisted between sessions |
| btrfs | Advanced, persisted storage. Requires a bunch of pre-reqs, including Linux kernel support. |
| nfs | Shared, persisted storage |
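As a sketch of how the driver options get wired up with docker volume create (the addresses, paths, sizes, and volume names below are placeholders; the exact options vary by driver and setup):

# In-memory (tmpfs-backed) volume via the local driver
docker volume create --driver local --opt type=tmpfs --opt device=tmpfs --opt o=size=100m my-tmp-volume

# NFS-backed volume via the local driver
docker volume create --driver local --opt type=nfs --opt o=addr=192.168.1.50,rw --opt device=:/exports/data my-nfs-volume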

Selectively Bind-Mounting Nested Files / Folders - Excluding Nested Paths

Extra search terms: Sub-directory overrides, selective sub-directory bind mounts, exclude subdirectory from bind-mount

A common issue that you can run into with Docker is when you want to share an entire folder between the host and container - like src for development work - but exclude certain sub-paths.

This comes up a lot with dependencies: you generally want to avoid bind-mounting Node's node_modules or Python's site-packages directory, since sharing dependencies between a host and a container often leads to issues.

The easiest (AFAIK, and StackOverflow and GitHub seem to agree) way to get around this is to override a higher-level bind-mount with a lower-level / more specific mount to an anonymous or named volume. Here is an example:

version: '3'

services:
  app:
    image: node:slim
    volumes:
      # Bind-mount the whole source directory into the container...
      - ${PWD}/src:/opt/app
      # ...but mask the nested node_modules with an anonymous volume
      - /opt/app/node_modules
    command: node /opt/app/index.mjs

In the above example, the /opt/app/node_modules entry is an anonymous volume mount, which is there to prevent the node_modules subdirectory from being shared with the host (i.e. showing up as src/node_modules on the host), even though the rest of ${PWD}/src is still passed through / bind-mounted to /opt/app.

This also works for things like swapping out symlink targets, selectively sharing build output with other containers, etc.

How do I...

  • List active containers
    • docker ps
  • Restart a container with changes?
    • Bring up, and force image rebuild:
      • docker-compose up -d --force-recreate --build (or without --build if only the compose YML has changed, and not the image)
      • If you don't need to bring up: docker-compose build --no-cache
      • NOTE: neither of the above options rebuilds volumes; they only rebuild images. You can use --renew-anon-volumes to recreate anonymous volumes, but this won't work for named volumes.
    • If only the YML / YAML has changed?
      • docker-compose up -d --force-recreate
      • Note that this is necessary for a lot of changes - for example, updated bind mounts won't take effect until you recreate
    • If you run into issues (changes don't seem to take effect, passwords not working, etc.)...
      • Sometimes the volume data will persist, even if you ran docker-compose down -v first.
      • Try finding the volume data directory and running rm -rf on it, then rebuild and start back up
  • Rebuild volumes?
    • If it is an anonymous volume, you can use --renew-anon-volumes with docker-compose up
    • If it is a named volume:
      • docker compose down -v (works for both stopped and running)
        • Or
      • Named compose service: docker compose down {SERVICE_NAME} -v (works for both stopped and running)
        • OR
      • Stop container, then run docker volume rm {VOLUME_REF}
    • To remove both a container and its volumes, use docker rm --volumes {containerName}
  • Get a container IP address
    • docker inspect {containerName}
    • Just IP address:
      • docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
      • Use double quotes if you have issues.
      • Credit: S/O
  • Debug a container not starting correctly (die event), or view logs
    • Multiple options for log viewing:
      • See this S/O
        • Summary:
          • Start the event logger in the background with docker events & - this effectively tails Docker's event stream
          • Do the "thing" that triggers the error - e.g. up
          • You should see the error (e.g. a die event) appear in the event log
          • Copy the container ID from the event log and use it with docker logs {containerId} (see the consolidated sketch after this list)
      • docker-compose -f {composeFile} logs
        • Use -f to follow
        • Use --tail={#} to tail # of lines
      • docker logs {?:containerName}
  • Access shell inside container
    • Already running container?
      • docker exec -it {containerName} bash (or sh)
        • Use exit to stop
      • You can also try docker attach {containerName} to link console STDIN/STDOUT, but this seems less optimal than exec
    • Stopped container?
      • docker compose run {serviceName} bash (or sh)
        • E.g. docker compose run my-backend bash
      • docker run -it -d {IMAGE_REF} /bin/bash
        • If you don't have an image for the container, you can use docker commit {CONTAINER_REF} temporary_image to generate one, then use with the above command
        • Use something like --entrypoint=bash if you need to override the entrypoint
      • Lots of tips in this S/O question thread
  • Remove a stubborn network ("error while removing network ... has active endpoints")
    1. Find out what containers are using the network
      • docker inspect {networkName}
    2. Either bring down those containers, or break the connection
      • To break connection(s), use docker network disconnect {networkName} {containerName}
    3. Once you have broken linkage, you should be able to bring down
      • Either re-run down, or manually remove network with docker network rm {networkName}
  • Kill all containers
    • docker ps -q | xargs docker kill
  • Find the size of an image before downloading?
    • Use the "Tags" tab on a Docker image listing page, e.g. Alpine
  • Bind / expose a port for an already running container?
    • Not directly possible - published ports are set when a container is created. You generally have to stop the container and re-run it with -p / --publish (or commit it to an image and start a new container from that).
  • Run an image, temporarily, and immediately tear down afterwards?
    • docker run --rm {imageRef} (the container is removed automatically as soon as it exits)
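For the "debug a container not starting correctly" flow above, a consolidated sketch looks something like this (the container ID is a placeholder you copy from the event output):

# Stream Docker events in the background
docker events &

# Trigger the failure, e.g.:
docker-compose up -d

# When a container dies, the event log prints its ID; use it to pull the logs
docker logs {containerId}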

Docker Introspection / Debugging Tools

Inspecting Builds, Failed Image Builds

If you are trying to inspect or debug a Docker image build, there are a few different options (but unfortunately fewer, if you are using the newer buildx engine):

  • Legacy builder: inspecting each layer, by hash
    • With the older, pre-BuildKit build engine, each build step logs an intermediate image hash, which you can then use to spin up a container at that exact point (see the sketch after this list)
  • buildg (ktock/buildg): An interactive debugger for Docker build. Based on buildkit.
  • buildx: They are working on building debugging tooling into newer versions of buildx - see buildkit Issue #1472 and the buildx proposal / tracker issue #1104
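For example, if a build step logs an intermediate image hash, you can drop into a shell at exactly that layer (the hash below is a placeholder):

# Start a throwaway container from the intermediate layer and poke around
docker run --rm -it {intermediateImageHash} sh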

Inspecting Stopped Containers

Docker has limited tooling to support inspecting stopped containers, and many commands assume that a container is running and will error out if it is not. However, there are several options for getting around this.

The first (obvious) answer to how to inspect a stopped container is to start it back up (e.g. with docker start) and then inspect it, but this might not be possible. Perhaps the entry point for the container is something that exits quickly, or errors out when started directly.

If you are using docker compose and don't need to override the entrypoint, you can also use docker compose run:

docker compose run -it {serviceName} bash

If you can't simply use docker start / docker compose run, the next thing to try is overriding the entry point / last cmd entry:

# If the image already exists
docker run -it --rm --entrypoint=/bin/bash {IMAGE_REF}
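If you only have the stopped container and not a usable image, one option (also noted in the "How do I..." list above) is to snapshot the container into a temporary image first; the names below are placeholders:

# Snapshot the stopped container's filesystem into a temporary image
docker commit {CONTAINER_REF} temporary-debug-image

# Then start a shell in it, overriding the entrypoint
docker run -it --rm --entrypoint=/bin/bash temporary-debug-image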

Inspecting Volumes

# Mount the volume into a throwaway busybox container, then poke around with a shell
docker run -it --rm -v {VOLUME_REF}:{CONTAINER_PATH} busybox sh

# e.g.
docker run -it --rm -v my-volume:/tmp/volume busybox sh

Issues / Troubleshooting

  • Linux / macOS: Commands only work when prefaced with sudo
    • Add your user to the Docker usergroup: sudo usermod -aG docker $USER
      • You must either log out and back in, or run newgrp docker (or su - ${USER}) to refresh group membership
    • See Docker docs: Linux: Manage Docker as a non-root user
  • exec -t output is missing lines, line endings don't make sense, etc.
    • -t (or --tty) allocates a pseudo-TTY for output, which by default converts line endings to CRLF instead of plain LF. If you are piping the output back to your shell, this can lead to confusing output and errors when piping into programs that expect LF.
    • You can use something like tr to strip the carriage returns, as a quick fix (see the sketch at the end of this list)
  • PostgreSQL issues with Windows - usually due to a permissions issue
  • Can't remove volume (even with -f or --force): "Error response from daemon: remove {volume}: volume is in use"
    • Try this first:
      • docker-compose down -v (or docker-compose rm -f -v) (WARNING: This will remove all volumes)
    • If the volume is being locked by a container, you can find what is using it:
      • docker ps -a --filter volume={volume} to find container, and then docker rm {CONTAINER} to remove container
    • Try docker system prune, and then remove volumes again
    • As a last ditch effort, you can try restarting the entire Docker service (either via GUI, or CLI), then try removing container
    • Lots of tips in the responses to this S/O question
  • Can't stop or kill a running container (maybe with a message like An HTTP request took too long...)
    • Sometimes Docker just gets in a really strange state. AFAIK, there are a bunch of open issues for this, but the only real fix is to just restart the Docker service entirely. There should even be a menu option to restart Docker Desktop!
    • You could try getting in through a shell (e.g. with docker exec), but that is likely to fail if the container is hosed anyways.
  • Docker keeps pulling from the wrong node_modules, or stale modules, etc.
    • Make sure you aren't actually mounting your local node_modules directory directly, or else you are going to run into version issues
    • If you need to bind-mount the directory that contains node_modules (e.g. /src), then create an anonymous volume for the nested node_modules to prevent it from being bind-mounted to the host
      • You usually want to use an anonymous volume, not a named volume, or else you are going to run into weird data persistence issues between runs
  • For NodeJS projects, keep getting an unexpected token "export" error
    • Likely issue with node_modules resolution; make sure that dependencies can be found within the container, and try to avoid bind-mounting node_modules
      • Don't use node_modules as a volume name!
    • If node_modules is not bind-mounted and everything is set up correctly, but you are still getting this error, make sure that node_modules is actually up-to-date - if package.json changed, you likely need to rebuild your container image, as it probably has npm install as a step
      • Example reset: docker-compose down -v && docker-compose build && docker-compose up
      • Also, make sure both package.json and package-lock.json are copied to the image before running npm install
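For the exec -t line-ending issue mentioned earlier in this list, the tr workaround looks roughly like this (the container name and file path are placeholders):

# Strip the carriage returns added by the pseudo-TTY
docker exec -t {containerName} cat /some/file.txt | tr -d '\r'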