Resources
What & Link | Type |
---|---|
Docs: Base Commands - Quick Ref | CLI Reference |
Docs: Compose - getting started | Tutorial |
Compose Guide (Heroku) | Guide |
FreeCodeCamp: Practical Introduction to Docker Compose | Tutorial |
Fireship: Docker in 100 Seconds | YouTube Video |
Just Enough Docker | Cheatsheet / Quick Guide |
Docker CLI - Basic Commands
What | Command | Helpful Flags |
---|---|---|
Build an image | docker build {path} - {path} is the build context, often just . | --tag / -t : specify image name (+ optional tag). Format is name:tag, but :tag is optional. Example: -t joshua/custom-image:3.2. You can use -t multiple times, for multiple tags. --build-arg : specify environment variable values only for build-time. |
List active containers | docker ps | --all |
Restart a container | docker restart {containerName} | |
Stop a container | docker kill {containerName} | |
Inspect something | docker inspect {thingIdOrName} | |
View logs | docker logs {?:containerName} | -f : follow / tail. --details : show extra detail |
Execute a command inside a container | docker exec {containerName} {command} | -i : interactive. -t : allocate TTY. -d : detach |
Create volume | docker volume create {options} {volumeName} | --driver (see notes on volume types), --label, --name, --opt |
Copy file(s) into container | docker cp {srcPath} {containerName}:{containerPath} | |
List volumes | docker volume ls | |
Drop volume | docker volume rm {volumeName} | -f or --force |
Print version | docker version | |
List images | docker image ls | |
Pull an image | docker pull {imageRef} | |
Run an image | docker run {imageRef} - e.g. docker run hello-world | --name (give container a short name) |
Run an image temporarily, with a shell | docker run --rm -it {imageRef} /bin/bash | |
List downloaded images | docker images OR docker image ls | --filter |
Note: For most of the commands that take an image name, you can optionally request a specific tag or digest version of an image by using a special suffix: {imageName}:{tag} or {imageName}@sha256:{digestSha}
docker-compose is listed below, in its own section
Docker Compose (aka docker-compose)
Docker Compose - Commands
- There is an online official compose command overview, or use docker-compose --help
- Many of these commands are shared with compose-less environments, just using docker instead of docker-compose
- There are common flags that can be applied to any subcommand. For example, -f or --file lets you specify a compose .yml file other than the default of docker-compose.yml.
  - If using -f / --file, it should come before the command, like docker-compose -f ./my-compose-file.yml up -d
What | Command | Helpful Flags |
---|---|---|
Test out a config | docker-compose config | |
Stop containers, without removing | docker-compose stop | |
Stop and remove containers | docker-compose down | -v : remove named volumes |
Launch container | docker-compose up | -d (starts in detached mode), --force-recreate, --build, --renew-anon-volumes |
Build an image for a service | docker-compose build {optArgs} {optServiceName} | --no-cache |
Start a specific service, and run a specific command against it | docker-compose run {serviceName} {OPT_command} {OPT_args} | |
Remove stopped containers | docker-compose rm | If you don't pass a name, it removes all (like rm *). -v : removes the volumes as well - useful if something is persisting when you don't want it to and you need a hard reset. -f : force |
Validate a config without running it | docker-compose config | -q : quiet mode, only validates true/false without verbose output |
Show logs | docker-compose logs | -f : follow. --tail={#} : tail lines. You can also pass the container name - docker-compose logs {containerName} |
Run a command inside the container | docker-compose exec {serviceName} {command} | -it (interactive, tty) |
Docker-Compose File syntax and options
- Official Compose File Docs
- DevHints: Docker-Compose Cheatsheet
- Guide on Docker Compose Volume syntax - Maxim Orlov: Docker Compose Syntax: Volume or Bind Mount?
Docker-Compose Build
- Worth noting that the dockerfile compose value is evaluated relative to the context, so in the below example:
services:
app:
build:
context: ./a/b/c
dockerfile: ../Dockerfile
The Dockerfile would be expected to be located at ./a/b/Dockerfile.
Docker-Compose: Extending Configurations via the CLI
Let's say that you have an existing docker-compose config file, like docker-compose.yml, but you want to change some of the settings without actually modifying the file (maybe you can't?). One option is to create another config file (B) that extends the original (A), and run them together, with docker-compose -f ${PATH_TO_FILE_A} -f ${PATH_TO_FILE_B} ${REST_OF_CMD}
- Example: docker-compose -f docker-compose.yml -f docker-compose.prod.yml up webapp
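As a sketch of what such an override file might contain (the service name and values here are hypothetical), the second file only needs the keys being changed; compose merges it on top of the base config:

```yaml
# docker-compose.prod.yml - hypothetical override file
# Only the settings that differ from docker-compose.yml need to be listed;
# everything else is inherited from the base file.
services:
  webapp:
    environment:
      - NODE_ENV=production
    ports:
      - "80:3000"
```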
Docker-Compose: Random Tips and Notes
- You can use environment values within your docker-compose file (variable substitution)
- You can pass environment values to the container itself
  - WARNING: These have to be passed explicitly - Docker will not just pass all values through
  - Full details
  - You can use the env_file option to pass an entire .env file's contents to the container
  - To pass a host variable through, put it under environment: but leave off the value
  - Or pass via docker-compose run -e {VAR}={VAL} {imageName}
- You can use comments (prefix with #, like shell scripts)
- Specifying ports in YAML:
  - Compose syntax is host_port:container_port
    - Example: 3000:80 would mean that you would go to localhost:3000 on your computer (host) to access a web app that is serving on 80 within the container
  - If both host and container are using the same port, you still have to specify both
    - This is because a ports value of 80 does not convert to 80:80 - instead it turns into {RANDOM_port}:80
- You can start an individual service within a compose file with docker-compose up {SERVICE}
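A minimal compose sketch tying several of the tips above together (the service and variable names are made up):

```yaml
services:
  web:
    image: nginx:alpine
    ports:
      - "3000:80"            # host_port:container_port - localhost:3000 -> container port 80
    environment:
      - APP_MODE=production  # explicit value
      - API_KEY              # no value: passed through from the host shell, explicitly
    env_file:
      - .env                 # pass an entire .env file's contents to the container
```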
Dockerfile
Dockerfile syntax and options
💡 Doc: Dockerfile reference
📄 Cheatsheet: Dockerfile Syntax Cheatsheet
💡 Doc: Best Practices
Passing Values into Dockerfiles, Exposing Host Environment Variables
Using Environment Variables in Dockerfiles, Passing Environment Variable Values into Dockerfiles
You can use the ARG keyword to declare an environment variable that the Dockerfile should accept and pass through to the commands running during the image build (at build-time, not run-time).
- ARGs can define a default value, but don't have to. However, if a default is omitted, the argument [...]
- ARG should not be used for credentials / secrets; use the --mount=type=secret option to pass secret values through
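A small sketch of ARG usage (the variable names here are arbitrary):

```dockerfile
FROM alpine:3.19

# With a default; can be overridden at build time via --build-arg
ARG APP_VERSION=1.0.0
# Without a default; must be supplied via --build-arg to have a value
ARG BUILD_NOTES

# ARG values only exist at build-time; to keep one at run-time, copy it into an ENV
ENV APP_VERSION=${APP_VERSION}

RUN echo "Building version ${APP_VERSION} (${BUILD_NOTES})"
```

Build with, e.g., docker build --build-arg APP_VERSION=2.0.0 --build-arg BUILD_NOTES="test" .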
Docker Entrypoints, Services, init, and Process Management
An often glossed-over part of crafting a Docker image is how you are responsible for orchestrating the services and processes that you spin up as part of the main entrypoint (often referred to as part of init). When a Docker image does a singular thing after start up, this doesn't present much of a challenge, but when a container needs to run multiple services, it can get complicated (especially if there are zombie processes).
Docs: Multi Service Containers
Rather than writing a complicated shell script to orchestrate all your services, it is often worth exploring the use of a service orchestration / lightweight init tool, such as tini, s6-overlay, or supervisord.
💡 If you are looking for a good explanation of why this matters for Docker / what this is all about, this writeup from the maintainer of tini is a great resource
Inspecting CMD and ENTRYPOINT
Generally speaking, docker inspect is your friend when trying to inspect Docker configurations.
For both images and containers, you can use docker inspect to retrieve metadata, including the Cmd and Entrypoint.
Using a utility like jq, you can filter to just the items of interest:
docker inspect {ImageOrContainerRef} | jq '.[0].Config | {Cmd, Entrypoint}'
CMD vs RUN vs ENTRYPOINT vs docker-compose command
So what exactly is the difference between CMD vs RUN vs ENTRYPOINT vs docker-compose command?
- RUN can be used to execute a command as part of a Dockerfile / Docker image build process. There are a few caveats / things to note:
  - This doesn't really work for starting services / daemons / processes. Generally, the output of a RUN command should be something tangible that can be cached and is a one-shot process.
- CMD and command are what gets executed against your entrypoint on the startup of your container. It is one of the most critical components of a Docker configuration
  - If you define command through docker-compose, it will override the Dockerfile / image's CMD value
- ENTRYPOINT : Unlike CMD, this is not designed to be overridden via the CLI arbitrarily (*); once set in a Dockerfile, it makes the container act more like an executable, where you can use the CMD as arguments to the entrypoint (combining them), but not overriding it.
  - (*) Technically, you can actually override this via the CLI, when using docker run --entrypoint
    - E.g. docker run --entrypoint /bin/bash {ref}
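A quick sketch of how ENTRYPOINT and CMD combine (the image contents here are just an example):

```dockerfile
FROM alpine:3.19

# The container acts like the `ping` executable...
ENTRYPOINT ["ping", "-c", "3"]
# ...and CMD supplies default arguments to that entrypoint
CMD ["localhost"]
```

Running docker run my-image pings localhost; docker run my-image example.com replaces only the CMD portion (the argument), while the ENTRYPOINT stays in place.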
Dockerfile Gotchas and Tips
There are some interesting "gotchas" with Dockerfiles - usually related to order-of-operations and combining or splitting up commands.
- You never want a command like apt-get update to run as its own command - it should always be combined with an install command, so that stale results are not reused from cache on their own.
- RUN only accepts a single shell string at a time. However, there are some workarounds, which are really just the same as multi-line shell tricks:
  - Use \ at the end of lines to break up long commands
  - Use the heredoc syntax to run true multi-line command strings:

    RUN <<EOF
    echo "Line 1"
    echo "Line 2"
    EOF

  - Important: Even though most shells accept a space between the << and the EOF delimiter, in Dockerfiles there needs to be no space. So use <<EOF, not << EOF
- CMD vs RUN vs ENTRYPOINT vs docker-compose command
  - See section above
- Order of operations and cache-invalidation
  - You pretty much always want to order operations that are more likely to frequently change (e.g. copying source code, installing dependencies) later in the Dockerfile than stable commands, so you can reuse the cache layers before them instead of invalidating them
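Putting the above into practice, a sketch of the apt-get pattern and a heredoc RUN (the package name is just an example):

```dockerfile
FROM debian:bookworm-slim

# Good: update + install in one RUN, so a stale package index is never
# reused from a cached layer on its own
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*

# Heredoc syntax - note: no space between << and EOF
RUN <<EOF
echo "Line 1"
echo "Line 2"
EOF
```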
Docker Build and Managing Docker Images
Analyzing and Optimizing Docker Builds
Cool tools:
Docker Build vs Builder vs Buildx vs Compose Build
If you come to the Docker docs and look for sections on building, you will find docker build, docker builder, and docker buildx - plus docker compose build.
In short:
- docker builder is an alias for the default behavior of docker build
- buildx (docker buildx) is a more powerful builder that can be used for multi-platform builds

Finally, docker compose build is really just a wrapper around docker build - it runs docker build over the services in your docker-compose.yml file to build the individual image for each service.
Exporting and Importing Docker Images as Files
Exporting Docker Images as Files
# Export an image to a tarball
docker save -o "exported_image.tar" "$image_name"
# you can pipe to a compressor, like gzip
docker save "$image_name" | gzip > "exported_image.tar.gz"
# Or, you could zip up after the fact
# (which is useful if you have multiple tarballs)
# E.g.:
tar --directory ./exported-images -cvf - . | pigz > all_images.tar.gz
# Or, non-pigz
tar --directory ./exported-images -czvf "all_images.tar.gz" .
Importing Docker Image Files
Use docker load.
💡 docker load works out of the box with compressed tars; no need to un-compress first.
docker load --input "exported_image.tar"
# You can also pipe to `docker load`, from stdin
# E.g.:
(some_command) | docker load
Debugging Failed Docker Builds
It used to be that debugging failed Docker builds was easier than it is now, because the build command would export each build layer with a SHA ID you could use to spin up that layer as a container and inspect things (i.e., these steps). You could take the ID of the layer right before the failure, spin it up, and do your debugging.
However, as of August 2023 (time of writing this), this will not work with a modern Docker installation out-of-the-box, as it relies on the older backend for builds, and the new backend, buildkit, does not offer this functionality. You can work around this by temporarily using the older (non-buildkit) backend for builds (e.g. DOCKER_BUILDKIT=0 docker build ...) (more detailed instructions).
There are also efforts underway to improve the debugging experience with the new buildkit backend (see buildkit issue #1472), as well as third-party tools for this purpose - such as ktock/buildg.
Multi-Stage Builds
https://docs.docker.com/build/building/multi-stage/
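A minimal multi-stage sketch (a hypothetical Go app; only the final stage ends up in the shipped image, keeping it small):

```dockerfile
# Stage 1: build environment, with the full toolchain
FROM golang:1.22 AS builder
WORKDIR /src
COPY . .
RUN go build -o /out/app .

# Stage 2: minimal runtime image; copy only the built artifact
FROM alpine:3.19
COPY --from=builder /out/app /usr/local/bin/app
ENTRYPOINT ["app"]
```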
Docker Tag vs Docker Label
The primary difference between docker tags and docker labels is that tags are just values attached to the image, without keys, whereas labels are key-value pairs, stored as metadata.
Another way to think of it is that tags are like nicknames or aliases ("fred", "freddie"), whereas labels are like properties ("Fred has a blue shirt").
With this in mind, tags are usually best reserved for version strings, release names, or target platforms, so you can easily pull an image by such, whereas labels are for any other metadata that might be useful to later refer to.
 | Docker Tags | Docker Labels |
---|---|---|
Type | Value | Key-Value Pair |
Set in/at | Build CLI (--tag) | Dockerfile (LABEL key="value") |
Can be changed post-build | ✅ | ❌ |
Easily enforced as part of build | ❌ | ✅ |
Retrieved by | docker images, registry info | docker inspect |
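To illustrate the difference (the image name and label keys here are arbitrary):

```dockerfile
FROM alpine:3.19
# Labels: key-value metadata, baked into the image at build time
LABEL maintainer="joshua@example.com"
LABEL org.example.release-notes="Initial release"
```

Tags, by contrast, are applied at build time via the CLI - e.g. docker build -t my-image:1.2.0 -t my-image:latest . - and the labels can later be read back with docker inspect my-image:1.2.0.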
Networking
Main resource: Docker - Networking Overview
Ports vs Expose
📄 Helpful StackOverflow: "What is the difference between docker-compose ports vs expose"
In newer versions of Docker, EXPOSE doesn't even do anything; its purpose is informational, as a way to explicitly advertise (to readers of the code) that the image (or service) will be working with the port(s) referenced, and that other services should be able to talk to it through that port. EXPOSE does not expose ports to the host machine.
The reason why this is only informational is that cross-service communication is enabled by Docker out-of-the-box, regardless of whether or not you use expose.
Port settings (e.g. docker-compose {service}.ports or --publish) actually do expose container ports to the host.
- The syntax for port settings is host_port:container_port, or just container_port (which uses a random host port)
Cross-Container Networking / Cross-Container Port Sharing
By default, if the containers are on the same docker network instance, they can talk to each other directly via service name. E.g., if you have a db service and an api service, the API service should be able to reach any port on the DB service by using db:${PORT}.
This works for multiple traffic types, including regular HTTP (e.g. http://backend:80).
However, there are a few cases where this is problematic - namely with programs that expect an address that strictly conforms to an IP address (e.g. IPv4, 127.0.0.1).
Static IPs for Internal Cross-Container Networking
If you want to set up cross-container communication where one container can essentially see the other's port as being on the local network (not the service name) you can do something like this:
services:
my_app:
networks:
my_custom_network:
ipv4_address: 172.20.0.2
networks:
my_custom_network:
driver: bridge
ipam:
driver: default
config:
- subnet: "172.20.0.0/16"
Volumes
Volume types options / drivers
See: Docs: Docker storage drivers
Type | Description |
---|---|
tmpfs | Temp data, stored in memory, not persisted between sessions |
btrfs | Advanced, persisted storage. Requires a bunch of pre-reqs, including Linux kernel support. |
nfs | Shared, persisted storage |
Selectively Bind-Mounting Nested Files / Folders - Excluding Nested Paths
Extra search terms: Sub-directory overrides, selective sub-directory bind mounts, exclude subdirectory from bind-mount
A common issue that you can run into with Docker is when you want to share an entire folder between the host and container - like src for development work - but exclude certain sub-paths.
This comes up a lot with dependencies, since you generally want to avoid bind-mounting Node's node_modules or Python's site-packages directory; sharing dependencies between a host and container often leads to issues.
The easiest (AFAIK, and StackOverflow and GitHub seem to agree) way to get around this is to override a higher-level bind-mount with a lower-level / more specific mount to an anonymous or named volume. Here is an example:
version: '3'
services:
app:
image: node:slim
volumes:
- ${PWD}/src:/opt/app
- /opt/app/node_modules
command: node /opt/app/index.mjs
In the above example, /opt/app/node_modules is an anonymous volume mount, which is there to prevent the node_modules subdirectory from being shared with the host (host = src/node_modules), even though the rest of ${PWD}/src is still passed through / bind-mounted to /opt/app.
This also works for things like swapping out symlink targets, selectively sharing build output with other containers, etc.
How do I...
- List active containers
  - docker ps
- Restart a container with changes after a .yml / yaml config change?
  - Bring up, and force image rebuild: docker-compose up -d --force-recreate --build
  - If you don't need to bring up: docker-compose build --no-cache
    - NOTE: neither of the above options rebuilds volumes, they only rebuild images. You can use --renew-anon-volumes to recreate anonymous volumes, but this won't work for named volumes.
  - If you run into issues (changes don't seem to take effect, passwords not working, etc.)...
    - Sometimes the volume data will persist, even if you ran docker-compose down -v first.
    - Try finding the volume data directory and running rm -rf on it, then rebuild and start back up
- Rebuild volumes?
  - If it is an anonymous volume, you can use --renew-anon-volumes with docker-compose up
  - If it is a named volume: docker compose down -v (works for both stopped and running)
  - Or, for a named compose service: docker compose down {SERVICE_NAME} -v (works for both stopped and running)
  - OR: stop the container, then run docker volume rm {VOLUME_REF}
  - To remove both a container and its volumes, use docker rm --volumes {containerName}
- Get a container IP address
  - docker inspect {containerName}
  - Just the IP address: docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
    - Use double quotes if you have issues.
    - Credit: S/O
- Debug a container not starting correctly (die event), or view logs
  - Multiple options for log viewing:
    - See this S/O - summary:
      - Start event logger with docker events & - this kind of does tail
      - Do "thing" - e.g. up - that triggers the error
      - You should see the error appear in the event log
      - Copy the error instance ID from the event log and use it with docker logs {eventId}
    - docker-compose -f {composeFile} logs
      - Use -f to follow
      - Use --tail={#} to tail # of lines
    - docker logs {?:containerName}
- Access shell inside container
  - Already running container?
    - docker exec -it {containerName} bash (or sh)
      - Use exit to stop
    - You can also try docker attach {containerName} to link console STDIN/STDOUT, but this seems less optimal than exec
  - Stopped container?
    - docker compose run {serviceName} bash (or sh)
      - E.g. docker compose run my-backend bash
    - docker run -it -d {IMAGE_REF} /bin/bash
      - If you don't have an image for the container, you can use docker commit {CONTAINER_REF} temporary_image to generate one, then use it with the above command
      - Use something like --entrypoint=bash if you need to override the entrypoint
    - Lots of tips in this S/O question thread
- Remove a stubborn network ("error while removing network ... has active endpoints")
  - Find out what containers are using the network: docker inspect {networkName}
  - Either bring down those containers, or break the connection
    - To break connection(s), use docker network disconnect {networkName} {containerName}
  - Once you have broken the linkage, you should be able to bring it down
    - Either re-run down, or manually remove the network with docker network rm {networkName}
- Kill all containers
  - docker ps -q | xargs docker kill
- Find the size of an image before downloading?
  - Use the "Tags" tab on a Docker image listing page, e.g. Alpine
- Bind / expose a port for an already running container?
  - Short answer: you can't
  - Longer answer:
    - You can try some special duct tape or tricky forwarding
    - You can commit the live container, then run the clone (this still acts like a restart though)
- Run an image, temporarily, and immediately tear it down afterwards?
  - docker run --rm
Docker Introspection / Debugging Tools
Inspecting Builds, Failed Image Builds
If you are trying to inspect or debug a Docker image build, there are a few different options (but unfortunately fewer, if you are using the newer buildx engine):
- Inspecting each layer, by hash
  - With the older (pre-buildkit) engine, the build logs each image layer's hash, which you can then use to spin up a container at that exact point
- buildg (ktock/buildg): An interactive debugger for Docker builds. Based on buildkit.
- buildx: They are working on building debugging tooling into newer versions of buildx - see buildkit issue #1472 and the buildx proposal / tracker issue #1104
Inspecting Stopped Containers
Docker has limited tooling to support inspecting stopped containers, and many commands assume that a container is running and will error out if it is not. However, there are several options for getting around this.
The first (obvious) answer to how to inspect a stopped container is to start it back up (e.g. with docker start) and then inspect it, but this might not be possible. Perhaps the entry point for the container is something that exits quickly, or errors out when started directly.
If you are using docker compose and don't need to override the entrypoint, you can also use docker compose run:
docker compose run -it {serviceName} bash
If you can't simply use docker start / docker compose run, the next thing to try is overriding the entry point / last cmd entry:
# If the image already exists
docker run -it --rm --entrypoint=/bin/bash {IMAGE_REF}
Inspecting Volumes
docker run -it --rm -v {VOLUME_REF}:{CONTAINER_PATH} busybox sh
# e.g.
docker run -it --rm -v my-volume:/tmp/volume busybox sh
Issues / Troubleshooting
- Linux / macOS: Commands only work when prefaced with sudo
  - Add your user to the Docker usergroup: sudo usermod -aG docker $USER
    - You must either log out and back in, or use su - ${USER} to refresh group membership
  - See Docker docs: Linux: Manage Docker as a non-root user
- exec -t output is missing lines, line endings don't make sense, etc.
  - -t (or --tty) allocates a pseudo-tty for output, which by default uses CRLF instead of plain LF line endings. If you are piping the output back to your shell, this can lead to confusing output and errors if you are trying to pipe it into programs expecting LF.
  - You can use something like tr to replace the line endings, as a quick fix
- PostgreSQL issues with Windows - usually due to a permissions issue
  - This is a known issue, and requires using named volumes. Links:
- Can't remove volume (even with -f or --force): "Error response from daemon: remove {volume}: volume is in use"
  - Try this first: docker-compose down -v (or docker-compose rm -f -v) (WARNING: This will remove all volumes)
  - If the volume is being locked by a container, you can find what is using it: docker ps -a --filter volume={volume} to find the container, and then docker rm {CONTAINER} to remove it
  - Try docker system prune, and then remove volumes again
  - As a last ditch effort, you can try restarting the entire Docker service (either via GUI, or CLI), then try removing the container
    - Windows: restart via CLI
  - Lots of tips in the responses to this S/O question
- Can't stop or kill a running container (maybe with a message like "An HTTP request took too long...")
  - Sometimes Docker just gets in a really strange state. AFAIK, there are a bunch of open issues for this, but the only real fix is to just restart the Docker service entirely. There should even be a menu option to restart Docker Desktop!
  - You could try getting in through a shell (e.g. with docker exec), but that is likely to fail if the container is hosed anyways.
- Docker keeps pulling from the wrong node_modules, or stale modules, etc.
  - Make sure you aren't actually mounting your local node_modules directory directly, or else you are going to run into version issues
  - If you need to bind-mount the directory that contains node_modules (e.g. /src), then create an anonymous volume for the nested node_modules to prevent it from being bind-mounted to the host
    - You usually want to use an anonymous volume, not a named volume, or else you are going to run into weird data persistence issues between runs
- For NodeJS projects, keep getting an unexpected token "export" error
  - Likely an issue with node_modules resolution; make sure that dependencies can be found within the container, and try to avoid bind-mounting node_modules
    - Don't use node_modules as a volume name!
  - If node_modules is not bind-mounted and everything is set up correctly, but you are still getting this error, make sure that node_modules is actually up-to-date - if package.json changed, you likely need to rebuild your container image, as it probably has npm install as a step
    - Example reset: docker-compose down -v && docker-compose build && docker-compose up
    - Also, make sure both package.json and package-lock.json are copied to the image before running npm install
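The exec -t line-ending issue mentioned under Issues above can be demonstrated (and fixed) without Docker; here printf stands in for the CRLF output of a tty-allocated exec:

```shell
# Simulated `docker exec -t` output: a pseudo-tty emits CRLF line endings
output=$(printf 'line one\r\nline two\r\n')

# Quick fix: strip the carriage returns with tr
fixed=$(printf '%s' "$output" | tr -d '\r')

printf '%s\n' "$fixed"
```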
- Likely issue with