The second part of our Docker Compose series breaks down the structure and components of docker-compose.yml files, explaining services, networks, volumes, and essential directives.
Welcome to the second installment of our Docker Compose series! In Part 1, we covered the fundamentals of Docker Compose and why it’s essential for managing multi-container applications. Now, we’ll dive deeper into the heart of Docker Compose: the docker-compose.yml
file.
Understanding how to structure this file is crucial for effectively orchestrating your containers. We’ll break down each component, explain their purpose, and provide practical examples to help you craft your own Docker Compose configurations with confidence.
Before we dissect the docker-compose.yml
file, let’s review the core concepts that make up a Docker Compose configuration:
The docker-compose.yml file is the blueprint for your multi-container application. It defines the services (containers) that make up your app, the networks they communicate over, and the volumes where their data persists.
Think of this file as a single source of truth for your entire application’s infrastructure. Instead of managing multiple Dockerfiles and commands, everything is organized in one YAML file.
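As a quick orientation before we go section by section, here is a minimal skeleton showing the three top-level blocks you will meet throughout this article. The service, network, and volume names are placeholders chosen for illustration:

version: "3.9"

services:
  web:                        # a placeholder service name
    image: nginx:latest       # the image this container runs
    networks:
      - app-net
    volumes:
      - site-data:/usr/share/nginx/html

networks:
  app-net:                    # networks the services attach to

volumes:
  site-data:                  # named volumes for persistent data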
Let’s break down the basic structure of a docker-compose.yml
file and examine each section in detail:
The version
key specifies the Compose file format version. Different versions support different features.
version: "3.9"
Why it matters: The version determines which Compose file features are available. For most modern use cases, the 3.x format is recommended.
Best practice: Use the latest stable 3.x version (currently 3.9) to access all its features. Note that recent releases of Docker Compose follow the Compose Specification and treat the version key as informational only, so you may also see Compose files that omit it entirely.
Services represent the containers that make up your application. Each service can be configured with various options, including the image to use, ports to expose, and environment variables.
services:
web:
image: nginx:latest
ports:
- "8080:80"
volumes:
- ./website:/usr/share/nginx/html
In this example:
- web is the service name
- image: nginx:latest specifies the Docker image to use
- ports maps port 8080 on the host to port 80 in the container
- volumes mounts the local website directory to /usr/share/nginx/html in the container

Services can also be built from a Dockerfile:
services:
app:
build:
context: ./app
dockerfile: Dockerfile
ports:
- "3000:3000"
Here, instead of using a pre-built image, Docker Compose builds the image from the Dockerfile in the ./app
directory.
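If you also want the freshly built image to get a predictable name (so you can reuse or push it), you can combine build with image. This is a small sketch; the tag myorg/app:dev is just an illustrative name:

services:
  app:
    build:
      context: ./app
    image: myorg/app:dev   # the built image is tagged with this name
    ports:
      - "3000:3000"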
Networks define how your containers communicate with each other. By default, Docker Compose creates a single network for all services, but you can define multiple networks for more complex setups.
services:
web:
image: nginx:latest
networks:
- frontend
api:
image: my-api:latest
networks:
- frontend
- backend
db:
image: postgres:13
networks:
- backend
networks:
frontend:
driver: bridge
backend:
driver: bridge
In this example:
- web and api are on the frontend network
- api and db are on the backend network
- web cannot directly communicate with db
This network isolation improves security by limiting which services can communicate with each other.
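To make the isolation concrete, here is a sketch of how api might address db by its service name over the shared backend network; the connection string, credentials, and database name are hypothetical:

services:
  api:
    image: my-api:latest
    environment:
      # "db" resolves through Docker's internal DNS because api and db share the backend network
      - DATABASE_URL=postgres://postgres:secret@db:5432/mydb
    networks:
      - frontend
      - backend

A similar connection attempt from web would fail, because web is only attached to the frontend network.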
Volumes provide persistent storage for your containers. When a container is removed, any data stored inside it is typically lost. Volumes solve this problem by storing data outside the container lifecycle.
services:
db:
image: postgres:13
volumes:
- postgres_data:/var/lib/postgresql/data
volumes:
postgres_data:
In this example:
- postgres_data is a named volume managed by Docker
- It is mounted at /var/lib/postgresql/data inside the db container, so the database files survive container removal

You can also use bind mounts to map host directories to container paths:
services:
web:
image: node:16
volumes:
- ./src:/app/src
Here, the local ./src
directory is mounted to /app/src
in the container, allowing you to update code without rebuilding the image.
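A common refinement for Node projects is to combine a bind mount with a named volume so the host directory does not shadow dependencies installed inside the image. This sketch assumes the application lives under /app in the container:

services:
  web:
    image: node:16
    volumes:
      - ./src:/app/src                   # live-edit source code from the host
      - node_modules:/app/node_modules   # keep container-installed dependencies

volumes:
  node_modules: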
Now, let’s explore some of the most commonly used directives in docker-compose.yml
files:
The build
directive allows you to build a Docker image from a Dockerfile instead of using a pre-built image.
Syntax:
build:
context: .
dockerfile: Dockerfile
args:
buildno: 1
Explanation:
- context: The build context path (typically where your Dockerfile is located)
- dockerfile: The name of the Dockerfile to use (if not the default Dockerfile)
- args: Build arguments passed to the Dockerfile during the build process

Example:
services:
app:
build:
context: ./backend
dockerfile: Dockerfile.dev
args:
NODE_ENV: development
This builds an image from ./backend/Dockerfile.dev with the build argument NODE_ENV=development.
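Build arguments can also be fed from your shell environment with a fallback default, using Compose's ${VARIABLE:-default} substitution syntax. A sketch (the Dockerfile still needs a matching ARG NODE_ENV declaration):

services:
  app:
    build:
      context: ./backend
      dockerfile: Dockerfile.dev
      args:
        NODE_ENV: ${NODE_ENV:-development}   # host value if set, otherwise "development"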
Environment variables allow you to pass configuration to your containers. There are several ways to define them:
Direct declaration:
environment:
- DEBUG=true
- DB_HOST=database
Using a .env
file:
Create a .env
file:
DB_PASSWORD=secret
API_KEY=abcdef123456
Then reference variables in your Compose file:
services:
app:
image: myapp
environment:
- DB_PASSWORD=${DB_PASSWORD}
- API_KEY=${API_KEY}
This approach helps keep sensitive information out of version control.
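A related but distinct option is the env_file directive, which loads every variable from a file straight into the container's environment, whereas the ${VAR} syntax above only substitutes values into the Compose file itself. A minimal sketch:

services:
  app:
    image: myapp
    env_file:
      - .env   # all variables in this file become environment variables inside the container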
The ports
directive maps ports from the container to the host, allowing external access to services.
Syntax:
ports:
- "host_port:container_port"
Example:
services:
web:
image: nginx
ports:
- "8080:80" # Map port 8080 on host to port 80 in container
- "443:443" # Map port 443 on host to port 443 in container
You can also specify just the container port, letting Docker choose a random host port:
ports:
- "80"
This is useful for running multiple instances of the same service.
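For example, with only the container port declared, you could scale the service and let Docker pick a free host port for each instance (a sketch using standard docker-compose CLI flags):

docker-compose up -d --scale web=3   # run three instances of the web service
docker-compose ps                    # the Ports column shows the host port each instance received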
The depends_on
directive controls the order in which services start.
Syntax:
depends_on:
- service_name
Example:
services:
app:
image: myapp
depends_on:
- db
db:
image: postgres
Here, the app
service will start only after the db
service has started. However, it’s important to note that depends_on
only waits for the container to start, not for the service inside to be ready (like a database accepting connections).
For more sophisticated dependency management, use health checks:
services:
app:
image: myapp
depends_on:
db:
condition: service_healthy
db:
image: postgres
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 5s
timeout: 5s
retries: 5
This ensures the app
service only starts after the db
service is healthy (PostgreSQL is ready to accept connections).
The command
directive overrides the default command specified in the Docker image.
Syntax:
command: command_to_run
Example:
services:
app:
image: node:16
command: ["npm", "run", "dev"]
This is useful for customizing how a container runs without modifying the image.
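As another sketch, you could override a database image's default command to pass extra server flags; the specific flag here is just an example of what you might tune:

services:
  db:
    image: postgres:13
    command: ["postgres", "-c", "log_statement=all"]   # run Postgres with statement logging enabled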
The restart
directive defines the restart policy for a container.
Syntax:
restart: policy
Example:
services:
app:
image: myapp
restart: always
Available policies:
- no: Never restart (default)
- always: Always restart if the container stops
- on-failure: Restart only if the container exits with a non-zero exit code
- unless-stopped: Always restart unless explicitly stopped

The expose directive exposes ports without publishing them to the host machine; they'll only be accessible to other containers on the same network.
Syntax:
expose:
- "port"
Example:
services:
app:
image: myapp
expose:
- "3000"
This is different from ports
because exposed ports are only accessible to other containers, not from the host machine.
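Putting the two side by side (a sketch with an assumed internal API on port 3000):

services:
  web:
    image: nginx
    ports:
      - "8080:80"   # published: reachable from the host at localhost:8080
  api:
    image: myapp
    expose:
      - "3000"      # internal only: other services can reach http://api:3000, the host cannot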
Now, let’s combine all these directives into a complete docker-compose.yml
file for a web application with a frontend, backend API, and database:
version: "3.9"
services:
frontend:
build:
context: ./frontend
ports:
- "3000:3000"
volumes:
- ./frontend/src:/app/src
environment:
- API_URL=http://backend:4000
depends_on:
- backend
networks:
- app-network
restart: unless-stopped
backend:
build:
context: ./backend
ports:
- "4000:4000"
volumes:
- ./backend/src:/app/src
environment:
- DB_HOST=database
- DB_PORT=5432
- DB_USER=postgres
- DB_PASSWORD=${DB_PASSWORD}
- DB_NAME=myapp
depends_on:
database:
condition: service_healthy
networks:
- app-network
restart: unless-stopped
database:
image: postgres:13
volumes:
- db-data:/var/lib/postgresql/data
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=${DB_PASSWORD}
- POSTGRES_DB=myapp
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 5s
timeout: 5s
retries: 5
networks:
- app-network
restart: unless-stopped
networks:
app-network:
driver: bridge
volumes:
db-data:
This Compose file:
- Defines three services: frontend, backend, and database
- Builds the frontend and backend images from local Dockerfiles
- Persists database data in the db-data named volume
- Uses a health check so the backend starts only once PostgreSQL is ready
- Connects everything on a shared app-network and restarts services automatically unless stopped

The result is a complete development environment that can be started with a single command: docker-compose up.
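Since the file above references ${DB_PASSWORD}, you would pair it with a .env file next to the docker-compose.yml (kept out of version control); the value shown is just a placeholder:

DB_PASSWORD=change-me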
Now that we’ve covered the basics, let’s create a simple docker-compose.yml
file for a web application with a Redis cache. This is a great first project to practice what you’ve learned:
version: "3.9"
services:
web:
image: nginx:alpine
ports:
- "8080:80"
volumes:
- ./html:/usr/share/nginx/html
depends_on:
- redis
networks:
- web-network
redis:
image: redis:alpine
volumes:
- redis-data:/data
networks:
- web-network
networks:
web-network:
volumes:
redis-data:
To test this:
1. Create a docker-compose.yml file with the above content
2. Create an html directory with a simple index.html file
3. Run docker-compose up
4. Open http://localhost:8080 in your browser

You should see your HTML page served by Nginx, with Redis running in the background.
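If you want to run through those steps quickly from a terminal, something like this would do (the page content is just a placeholder):

mkdir -p html
echo "<h1>Hello from Docker Compose</h1>" > html/index.html
docker-compose up -d
# then open http://localhost:8080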
Can I use a .env file with Docker Compose?
Yes! Docker Compose automatically reads a .env file in the same directory as your docker-compose.yml, and its variables can then be referenced as ${VARIABLE_NAME}.
How do services communicate with each other?
Services can communicate using their service names as hostnames. For example, if you have a service named database, other services can reach it at http://database:port.
Can I use multiple Compose files?
Yes, Docker Compose allows you to extend and override configurations using multiple files. For example:
# Apply the base configuration, then layer on production overrides
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up
We’ll cover this in more detail in Part 4 of this series.
In this article, we’ve explored the structure and key directives of the docker-compose.yml
file. You now have the knowledge to create basic Docker Compose configurations for your applications.
In Part 3, we’ll dive into Docker Compose commands and day-to-day operations. You’ll learn how to use Docker Compose to manage your applications effectively, from starting services to viewing logs and executing commands within containers.
Continue to Part 3: Docker Compose Commands and Operations or go back to Part 1: Introduction and Fundamentals