A brief discussion on docker compose writing rules

This article does not cover cluster (swarm) deployment.

Version Constraints

  • Docker Engine >= 19.03
  • Compose file format >= 3.8

Structure Introduction

The docker-compose.yaml file is mainly composed of the following top-level sections:

version  # the compose file format version
networks # networks, used for communication between containers
x-{name} # extension fields; names starting with x- define reusable templates
volumes  # named volumes to mount
services # service definitions; the parameters inside are largely equivalent to docker run options
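A minimal skeleton putting these top-level sections together (the service names, images, and the x- template name are all illustrative):

```yaml
version: "3.8"

x-defaults: &defaults # reusable template (name is illustrative)
 restart: always

networks:
 app-net:

volumes:
 db-data:

services:
 web:
  <<: *defaults
  image: nginx:alpine
  networks:
   - app-net
 db:
  <<: *defaults
  image: mysql:5.7
  volumes:
   - db-data:/var/lib/mysql
  networks:
   - app-net
```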

Module Introduction

Docker Compose official documentation

version

If the version of docker-compose.yaml needs to be upgraded, refer to the compatibility table below and the official upgrade guide.

Compose file version   Docker Engine version
3.8                    19.03.0+
3.7                    18.06.0+
3.6                    18.02.0+
3.5                    17.12.0+
3.4                    17.09.0+
3.3                    17.06.0+
3.2                    17.04.0+
3.1                    1.13.1+
3.0                    1.13.0+
2.4                    17.12.0+
2.3                    17.06.0+
2.2                    1.13.0+
2.1                    1.12.0+
2.0                    1.10.0+
1.0                    1.9.1+

network_mode

Use the same values as for the --network flag of docker run, plus the special forms service:[service name] and container:[container name/id]

network_mode: "bridge"
network_mode: "host"
network_mode: "none"
network_mode: "service:[service name]"
network_mode: "container:[container name/id]"
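For example, a sketch of the service: form, which makes one container share another service's network stack (both image names are hypothetical):

```yaml
version: "3.8"
services:
 vpn:
  image: my-vpn:latest # hypothetical image
 app:
  image: my-app:latest # hypothetical image
  network_mode: "service:vpn" # app shares vpn's network namespace
```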

networks

Set the network for the container created by the current docker-compose.yaml file

It does not only appear at the top level alongside version; it can also appear inside other sections, such as under a service in services.

Internal Network

services:
 some-service:
  networks:
    - some-network
   - other-network

Public Network

version: "3"
networks:
 default-network:

aliases (to be added)

Network alias

version: "3.8"

services:
 web:
  image: "nginx:alpine"
  networks:
   - new

 worker:
  image: "my-worker-image:latest"
  networks:
   - legacy

 db:
  image: mysql
  networks:
   new:
    aliases:
     - database
   legacy:
    aliases:
     - mysql

networks:
 new:
 legacy:

ipv4_address , ipv6_address (to be added)

version: "3.8"

services:
 app:
  image: nginx:alpine
  networks:
   app_net:
    ipv4_address: 172.16.238.10
    ipv6_address: 2001:3984:3989::10

networks:
 app_net:
  ipam:
   driver: default
   config:
    - subnet: "172.16.238.0/24"
    - subnet: "2001:3984:3989::/64"

services

services is the most important section; it configures each individual service (container).

build

Used to build images. When both the build and image fields are present, the name and tag specified by image are used as the name and tag of the built image.

version: "3.8" # compose file format version
services:
 webapp: # service name defined by compose, used by docker-compose commands; it may differ from the container name shown by docker ps
  build: # build the image from a Dockerfile
   context: ./dir # build context; a relative path is resolved relative to the compose file
   dockerfile: Dockerfile-alternate # specify the Dockerfile file name
   args: # build arguments for ARG instructions in the Dockerfile
    buildno: 1 # both map and list forms are accepted

context

You can use a relative path or the URL of a git repository.

build:
 context: ./dir

dockerfile

Specify the Dockerfile file name; context must also be specified

build:
 context: .
 dockerfile: Dockerfile-alternate

args

Corresponds to ARG instructions in the Dockerfile; used to pass build-time variables to docker build

ARG buildno
ARG gitcommithash

RUN echo "Build number: $buildno"
RUN echo "Based on commit: $gitcommithash"

You can use list or map to set args

build:
 context: .
 args: # map
  buildno: 1
  gitcommithash: cdc3b19

build:
 context: .
 args: # list
  - buildno=1
  - gitcommithash=cdc3b19

Tips
If you need to use boolean values, you need to use double quotes ("true", "false", "yes", "no", "on", "off") so that the parser will parse them as strings.
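For instance, a sketch of quoting such values in args so they stay strings (the argument names are illustrative):

```yaml
build:
 context: .
 args:
  ENABLE_CACHE: "true" # quoted, so the parser keeps the string "true"
  DEBUG_MODE: "off" # unquoted off would be parsed as a boolean
```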

cache_from

Specify images used as cache sources during the build

build:
 context: .
 cache_from:
  - alpine:latest
   - corp/web_app:3.14

labels

Same as the LABEL instruction in Dockerfile, setting metadata for the image

build:
 context: .
 labels: # map
  com.example.description: "Accounting webapp"
  com.example.department: "Finance"
  com.example.label-with-empty-value: ""
build:
 context: .
 labels: # list
  - "com.example.description=Accounting webapp"
  - "com.example.department=Finance"
  - "com.example.label-with-empty-value"

network

Same as the --network option of docker build: sets the network used during the build. The three common values are shown below.

build:
 context: .
 network: host # host mode, the network latency is the lowest, and the performance is consistent with the host machine
build:
 context: .
 network: custom_network_1 # Custom network
build:
 context: .
 network: none # No network

shm_size

Set the size of the /dev/shm directory in the container

The /dev/shm directory is a tmpfs filesystem: it lives in memory rather than on disk, and its default size is half of RAM. Sizing this directory in the container can noticeably affect the performance of programs that rely on shared memory.

build:
 context: .
 shm_size: '2gb' # Use a string to set the size
build:
 context: .
 shm_size: 10000000 # Set the byte size

command

Equivalent to the CMD instruction in the Dockerfile; overrides the default command

command: bundle exec thin -p 3000 # shell-like
command: ["bundle", "exec", "thin", "-p", "3000"] # json-like

container_name

Equivalent to docker run --name

container_name: my-web-container

depends_on

Used to express dependencies between services

When docker-compose up , the startup order will be determined according to depends_on

version: "3.8"
services:
 web:
  build: .
  depends_on: # Start db and redis first
   - db
   - redis
 redis:
  image: redis
 db:
  image: postgres

Tips:
docker-compose does not wait for the containers listed in depends_on to be 'ready' before starting dependent services, only for them to be started, so you may need to check readiness yourself after startup. The officially suggested solution is a shell script that waits, which is not elaborated here.

depends_on does not wait for db and redis to be "ready" before starting web - only until they have been started. If you need to wait for a service to be ready, see Controlling startup order for more on this problem and strategies for solving it.
-- from https://docs.docker.com/comp...
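A minimal sketch of that shell-script approach: a generic retry helper that blocks until a probe command succeeds. The helper name, probe command, and attempt count are assumptions, not an official script:

```shell
# wait_for: retry a probe command until it succeeds or attempts run out
# (a sketch; the probe itself depends on the service being waited for)
wait_for() {
  cmd="$1"
  attempts="${2:-30}"
  i=0
  until eval "$cmd" >/dev/null 2>&1; do
    i=$((i + 1))
    if [ "$i" -ge "$attempts" ]; then
      return 1 # gave up: the service never became ready
    fi
    sleep 1
  done
  return 0
}

# Usage sketch: block until a hypothetical db accepts TCP connections, then start
# wait_for "nc -z db 5432" 30 && exec my-app
```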

devices

Mount host devices into the container, same as docker run --device

devices:
 - "/dev/ttyUSB0:/dev/ttyUSB0"

dns

Custom DNS address

dns: 8.8.8.8 # single string value
dns: # list 
 - 8.8.8.8
 - 9.9.9.9

dns_search

Customize DNS search domain name

dns_search: example.com # single string value
dns_search:
 - dc1.example.com
 - dc2.example.com

entrypoint

Override the default entrypoint

entrypoint: /code/entrypoint.sh

Same as in Dockerfile

entrypoint: ["php", "-d", "memory_limit=-1", "vendor/bin/phpunit"]

Tips:
entrypoint in docker-compose.yaml clears the CMD command in the Dockerfile and overrides all ENTRYPOINT instructions in the Dockerfile.

env_file

Add environment variables from a file. If you specify the compose file with docker-compose -f FILE , paths in env_file are resolved relative to the directory containing FILE

env_file: .env # single value
env_file: # list
 - ./common.env
 - ./apps/web.env
 - /opt/runtime_opts.env

Tips:
.env file format:

# Set Rails/Rack environment
# Lines starting with '#' are comments; empty lines are ignored
# The format is VAR=VAL
RACK_ENV=development

Variables from the .env file are not available during the image build; they are read only when docker-compose.yaml is processed. If you need variables at build time, use the args sub-parameter of build.

When specifying multiple .env files, the official documentation notes:

Keep in mind that the order of files in the list is significant in determining the value assigned to a variable that shows up more than once.
---from https://docs.docker.com/comp...

Because the files are processed from top to bottom, if the same variable appears in multiple files, the value from the last file listed wins.
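The precedence rule can be sketched with plain shell (the file names and values are illustrative): the last file in the list that defines a variable supplies its value:

```shell
# Two hypothetical env files, in the order they appear under env_file
printf 'RACK_ENV=development\nPORT=3000\n' > common.env
printf 'PORT=8000\n' > web.env

# Last matching assignment across the files, in list order, wins
get_var() {
  var="$1"; shift
  grep -h "^${var}=" "$@" | tail -n 1
}

get_var PORT common.env web.env # PORT=8000 (web.env overrides common.env)
```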

environment

Add environment variables

environment: # map
 RACK_ENV: development
 SHOW: 'true'
 SESSION_SECRET:
environment: # list
 - RACK_ENV=development
 - SHOW=true
 - SESSION_SECRET

Tips:
Variables defined under environment are set in the running container, not at build time; application code can read them at runtime, for example:

import (
	"fmt"
	"os"
)

func getEnvInfo() string {
	rackEnv := os.Getenv("RACK_ENV")
	fmt.Println(rackEnv)
	return rackEnv
}

output:
development

expose

Expose ports without publishing them to the host; they are only reachable by other services on the same network. This declares the internal port, similar to the EXPOSE instruction in a Dockerfile

expose:
 - "3000"
 - "8000"
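A sketch contrasting expose with ports (the api image name is hypothetical):

```yaml
services:
 web:
  image: nginx:alpine
  ports:
   - "8080:80" # published: reachable from the host
 api:
  image: my-api:latest # hypothetical image
  expose:
   - "9000" # internal only: reachable by web, not from the host
```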

external_links

Link to containers started outside this docker-compose.yaml, optionally with an alias (CONTAINER:ALIAS)

external_links:
 - redis_1
 - project_db_1:mysql
 - project_db_1:postgresql

Tips:
The official recommendation is to use networks instead.

extra_hosts

Add a custom domain name, same as --add-host

extra_hosts:
 - "somehost:162.242.195.82"
 - "otherhost:50.31.209.229"

The resulting entries in the container's /etc/hosts file:

162.242.195.82 somehost
50.31.209.229 otherhost

healthcheck

Same as HEALTHCHECK instruction in Dockerfile

healthcheck:
 test: ["CMD", "curl", "-f", "http://localhost"]
 interval: 1m30s
 timeout: 10s
 retries: 3
 start_period: 40s

Use disable: true , which is equivalent to test: ["NONE"]

healthcheck:
 disable: true

image

Specify the image to pull or use. You can use repository/tag or a partial image ID.

image: redis # default tag is latest
image: ubuntu:18.04
image: tutum/influxdb
image: example-registry.com:4000/postgresql
image: a4bc65fd

init

Run an init process (PID 1) inside the container that forwards signals and reaps child processes.

version: "3.8"
services:
 web:
  image: alpine:latest
  init: true

Tips:
The default init binary used is Tini and is installed on the daemon host at /usr/libexec/docker-init. You can configure the daemon to use a custom init binary via the init-path configuration option.

isolation

Specify the container isolation technology. Linux supports only default ; Windows supports default , process , and hyperv . For details on the three values, refer to the Docker Engine Docs

labels

Same as the LABEL instruction in Dockerfile, setting metadata for the container

build:
 context: .
 labels: # map
  com.example.description: "Accounting webapp"
  com.example.department: "Finance"
  com.example.label-with-empty-value: ""
build:
 context: .
 labels: # list
  - "com.example.description=Accounting webapp"
  - "com.example.department=Finance"
  - "com.example.label-with-empty-value"

links

Old version function, not recommended

logging

Set logging parameters for the current service

logging:
 driver: syslog
 options:
  syslog-address: "tcp://192.168.0.42:123"

driver parameter is the same as --log-driver command

driver: "json-file"
driver: "syslog"
driver: "none"

Tips:
Logs are visible via docker-compose up and docker-compose logs only with the json-file and journald drivers; with other drivers, no log output is shown there.

options specifies driver settings, same as docker run --log-opt . The format is key-value

driver: "syslog"
options:
 syslog-address: "tcp://192.168.0.42:123"

The default log driver is json-file , and you can set storage limits

options:
 max-size: "200k" # maximum size of a single log file
 max-file: "10" # maximum number of log files

Tips:
The above option parameters are only supported by the json-file log driver. Different drivers support different parameters. For details, refer to the following table.

List of supported drivers

Driver Description
none No log output.
local Logs are stored in a custom format designed to minimize overhead.
json-file Logs are formatted as JSON. The default logging driver for Docker.
syslog Write log messages to the syslog facility. The syslog daemon must be running on the host.
journald Write log messages to journald. The journald daemon must be running on the host.
gelf Writes log messages to a Graylog Extended Log Format (GELF) endpoint, such as Graylog or Logstash.
fluentd Write log messages to fluentd (forward input). The fluentd daemon must be running on the host.
awslogs Write log messages to Amazon CloudWatch Logs.
splunk Write log messages to Splunk using HTTP Event Collector.
etwlogs Writes log messages as Event Tracing for Windows (ETW) events. Available only on Windows platforms.
gcplogs Writes log messages to Google Cloud Platform (GCP) logging.
logentries Writes log messages to Rapid7 Logentries.

Tips:
For details, please refer to Configure logging drivers

ports

Externally exposed ports

short syntax:
Either specify both ports HOST:CONTAINER , or just the container port (an ephemeral host port is selected).

ports:
 - "3000"
 - "3000-3005"
 - "8000:8000"
 - "9090-9091:8080-8081"
 - "49100:22"
 - "127.0.0.1:8001:8001"
 - "127.0.0.1:5000-5010:5000-5010"
 - "6060:6060/udp"
 - "12400-12500:1240"

Tips:
When mapping ports in HOST:CONTAINER format, a container port lower than 60 may cause an error because YAML parses numbers of the form xx:yy as base-60 (sexagesimal) values. It is therefore recommended to always write port mappings explicitly as strings.

long syntax

The long syntax allows fields that the short syntax does not allow

  • target: the port inside the container
  • published: the publicly exposed host port
  • protocol: port protocol (tcp or udp)
  • mode: host to publish the host port on each node, or ingress to load balance the cluster mode ports.

ports:
 - target: 80
  published: 8080
  protocol: tcp
  mode: host

restart

Container restart policy

restart: "no" # never restart automatically (default)
restart: always # always restart
restart: on-failure # restart only if the container exits with a non-zero exit code
restart: unless-stopped # always restart unless manually stopped

secrets (to be added)

volumes

Used to mount data volumes

short syntax
Short syntax uses the simplest format [SOURCE:]TARGET[:MODE]

  • SOURCE can be a host address or a data volume name.
  • TARGET is the path inside the container.
  • MODE includes ro for read-only and rw for read-write (default)

Relative host paths are resolved relative to the directory containing docker-compose.yaml .

volumes:
 # Specify only a container path; Docker creates an anonymous volume
 - /var/lib/mysql

 # Mount an absolute host path
 - /opt/data:/var/lib/mysql

 # Mount a path relative to the compose file
 - ./cache:/tmp/cache

 # Path relative to the user's home directory, read-only
 - ~/configs:/etc/configs/:ro

 # Named volume
 - datavolume:/var/lib/mysql

long syntax

The long syntax allows the use of fields that cannot be expressed in the short syntax.

  • type : mount type, one of volume , bind , tmpfs or npipe
  • source : the mount source, either a host path or the name of a volume defined in the top-level volumes section (not applicable to tmpfs)
  • target : the mount path inside the container
  • read_only : make the mount read-only
  • bind : configure additional bind settings
    • propagation : used to set the propagation mode of the binding
  • volume : configure additional mount configuration
    • nocopy : Disable copying data from container when creating volume
  • tmpfs : configure additional tmpfs configuration
    • size : Set the size of the mounted tmpfs (bytes)
  • consistency : the consistency requirement for the mount, one of consistent (host and container have the same view), cached (read cache, host view is authoritative), or delegated (read-write cache, container view is authoritative)
version: "3.8"
services:
 web:
  image: nginx:alpine
  ports:
   - "80:80"
  volumes:
   - type: volume
    source: mydata
    target: /data
    volume:
     nocopy: true
   - type: bind
    source: ./static
    target: /opt/app/static

networks:
 webnet:

volumes:
 mydata:

This concludes this brief discussion of docker compose writing rules.
