Basic Concepts

By default, Compose creates a single network for your application, and every container of every service joins it. Containers on this network can reach one another, and can also be reached by other containers using the service name as the hostname.

By default, the application's network name is based on the Compose project name, which in turn defaults to the name of the directory containing the docker-compose.yml file. To change the project name, use the --project-name flag or the COMPOSE_PROJECT_NAME environment variable.

For example, if an application lives in a directory called myapp and its docker-compose.yml looks like this:

version: '2'
services:
  web:
    build: .
    ports:
      - "8000:8000"
  db:
    image: postgres

When we run docker-compose up, the following steps are executed:

1. A network called myapp_default is created.
2. A container is created for the web service and joins myapp_default under the name web.
3. A container is created for the db service and joins myapp_default under the name db.
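As an illustrative sketch only (not Compose's actual implementation, and using classic v1-style underscore names; newer Compose releases use hyphens), the default naming scheme can be modeled like this:

```python
import os

def project_name(compose_dir, override=None):
    """Project name: explicit override, else the directory name (lowercased)."""
    return (override or os.path.basename(compose_dir)).lower()

def default_network(project):
    """Compose names the application's default network <project>_default."""
    return f"{project}_default"

def container_name(project, service, index=1):
    """Classic Compose v1-style container name: <project>_<service>_<index>."""
    return f"{project}_{service}_{index}"

project = project_name("/home/user/myapp")
print(default_network(project))        # myapp_default
print(container_name(project, "web"))  # myapp_web_1
print(container_name(project, "db"))   # myapp_db_1
```

Passing --project-name (modeled here by the hypothetical `override` parameter) changes every derived name at once.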
Containers can then reach each other using the service name (web or db) as the hostname. For example, the web service can use postgres://db:5432 to reach the db container.

Updating containers

When the configuration of a service changes, you can run docker-compose up again to apply it. Compose removes the old container and creates a new one. The new container joins the network with a different IP address but keeps the same name. Connections to the old container are closed, and clients that look the name up again will find the new container and connect to it.

links

As mentioned above, services can reach each other by service name by default. links additionally lets us define an alias and use that alias to reach another service. For example:

version: '2'
services:
  web:
    build: .
    links:
      - "db:database"
  db:
    image: postgres

With this, the web service can use either db or database as the hostname of the db service.

Specifying a custom network

In some scenarios the default network configuration does not meet our needs; in that case we can use the networks key to customize it. networks lets us create more complex network topologies and specify custom network drivers and driver options. It also lets us connect services to externally created networks that are not managed by Compose. Below, we define two custom networks:

version: '2'
services:
  proxy:
    build: ./proxy
    networks:
      - front
  app:
    build: ./app
    networks:
      - front
      - back
  db:
    image: postgres
    networks:
      - back
networks:
  front:
    # Use a custom driver
    driver: custom-driver-1
  back:
    # Use a custom driver which takes special options
    driver: custom-driver-2
    driver_opts:
      foo: "1"
      bar: "2"

The proxy service is isolated from the db service; each uses its own network, while the app service can communicate with both.
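The reachability rule this topology illustrates — two services can talk if and only if they share at least one network — can be sketched with a small helper. The service-to-network mapping below simply mirrors the proxy/app/db example:

```python
# Service -> networks mapping, mirroring the proxy/app/db example above
services = {
    "proxy": {"front"},
    "app": {"front", "back"},
    "db": {"back"},
}

def can_communicate(a, b):
    """Two services can reach each other iff they share at least one network."""
    return bool(services[a] & services[b])

print(can_communicate("proxy", "app"))  # True  (both on front)
print(can_communicate("app", "db"))     # True  (both on back)
print(can_communicate("proxy", "db"))   # False (no shared network)
```

This is only a model of the isolation behavior, not anything Compose runs; but it is a handy way to reason about larger topologies before writing the YAML.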
As this example shows, the networks key makes it easy to both isolate and connect services.

Configuring the default network

Besides defining custom networks, we can also customize the configuration of the default network:

version: '2'
services:
  web:
    build: .
    ports:
      - "8000:8000"
  db:
    image: postgres
networks:
  default:
    # Use a custom driver
    driver: custom-driver-1

This way, a custom network driver can be specified for the application's default network.

Using an existing network

In some scenarios we do not need to create a new network, only to join an existing one. For that, use the external option:

networks:
  default:
    external:
      name: my-pre-existing-network

Several ways to link external containers with Docker Compose

In Docker, linking containers is a very common operation: it gives one container access to another container's network services without exposing the required ports on the Docker host. Docker Compose supports this conveniently as well. However, if the containers to be linked are not defined in the same docker-compose.yml, things get a little more involved.

Without Docker Compose, linking two containers with the --link parameter is straightforward. Taking the nginx image as an example:

docker run --rm --name test1 -d nginx                 # Start an instance test1
docker run --rm --name test2 --link test1 -d nginx    # Start an instance test2 linked to test1

This establishes a link from test2 to test1, so the services in test1 can be accessed from within test2. With Docker Compose it is even easier. Using the same nginx image, edit docker-compose.yml as follows:

version: "3"
services:
  test2:
    image: nginx
    depends_on:
      - test1
    links:
      - test1
  test1:
    image: nginx

The effect is the same as a link established with a plain docker run command.
That, however, is the ideal case. What if the containers to link are not defined in the same docker-compose.yml file? What if a container defined in a docker-compose.yml file needs to link to a container started with docker run? For these two typical situations, here are the solutions I have tested personally.

Method 1: Put the containers to be linked on the same external network

We again use the nginx image to simulate the scenario: suppose we need to link two nginx containers (test1 and test2), each managed by its own Compose project, so that test2 can reach the services provided by test1. We will use ping as the connectivity test. First, define the docker-compose.yml for container test1:

version: "3"
services:
  test1:
    image: nginx
    container_name: test1
    networks:
      - default
      - app_net
networks:
  app_net:
    external: true

The file for container test2 is essentially the same, except for an additional external_links entry. Note that recent Docker releases no longer require external_links to link containers on a shared network; the container's built-in DNS service resolves the names correctly. If you need to stay compatible with an older version of Docker, test2's docker-compose.yml is:

version: "3"
services:
  test2:
    image: nginx
    container_name: test2
    networks:
      - default
      - app_net
    external_links:
      - test1
networks:
  app_net:
    external: true

Otherwise, test2's definition is exactly the same as test1's (with the names changed), and external_links is unnecessary. For related discussion, see the Stack Overflow question "docker-compose + external container". As you can see, both containers declare the same external network app_net in their definitions.
Therefore, before starting the two containers we must create the external network:

docker network create app_net

After that, start both containers with docker-compose up -d, then run docker exec -it test2 ping test1. You will see output like the following:

$ docker exec -it test2 ping test1
PING test1 (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: icmp_seq=0 ttl=64 time=0.091 ms
64 bytes from 172.18.0.2: icmp_seq=1 ttl=64 time=0.146 ms
64 bytes from 172.18.0.2: icmp_seq=2 ttl=64 time=0.150 ms
64 bytes from 172.18.0.2: icmp_seq=3 ttl=64 time=0.145 ms
64 bytes from 172.18.0.2: icmp_seq=4 ttl=64 time=0.126 ms
64 bytes from 172.18.0.2: icmp_seq=5 ttl=64 time=0.147 ms

This proves that the two containers are linked successfully. Conversely, test2 can also be pinged from inside test1. If we start a third container (test3) with docker run --rm --name test3 -d nginx, without specifying a network, and later need to link it with test1 or test2, we can connect it to the external network manually:

docker network connect app_net test3

Now all three containers can reach each other.

Method 2: Change the network mode of the containers to be linked

You can also link containers by setting the network mode of the containers to bridge and declaring the external containers to link against (external_links). Unlike method 1, where containers on the same external network can reach each other in both directions, this access is one-way.
Taking the nginx image as an example again, if container instance nginx1 needs to access container instance nginx2, nginx2's docker-compose.yml is:

version: "3"
services:
  nginx2:
    image: nginx
    container_name: nginx2
    network_mode: bridge

Correspondingly, nginx1's docker-compose.yml is:

version: "3"
services:
  nginx1:
    image: nginx
    external_links:
      - nginx2
    container_name: nginx1
    network_mode: bridge

Note that external_links cannot be omitted here, and nginx1 must be started after nginx2; otherwise you may get an error that container nginx2 cannot be found. Next we test connectivity with ping:

$ docker exec -it nginx1 ping nginx2   # nginx1 to nginx2
PING nginx2 (172.17.0.4): 56 data bytes
64 bytes from 172.17.0.4: icmp_seq=0 ttl=64 time=0.141 ms
64 bytes from 172.17.0.4: icmp_seq=1 ttl=64 time=0.139 ms
64 bytes from 172.17.0.4: icmp_seq=2 ttl=64 time=0.145 ms

$ docker exec -it nginx2 ping nginx1   # nginx2 to nginx1
ping: unknown host

This confirms that the connection is one-way. In practice you can choose either linking method as needed. The second is less work, but I recommend the first: putting containers on a shared external network is friendlier than changing the network mode in terms of both connectivity and flexibility.

Appendix: docker-compose.yml reference and Compose/Docker compatibility

There are three major versions of the Compose file format: 1, 2.x and 3.x. The current mainstream is 3.x, which supports Docker 1.13.0 and above. Common parameters:

version   # Specify the version of the compose file
services  # Define all service information.
The first-level keys under services are the service names.

build          # Path to the build context, or an object with context, dockerfile and args
  context      # Path containing the Dockerfile
  dockerfile   # Name of the Dockerfile under the context directory (default: Dockerfile)
  args         # Build arguments passed to the Dockerfile during the build (equivalent to docker container build --build-arg)
  cache_from   # New in v3.2; list of images to use as build cache (equivalent to docker container build --cache-from)
  labels       # New in v3.3; set image metadata (equivalent to docker container build --label)
  shm_size     # New in v3.5; size of the container's /dev/shm partition during the build (equivalent to docker container build --shm-size)
command          # Override the default command executed after the container starts; supports shell form and [] (exec) form
configs          # Don't know how to use
cgroup_parent    # Don't know how to use
container_name   # Specify the container name (equivalent to docker run --name)
credential_spec  # Don't know how to use
deploy           # v3 and above; configuration related to deploying and running services. The deploy section is used by docker stack, and docker stack depends on docker swarm
  endpoint_mode  # New in v3.3; specifies how the service is exposed
    vip          # Docker assigns the service a virtual IP (VIP) that clients use to reach it
    dnsrr        # DNS round robin: a DNS query for the service name returns a list of container IPs, and clients connect to one of them directly
  labels         # Labels for the service; set on the service only
  mode           # Deployment mode
    global       # Exactly one container per cluster node
    replicated   # A user-specified number of containers in the cluster (default)
  placement      # Don't know how to use
  replicas       # When mode is replicated, the number of container replicas
  resources      # Resource limits
    limits       # Upper bounds for the container
      cpus: "0.5"   # The container may use at most 50% of one CPU
      memory: 50M   # The container may use at most 50M of memory
    reservations # Resources reserved for the container (available at any time)
      cpus: "0.2"   # Reserve 20% of one CPU for this container
      memory: 20M   # Reserve 20M of memory for this container
  restart_policy # Container restart policy; replaces the restart parameter
    condition    # When to restart (accepts three values)
      none       # Never attempt to restart
      on-failure # Restart only if the application inside the container fails
      any        # Always attempt to restart (default)
    delay        # Interval between restart attempts (default 0s)
    max_attempts # Maximum number of restart attempts (default: retry forever)
    window       # How long to wait after a restart before deciding whether it succeeded (default 0s)
  update_config  # Rolling-update configuration
    parallelism  # Number of containers updated at a time
    delay        # Interval between updating one group of containers and the next
    failure_action # What to do when an update fails
      continue   # Continue the update
      rollback   # Roll back the update
      pause      # Pause the update (default)
    monitor      # How long to monitor each task for failure after updating it (ns|us|ms|s|m|h, default 0)
    max_failure_ratio # Failure rate tolerated during the update (default 0)
    order        # New in v3.4; order of operations during the update
      stop-first  # The old task is stopped before the new one starts (default)
      start-first # The new task starts first; running tasks briefly overlap
  rollback_config # New in v3.7; rollback strategy used when an update_config update fails
    parallelism  # Number of containers rolled back at a time; 0 rolls back all at once
    delay        # Interval between rolling back one group and the next (default 0)
    failure_action # What to do when a rollback fails
      continue   # Continue the rollback
      pause      # Pause the rollback
    monitor      # How long to monitor each rollback task for failure (ns|us|ms|s|m|h, default 0)
    max_failure_ratio # Failure rate tolerated during the rollback (default 0)
    order        # Order of operations during the rollback
      stop-first  # The old task is stopped before the new one starts (default)
      start-first # The new task starts first; running tasks briefly overlap

Note: the following sub-options are supported by docker-compose up and docker-compose run but not by docker stack deploy: security_opt, container_name, devices, tmpfs, stop_signal, links, cgroup_parent, network_mode, external_links, restart, build, userns_mode, sysctls.

devices     # List of device mappings (equivalent to docker run --device)
depends_on  # Define the order in which containers start (resolves dependencies between containers; ignored by swarm deployment in v3)

Example: docker-compose up starts services in dependency order. In the example below, the redis and db services start before the web service. By default, starting the web service with docker-compose up web also starts redis and db, because the dependency is declared in the configuration file:

version: '3'
services:
  web:
    build: .
    depends_on:
      - db
      - redis
  redis:
    image: redis
  db:
    image: postgres

dns         # Set DNS server addresses (equivalent to docker run --dns)
dns_search  # Set DNS search domains (equivalent to docker run --dns-search)
tmpfs       # v2 and above; mount a directory into the container as a temporary file system (equivalent to docker run --tmpfs; ignored by swarm deployment)
entrypoint  # Override the container's default entrypoint (equivalent to docker run --entrypoint)
env_file    # Read variables from the given file(s) and set them as environment variables in the container. Accepts a single value or a list of files. If the same variable appears in several files, later files override earlier ones, and values from environment override values from env_file. File format: RACK_ENV=development
environment # Set environment variables; values here override env_file (equivalent to docker run --env)
expose      # Expose ports without mapping them to the host, like the EXPOSE instruction in a Dockerfile
external_links # Connect to containers not defined in this docker-compose.yml or not managed by Compose (containers started by docker run; ignored by swarm deployment in v3)
extra_hosts # Add host entries to /etc/hosts in the container (equivalent to docker run --add-host)
healthcheck # v2.1 and above; define a container health check, like the HEALTHCHECK instruction in a Dockerfile
  test      # Command that checks container health. Must be a string or a list; if a list, the first item must be NONE, CMD or CMD-SHELL.
  # If test is a string, it is equivalent to CMD-SHELL followed by that string.
    NONE      # Disable the health check
    CMD       # test: ["CMD", "curl", "-f", "http://localhost"]
    CMD-SHELL # test: ["CMD-SHELL", "curl -f http://localhost || exit 1"] or test: curl -f https://localhost || exit 1
  interval: 1m30s   # Interval between checks
  timeout: 10s      # Timeout for running the check command
  retries: 3        # Number of retries before marking the container unhealthy
  start_period: 40s # New in v3.4 and above; grace period after container start
  disable: true     # true or false; disables the health check, same as test: NONE
image      # The Docker image to run; can be a remote repository image or a local image
init       # New in v3.7; true or false, run an init process inside the container that receives signals and forwards them to the process
isolation  # Container isolation technology; on Linux only the default value is supported
labels     # Add metadata to the container using Docker labels, like LABEL in a Dockerfile
links      # Link to containers in other services. This is a legacy Docker option, superseded by user-defined networks, and may eventually be deprecated (ignored by swarm deployment)
logging    # Configure the container's logging service
  driver   # Logging driver, default json-file (equivalent to docker run --log-driver)
  options  # Driver-specific parameters (equivalent to docker run --log-opt)
    max-size # Maximum size of a single log file.
             # When this value is reached, the log is rolled over
    max-file # Number of rotated log files to retain
network_mode # Specify the network mode (equivalent to docker run --net; ignored by swarm deployment)
networks   # Attach the container to the given networks (equivalent to docker network connect). networks can appear both as a top-level key of the compose file and as a sub-key of a service
  aliases  # Containers on the same network can use the service name or this alias to reach the service's containers
  ipv4_address # Static IPv4 address
  ipv6_address # Static IPv6 address

Example:

version: '3.7'
services:
  test:
    image: nginx:1.14-alpine
    container_name: mynginx
    command: ifconfig
    networks:
      app_net:                    # Use the app_net network defined under the top-level networks key below
        ipv4_address: 172.16.238.10
networks:
  app_net:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/24

pid: 'host'  # Share the host's process (PID) namespace
ports        # Map ports between the host and the container. Two syntax forms are supported:

SHORT syntax examples:

- "3000"            # Expose container port 3000; Docker maps it to a random unoccupied host port
- "3000-3005"       # Expose container ports 3000 to 3005; Docker maps each to a random unoccupied host port
- "8000:8000"       # Map container port 8000 to host port 8000
- "9090-9091:8080-8081"
- "127.0.0.1:8001:8001"           # Bind the mapping to a specific host address
- "127.0.0.1:5000-5010:5000-5010"
- "6060:6060/udp"   # Specify the protocol

LONG syntax example (new in v3.2):

ports:
  - target: 80       # container port
    published: 8080  # host port
    protocol: tcp    # protocol type
    mode: host       # host publishes the port on each node; ingress load-balances swarm mode ports

secrets      # Don't know how to use
security_opt # Override the default labeling scheme for each container (ignored by swarm deployment)
stop_grace_period # How many seconds the container waits after receiving SIGTERM before it is killed (default 10s)
stop_signal  # Signal used to stop the container (default SIGTERM, equivalent to kill PID; SIGKILL is equivalent to kill -9 PID; ignored by swarm deployment)
sysctls      # Set kernel parameters inside the container (ignored by swarm deployment)
ulimits      # Set container ulimits
userns_mode  # Disable user namespaces for this service when the Docker daemon has them enabled (ignored by swarm deployment)
volumes      # Define volume mappings between host and container. Like networks, it can appear both as a service sub-key and as a top-level key of the compose file; to share a volume across services, define it at the top level and reference it in each service.
SHORT syntax examples:

volumes:
  - /var/lib/mysql                # Mount /var/lib/mysql in the container onto an anonymous volume on the host
  - /opt/data:/var/lib/mysql      # Map /var/lib/mysql in the container to /opt/data on the host
  - ./cache:/tmp/cache            # Map /tmp/cache in the container to the cache directory relative to the compose file
  - ~/configs:/etc/configs/:ro    # Map a host directory into the container, read-only
  - datavolume:/var/lib/mysql     # datavolume is a volume defined under the top-level volumes key, referenced here by name

LONG syntax example (new in v3.2):

version: "3.2"
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - type: volume        # Mount type; must be bind, volume or tmpfs
        source: mydata      # volume name
        target: /data       # container path
        volume:             # Extra options; the key must match the value of type
          nocopy: true      # volume option: disable copying data from the container when the volume is created
      - type: bind          # volume mode only needs the container path (the host path is generated); bind needs both the container path and the host path
        source: ./static
        target: /opt/app/static
        read_only: true     # Mount the file system read-only
volumes:
  mydata:                   # Defined at the top level, so it can be referenced by all services

restart  # Container restart policy (ignored by swarm deployment; use deploy's restart_policy instead of restart in swarm)
  no         # Never restart the container automatically (default)
  always     # Always restart the container, no matter what
  on-failure # Restart the container only when it exits with an error

Other options: domainname, hostname, ipc, mac_address, privileged, read_only, shm_size, stdin_open, tty, user, working_dir. Each of these takes a single value, analogous to the corresponding docker run parameter.

Acceptable values for durations:

2.5s
10s
1m30s
2h32m
5h34m56s

Time units: us, ms, s, m, h

Acceptable values for sizes:

2b
1024kb
2048k
300m
1gb

Units: b, k, m, g or kb, mb, gb

networks  # Top-level key defining networks
  driver  # Network driver; in most cases bridge on a single host or overlay in Swarm
    bridge  # Docker's default driver for networks on a single host
    overlay # Creates a named network spanning multiple nodes
    host    # Share the host's network namespace (equivalent to docker run --net=host)
    none    # Equivalent to docker run --net=none
  driver_opts # v3.2 and above; parameters passed to the driver, driver-specific
  attachable  # Used with the overlay driver; if true, standalone containers as well as services can attach to the network, and an attached standalone container can communicate with services and standalone containers attached from other Docker daemons
  ipam        # Custom IPAM configuration; an object whose properties are all optional
    driver    # IPAM driver, bridge or default
    config    # Configuration items
      subnet  # Subnet in CIDR format, defining the network's address range
  external    # External network; if true, docker-compose up will not try to create it and raises an error if it does not exist
  name        # v3.5 and above; set a name for the network

File format example:

version: "3"
services:
  redis:
    image: redis:alpine
    ports:
      - "6379"
    networks:
      - frontend
    deploy:
      replicas: 2
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
  db:
    image: postgres:9.4
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - backend
    deploy:
      placement:
        constraints: [node.role == manager]
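The duration and size formats listed above can be parsed with a short helper. This is an illustrative sketch of the grammar, not Compose's own parser, and it assumes binary (1024-based) size multiples:

```python
import re

# Units accepted by Compose duration values, in seconds
TIME_UNITS = {"us": 1e-6, "ms": 1e-3, "s": 1, "m": 60, "h": 3600}
# Units accepted by Compose size values, in bytes (assuming 1024-based multiples)
SIZE_UNITS = {"b": 1, "k": 1024, "kb": 1024, "m": 1024**2, "mb": 1024**2,
              "g": 1024**3, "gb": 1024**3}

def parse_duration(value):
    """Parse strings like '1m30s' or '2h32m' into seconds."""
    total = 0.0
    for amount, unit in re.findall(r"(\d+(?:\.\d+)?)(us|ms|s|m|h)", value):
        total += float(amount) * TIME_UNITS[unit]
    return total

def parse_size(value):
    """Parse strings like '300m' or '1gb' into bytes (no unit = bytes)."""
    amount, unit = re.fullmatch(r"(\d+(?:\.\d+)?)\s*([a-zA-Z]*)", value).groups()
    return int(float(amount) * SIZE_UNITS.get(unit.lower(), 1))

print(parse_duration("1m30s"))  # 90.0
print(parse_size("2b"))         # 2
```

Helpers like this are useful when generating compose files programmatically, e.g. to sanity-check a stop_grace_period or a memory limit before writing it out.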