Detailed explanation of building a continuous integration cluster service based on docker-swarm

Preface

This article is simply a record of my own build process. If you run into problems while following along, feel free to discuss them.

To simulate a cluster environment on the local machine (macOS), VirtualBox (vb) and docker-machine are used. The machines for the overall continuous-integration setup are as follows:

1. Service nodes: three manager nodes and one worker node. Managers need more resources, so the manager configuration should be as high as possible. The fault tolerance of swarm manager nodes is (N-1)/2, where N is the number of managers: with 3 managers, the failure of one manager can be tolerated. Official algorithm description: Raft consensus in swarm mode.

2. Local image repository registry: used to store all service docker images that need to be deployed.

https://docs.docker.com/registry/deploying/

Because the swarm mechanism is used, there is no need to handle service discovery and load balancing for inter-service communication yourself (replacing the original consul & registrator approach).

3. The ops (operation and maintenance) node for building images:

That is, the operation and maintenance machine. A single node suffices. It is mainly responsible for building and pushing images. You can host a private gitlab repository on ops and maintain the build scripts there. The machine does not need a high configuration, but network bandwidth should be as high as possible.
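The (N-1)/2 fault-tolerance rule above can be sketched as a quick calculation (a hypothetical helper, just to illustrate the arithmetic):

```shell
# Raft quorum rule: a swarm with N managers tolerates floor((N - 1) / 2)
# manager failures while keeping a quorum.
tolerated_failures() {
  echo $(( ($1 - 1) / 2 ))
}

tolerated_failures 3   # 1: with 3 managers, one manager may fail
tolerated_failures 5   # 2: with 5 managers, two may fail
```

Note that a fourth manager buys no extra fault tolerance over three, which is why odd manager counts are the useful ones.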

Use docker-machine to simulate a cluster environment

Create a registry node

docker-machine create -d virtualbox --virtualbox-memory "512" registry

The --engine-registry-mirror flag can be used to set registry mirror (accelerator) addresses.

Create manager and worker nodes

manager

docker-machine create -d virtualbox --virtualbox-memory "800" manager1

worker:

docker-machine create -d virtualbox --virtualbox-memory "800" worker1 
docker-machine create -d virtualbox --virtualbox-memory "800" worker2 
docker-machine create -d virtualbox --virtualbox-memory "800" worker3

Create an ops node

docker-machine create -d virtualbox --virtualbox-memory "512" ops

View the machine list status

docker-machine ls

Create a registry service

Log in to the registry machine.

docker-machine ssh registry

Create a registry service.

docker run -d -p 5000:5000 --restart=always --name registry \
 -v `pwd`/data:/var/lib/registry \
 registry:2

The command sets the -v volume option so that the registry's image data is not lost each time the container restarts. For storage-type containers such as registry and mysql, setting a volume is recommended. For better scalability, you can also back the image repository with another storage driver, such as Alibaba Cloud's OSS.

Run docker ps and you will see the started registry service. For better extensibility, you can also serve it under your own domain name and re-run it with authentication added.
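As a hedged sketch of what that re-run with authentication might look like (following the registry deployment docs; the user name, password, and auth directory are placeholders — and note the htpasswd entrypoint worked in the registry:2 images current at the time, while newer images no longer ship htpasswd):

```shell
# Placeholder credentials; generate an htpasswd file for basic auth
mkdir -p auth
docker run --rm --entrypoint htpasswd registry:2 \
  -Bbn testuser testpassword > auth/htpasswd

# Remove the old container, then re-run the registry with auth enabled
docker rm -f registry
docker run -d -p 5000:5000 --restart=always --name registry \
  -v `pwd`/auth:/auth \
  -e REGISTRY_AUTH=htpasswd \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  -v `pwd`/data:/var/lib/registry \
  registry:2

# Clients then log in before pushing or pulling
docker login -u testuser -p testpassword localhost:5000
```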

To make it easier to manage containers, you can use the docker-compose component. Install:

curl -L "https://github.com/docker/compose/releases/download/1.9.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

You can also start directly after writing the compose file:

docker-compose up -d
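The compose file mentioned above might look like this minimal sketch (an assumption on my part, mirroring the docker run command used earlier):

```yaml
version: '2'
services:
  registry:
    image: registry:2
    restart: always
    ports:
      - "5000:5000"
    volumes:
      - ./data:/var/lib/registry
```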

Local repository push image

Now you can pull an image and tag it for the local registry, for example:

docker pull lijingyao0909/see:1.0.3 && docker tag lijingyao0909/see:1.0.3 localhost:5000/see:1.0.3

Then execute the push command:

docker push localhost:5000/see:1.0.3

This pushes the image to the registry service. The most direct way to inspect the stored image data is to look at the local volume directory — the data directory in this example.
For more convenient registry management, you can also try a registry UI image such as hyper/docker-registry-web.

docker run -it -p 8080:8080 --name registry-web --link registry:registry-srv -e REGISTRY_URL=http://registry-srv:5000/v2 -e REGISTRY_NAME=localhost:5000 hyper/docker-registry-web

Then visit hostname:8080 to see a simple image-list UI.

https issues

If you follow the above steps in a local test, you may hit the following problem when pushing or pulling images from the other vb machines:

Error response from daemon: Get https://registry:5000/v1/_ping: dial tcp 218.205.57.154:5000: i/o timeout

The solution is to trust the insecure registry. First modify "/var/lib/boot2docker/profile":

sudo vi /var/lib/boot2docker/profile

Add:

DOCKER_OPTS="--insecure-registry <host-name>:5000"

which in this example is:

DOCKER_OPTS="--insecure-registry registry:5000"

The registry machine's hostname is registry. Running the docker info command, you can now see:

Insecure Registries:
 registry:5000
 127.0.0.0/8

On the other worker and manager machines, the --insecure-registry option must likewise be set before images can be pulled from the private registry. After modifying it, restart the vb.
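Since the same profile line has to land on every machine, a small helper plus a docker-machine loop can script it — a sketch under the assumption that all nodes are boot2docker VMs like the ones above (the helper name is mine):

```shell
# Build the boot2docker profile line that trusts our insecure registry
insecure_registry_opt() {
  echo "DOCKER_OPTS=\"--insecure-registry $1:5000\""
}

insecure_registry_opt registry
# -> DOCKER_OPTS="--insecure-registry registry:5000"

# Untested sketch of applying it to every node and restarting the engine:
# for m in manager1 worker1 worker2 worker3; do
#   docker-machine ssh "$m" "echo '$(insecure_registry_opt registry)' \
#     | sudo tee -a /var/lib/boot2docker/profile && sudo /etc/init.d/docker restart"
# done
```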

After restarting, try pulling again on the manager:

docker pull registry:5000/see:1.0.3

You can see that the repository is reached and the image pulled. Note that this example uses the machine name, registry, rather than an IP address, so when pulling images you need to map the IP to the machine name in each vb's /etc/hosts file. Machine names are easier to remember; the best approach, of course, is to access the repository through a domain name.

References

Deploy the registry service

Create ops service

The swarm service cluster pulls images directly from the registry and starts application services from them. Service images are stored in the registry, while the source code can be maintained on the ops machine. The ops virtual-box created earlier can host the gitlab service; for startup parameters, refer to Deploy GitLab Docker images.

First start a gitlab container:

docker run --detach \
 --hostname gitlab.lijingyao.com \
 --publish 443:443 --publish 80:80 --publish 22:22 \
 --name gitlab \
 --restart always \
 --volume `pwd`/gitlab/config:/etc/gitlab \
 --volume `pwd`/gitlab/logs:/var/log/gitlab \
 --volume `pwd`/gitlab/data:/var/opt/gitlab \
 gitlab/gitlab-ce:8.14.4-ce.0

Using git private repository

Because port 80 is bound, once gitlab is up, visit: http://machine-host/

When you first visit gitlab, it redirects you to reset the password; the new password belongs to the root account, and afterwards you can register other git users.
If you have a domain name service, or locally bind gitlab.lijingyao.com to this virtualbox's IP, you can access gitlab.lijingyao.com directly. In a real production environment with a fixed public IP and its own DNS, there is no need to bind a host; this is only a local test, so the host is bound temporarily.

swarm

The service in this example is a simple springboot + gradle project; the service image can be pulled from docker hub, see the service image. The gradle task packages the image and pushes it directly to the registry repository. Locally, you can run the gradle task in the project directory to push to the vb registry, after which the image can be pulled from it. Now we are ready to initialize the swarm cluster.
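The gradle packaging and push step might boil down to something like the following (a hypothetical sketch — the actual task names live in the springboot-restful-exam build scripts, which are not reproduced here):

```shell
# Build the springboot jar, bake it into an image, and push it to the
# private registry (tag and Dockerfile location are assumptions)
./gradlew build
docker build -t registry:5000/see:1.0.3 .
docker push registry:5000/see:1.0.3
```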

Now the machines in the entire vb cluster are as follows:

$ docker-machine ls

NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
haproxy - virtualbox Running tcp://192.168.99.103:2376 v1.12.3 
manager1 - virtualbox Running tcp://192.168.99.100:2376 v1.12.3 
ops - virtualbox Running tcp://192.168.99.106:2376 v1.12.3 
registry - virtualbox Running tcp://192.168.99.107:2376 v1.12.3 
worker1 - virtualbox Running tcp://192.168.99.101:2376 v1.12.3 
worker2 - virtualbox Running tcp://192.168.99.102:2376 v1.12.3 
worker3 - virtualbox Running tcp://192.168.99.105:2376 v1.12.3

Then use docker-machine ssh manager1 to log in to the manager1 machine.

Initialize the swarm manager node

Initialize swarm on the manager1 machine. This initialized machine is the manager of the swarm. Execute:

 docker swarm init --advertise-addr 192.168.99.100

You will see the following execution output:

Swarm initialized: current node (03x5vnxmk2gc43i0d7xpycvjg) is now a manager.

To add a worker to this swarm, run the following command:

 docker swarm join \
 --token SWMTKN-1-5ru6lyco3upj7oje6hidug3erqczok84wk7bekzfaca4uv51r9-22bcjhkbxnclmw3nl3ui8601l \
 192.168.99.100:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

The generated token is the key with which other nodes join the swarm cluster. If you forget the token, run this on manager1:

$docker swarm join-token manager

to view the current token value. The official recommendation is to rotate the join tokens at least every 6 months. Rotate (here, the worker token) with:

$docker swarm join-token --rotate worker

Adding a worker node

Log in to worker1, worker2, and worker3 respectively and execute the join command.

Before joining, check Docker's network facilities. Run:

$ docker network ls
NETWORK ID NAME DRIVER SCOPE
4b7fe1416322 bridge bridge local    
06ab6f3352b0 host host local    
eebd5c8e0d5d none null local

According to the command after manager1 is initialized, execute:

docker swarm join \
 --token SWMTKN-1-5ru6lyco3upj7oje6hidug3erqczok84wk7bekzfaca4uv51r9-22bcjhkbxnclmw3nl3ui8601l \
 192.168.99.100:2377

At this point, running docker network ls on any worker node shows an additional overlay network whose scope is swarm.

After all three workers have joined, the node status can be viewed on manager1:

docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
03x5vnxmk2gc43i0d7xpycvjg * manager1 Ready Active Leader  
2y5wrndibe8c8sqv6851vrlgp worker1 Ready Active Reachable
dwgol1uinkpsybigc1gm5jgsv worker2 Ready Active  
etgyky6zztrapucm59yx33tg1 worker3 Ready Active Reachable

A Reachable manager status means the node is also a manager node. This is because we ran the following on worker1 and worker3 respectively:

docker node promote worker1
docker node promote worker3

worker1 and worker3 can now also execute swarm commands, and when manager1 shuts down, one of them is elected the new leader. To strip a node of its manager status, use the demote command; afterwards the node becomes a normal worker node.

docker node demote worker1 worker3

Other states of swarm nodes

A swarm node can be set to the drain state; a drained node will not run any tasks.

Set a node to be unavailable:

docker node update --availability drain worker1

There are three availability states: Active, Pause, and Drain. Pause means existing tasks keep running but no new tasks are accepted.

To remove the worker1 node from the swarm, first run docker swarm leave on the node to be removed (worker1), then run docker node rm worker1 on the manager.

Creating a swarm service

In this example, swarm deploys a springboot-based rest api service; the repository is springboot-restful-exam. The created service is named deftsee, maps port 8080 on the host to port 80 in the container, and runs 4 replicas.

docker service create \
 --replicas 4 \
 --name deftsee \
 --update-delay 10s \
 --publish 8080:80 \
 lijingyao0909/see:1.0.3

After the service is created, you can view its status:

docker@manager1:~$ docker service ls
ID NAME REPLICAS IMAGE COMMAND
a6s5dpsyz7st deftsee 4/4 lijingyao0909/see:1.0.3 

REPLICAS shows the number of running containers for the service; 0/4 would mean none have started. To see the detailed state on each node, use docker service ps <service-name>:

docker@manager1:~$ docker service ps deftsee
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
8lsdkf357lk0nmdeqk7bi33mp deftsee.1 lijingyao0909/see:1.0.3 worker2 Running Running 5 minutes ago 
cvqm5xn7t0bveo4btfjsm04jp deftsee.2 lijingyao0909/see:1.0.3 manager1 Running Running 7 minutes ago 
6s5km76w2vxmt0j4zgzi4xi5f deftsee.3 lijingyao0909/see:1.0.3 worker1 Running Running 5 minutes ago 
4cl9vnkssedpvu2wtzu6rtgxl deftsee.4 lijingyao0909/see:1.0.3 worker3 Running Running 6 minutes ago

You can see that the tasks are evenly distributed across the four nodes. Next, scale the deftsee service:

docker@manager1:~$ docker service scale deftsee=6
deftsee scaled to 6
docker@manager1:~$ docker service ps deftsee
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
8lsdkf357lk0nmdeqk7bi33mp deftsee.1 lijingyao0909/see:1.0.3 worker2 Running Running 8 minutes ago 
cvqm5xn7t0bveo4btfjsm04jp deftsee.2 lijingyao0909/see:1.0.3 manager1 Running Running 10 minutes ago 
6s5km76w2vxmt0j4zgzi4xi5f deftsee.3 lijingyao0909/see:1.0.3 worker1 Running Running 8 minutes ago 
4cl9vnkssedpvu2wtzu6rtgxl deftsee.4 lijingyao0909/see:1.0.3 worker3 Running Running 9 minutes ago 
71uv51uwvso4l340xfkbacp2i deftsee.5 lijingyao0909/see:1.0.3 manager1 Running Running 5 seconds ago 
4r2q7q782ab9fp49mdriq0ssk deftsee.6 lijingyao0909/see:1.0.3 worker2 Running Running 5 seconds ago

lijingyao0909/see:1.0.3 is an image in a public dockerhub repository; it is pulled when the service is created, which is slow overall. You can instead use the private repository and pull the image directly from the registry machine. Remove the service with docker service rm deftsee, then recreate it from the registry:

docker service create \
 --replicas 6 \
 --name deftsee \
 --update-delay 10s \
 --publish 8080:80 \
 registry:5000/see:1.0.4

At this point, log in to any worker machine and view the running container:

docker@worker2:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
89d4f588290b registry:5000/see:1.0.4 "/bin/sh -c 'java -Dc" About a minute ago Up About a minute 8080/tcp deftsee.1.eldpgb1aqtf9v49cxolydfjm9

To update the service, use the update command to change the image version, and the service is rolled out gradually. The --update-delay 10s set earlier controls the delay between node updates.

docker service update --image registry:5000/see:1.0.5 deftsee

Restart a node service

Shut down: docker node update --availability drain worker1

Reactivate: docker node update --availability active worker1

Update service port

Updating a service's port restarts the service (the original service is shut down, then the service is recreated and started):

 docker service update \
 --publish-add <PUBLISHED-PORT>:<TARGET-PORT> \
 <SERVICE>

docker@manager1:~$ docker service update \
 --publish-add 8099:8080 \
 deftsee

docker@manager1:~$ docker service ps deftsee
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
3xoe34msrht9eqv7eplnmlrz5 deftsee.1 registry:5000/see:1.0.4 manager1 Running Running 39 seconds ago  
eldpgb1aqtf9v49cxolydfjm9 \_ deftsee.1 registry:5000/see:1.0.4 worker2 Shutdown Shutdown 39 seconds ago  
9u4fh3mi5kxb14y6gih5d8tqv deftsee.2 registry:5000/see:1.0.4 manager1 Running Running about a minute ago 
0skgr5fx4xtt6y71yliksoft0 \_ deftsee.2 registry:5000/see:1.0.4 worker1 Shutdown Shutdown about a minute ago 
8hposdkqe92k7am084z6kt1j0 deftsee.3 registry:5000/see:1.0.4 worker3 Running Running about a minute ago 
c5vhx1wx0q8mxaweaq0mia6n7 \_ deftsee.3 registry:5000/see:1.0.4 manager1 Shutdown Shutdown about a minute ago 
9se1juxiinmetuaccgkjc3rr2 deftsee.4 registry:5000/see:1.0.4 worker1 Running Running about a minute ago 
4wofho0axvrjildxhckl52s41 \_ deftsee.4 registry:5000/see:1.0.4 worker3 Shutdown Shutdown about a minute ago

Service Authentication and Networking

After the example service starts, it can be accessed directly at ip:port, e.g. http://192.168.99.100:8099/see, and requests are distributed across the running nodes. This is swarm's overlay network layer: the networks of the nodes are interconnected, and swarm does the load balancing. On top of swarm's LB, you can also build your own overlay network; all nodes attached to the created overlay can communicate over it, but the network option must be specified when the service is created.

$ docker network create \
 --driver overlay \
 --subnet 10.0.9.0/24 \
 --opt encrypted \
 my-network

$ docker service create \
 --name deftsee \
 --publish 8099:80 \
 --replicas 4 \
 --network my-network \
 -l com.df.serviceDomain=deftsee.com \
 -l com.df.notify=true \
 lijingyao0909/see:1.0.3

--network my-network specifies the docker network the service attaches to; a network named my-network must be created on the swarm nodes. On this basis, a consul & haproxy style service-discovery and LB mechanism could likewise be built inside swarm.

When a network is specified for a service, the tasks running on the swarm must also be on that network in order to communicate with the service. A node that has not joined swarm mode, or that runs no task attached to the network, will not see it: docker network ls will not list the network there. Services are linked to the network with the --network my-network flag at creation time; docker network inspect my-network shows the containers attached on that node under Containers in the output.

When a service is attached to the network, swarm assigns the service a VIP on that network, and swarm's internal LB distributes requests automatically; there is no need to address individual task ports. In other words, containers attached to the same network can reach the service by its service name, because every container joining the network shares a DNS mapping propagated via the gossip protocol (the VIP is bound to the service name as a DNS alias).
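A quick way to see this name-based discovery working is to probe the service by name from inside a container on the same network — a hedged sketch, assuming the deftsee service above is running on my-network and the image ships wget:

```shell
# Hypothetical check: from inside any container attached to my-network,
# the service name resolves to the service VIP and swarm load-balances
# the request (port 80 is the deftsee container-side port)
docker exec -it <deftsee-container-id> wget -qO- http://deftsee/see
```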

View the VIP network information of the service:

$ docker service inspect \
 --format='{{json .Endpoint.VirtualIPs}}' \
 deftsee

Output: [{"NetworkID":"dn05pshfagohpebgonkhj5kxi","Addr":"10.255.0.6/16"}]

Swarm management

To keep manager nodes available (heartbeat mechanism, leader election), managers can be set not to accept tasks, which saves manager resources and isolates them from the task environment:

docker node update --availability drain <NODE>

Back up the raft state under /var/lib/docker/swarm
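A hedged sketch of that backup on a manager node (stopping the engine first keeps the raft state consistent; the archive path is a placeholder):

```shell
# Stop the engine so the raft files are not written mid-copy
sudo /etc/init.d/docker stop

# Archive the swarm state (membership, certs, raft log)
sudo tar czf /tmp/swarm-state-backup.tgz /var/lib/docker/swarm

# Bring the engine back up
sudo /etc/init.d/docker start
```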

Clean up unavailable nodes

docker node demote <NODE>
docker node rm <NODE>

Re-join a node to the swarm

$ docker node demote <NODE>
$ docker node rm <NODE>
$ docker swarm join ...

Specify a fixed IP address during initialization (init --advertise-addr); worker nodes can use dynamic IPs.

References

Swarm mode

The above is the full content of this article; I hope it is helpful for your study.
