Detailed tutorial on building an ETCD cluster for Docker microservices

etcd is a highly available key-value store mainly used for shared configuration and service discovery. It is developed and maintained by CoreOS and was inspired by ZooKeeper and Doozer. It is written in Go and uses the Raft consensus algorithm for log replication to guarantee strong consistency. Raft is a consensus algorithm from Stanford designed for log replication in distributed systems; it achieves consistency through leader election, and any node may become the leader. Google's container cluster management system Kubernetes, the open-source PaaS platform Cloud Foundry, and CoreOS's Fleet all make extensive use of etcd.

Features of etcd

Simple: a curl-accessible HTTP+JSON API (v2) and a well-defined, user-facing gRPC API (v3)

Secure: optional SSL client certificate authentication

Fast: benchmarked at around 1,000 write operations per second per instance

Reliable: uses Raft to ensure strong consistency

There are three main ways for etcd to build a highly available cluster:

1) Static discovery: the members of the etcd cluster are known in advance, and the address of each node is specified directly at startup.
2) etcd discovery: an existing etcd cluster is used as the data interaction point; the new cluster discovers its members through that existing cluster while bootstrapping.
3) DNS discovery: the addresses of the other nodes are obtained through DNS SRV queries.
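For orientation, each mode corresponds to different startup flags. The lines below are only an illustrative sketch (the node addresses, discovery URL, and example.com domain are placeholders); the rest of this tutorial uses static discovery.

Static discovery:  etcd -initial-cluster "node1=http://10.0.0.1:2380,node2=http://10.0.0.2:2380" -initial-cluster-state new ...
etcd discovery:    curl https://discovery.etcd.io/new?size=3 (returns a discovery URL), then start each member with etcd -discovery <that-url> ...
DNS discovery:     etcd -discovery-srv example.com ...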

The basic environment for this setup

Underlying OS: CentOS 7
Docker version: Docker version 18.06.1-ce
IP:
Server A: 192.167.0.168
Server B: 192.167.0.170
Server C: 192.167.0.172
First, download the latest etcd image on each server

# docker pull quay.io/coreos/etcd

Due to limited machine resources, I ran all three containers on one machine and created a subnet on that machine, so the three containers share one network.

# docker network create --subnet=192.167.0.0/16 etcdnet
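If you want to confirm that the network exists and has the expected subnet before starting any containers, Docker can show its configuration:

# docker network inspect etcdnet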

Next, I used two methods to create the cluster: 1. add the three servers to the cluster one by one; 2. add all three servers to the cluster at the same time. In the following, commands marked with A are executed on machine A, and similarly for B and C.

1. Add servers to the cluster one by one

A: Run an etcd instance in a container on server A, named autumn-client0. Note that its cluster state is new and only its own address appears in "-initial-cluster".

# docker run -d -p 2379:2379 -p 2380:2380 --net etcdnet --ip 192.167.0.168 --name etcd0 quay.io/coreos/etcd /usr/local/bin/etcd --name autumn-client0 -advertise-client-urls http://192.167.0.168:2379 -listen-client-urls http://0.0.0.0:2379 -initial-advertise-peer-urls http://192.167.0.168:2380 -listen-peer-urls http://0.0.0.0:2380 -initial-cluster-token etcd-cluster -initial-cluster "autumn-client0=http://192.167.0.168:2380" -initial-cluster-state new
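Before adding further members, it is worth checking that this first instance is actually up. A quick sanity check (the /version endpoint is part of etcd's HTTP API):

# docker ps --filter name=etcd0
# curl http://127.0.0.1:2379/version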

Parameter Description

--data-dir specifies the node's data directory. The data includes the node ID, cluster ID, initial cluster configuration, and snapshot files. If --wal-dir is not specified, WAL files are stored here as well.
--wal-dir specifies the directory for the node's WAL files. If this parameter is specified, WAL files are stored separately from the other data files.
--name node name
--initial-advertise-peer-urls the peer URLs this node advertises to the rest of the cluster
--listen-peer-urls the URLs to listen on for communication with other nodes
--advertise-client-urls the client URLs advertised to clients, i.e. the service URLs
--initial-cluster-token the cluster token (ID)
--initial-cluster all nodes in the initial cluster
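Note that the docker run commands in this tutorial do not mount --data-dir to the host, so each member's data lives only inside its container. If you want the data to survive container re-creation, a hedged sketch is to mount a host directory (the /opt/etcd0-data path here is just an example) and point --data-dir at it, keeping the remaining flags exactly as in the command above:

# docker run -d -p 2379:2379 -p 2380:2380 --net etcdnet --ip 192.167.0.168 -v /opt/etcd0-data:/etcd-data --name etcd0 quay.io/coreos/etcd /usr/local/bin/etcd --name autumn-client0 --data-dir /etcd-data ...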

Configuration file description. The same settings can also be supplied through a configuration/environment file, for example:

# [member]
# Node name
ETCD_NAME=node1
# Data storage location
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
# Addresses to listen on for traffic from other etcd instances
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
# Addresses to listen on for client traffic
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
# [cluster]
# Peer URLs advertised to the other etcd instances
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://node1:2380"
# if you use different ETCD_NAME (eg test), set ETCD_INITIAL_CLUSTER value for this name, ie "test=http://..."
# Nodes in the initial cluster
ETCD_INITIAL_CLUSTER="node1=http://node1:2380,node2=http://node2:2380,etcd2=http://etcd2:2380"
# Initial cluster state; "new" means creating a new cluster
ETCD_INITIAL_CLUSTER_STATE="new"
# Initial cluster token
ETCD_INITIAL_CLUSTER_TOKEN="mritd-etcd-cluster"
# Client URLs advertised to clients
ETCD_ADVERTISE_CLIENT_URLS="http://node1:2379,http://node1:4001"
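etcd reads these ETCD_* settings from environment variables, so when running under Docker one convenient way to use such a file is --env-file. This is a hedged sketch: the file name etcd.conf is made up, comment lines are ignored by Docker, but quoted values are passed literally, so the surrounding quotes should be dropped first:

# docker run -d -p 2379:2379 -p 2380:2380 --env-file ./etcd.conf --name etcd0 quay.io/coreos/etcd /usr/local/bin/etcd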

A: On server A's etcd service, add the new node 192.167.0.170 by calling the members API:

# curl http://127.0.0.1:2379/v2/members -XPOST -H "Content-Type: application/json" -d '{"peerURLs": ["http://192.167.0.170:2480"]}'
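The same members API can be used to confirm the change or to undo it; the member ID in the DELETE example below is a placeholder that you would take from the listing output:

# curl http://127.0.0.1:2379/v2/members
# curl http://127.0.0.1:2379/v2/members/<member-id> -XDELETE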

B: Run an etcd instance in a container on server B, named autumn-client1. Note that its state is existing, and "-initial-cluster" contains both the previous node's address and its own.

# docker run -d -p 2479:2479 -p 2480:2480 --net etcdnet --ip 192.167.0.170 --name etcd1 quay.io/coreos/etcd /usr/local/bin/etcd --name autumn-client1 -advertise-client-urls http://192.167.0.170:2479 -listen-client-urls http://0.0.0.0:2479 -initial-advertise-peer-urls http://192.167.0.170:2480 -listen-peer-urls http://0.0.0.0:2480 -initial-cluster-token etcd-cluster -initial-cluster "autumn-client0=http://192.167.0.168:2380,autumn-client1=http://192.167.0.170:2480" -initial-cluster-state existing

A: On server A's etcd service, add the new node 192.167.0.172 by calling the members API:

# curl http://127.0.0.1:2379/v2/members -XPOST -H "Content-Type: application/json" -d '{"peerURLs": ["http://192.167.0.172:2580"]}'

C: Run an etcd instance on server C, named autumn-client2. Note that its state is existing; "-initial-cluster" contains the addresses of all previous nodes as well as its own.

# docker run -d -p 2579:2579 -p 2580:2580 --net etcdnet --ip 192.167.0.172 --name etcd2 quay.io/coreos/etcd /usr/local/bin/etcd --name autumn-client2 -advertise-client-urls http://192.167.0.172:2579 -listen-client-urls http://0.0.0.0:2579 -initial-advertise-peer-urls http://192.167.0.172:2580 -listen-peer-urls http://0.0.0.0:2580 -initial-cluster-token etcd-cluster -initial-cluster "autumn-client0=http://192.167.0.168:2380,autumn-client1=http://192.167.0.170:2480,autumn-client2=http://192.167.0.172:2580" -initial-cluster-state existing

2. Add all servers to the cluster at the same time

("-initial-cluster" contains the addresses of all nodes, and the state is new)

Execute on A

# docker run -d -p 2379:2379 -p 2380:2380 --restart=always --net etcdnet --ip 192.167.0.168 --name etcd0 quay.io/coreos/etcd /usr/local/bin/etcd --name autumn-client0 -advertise-client-urls http://192.167.0.168:2379 -listen-client-urls http://0.0.0.0:2379 -initial-advertise-peer-urls http://192.167.0.168:2380 -listen-peer-urls http://0.0.0.0:2380 -initial-cluster-token etcd-cluster -initial-cluster autumn-client0=http://192.167.0.168:2380,autumn-client1=http://192.167.0.170:2480,autumn-client2=http://192.167.0.172:2580 -initial-cluster-state new

Execute on B

# docker run -d -p 2479:2479 -p 2480:2480 --restart=always --net etcdnet --ip 192.167.0.170 --name etcd1 quay.io/coreos/etcd /usr/local/bin/etcd --name autumn-client1 -advertise-client-urls http://192.167.0.170:2479 -listen-client-urls http://0.0.0.0:2479 -initial-advertise-peer-urls http://192.167.0.170:2480 -listen-peer-urls http://0.0.0.0:2480 -initial-cluster-token etcd-cluster -initial-cluster autumn-client0=http://192.167.0.168:2380,autumn-client1=http://192.167.0.170:2480,autumn-client2=http://192.167.0.172:2580 -initial-cluster-state new

Execute on C

# docker run -d -p 2579:2579 -p 2580:2580 --restart=always --net etcdnet --ip 192.167.0.172 --name etcd2 quay.io/coreos/etcd /usr/local/bin/etcd --name autumn-client2 -advertise-client-urls http://192.167.0.172:2579 -listen-client-urls http://0.0.0.0:2579 -initial-advertise-peer-urls http://192.167.0.172:2580 -listen-peer-urls http://0.0.0.0:2580 -initial-cluster-token etcd-cluster -initial-cluster autumn-client0=http://192.167.0.168:2380,autumn-client1=http://192.167.0.170:2480,autumn-client2=http://192.167.0.172:2580 -initial-cluster-state new
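Since the three commands differ only in the member name, IP, and port numbers, they can also be generated by a small shell loop. This is just a convenience sketch equivalent to the three commands above, intended for the single-host setup used here:

CLUSTER="autumn-client0=http://192.167.0.168:2380,autumn-client1=http://192.167.0.170:2480,autumn-client2=http://192.167.0.172:2580"
i=0
for spec in "192.167.0.168 2379 2380" "192.167.0.170 2479 2480" "192.167.0.172 2579 2580"; do
  set -- $spec
  ip=$1; cport=$2; pport=$3
  docker run -d -p ${cport}:${cport} -p ${pport}:${pport} --restart=always --net etcdnet --ip ${ip} \
    --name etcd${i} quay.io/coreos/etcd /usr/local/bin/etcd --name autumn-client${i} \
    -advertise-client-urls http://${ip}:${cport} -listen-client-urls http://0.0.0.0:${cport} \
    -initial-advertise-peer-urls http://${ip}:${pport} -listen-peer-urls http://0.0.0.0:${pport} \
    -initial-cluster-token etcd-cluster -initial-cluster ${CLUSTER} -initial-cluster-state new
  i=$((i+1))
done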

Cluster verification. A cluster created by either method can be verified as follows:

1. Verify the cluster members. Query the member list on each machine in the cluster; the results should be identical.

[root@localhost ~]# curl -L http://127.0.0.1:2379/v2/members
{"members":[{"id":"1a661f2b9997ba39","name":"autumn-client0","peerURLs":["http://192.167.0.168:2380"],"clientURLs":["http://192.168.7.168:2379"]},{"id":"4932c8ea462e079c","name":"autumn-client2","peerURLs":["http://192.167.0.172:2580"],"clientURLs":["http://192.167.0.172:2579"]},{"id":"c1dbdde07e61741e","name":"autumn-client1","peerURLs":["http://192.167.0.170:2480"],"clientURLs":[http://192.167.0.170:2479]}]}

2. Write data on one machine and read it on the other machines; the results should be the same. Execute on A:

[root@localhost ~]# curl -L http://127.0.0.1:2379/v2/keys/message -XPUT -d value="Hello autumn"
{"action":"set","node":{"key":"/message","value":"Hello autumn","modifiedIndex":13,"createdIndex":13},"prevNode":{"key":"/message","value":"Hello world1","modifiedIndex":11,"createdIndex":11}}

Execute on B and C

[root@localhost ~]# curl -L http://127.0.0.1:2379/v2/keys/message
{"action":"get","node":{"key":"/message","value":"Hello autumn","modifiedIndex":13,"createdIndex":13}}

etcd API references

  Basic operation api: https://github.com/coreos/etcd/blob/6acb3d67fbe131b3b2d5d010e00ec80182be4628/Documentation/v2/api.md
 
  Cluster configuration api: https://github.com/coreos/etcd/blob/6acb3d67fbe131b3b2d5d010e00ec80182be4628/Documentation/v2/members_api.md
 
  Authentication API: https://github.com/coreos/etcd/blob/6acb3d67fbe131b3b2d5d010e00ec80182be4628/Documentation/v2/auth_api.md
 
  Configuration items: https://github.com/coreos/etcd/blob/master/Documentation/op-guide/configuration.md
  
  https://coreos.com/etcd/docs/latest/runtime-configuration.html
  https://coreos.com/etcd/docs/latest/clustering.html
  https://coreos.com/etcd/docs/latest/
  https://coreos.com/etcd/docs/latest/admin_guide.html#disaster-recovery
  etcd exposes a standard RESTful interface and supports both the HTTP and HTTPS protocols.

Service Registration and Discovery


Traditional service calls usually read IP addresses from a configuration file. This has many limitations: it is inflexible, cannot perceive the status of the service, and makes load balancing of service calls complex. Introducing etcd greatly simplifies the problem. The flow involves the following steps.

After a service starts, it registers itself with etcd, reporting its listening port, current weight factor, and other information, and sets a TTL on this registration.

The service periodically re-reports this information (weight factor and so on) within the TTL window, keeping the registration alive.

When a client calls the service, it fetches the registration information from etcd, makes the call, and monitors whether the service changes (implemented through the watch mechanism).
When a new service instance is added, the watch picks up the change and adds the instance to the candidate list; when an instance goes down, its TTL expires, the client detects the change and removes the instance from the call list. This achieves dynamic scaling of the service.

On the client side, the weight factors obtained from each change can be used to implement a weighted call strategy, ensuring load balancing across the backend service instances. (A minimal curl sketch of this flow follows below.)
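A minimal sketch of this flow using the v2 key API. The service name, JSON payload, key paths, and TTL values below are made up for illustration; a real implementation would perform the registration and periodic refresh from the service process itself.

Register the instance with a 30-second TTL (run by the service at startup):

# curl -L http://127.0.0.1:2379/v2/keys/services/myapp/instance1 -XPUT -d value='{"host":"192.167.0.168","port":8080,"weight":10}' -d ttl=30

Refresh the TTL periodically without changing the value:

# curl -L http://127.0.0.1:2379/v2/keys/services/myapp/instance1 -XPUT -d ttl=30 -d refresh=true -d prevExist=true

Client side: list all instances, then watch the directory for changes:

# curl -L "http://127.0.0.1:2379/v2/keys/services/myapp?recursive=true"
# curl -L "http://127.0.0.1:2379/v2/keys/services/myapp?wait=true&recursive=true"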

This concludes the detailed tutorial on building an etcd cluster for Docker microservices. For more on building etcd clusters for Docker microservices, please follow the other related articles on 123WORDPRESS.COM!
