etcd is a highly available key-value store used mainly for shared configuration and service discovery. It is developed and maintained by CoreOS and was inspired by ZooKeeper and Doozer. It is written in Go and replicates its log through the Raft consensus algorithm to guarantee strong consistency. Raft is a consensus algorithm developed at Stanford, designed for log replication in distributed systems; it reaches agreement through leader elections, and any node may become the leader. Google's container cluster manager Kubernetes, the open source PaaS platform Cloud Foundry, and CoreOS's Fleet all make heavy use of etcd.

Features of etcd

Simple: curl-accessible user API (HTTP + JSON) and a well-defined, user-facing API (gRPC)
Secure: optional SSL client certificate authentication
Fast: benchmarked at 1000 write operations per second per instance
Reliable: uses Raft to ensure consistency

There are three main ways for etcd to bootstrap a high-availability cluster:

1) Static discovery: the members of the etcd cluster are known in advance, and each node's address is specified directly at startup.
2) etcd dynamic discovery: an existing etcd cluster serves as the discovery endpoint, and the new cluster registers itself there while bootstrapping.
3) DNS dynamic discovery: each node obtains the other members' addresses through DNS queries.

Basic environment for this build

Host OS: CentOS 7

# docker pull quay.io/coreos/etcd

Because machine resources are limited, I run all three containers on one host and create a dedicated subnet on that machine, so the three containers share one network:

# docker network create --subnet=192.167.0.0/16 etcdnet

I then create the cluster in two ways: 1. adding the three servers to the cluster one by one; 2. adding all three servers to the cluster at the same time. Commands marked with A below are executed on machine A, and likewise for B and C.

1. Add servers to the cluster one by one

A: Run an etcd instance named autumn-client0 on container/server A. Note that its state is "new" and "-initial-cluster" contains only its own address:

# docker run -d -p 2379:2379 -p 2380:2380 --net etcdnet --ip 192.167.0.168 --name etcd0 quay.io/coreos/etcd /usr/local/bin/etcd --name autumn-client0 -advertise-client-urls http://192.167.0.168:2379 -listen-client-urls http://0.0.0.0:2379 -initial-advertise-peer-urls http://192.167.0.168:2380 -listen-peer-urls http://0.0.0.0:2380 -initial-cluster-token etcd-cluster -initial-cluster "autumn-client0=http://192.167.0.168:2380" -initial-cluster-state new

Parameter description

--data-dir: the node's data directory. It holds the node ID, cluster ID, initial cluster configuration, and snapshot files; if --wal-dir is not specified, the WAL files are stored here as well.
--wal-dir: the directory for the node's WAL files. If specified, the WAL files are stored separately from the other data files.
--name: the node name.
--initial-advertise-peer-urls: the peer URLs this node advertises to the rest of the cluster.
--listen-peer-urls: the URLs this node listens on for communication with other nodes.
--advertise-client-urls: the URLs advertised to clients, i.e. the service URLs.
--initial-cluster-token: the cluster token (cluster ID).
--initial-cluster: all nodes in the initial cluster.
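Before registering more members, it is worth a quick sanity check that the first node is serving. Assuming the port mapping above, etcd's built-in version and health endpoints can be queried from the host (a healthy node answers /health with {"health":"true"}):

# curl http://127.0.0.1:2379/version
# curl http://127.0.0.1:2379/health

If either call hangs or errors, inspect the container with docker logs etcd0 before continuing.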
Configuration file description, for example:

# [member]
# Node name
ETCD_NAME=node1
# Data storage location
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
# Address to listen on for peer traffic from other etcd instances
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
# Address to listen on for client traffic
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
# Peer address advertised to the other etcd instances
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://node1:2380"
# if you use different ETCD_NAME (eg test), set ETCD_INITIAL_CLUSTER value for this name, ie "test=http://..."
# Addresses of the nodes in the initial cluster
ETCD_INITIAL_CLUSTER="node1=http://node1:2380,node2=http://node2:2380,node3=http://node3:2380"
# Initial cluster state; "new" means creating a new cluster
ETCD_INITIAL_CLUSTER_STATE="new"
# Initial cluster token
ETCD_INITIAL_CLUSTER_TOKEN="mritd-etcd-cluster"
# Client address advertised to clients
ETCD_ADVERTISE_CLIENT_URLS="http://node1:2379,http://node1:4001"

A: On server A's etcd service, register the new node 192.167.0.170 through the members API:

# curl http://127.0.0.1:2379/v2/members -XPOST -H "Content-Type: application/json" -d '{"peerURLs": ["http://192.167.0.170:2480"]}'

B: Run an etcd instance named autumn-client1 on container/server B. Note that its state is "existing", and "-initial-cluster" contains the previous node's address as well as its own:

# docker run -d -p 2479:2479 -p 2480:2480 --net etcdnet --ip 192.167.0.170 --name etcd1 quay.io/coreos/etcd /usr/local/bin/etcd --name autumn-client1 -advertise-client-urls http://192.167.0.170:2479 -listen-client-urls http://0.0.0.0:2479 -initial-advertise-peer-urls http://192.167.0.170:2480 -listen-peer-urls http://0.0.0.0:2480 -initial-cluster-token etcd-cluster -initial-cluster "autumn-client0=http://192.167.0.168:2380,autumn-client1=http://192.167.0.170:2480" -initial-cluster-state existing

A: On server A's etcd service, register the new node 192.167.0.172 through the members API:

# curl http://127.0.0.1:2379/v2/members -XPOST -H "Content-Type: application/json" -d '{"peerURLs": ["http://192.167.0.172:2580"]}'

C: Run an etcd instance named autumn-client2 on container/server C. Note that its state is "existing", and "-initial-cluster" contains the addresses of all previous nodes plus its own:

# docker run -d -p 2579:2579 -p 2580:2580 --net etcdnet --ip 192.167.0.172 --name etcd2 quay.io/coreos/etcd /usr/local/bin/etcd --name autumn-client2 -advertise-client-urls http://192.167.0.172:2579 -listen-client-urls http://0.0.0.0:2579 -initial-advertise-peer-urls http://192.167.0.172:2580 -listen-peer-urls http://0.0.0.0:2580 -initial-cluster-token etcd-cluster -initial-cluster "autumn-client0=http://192.167.0.168:2380,autumn-client1=http://192.167.0.170:2480,autumn-client2=http://192.167.0.172:2580" -initial-cluster-state existing
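The members API that grows the cluster above can also shrink it. A sketch of removing a member again: look up its ID, then issue a DELETE (the ID below is the one shown for autumn-client1 in the verification output later in this article; use the ID reported by your own cluster):

# curl http://127.0.0.1:2379/v2/members
# curl http://127.0.0.1:2379/v2/members/c1dbdde07e61741e -XDELETE

etcd drops the node from the Raft configuration; the removed instance should then be stopped, and its data directory discarded before any rejoin.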
2. Add all servers to the cluster at the same time ("-initial-cluster" contains the addresses of all nodes, and the state is "new")

Execute on A:

# docker run -d -p 2379:2379 -p 2380:2380 --restart=always --net etcdnet --ip 192.167.0.168 --name etcd0 quay.io/coreos/etcd /usr/local/bin/etcd --name autumn-client0 -advertise-client-urls http://192.167.0.168:2379 -listen-client-urls http://0.0.0.0:2379 -initial-advertise-peer-urls http://192.167.0.168:2380 -listen-peer-urls http://0.0.0.0:2380 -initial-cluster-token etcd-cluster -initial-cluster autumn-client0=http://192.167.0.168:2380,autumn-client1=http://192.167.0.170:2480,autumn-client2=http://192.167.0.172:2580 -initial-cluster-state new

Execute on B:

# docker run -d -p 2479:2479 -p 2480:2480 --restart=always --net etcdnet --ip 192.167.0.170 --name etcd1 quay.io/coreos/etcd /usr/local/bin/etcd --name autumn-client1 -advertise-client-urls http://192.167.0.170:2479 -listen-client-urls http://0.0.0.0:2479 -initial-advertise-peer-urls http://192.167.0.170:2480 -listen-peer-urls http://0.0.0.0:2480 -initial-cluster-token etcd-cluster -initial-cluster autumn-client0=http://192.167.0.168:2380,autumn-client1=http://192.167.0.170:2480,autumn-client2=http://192.167.0.172:2580 -initial-cluster-state new

Execute on C:

# docker run -d -p 2579:2579 -p 2580:2580 --restart=always --net etcdnet --ip 192.167.0.172 --name etcd2 quay.io/coreos/etcd /usr/local/bin/etcd --name autumn-client2 -advertise-client-urls http://192.167.0.172:2579 -listen-client-urls http://0.0.0.0:2579 -initial-advertise-peer-urls http://192.167.0.172:2580 -listen-peer-urls http://0.0.0.0:2580 -initial-cluster-token etcd-cluster -initial-cluster autumn-client0=http://192.167.0.168:2380,autumn-client1=http://192.167.0.170:2480,autumn-client2=http://192.167.0.172:2580 -initial-cluster-state new

Cluster verification. A cluster created by either method can be verified as follows:

1. Verify the cluster members. Query the members on each machine in the cluster; the results should be identical:

[root@localhost ~]# curl -L http://127.0.0.1:2379/v2/members
{"members":[{"id":"1a661f2b9997ba39","name":"autumn-client0","peerURLs":["http://192.167.0.168:2380"],"clientURLs":["http://192.167.0.168:2379"]},{"id":"4932c8ea462e079c","name":"autumn-client2","peerURLs":["http://192.167.0.172:2580"],"clientURLs":["http://192.167.0.172:2579"]},{"id":"c1dbdde07e61741e","name":"autumn-client1","peerURLs":["http://192.167.0.170:2480"],"clientURLs":["http://192.167.0.170:2479"]}]}

2. Add data on one machine and read it back on the other machines; the results should be the same.

Execute on A:

[root@localhost ~]# curl -L http://127.0.0.1:2379/v2/keys/message -XPUT -d value="Hello autumn"
{"action":"set","node":{"key":"/message","value":"Hello autumn","modifiedIndex":13,"createdIndex":13},"prevNode":{"key":"/message","value":"Hello world1","modifiedIndex":11,"createdIndex":11}}

Execute on B and C (with the single-host layout used here, query each node through its own client port: 2479 for B, 2579 for C):

[root@localhost ~]# curl -L http://127.0.0.1:2479/v2/keys/message
{"action":"get","node":{"key":"/message","value":"Hello autumn","modifiedIndex":13,"createdIndex":13}}
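Two follow-ups worth doing at this point: ask a node for an explicit health verdict, and clean up the test key through the same keys API. A sketch, assuming the container names used above (the quay.io/coreos/etcd image also bundles the etcdctl client; output wording varies by version):

# curl http://127.0.0.1:2379/health
# docker exec etcd0 etcdctl cluster-health
# curl -L http://127.0.0.1:2379/v2/keys/message -XDELETE

A healthy cluster reports every member as healthy, and the DELETE answers with an {"action":"delete",...} document echoing the removed node.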
etcd API interfaces

Basic operations API: https://github.com/coreos/etcd/blob/6acb3d67fbe131b3b2d5d010e00ec80182be4628/Documentation/v2/api.md
Cluster configuration API: https://github.com/coreos/etcd/blob/6acb3d67fbe131b3b2d5d010e00ec80182be4628/Documentation/v2/members_api.md
Authentication API: https://github.com/coreos/etcd/blob/6acb3d67fbe131b3b2d5d010e00ec80182be4628/Documentation/v2/auth_api.md
Configuration options: https://github.com/coreos/etcd/blob/master/Documentation/op-guide/configuration.md

Related documentation:
https://coreos.com/etcd/docs/latest/runtime-configuration.html
https://coreos.com/etcd/docs/latest/clustering.html
https://coreos.com/etcd/docs/latest/
https://coreos.com/etcd/docs/latest/admin_guide.html#disaster-recovery

etcd exposes a standard RESTful interface and supports both the HTTP and HTTPS protocols.
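For HTTPS, the server must be started with certificate flags and advertise https:// client URLs, and clients then verify against the CA. A rough sketch with placeholder certificate paths (see the security documentation linked above for the full procedure; the trailing ... stands for the clustering flags shown earlier):

# etcd --name autumn-client0 --cert-file=/etc/ssl/etcd/server.crt --key-file=/etc/ssl/etcd/server.key -advertise-client-urls https://192.167.0.168:2379 -listen-client-urls https://0.0.0.0:2379 ...
# curl --cacert /etc/ssl/etcd/ca.crt https://192.167.0.168:2379/v2/keys/message

Adding --client-cert-auth together with --trusted-ca-file additionally enforces the optional SSL client certificate authentication mentioned in the feature list.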
Service Registration and Discovery

After a service starts, it registers itself with etcd, reporting its listening address, current weight factor, and other metadata, and attaches a TTL to that record. Within the TTL window the service periodically re-reports its weight factor and the rest of its metadata. When a client calls the service, it fetches this information from etcd, makes the call, and keeps monitoring the service for changes (implemented with etcd's watch mechanism). Using the weight factors received on each change, the client applies a weighted call strategy on its own side, which keeps load balanced across the backend service instances.
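Sketched with the v2 HTTP API used throughout this article, the pattern looks roughly like this (the key path, TTL, and weight value are illustrative assumptions, not a fixed convention):

Register: publish the service address and weight with a 30-second TTL:

# curl http://127.0.0.1:2379/v2/keys/services/web/10.0.0.5:8080 -XPUT -d value='{"weight":10}' -d ttl=30

Heartbeat: renew the TTL periodically, well inside the 30-second window (refresh=true, supported by newer v2 releases, renews the TTL without firing a watch event):

# curl http://127.0.0.1:2379/v2/keys/services/web/10.0.0.5:8080 -XPUT -d ttl=30 -d refresh=true -d prevExist=true

Discover and watch: a client lists the registered instances, then blocks on a watch that returns as soon as anything under the directory changes:

# curl http://127.0.0.1:2379/v2/keys/services/web
# curl 'http://127.0.0.1:2379/v2/keys/services/web?wait=true&recursive=true'

If a service dies, it stops refreshing, the TTL expires, the key disappears, and the watch fires, so clients drop the dead instance without any explicit deregistration.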