This article uses docker-compose to deploy a three-node Elasticsearch 6.6.2 cluster.

Note that for the archive distributions the config directory defaults to $ES_HOME/config (in the official Docker image it is /usr/share/elasticsearch/config); its location is set by the ES_PATH_CONF environment variable.

Preparation

Install Docker and docker-compose

The daocloud mirror is used here to speed up the installation:

```
# docker
curl -sSL https://get.daocloud.io/docker | sh

# docker-compose
curl -L \
  https://get.daocloud.io/docker/compose/releases/download/1.23.2/docker-compose-`uname -s`-`uname -m` \
  > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

# Verify the installation
docker-compose -v
```

Data directory

```
# Create the data/log directories; three nodes are deployed here
mkdir /opt/elasticsearch/data/{node0,node1,node2} -p
mkdir /opt/elasticsearch/logs/{node0,node1,node2} -p
cd /opt/elasticsearch

# Permissions (this part is a bit puzzling: privileged mode alone does not help, so 0777 is used)
chmod 0777 data/* -R && chmod 0777 logs/* -R

# Prevent the JVM from reporting an error at startup
echo vm.max_map_count=262144 >> /etc/sysctl.conf
sysctl -p
```

docker-compose orchestration service

Create an orchestration file. Parameter description:

- cluster.name: the cluster name.
- node.name / node.master / node.data: the node name, whether the node can act as a master node, and whether it stores data.
- bootstrap.memory_lock: lock the process's memory to keep it from being swapped out, improving performance.
- http.cors.enabled / http.cors.allow-origin: enable CORS so the Head plugin can access the cluster.
- ES_JAVA_OPTS: JVM heap size configuration.
- discovery.zen.ping.unicast.hosts / discovery.zen.minimum_master_nodes: recent versions no longer support multicast discovery, so the unicast host list must be set manually for node discovery and failover; minimum_master_nodes is set to 2 here (a majority of the three master-eligible nodes) to avoid split-brain.

Of course, you can also mount your own configuration file (a sketch of such a file is shown after the multi-machine note below); inside the container it lives at /usr/share/elasticsearch/config/elasticsearch.yml:

```
volumes:
  - path/to/local/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
```

docker-compose.yml

```
version: '3'
services:
  elasticsearch_n0:
    image: elasticsearch:6.6.2
    container_name: elasticsearch_n0
    privileged: true
    environment:
      - cluster.name=elasticsearch-cluster
      - node.name=node0
      - node.master=true
      - node.data=true
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch_n0,elasticsearch_n1,elasticsearch_n2"
      - "discovery.zen.minimum_master_nodes=2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./data/node0:/usr/share/elasticsearch/data
      - ./logs/node0:/usr/share/elasticsearch/logs
    ports:
      - 9200:9200
  elasticsearch_n1:
    image: elasticsearch:6.6.2
    container_name: elasticsearch_n1
    privileged: true
    environment:
      - cluster.name=elasticsearch-cluster
      - node.name=node1
      - node.master=true
      - node.data=true
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch_n0,elasticsearch_n1,elasticsearch_n2"
      - "discovery.zen.minimum_master_nodes=2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./data/node1:/usr/share/elasticsearch/data
      - ./logs/node1:/usr/share/elasticsearch/logs
    ports:
      - 9201:9200
  elasticsearch_n2:
    image: elasticsearch:6.6.2
    container_name: elasticsearch_n2
    privileged: true
    environment:
      - cluster.name=elasticsearch-cluster
      - node.name=node2
      - node.master=true
      - node.data=true
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch_n0,elasticsearch_n1,elasticsearch_n2"
      - "discovery.zen.minimum_master_nodes=2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./data/node2:/usr/share/elasticsearch/data
      - ./logs/node2:/usr/share/elasticsearch/logs
    ports:
      - 9202:9200
```

Here the hosts' 9200, 9201 and 9202 ports are published, one HTTP port per node. If multi-machine deployment is required, also map the transport port 9300 and point discovery.zen.ping.unicast.hosts at the host addresses:

```
# For example, one of the hosts is 192.168.1.100
...
    - "discovery.zen.ping.unicast.hosts=192.168.1.100:9300,192.168.1.101:9300,192.168.1.102:9300"
...
    ports:
      ...
      - 9300:9300
```
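If the config-file approach mentioned above is preferred over environment variables, the mounted file could carry the same settings. The following is only a minimal sketch assuming it mirrors the environment block of the compose file; it is not part of the original article. The heap size would still be configured through ES_JAVA_OPTS (or jvm.options), since it is not an elasticsearch.yml setting.

```
# Hypothetical elasticsearch.yml mirroring the environment variables above (sketch only)
cluster.name: elasticsearch-cluster
node.name: node0
node.master: true
node.data: true
bootstrap.memory_lock: true
http.cors.enabled: true
http.cors.allow-origin: "*"
discovery.zen.ping.unicast.hosts: ["elasticsearch_n0", "elasticsearch_n1", "elasticsearch_n2"]
discovery.zen.minimum_master_nodes: 2
```

Since node.name differs per node, in practice this would mean one file per node, or keeping node.name as an environment variable.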
- "discovery.zen.ping.unicast.hosts=192.168.1.100:9300,192.168.1.101:9300,192.168.1.102:9300" ... ports: ... - 9300:9300 Create and start the service [root@localhost elasticsearch]# docker-compose up -d [root@localhost elasticsearch]# docker-compose ps Name Command State Ports -------------------------------------------------------------------------------------------- elasticsearch_n0 /usr/local/bin/docker-entr ... Up 0.0.0.0:9200->9200/tcp, 9300/tcp elasticsearch_n1 /usr/local/bin/docker-entr ... Up 0.0.0.0:9201->9200/tcp, 9300/tcp elasticsearch_n2 /usr/local/bin/docker-entr ... Up 0.0.0.0:9202->9200/tcp, 9300/tcp #Startup failed to view errors [root@localhost elasticsearch]# docker-compose logs #At most, it is some access rights/JVM vm.max_map_count setting issues Check the cluster status Visit ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name 172.25.0.3 36 98 79 3.43 0.88 0.54 mdi * node0 172.25.0.2 48 98 79 3.43 0.88 0.54 mdi - node2 172.25.0.4 42 98 51 3.43 0.88 0.54 mdi - node1 Verify Failover Check the status through the cluster interface Simulate the master node going offline, the cluster starts electing a new master node, and migrates and reshards the data. [root@localhost elasticsearch]# docker-compose stop elasticsearch_n0 Stopping elasticsearch_n0 ... done Cluster status (note that the original master node is offline after changing the http port). The downed node is still in the cluster and will be removed after waiting for a period of time without recovery. ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name 172.25.0.2 57 84 5 0.46 0.65 0.50 mdi - node2 172.25.0.4 49 84 5 0.46 0.65 0.50 mdi * node1 172.25.0.3 mdi-node0 Wait for a while ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name 172.25.0.2 44 84 1 0.10 0.33 0.40 mdi - node2 172.25.0.4 34 84 1 0.10 0.33 0.40 mdi * node1 Restore node node0 [root@localhost elasticsearch]# docker-compose start elasticsearch_n0 Starting elasticsearch_n0 ... done Wait for a while ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name 172.25.0.2 52 98 25 0.67 0.43 0.43 mdi - node2 172.25.0.4 43 98 25 0.67 0.43 0.43 mdi * node1 172.25.0.3 40 98 46 0.67 0.43 0.43 mdi - node0 Observe with Head plug-in git clone git://github.com/mobz/elasticsearch-head.git cd elasticsearch-head npm install npm run start The cluster status diagram makes it easier to see the process of automatic data migration 1. The normal data of the cluster is safely distributed on 3 nodes 2. Offline node1 master node cluster starts to migrate data Migrating Migration Complete 3. Restore node1 Question Note elasticsearch watermark After deployment, when creating the index, it was found that some shards were in the Unsigned state. This was due to the elasticsearch watermark: low, high, flood_stage limitations. By default, an alarm will be issued when the hard disk usage rate is higher than curl -X PUT http://192.168.20.6:9201/_cluster/settings \ -H 'Content-type':'application/json' \ -d '{"transient":{"cluster.routing.allocation.disk.threshold_enabled": false}}' The above is the full content of this article. I hope it will be helpful for everyone’s study. I also hope that everyone will support 123WORDPRESS.COM. You may also be interested in:
The above is the full content of this article. I hope it is helpful for your study, and I hope you will continue to support 123WORDPRESS.COM.