## 1. Single machine environment construction

### 1.1 Download

Download the desired version of Zookeeper; version 3.4.14 is used here. Official download address: https://archive.apache.org/dist/zookeeper/

```shell
# wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz
```

### 1.2 Unzip

```shell
# tar -zxvf zookeeper-3.4.14.tar.gz
```

### 1.3 Configure environment variables

```shell
# vim /etc/profile
```

Add the environment variables:

```shell
export ZOOKEEPER_HOME=/usr/app/zookeeper-3.4.14
export PATH=$ZOOKEEPER_HOME/bin:$PATH
```

Make the configured environment variables take effect:

```shell
# source /etc/profile
```

### 1.4 Modify the configuration

Go to the conf/ directory under the installation directory and copy the sample configuration before modifying it:

```shell
# cp zoo_sample.cfg zoo.cfg
```

Specify the data storage directory and the log directory (the directories do not need to be created in advance; the program creates them automatically). The complete configuration after modification is as follows:

```properties
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/usr/local/zookeeper/data
dataLogDir=/usr/local/zookeeper/log
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
```

Configuration parameter description:

- **tickTime**: the basic time unit used in Zookeeper's internal calculations; for example, a session timeout is expressed as N * tickTime;
- **initLimit**: the maximum number of ticks the initial synchronization phase may take;
- **syncLimit**: the maximum number of ticks that may pass between sending a request and receiving an acknowledgement;
- **dataDir**: the directory where snapshots are stored;
- **clientPort**: the port that clients connect to.

### 1.5 Startup

Since the environment variables have already been configured, the service can be started directly with the following command:

```shell
zkServer.sh start
```

### 1.6 Verification

Use the jps command to verify whether the process has started. If QuorumPeerMain appears, the process has started successfully:

```shell
[root@hadoop001 bin]# jps
3814 QuorumPeerMain
```
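Beyond checking the process, you can also do a quick functional check with the zkCli.sh client shipped in the bin/ directory. Below is a minimal sanity-check sketch; the znode path /test and its data "hello" are arbitrary example values, not part of the original tutorial:

```shell
# Connect to the local server (zkCli.sh is on the PATH thanks to step 1.3)
zkCli.sh -server 127.0.0.1:2181

# Inside the client shell: create a test znode, read it back, then clean up.
# /test and "hello" are made-up example values.
create /test "hello"
get /test
delete /test
quit
```

If the create and get commands succeed, the server is accepting client connections on clientPort 2181.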
## 2. Cluster environment construction

To ensure high availability, the number of nodes in a Zookeeper cluster should be odd, with at least three nodes, so a three-node cluster is demonstrated here. Three hosts are used, with the host names hadoop001, hadoop002, and hadoop003.

### 2.1 Modify the configuration

Unzip one Zookeeper installation package and change its zoo.cfg configuration file to the content below, then use the scp command to distribute the installation package to the three servers:

```properties
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper-cluster/data/
dataLogDir=/usr/local/zookeeper-cluster/log/
clientPort=2181

# In server.1, the 1 is the server ID; it can be any valid number and identifies the server node.
# The same ID must be written to the myid file under that node's dataDir directory.
# The two ports are the inter-cluster communication port and the election port.
server.1=hadoop001:2287:3387
server.2=hadoop002:2287:3387
server.3=hadoop003:2287:3387
```

### 2.2 Identify the nodes

Create a myid file in the dataDir directory of each of the three hosts and write in the corresponding node ID. The Zookeeper cluster identifies its member nodes through the myid file, and uses the communication port and election port configured above to talk to the other nodes and elect a Leader.

Create the storage directory:

```shell
# executed on all three hosts
mkdir -vp /usr/local/zookeeper-cluster/data/
```

Create the myid file and write in the node ID:

```shell
# hadoop001 host
echo "1" > /usr/local/zookeeper-cluster/data/myid
# hadoop002 host
echo "2" > /usr/local/zookeeper-cluster/data/myid
# hadoop003 host
echo "3" > /usr/local/zookeeper-cluster/data/myid
```

### 2.3 Start the cluster

Execute the following command on each of the three hosts to start the service:

```shell
/usr/app/zookeeper-cluster/zookeeper/bin/zkServer.sh start
```

### 2.4 Cluster verification

After startup, use zkServer.sh status to view the status of each node in the cluster. In this setup all three node processes started successfully, with hadoop002 elected as the leader node and hadoop001 and hadoop003 acting as follower nodes.
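For reference, the status output on each host looks roughly like the sketch below; this is representative output, and the exact config path reported will match your own installation directory:

```shell
# On the leader (hadoop002 in this run)
[root@hadoop002 ~]# /usr/app/zookeeper-cluster/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/app/zookeeper-cluster/zookeeper/bin/../conf/zoo.cfg
Mode: leader

# On a follower (hadoop001 and hadoop003)
[root@hadoop001 ~]# /usr/app/zookeeper-cluster/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/app/zookeeper-cluster/zookeeper/bin/../conf/zoo.cfg
Mode: follower
```

Exactly one node should report Mode: leader; the rest should report Mode: follower.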
For more articles in the Big Data series, see the GitHub open source project: Getting Started with Big Data.

## Summary

The above covers how to set up Zookeeper in both a stand-alone environment and a cluster environment. I hope it is helpful to everyone. If you have any questions, please leave me a message and I will reply to you in time.