Preface: This guide assumes that the three virtual machines (master, node1 and node2) can already ping each other, the firewall is turned off, the hosts file has been modified, passwordless SSH login is configured, the hostnames have been changed, and so on.

One. Upload the installation files

1. Create an installation directory.

2. Open Xftp, find the corresponding directory, and transfer the required installation packages into it.

Check the uploaded packages: cd /usr/local/soft

Two. Install Java

1. Check whether the JDK is already installed: java -version

2. If it is not installed, extract the Java installation package (your package version may differ, adjust the file name accordingly): tar -zxvf jdk-8u181-linux-x64.tar.gz

3. Rename the JDK directory and confirm its current location: mv jdk1.8.0_181 java

4. Configure the JDK environment: vim /etc/profile.d/jdk.sh

    export JAVA_HOME=/usr/local/soft/java
    export PATH=$PATH:$JAVA_HOME/bin
    export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/rt.jar

5. Reload the environment variables and verify: source /etc/profile, then java -version

Three. Install Hadoop

1. Extract the Hadoop installation package: tar -zxvf hadoop-3.1.1.tar.gz

2. Check and rename the directory: mv hadoop-3.1.1 hadoop

3. Edit the Hadoop configuration files.

3.1 Modify the core-site.xml configuration file: vim hadoop/etc/hadoop/core-site.xml

    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/soft/hadoop/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>fs.trash.interval</name>
        <value>1440</value>
    </property>

3.2 Modify the hdfs-site.xml configuration file: vim hadoop/etc/hadoop/hdfs-site.xml

    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>node1:50090</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/soft/hadoop/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/soft/hadoop/tmp/dfs/data</value>
    </property>

3.3 Modify the workers configuration file (list the worker hostnames, one per line, e.g. node1 and node2): vim hadoop/etc/hadoop/workers

3.4 Modify the hadoop-env.sh file: vim hadoop/etc/hadoop/hadoop-env.sh

    export JAVA_HOME=/usr/local/soft/java

3.5 Modify the yarn-site.xml file: vim hadoop/etc/hadoop/yarn-site.xml

    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
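Note that the snippets in steps 3.1, 3.2 and 3.5 are only the <property> entries, not complete files: each *-site.xml shipped with Hadoop already contains an empty <configuration> element, and the properties must be pasted inside it. As a minimal sketch, the finished core-site.xml from step 3.1 ends up looking roughly like this (the values simply repeat the ones listed above):

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- core-site.xml: every <property> block sits inside the single <configuration> root -->
    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://master:9000</value>
        </property>
        <property>
            <name>hadoop.tmp.dir</name>
            <value>file:/usr/local/soft/hadoop/tmp</value>
            <description>A base for other temporary directories.</description>
        </property>
        <property>
            <name>fs.trash.interval</name>
            <value>1440</value>
        </property>
    </configuration>

hdfs-site.xml and yarn-site.xml follow the same pattern with their respective properties.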
3.6 Reload the configuration file: source hadoop/etc/hadoop/hadoop-env.sh

3.7 Modify the start-dfs.sh configuration file: vim hadoop/sbin/start-dfs.sh

    export HDFS_NAMENODE_SECURE_USER=root
    export HDFS_DATANODE_SECURE_USER=root
    export HDFS_SECONDARYNAMENODE_USER=root
    export HDFS_NAMENODE_USER=root
    export HDFS_DATANODE_USER=root
    export YARN_RESOURCEMANAGER_USER=root
    export YARN_NODEMANAGER_USER=root

3.8 Modify the stop-dfs.sh configuration file (add the same exports as in start-dfs.sh): vim hadoop/sbin/stop-dfs.sh

    export HDFS_NAMENODE_SECURE_USER=root
    export HDFS_DATANODE_SECURE_USER=root
    export HDFS_SECONDARYNAMENODE_USER=root
    export HDFS_NAMENODE_USER=root
    export HDFS_DATANODE_USER=root
    export YARN_RESOURCEMANAGER_USER=root
    export YARN_NODEMANAGER_USER=root

3.9 Modify the start-yarn.sh configuration file: vim hadoop/sbin/start-yarn.sh

    export YARN_RESOURCEMANAGER_USER=root
    export HADOOP_SECURE_DN_USER=root
    export YARN_NODEMANAGER_USER=root

3.10 Modify the stop-yarn.sh configuration file: vim hadoop/sbin/stop-yarn.sh

    export YARN_RESOURCEMANAGER_USER=root
    export HADOOP_SECURE_DN_USER=root
    export YARN_NODEMANAGER_USER=root

3.11 Suppress the NativeCodeLoader warning messages: vim hadoop/etc/hadoop/log4j.properties

    log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR

Four. Synchronize the configuration to the other nodes

1. Synchronize the soft directory (run from /usr/local).

To node1:
    scp -r soft root@node1:/usr/local/

To node2:
    scp -r soft root@node2:/usr/local/

2. After all the transfers have completed, configure the profile file: vim /etc/profile.d/hadoop.sh

    #SET HADOOP
    export HADOOP_HOME=/usr/local/soft/hadoop
    export HADOOP_INSTALL=$HADOOP_HOME
    export HADOOP_MAPRED_HOME=$HADOOP_HOME
    export HADOOP_COMMON_HOME=$HADOOP_HOME
    export HADOOP_HDFS_HOME=$HADOOP_HOME
    export YARN_HOME=$HADOOP_HOME
    export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
    export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin

3. Continue the transfer.

For node1:
    scp /etc/profile.d/jdk.sh root@node1:/etc/profile.d/
    scp /etc/profile.d/hadoop.sh root@node1:/etc/profile.d/

For node2:
    scp /etc/profile.d/jdk.sh root@node2:/etc/profile.d/
    scp /etc/profile.d/hadoop.sh root@node2:/etc/profile.d/

4. Execute the following on all three virtual machines (a quick check is sketched below):
    source /etc/profile
    source /usr/local/soft/hadoop/etc/hadoop/hadoop-env.sh

5. Format the HDFS file system (on the master only): hdfs namenode -format
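As mentioned in step 4 above, it helps to confirm that the environment variables are actually in effect on every node before going any further. A minimal check, run on master, node1 and node2 (assuming the paths configured above), might look like this:

    # Run on each of master, node1 and node2 after sourcing the profile scripts
    source /etc/profile
    echo $JAVA_HOME      # expected: /usr/local/soft/java
    echo $HADOOP_HOME    # expected: /usr/local/soft/hadoop
    java -version        # expected: JDK 1.8.x
    hadoop version       # expected: Hadoop 3.1.1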
Five. Start the cluster

    cd /usr/local/soft/hadoop/sbin/
    ./start-all.sh

After startup, run jps on each of the three virtual machines and check that the expected processes are running.

Test in a Google Chrome browser on Windows (replace the IP address with your own master's IP address):

    http://192.168.204.120:8088/cluster (the YARN web UI)
    http://192.168.204.120:9870 (the HDFS NameNode web UI)

Hadoop test (run a MapReduce computation):

    hadoop jar /usr/local/soft/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.1.jar wordcount /input /output

View the running results; a worked example of preparing the input and reading the output follows below. With that, the Hadoop configuration is complete.
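The wordcount test above assumes that an /input directory containing some text already exists in HDFS and that /output does not exist yet. As a sketch (words.txt is just a hypothetical sample file), preparing the input and inspecting the result might look like this:

    # Create the HDFS input directory and upload a sample text file (words.txt is an arbitrary example)
    hdfs dfs -mkdir -p /input
    hdfs dfs -put words.txt /input/

    # Run the example job; the /output directory must not exist beforehand
    hadoop jar /usr/local/soft/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.1.jar wordcount /input /output

    # Print the word counts; the reducer output normally lands in a part-r-00000 file
    hdfs dfs -cat /output/*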
Summary: The above is the complete distributed installation guide for Hadoop 3.1.1 on CentOS 6.8. I hope it is helpful to everyone. If you have any questions, please leave me a message and I will reply in time. I would also like to thank everyone for their support of the 123WORDPRESS.COM website!