1. Set up hosts on the host MacBook

The previous article set a static IP address for the virtual machine, so from now on you can log in through that IP address. For convenience, though, let's also edit the hosts file on the Mac so that you don't have to type the IP address every time you ssh.

sudo vim /etc/hosts

or

sudo vim /private/etc/hosts

These two paths point to the same file (on macOS, /etc is a symlink to /private/etc). Note that you need sudo to edit it as administrator, otherwise you will not be able to save the file.

##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
50.116.33.29 sublime.wbond.net
127.0.0.1 windows10.microdone.cn
# Added by Docker Desktop
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
192.168.56.100 hadoop100
192.168.56.101 hadoop101
192.168.56.102 hadoop102
192.168.56.103 hadoop103
192.168.56.104 hadoop104
# End of section

2. Copy the virtual machine

Next we need to copy the virtual machine configured last time into multiple machines to form a cluster. First, shut down the virtual machine, then right-click on it and select Copy. In the dialog box that appears, I chose to regenerate the MAC addresses of all network cards in order to simulate completely separate computers.

3. Modify the hostname and IP address of each machine

After copying, remember to log in to each virtual machine and modify its static IP address using the method described previously, to avoid IP address conflicts (a rough sketch of the relevant file is included after the xcall section below):

vi /etc/sysconfig/network-scripts/ifcfg-enp0s3
vi /etc/sysconfig/network-scripts/ifcfg-enp0s8

In addition, it is best to set the hostname of each Linux virtual machine so that the machines can refer to each other by hostname. You need to set the hostnames one machine at a time:

[root@hadoop101 ~]# hostnamectl set-hostname hadoop107
[root@hadoop101 ~]# hostname
hadoop107

4. xcall: run commands on the whole cluster at once

Because we now have several machines, it would be tedious to log in to each one just to run the same command. We can write a shell script so that a command issued on one machine is executed on all of them. Here is an example. I have five virtual machines, hadoop100, hadoop101, hadoop102, hadoop103, and hadoop104, and I want to use hadoop100 as a bastion host to control all the others. Create a file named xcall in /usr/local/bin with the following content:

touch /usr/local/bin/xcall
chmod +x /usr/local/bin/xcall
vi /usr/local/bin/xcall

#!/bin/bash
pcount=$#
if ((pcount==0)); then
  echo no args;
  exit;
fi
echo ---------running at localhost--------
$@
for ((host=101; host<=104; host++)); do
  echo ---------running at hadoop$host-------
  ssh hadoop$host $@
done

For example, here I use the xcall script to run the pwd command on all machines to show the current directory; it prompts for each machine's password and executes in turn:

[root@hadoop100 ~]# xcall pwd
---------running at localhost--------
/root
---------running at hadoop101-------
root@hadoop101's password:
/root
---------running at hadoop102-------
root@hadoop102's password:
/root
---------running at hadoop103-------
root@hadoop103's password:
/root
---------running at hadoop104-------
root@hadoop104's password:
/root
[root@hadoop100 ~]#
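As promised in step 3, here is a rough sketch of what the host-only adapter configuration on one of the copies might look like. It is only an illustration of a typical CentOS static setup; the exact field names and values depend on how the static IP was configured in the previous article, so adjust it to your own file. The only value that has to differ between machines is IPADDR, which should match the hosts table from step 1.

# /etc/sysconfig/network-scripts/ifcfg-enp0s8 -- sketch only; field values are
# assumptions based on a typical host-only static setup, adjust to your own file
TYPE=Ethernet
NAME=enp0s8
DEVICE=enp0s8
# use a fixed address instead of DHCP, and bring the interface up at boot
BOOTPROTO=static
ONBOOT=yes
# the only value that must differ between the copies; match the table in step 1
IPADDR=192.168.56.101
NETMASK=255.255.255.0

One thing to watch out for: since the copies were given new MAC addresses, if the original file contains a HWADDR or UUID line inherited from the first machine, it may need to be removed or updated, otherwise the interface may not come up. After editing, restarting the network service (for example, systemctl restart network on CentOS 7) or rebooting should apply the change.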
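Also note that the hosts file edited in step 1 only helps the Mac resolve these names. For the ssh hadoop$host calls in the xcall script to work, the machine running xcall (and ideally every machine in the cluster) must itself be able to resolve hadoop101 through hadoop104. If that is not already the case, a minimal sketch, assuming the same addresses as in step 1, is to append the mappings to /etc/hosts on each virtual machine:

# run as root on each virtual machine; addresses taken from the table in step 1
cat >> /etc/hosts <<'EOF'
192.168.56.100 hadoop100
192.168.56.101 hadoop101
192.168.56.102 hadoop102
192.168.56.103 hadoop103
192.168.56.104 hadoop104
EOF

Run this once on each machine, or maintain it on one machine and copy the file around with the scp/rsync tools described next.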
5. scp and rsync

Now let's talk about the scp tool. scp can copy data between Linux machines over the network; add -r to copy an entire directory.

[root@hadoop100 ~]# ls
anaconda-ks.cfg
[root@hadoop100 ~]# scp anaconda-ks.cfg hadoop104:/root/
root@hadoop104's password:
anaconda-ks.cfg                         100% 1233    61.1KB/s   00:00
[root@hadoop100 ~]#

You can also use rsync. scp copies the data regardless of what is already on the target machine, while rsync compares the files first and only transfers those that have changed, so it is faster when the files to copy are large and mostly unchanged. Unfortunately, rsync is not installed by default on CentOS and needs to be installed first. Since the previous article already gave the virtual machines Internet access, installing it online is enough:

[root@hadoop100 ~]# xcall sudo yum install -y rsync

For example, to synchronize the Java SDK from hadoop100 to hadoop102:

[root@hadoop100 /]# rsync -r /opt/modules/jdk1.8.0_121/ hadoop102:/opt/modules/jdk1.8.0_121/

OK, now that the basic tools and the cluster environment have been set up, you can start learning Hadoop.

Summary

The above is my method for simulating a Linux cluster with VirtualBox. I hope it is helpful to you. If you have any questions, please leave me a message and I will reply in time. I would also like to thank everyone for their support of the 123WORDPRESS.COM website!