How to use VirtualBox to simulate a Linux cluster

1. Set up the hosts file on the host MacBook

The previous article set a static IP address for the virtual machine, so from now on you can log in through that IP address. For convenience, though, let's first modify the hosts file on the Mac so that you don't have to type the IP address every time you ssh.

sudo vim /etc/hosts

Or sudo vim /private/etc/hosts

These two paths point to the same file, because on macOS /etc is a symlink to /private/etc. Note that you need sudo to edit it as an administrator, otherwise you will not be able to save the file.

##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
50.116.33.29 sublime.wbond.net
127.0.0.1 windows10.microdone.cn
# Added by Docker Desktop
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal

192.168.56.100 hadoop100
192.168.56.101 hadoop101
192.168.56.102 hadoop102
192.168.56.103 hadoop103
192.168.56.104 hadoop104
# End of section
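
With these entries in place, you can refer to each VM by hostname instead of by IP address. A quick sanity check from the Mac (this assumes the virtual machines from the previous article are running and reachable on the 192.168.56.x host-only network):

ping -c 1 hadoop100
ssh root@hadoop100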

2. Clone the virtual machine

Next we need to clone the virtual machine we configured last time into several copies to form a cluster. First shut down the virtual machine, right-click it in the VirtualBox manager, and select Clone. In the clone dialog, I choose to regenerate the MAC addresses of all network cards in order to simulate completely separate machines.
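
If you prefer the command line, VBoxManage can clone the machine as well. A rough sketch (the VM names here are just examples, and the available options vary slightly between VirtualBox versions, so check VBoxManage clonevm --help on yours):

VBoxManage clonevm hadoop100 --name hadoop101 --register
VBoxManage clonevm hadoop100 --name hadoop102 --register

MAC address handling for the clone is controlled with the --options flag; see the VirtualBox manual for the exact policies your version supports.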

3. Modify the hostname and IP address of each machine

After cloning, remember to log in to each virtual machine and change its static IP address using the method from the previous article, to avoid IP address conflicts.

vi /etc/sysconfig/network-scripts/ifcfg-enp0s3
vi /etc/sysconfig/network-scripts/ifcfg-enp0s8
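
As a reminder, the host-only adapter's config file ends up looking roughly like the sketch below (the exact interface name and addresses come from the previous article; adjust IPADDR on each clone so no two machines collide):

TYPE=Ethernet
BOOTPROTO=static
NAME=enp0s8
DEVICE=enp0s8
ONBOOT=yes
IPADDR=192.168.56.101
NETMASK=255.255.255.0

After editing, restart the network service (systemctl restart network on CentOS 7) or reboot the VM for the change to take effect.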

In addition, it is best to set the hostname inside each Linux virtual machine so that the machines can refer to each other by name. You need to set the hostname on each machine one by one.

[root@hadoop101 ~]# hostnamectl set-hostname hadoop107
[root@hadoop101 ~]# hostname
hadoop107
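
One more thing worth noting (the original article relies on it implicitly): hostnames such as hadoop101 only resolve if each VM can look them up, and there is no DNS server here. The easiest fix is to append the same mappings to /etc/hosts on every virtual machine, for example:

cat >> /etc/hosts << EOF
192.168.56.100 hadoop100
192.168.56.101 hadoop101
192.168.56.102 hadoop102
192.168.56.103 hadoop103
192.168.56.104 hadoop104
EOF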

4. xcall: run commands on all cluster machines at once

Because we now have several machines, it would be tedious to log in to each one just to run a command. Instead, we can write a shell script so that a command issued on one machine is executed on all of them. Here is an example. I have five virtual machines, hadoop100, hadoop101, hadoop102, hadoop103, and hadoop104, and I want to use hadoop100 as a bastion host to control all the other machines. Create a file named xcall in /usr/local/bin with the following content:

touch /usr/local/bin/xcall

chmod +x /usr/local/bin/xcall

vi /usr/local/bin/xcall


#!/bin/bash
# xcall: run the given command locally and then on hadoop101..hadoop104
pcount=$#
if ((pcount == 0)); then
    echo no args
    exit 1
fi

echo ---------running at localhost--------
"$@"
for ((host = 101; host <= 104; host++)); do
    echo ---------running at hadoop$host-------
    ssh hadoop$host "$@"
done

For example, I use this xcall script to run the pwd command on all machines to show the current directory. Since passwordless SSH has not been set up, it prompts for each host's password in turn.

[root@hadoop100 ~]# xcall pwd
---------running at localhost--------
/root
---------running at hadoop101-------
root@hadoop101's password:
/root
---------running at hadoop102-------
root@hadoop102's password:
/root
---------running at hadoop103-------
root@hadoop103's password:
/root
---------running at hadoop104-------
root@hadoop104's password:
/root
[root@hadoop100 ~]#
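
Typing the password for every host quickly gets tedious. Although this article does not set it up, you can optionally configure key-based SSH from hadoop100 to the other machines so that xcall (and later scp and rsync) run without prompting:

ssh-keygen -t rsa
ssh-copy-id root@hadoop101
ssh-copy-id root@hadoop102
ssh-copy-id root@hadoop103
ssh-copy-id root@hadoop104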

5. scp and rsync

Now let's talk about the scp tool. scp copies data remotely between Linux machines. If you want to copy an entire directory, add -r.

[root@hadoop100 ~]# ls
anaconda-ks.cfg
[root@hadoop100 ~]# scp anaconda-ks.cfg hadoop104:/root/
root@hadoop104's password:
anaconda-ks.cfg 100% 1233 61.1KB/s 00:00
[root@hadoop100 ~]#

You can also use rsync. scp copies the data regardless of what already exists on the target machine, whereas rsync compares the files first and only transfers those that have changed, so for large files rsync is usually faster. Unfortunately, rsync is not installed by default on CentOS and needs to be installed first. Since the previous article already got the virtual machines onto the Internet, installing it online is enough.

[root@hadoop100 ~]# xcall sudo yum install -y rsync

For example, to synchronize the Java SDK from hadoop100 to hadoop102:

[root@hadoop100 /]# rsync -r /opt/modules/jdk1.8.0_121/ hadoop102:/opt/modules/jdk1.8.0_121/
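
Building on xcall, it is handy to have a companion script that pushes a file or directory from hadoop100 to all the other machines with rsync. This is not part of the original setup; a minimal sketch, assuming the same hadoop101..hadoop104 hostnames and that rsync is installed everywhere, could look like this:

#!/bin/bash
# xsync: push a file or directory from this machine to hadoop101..hadoop104
if (($# == 0)); then
    echo "usage: xsync <file-or-directory>"
    exit 1
fi
# resolve to an absolute path so the copy lands in the same place remotely
fullpath=$(readlink -f "$1")
for ((host = 101; host <= 104; host++)); do
    echo ---------syncing to hadoop$host-------
    rsync -av "$fullpath" hadoop$host:"$(dirname "$fullpath")/"
done

With this saved as /usr/local/bin/xsync, distributing the JDK becomes xsync /opt/modules/jdk1.8.0_121. The -a flag preserves permissions and timestamps, which matters for something like a JDK, and -v shows what is being transferred.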

OK, now that the basic tools and cluster environment have been set up, you can start learning Hadoop.

Summary

The above is the method I introduced for using VirtualBox to simulate a Linux cluster. I hope it is helpful to you. If you have any questions, please leave me a message and I will reply in time. I would also like to thank everyone for their support of the 123WORDPRESS.COM website!
If you find this article helpful, please feel free to reprint it, but please indicate the source. Thank you!

You may also be interested in:
  • Implementation of building Kubernetes cluster with VirtualBox+Ubuntu16
  • Teach you to build a local kubernets cluster in virtualBox
