How to build a multi-node Elastic Stack cluster on RHEL 8 / CentOS 8

The Elastic Stack, commonly known as the ELK stack, is a set of open-source products comprising Elasticsearch, Logstash, and Kibana, developed and maintained by Elastic. With the Elastic Stack, system logs are sent to Logstash, a data collection engine that accepts logs or data from any source and normalizes them. The logs are then forwarded to Elasticsearch for analysis, indexing, searching, and storage, and finally represented as visual data using Kibana. Kibana can also build interactive charts based on user queries.

In this article, we will demonstrate how to set up a multi-node Elastic Stack cluster on RHEL 8 / CentOS 8 servers. Here are the details of my Elastic Stack cluster:

Elasticsearch:

  • Three servers, minimal installation of RHEL 8 / CentOS 8
  • IP & Hostname – 192.168.56.40 ( elasticsearch1.linuxtechi.local ), 192.168.56.50 ( elasticsearch2.linuxtechi.local ), 192.168.56.60 (elasticsearch3.linuxtechi.local)

Logstash:

  • Two servers, minimal installation of RHEL 8 / CentOS 8
  • IP & Host – 192.168.56.20 ( logstash1.linuxtechi.local ), 192.168.56.30 ( logstash2.linuxtechi.local )

Kibana:

  • One server, minimal installation of RHEL 8 / CentOS 8
  • IP & Hostname – 192.168.56.10 ( kibana.linuxtechi.local )

Filebeat:

  • One server with minimal installation of CentOS 7
  • IP & Hostname – 192.168.56.70 ( web-server )

Let's start by setting up our Elasticsearch cluster.

Setting up a 3-node Elasticsearch cluster

To set up the Elasticsearch cluster nodes, log in to each node, set the hostname, and configure the yum/dnf repository.

Use the hostnamectl command to set the hostname on each node:

[root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch1.linuxtechi.local"
[root@linuxtechi ~]# exec bash
[root@linuxtechi ~]#
[root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch2.linuxtechi.local"
[root@linuxtechi ~]# exec bash
[root@linuxtechi ~]#
[root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch3.linuxtechi.local"
[root@linuxtechi ~]# exec bash
[root@linuxtechi ~]#

On a CentOS 8 system, we do not need to configure any OS package repository. On a RHEL 8 server with a valid subscription, simply register the system with Red Hat to get access to the package repositories. If you want to configure a local yum/dnf repository for OS packages, refer to the following URL:

How to setup local Yum / DNF repository on RHEL 8 server using DVD or ISO file

Configure the Elasticsearch package repository on all nodes. Create an elastic.repo file in the /etc/yum.repos.d/ directory with the following content:

~]# vi /etc/yum.repos.d/elastic.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Save the file and exit.

Import the Elastic public signing key using the rpm command on all three nodes.

~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Add the following lines to the /etc/hosts file on all three nodes:

192.168.56.40 elasticsearch1.linuxtechi.local
192.168.56.50 elasticsearch2.linuxtechi.local
192.168.56.60 elasticsearch3.linuxtechi.local

Install Java on all three nodes using the yum / dnf command:

[root@linuxtechi ~]# dnf install java-openjdk -y
[root@linuxtechi ~]# dnf install java-openjdk -y
[root@linuxtechi ~]# dnf install java-openjdk -y

Install Elasticsearch on all three nodes using the yum / dnf command:

[root@linuxtechi ~]# dnf install elasticsearch -y
[root@linuxtechi ~]# dnf install elasticsearch -y
[root@linuxtechi ~]# dnf install elasticsearch -y

Note: If the operating system firewall is enabled and running on each Elasticsearch node, open the following ports using the firewall-cmd command:

~]# firewall-cmd --permanent --add-port=9300/tcp
~]# firewall-cmd --permanent --add-port=9200/tcp
~]# firewall-cmd --reload

To configure Elasticsearch, edit the file /etc/elasticsearch/elasticsearch.yml on all nodes and add the following content:

~]# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: opn-cluster
node.name: elasticsearch1.linuxtechi.local
network.host: 192.168.56.40
http.port: 9200
discovery.seed_hosts: ["elasticsearch1.linuxtechi.local", "elasticsearch2.linuxtechi.local", "elasticsearch3.linuxtechi.local"]
cluster.initial_master_nodes: ["elasticsearch1.linuxtechi.local", "elasticsearch2.linuxtechi.local", "elasticsearch3.linuxtechi.local"]

Note: On each node, set node.name to that node's hostname and network.host to its IP address, keeping the other parameters unchanged.
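For reference, here is a sketch of how the same block would look on the second node; only node.name and network.host change, using the hostname and IP listed earlier:

~]# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: opn-cluster
node.name: elasticsearch2.linuxtechi.local
network.host: 192.168.56.50
http.port: 9200
discovery.seed_hosts: ["elasticsearch1.linuxtechi.local", "elasticsearch2.linuxtechi.local", "elasticsearch3.linuxtechi.local"]
cluster.initial_master_nodes: ["elasticsearch1.linuxtechi.local", "elasticsearch2.linuxtechi.local", "elasticsearch3.linuxtechi.local"]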

Now start and enable the Elasticsearch service on all three nodes using the systemctl command:

~]# systemctl daemon-reload
~]# systemctl enable elasticsearch.service
~]# systemctl start elasticsearch.service

Use the following ss command to verify that the Elasticsearch node is listening on port 9200:

[root@linuxtechi ~]# ss -tunlp | grep 9200
tcp LISTEN 0 128 [::ffff:192.168.56.40]:9200 *:* users:(("java",pid=2734,fd=256))
[root@linuxtechi ~]#

Verify the Elasticsearch cluster status using the following curl command:

[root@linuxtechi ~]# curl http://elasticsearch1.linuxtechi.local:9200
[root@linuxtechi ~]# curl -X GET http://elasticsearch2.linuxtechi.local:9200/_cluster/health?pretty

The second command returns the cluster health as JSON. For a healthy 3-node cluster, the status field should report green, which confirms that we have successfully created a 3-node Elasticsearch cluster.
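For reference, a trimmed, illustrative example of the JSON returned by the health endpoint might look like this (exact values depend on your cluster):

{
  "cluster_name" : "opn-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_shards_percent_as_number" : 100.0
}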

Note: If you want to change the JVM heap size, you can edit the file /etc/elasticsearch/jvm.options and adjust the following parameters according to your environment:

  • -Xms1g
  • -Xmx1g
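For example, to give a node a 4 GB heap (an illustrative value; keep -Xms and -Xmx equal and size them for your environment), the two lines would read:

~]# vim /etc/elasticsearch/jvm.options
-Xms4g
-Xmx4g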

Now let's move on to the Logstash nodes.

Install and configure Logstash

Perform the following steps on both Logstash nodes.

Log in to both nodes and use hostnamectl command to set the hostname:

[root@linuxtechi ~]# hostnamectl set-hostname "logstash1.linuxtechi.local"
[root@linuxtechi ~]# exec bash
[root@linuxtechi ~]#
[root@linuxtechi ~]# hostnamectl set-hostname "logstash2.linuxtechi.local"
[root@linuxtechi ~]# exec bash
[root@linuxtechi ~]#

Add the following entries to the /etc/hosts file on both Logstash nodes:

~]# vi /etc/hosts
192.168.56.40 elasticsearch1.linuxtechi.local
192.168.56.50 elasticsearch2.linuxtechi.local
192.168.56.60 elasticsearch3.linuxtechi.local

Save the file and exit.

Configure the Logstash repository on both nodes. Create a logstash.repo file in the /etc/yum.repos.d/ directory with the following content:

~]# vi /etc/yum.repos.d/logstash.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Save and close the file, then run the rpm command to import the signing key:

~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Install Java OpenJDK on both nodes using the yum / dnf command:

~]# dnf install java-openjdk -y

Run yum / dnf command from both nodes to install logstash:

[root@linuxtechi ~]# dnf install logstash -y
[root@linuxtechi ~]# dnf install logstash -y

Now configure Logstash. Perform the following steps on both Logstash nodes to create a Logstash configuration file. First, copy the sample Logstash file into /etc/logstash/conf.d/:

# cd /etc/logstash/
# cp logstash-sample.conf conf.d/logstash.conf

Edit the configuration file and update the following:

# vi conf.d/logstash.conf
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch1.linuxtechi.local:9200", "http://elasticsearch2.linuxtechi.local:9200", "http://elasticsearch3.linuxtechi.local:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}

Under the output section, specify the FQDN of all three Elasticsearch nodes in the hosts parameter and keep the other parameters unchanged.
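Optionally, the syntax of the new configuration can be checked before starting the service, using Logstash's built-in test mode (the paths below assume the default RPM installation layout):

~]# /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit -f /etc/logstash/conf.d/logstash.conf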

Use the firewall-cmd command to allow the Logstash port 5044 through the operating system firewall:

~]# firewall-cmd --permanent --add-port=5044/tcp
~]# firewall-cmd --reload

Now, run the following systemctl commands on both nodes to start and enable the Logstash service:

~]# systemctl start logstash
~]# systemctl enable logstash

Use the ss command to verify that the Logstash service is listening on port 5044:

[root@linuxtechi ~]# ss -tunlp | grep 5044
tcp LISTEN 0 128 *:5044 *:* users:(("java",pid=2416,fd=96))
[root@linuxtechi ~]#

The above output indicates that logstash has been successfully installed and configured. Let's move on to Kibana installation.

Install and configure Kibana

Log in to the Kibana node and use hostnamectl command to set the hostname:

[root@linuxtechi ~]# hostnamectl set-hostname "kibana.linuxtechi.local"
[root@linuxtechi ~]# exec bash
[root@linuxtechi ~]#

Edit the /etc/hosts file and add the following lines:

192.168.56.40 elasticsearch1.linuxtechi.local
192.168.56.50 elasticsearch2.linuxtechi.local
192.168.56.60 elasticsearch3.linuxtechi.local

Set up the Kibana repository using the following command:

[root@linuxtechi ~]# vi /etc/yum.repos.d/kibana.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
[root@linuxtechi ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Run the yum / dnf command to install kibana:

[root@linuxtechi ~]# yum install kibana -y

Configure Kibana by editing the /etc/kibana/kibana.yml file:

[root@linuxtechi ~]# vim /etc/kibana/kibana.yml
…………
server.host: "kibana.linuxtechi.local"
server.name: "kibana.linuxtechi.local"
elasticsearch.hosts: ["http://elasticsearch1.linuxtechi.local:9200", "http://elasticsearch2.linuxtechi.local:9200", "http://elasticsearch3.linuxtechi.local:9200"]
…………

Enable and start the kibana service:

[root@linuxtechi ~]# systemctl start kibana
[root@linuxtechi ~]# systemctl enable kibana

Allow Kibana port '5601' on the system firewall:

[root@linuxtechi ~]# firewall-cmd --permanent --add-port=5601/tcp
success
[root@linuxtechi ~]# firewall-cmd --reload
success
[root@linuxtechi ~]#

Access the Kibana interface using the following URL: http://kibana.linuxtechi.local:5601

From the dashboard, we can check the status of the Elastic Stack cluster.
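If the page does not load, Kibana's status API offers a quick command-line check from any host that can reach the Kibana node (an optional verification step):

[root@linuxtechi ~]# curl http://kibana.linuxtechi.local:5601/api/status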

This proves that we have successfully installed and set up a multi-node Elastic Stack cluster on RHEL 8 / CentOS 8.

Now let's send some logs from another Linux server to the Logstash nodes through Filebeat. In my case, I have a CentOS 7 server, and I will push all of its important logs to Logstash through Filebeat.

Log in to the CentOS 7 server and install the filebeat package using the rpm command:

[root@linuxtechi ~]# rpm -ivh https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.1-x86_64.rpm
Retrieving https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.1-x86_64.rpm
Preparing... ################################# [100%]
Updating / installing...
 1:filebeat-7.3.1-1 ################################### [100%]
[root@linuxtechi ~]#

Edit the /etc/hosts file and add the following:

192.168.56.20 logstash1.linuxtechi.local
192.168.56.30 logstash2.linuxtechi.local

Now configure Filebeat so that it sends logs to the Logstash nodes with load balancing. Edit the file /etc/filebeat/filebeat.yml and update the following parameters:

In the filebeat.inputs: section, change enabled: false to enabled: true and, under the paths parameter, specify the locations of the log files to send to Logstash. Comment out output.elasticsearch: and its hosts: parameter. Then uncomment output.logstash: and hosts:, add the two Logstash nodes to the hosts parameter, and set loadbalance: true.

[root@linuxtechi ~]# vi /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/messages
    - /var/log/dmesg
    - /var/log/maillog
    - /var/log/boot.log
#output.elasticsearch:
#  hosts: ["localhost:9200"]
output.logstash:
  hosts: ["logstash1.linuxtechi.local:5044", "logstash2.linuxtechi.local:5044"]
  loadbalance: true
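Optionally, Filebeat's built-in test subcommands can be used to verify the configuration and the connection to the Logstash nodes before the service is started:

[root@linuxtechi ~]# filebeat test config
[root@linuxtechi ~]# filebeat test output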

Use the following two systemctl commands to start and enable the filebeat service:

[root@linuxtechi ~]# systemctl start filebeat
[root@linuxtechi ~]# systemctl enable filebeat

Now go to the Kibana UI and verify that the new index is visible.
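Alternatively, the new index can be confirmed from the command line against any Elasticsearch node using the _cat/indices API (an optional check):

[root@linuxtechi ~]# curl "http://elasticsearch1.linuxtechi.local:9200/_cat/indices?v"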

Select the Management option from the left sidebar and click Index Management under Elasticsearch:

As we can see above, the index is now visible. Let's create the index pattern next.

Click on "Index Patterns" in the Kibana section, it will prompt us to create a new model, click on "Create Index Pattern", and specify the pattern name as "filebeat":

Click Next.

Select Timestamp as the time filter for the index pattern and click Create index pattern:

Now click through to view the real-time data in the filebeat index pattern:

This indicates that the Filebeat agent has been configured successfully and we can see real-time logs on the Kibana dashboard.

That's all for this article. Don't hesitate to share your feedback and comments if these steps helped you set up an Elastic Stack cluster on RHEL 8 / CentOS 8 systems.

via: https://www.linuxtechi.com/setup-multinode-elastic-stack-cluster-rhel8-centos8/
