The Elastic Stack, commonly known as the ELK stack, is a set of open source products comprising Elasticsearch, Logstash, and Kibana, developed and maintained by Elastic. With the Elastic Stack, system logs can be sent to Logstash, a data collection engine that accepts logs or data from virtually any source and normalizes them. The logs are then forwarded to Elasticsearch for indexing, analysis, searching, and storage, and finally presented visually with Kibana. Kibana also lets us build interactive charts based on user queries. In this article, we will demonstrate how to set up a multi-node Elastic Stack cluster on RHEL 8 / CentOS 8 servers.

Here are the details of my Elastic Stack cluster:

Elasticsearch: three servers with minimal RHEL 8 / CentOS 8, IPs & hostnames – 192.168.56.40 (elasticsearch1.linuxtechi.local), 192.168.56.50 (elasticsearch2.linuxtechi.local), 192.168.56.60 (elasticsearch3.linuxtechi.local)
Logstash: two servers with minimal RHEL 8 / CentOS 8, IPs & hostnames – 192.168.56.20 (logstash1.linuxtechi.local), 192.168.56.30 (logstash2.linuxtechi.local)
Kibana: one server with minimal RHEL 8 / CentOS 8, IP & hostname – 192.168.56.10 (kibana.linuxtechi.local)
Filebeat: one CentOS 7 server acting as the client whose logs will be shipped
Let's start by setting up our Elasticsearch cluster.

Setting up a 3-node Elasticsearch cluster

As mentioned above, to set up the Elasticsearch cluster, log in to each node, set the hostname, and configure the yum/dnf repositories.

Use the hostnamectl command to set the hostname on the respective nodes:

[root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch1.linuxtechi.local"
[root@linuxtechi ~]# exec bash

[root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch2.linuxtechi.local"
[root@linuxtechi ~]# exec bash

[root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch3.linuxtechi.local"
[root@linuxtechi ~]# exec bash

For a CentOS 8 system we do not need to configure any OS package repository; for a RHEL 8 server, if you have a valid subscription, just subscribe with Red Hat to get the package repositories. If you want to configure a local yum/dnf repository for OS packages, refer to the following URL: How to setup local Yum / DNF repository on RHEL 8 server using DVD or ISO file

Configure the Elasticsearch package repository on all nodes. Create a file elastic.repo under the /etc/yum.repos.d/ directory with the following content:

~]# vi /etc/yum.repos.d/elastic.repo

[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Save the file and exit.

Import the Elastic public signing key using the rpm command:

~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Add the following lines to the /etc/hosts file on all three nodes:

192.168.56.40 elasticsearch1.linuxtechi.local
192.168.56.50 elasticsearch2.linuxtechi.local
192.168.56.60 elasticsearch3.linuxtechi.local

Install Java on all three nodes using the dnf command:

[root@linuxtechi ~]# dnf install java-openjdk -y

Install Elasticsearch on all three nodes using the dnf command:

[root@linuxtechi ~]# dnf install elasticsearch -y

Note: If the operating system firewall is enabled and running on each Elasticsearch node, open the following ports using the firewall-cmd command:

~]# firewall-cmd --permanent --add-port=9300/tcp
~]# firewall-cmd --permanent --add-port=9200/tcp
~]# firewall-cmd --reload

To configure Elasticsearch, edit the file /etc/elasticsearch/elasticsearch.yml and add the following settings:

~]# vim /etc/elasticsearch/elasticsearch.yml

cluster.name: opn-cluster
node.name: elasticsearch1.linuxtechi.local
network.host: 192.168.56.40
http.port: 9200
discovery.seed_hosts: ["elasticsearch1.linuxtechi.local", "elasticsearch2.linuxtechi.local", "elasticsearch3.linuxtechi.local"]
cluster.initial_master_nodes: ["elasticsearch1.linuxtechi.local", "elasticsearch2.linuxtechi.local", "elasticsearch3.linuxtechi.local"]

Note: On each node, fill in the correct hostname in node.name and the correct IP address in network.host; the other parameters stay the same.

Now start and enable the Elasticsearch service on all three nodes using the systemctl command:

~]# systemctl daemon-reload
~]# systemctl enable elasticsearch.service
~]# systemctl start elasticsearch.service

Use the following ss command to verify that the Elasticsearch nodes are listening on port 9200:

[root@linuxtechi ~]# ss -tunlp | grep 9200
tcp LISTEN 0 128 [::ffff:192.168.56.40]:9200 *:* users:(("java",pid=2734,fd=256))
[root@linuxtechi ~]#

Verify the Elasticsearch cluster status using the following curl commands:

[root@linuxtechi ~]# curl http://elasticsearch1.linuxtechi.local:9200
[root@linuxtechi ~]# curl -X GET http://elasticsearch2.linuxtechi.local:9200/_cluster/health?pretty

The output of the health command shows that we have successfully created a 3-node Elasticsearch cluster and that the cluster status is green.
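If you want a quick per-node view of cluster membership in addition to the health check above, Elasticsearch's _cat APIs can be queried from any node. The following is a minimal sketch using the hostnames from this article; the exact columns in the output may differ slightly between 7.x releases.

# List every node that has joined the cluster and mark the elected master
[root@linuxtechi ~]# curl -X GET "http://elasticsearch1.linuxtechi.local:9200/_cat/nodes?v"

# Show a one-line health summary (status, number of nodes, shard counts)
[root@linuxtechi ~]# curl -X GET "http://elasticsearch1.linuxtechi.local:9200/_cat/health?v"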
Note: If you want to change the JVM heap size, you can edit the file /etc/elasticsearch/jvm.options and set values that suit your environment.
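For reference, the heap is controlled by the -Xms (initial) and -Xmx (maximum) settings in that file. The 1 GB values below are only an illustration, not a recommendation for your hardware; a common rule of thumb is to keep both values equal and no larger than about half of the node's RAM.

[root@linuxtechi ~]# vim /etc/elasticsearch/jvm.options

# Example heap settings: 1 GB initial and maximum heap
-Xms1g
-Xmx1g

# Restart the service so the new heap size takes effect
[root@linuxtechi ~]# systemctl restart elasticsearch.service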
Now let's move on to the Logstash nodes.

Install and configure Logstash

Perform the following steps on both Logstash nodes. Log in to both nodes and use the hostnamectl command to set the hostnames:

[root@linuxtechi ~]# hostnamectl set-hostname "logstash1.linuxtechi.local"
[root@linuxtechi ~]# exec bash

[root@linuxtechi ~]# hostnamectl set-hostname "logstash2.linuxtechi.local"
[root@linuxtechi ~]# exec bash

Add the following entries to the /etc/hosts file on both nodes:

~]# vi /etc/hosts

192.168.56.40 elasticsearch1.linuxtechi.local
192.168.56.50 elasticsearch2.linuxtechi.local
192.168.56.60 elasticsearch3.linuxtechi.local

Save the file and exit.

Configure the Logstash repository on both nodes. Create a file logstash.repo under /etc/yum.repos.d/ with the following content:

~]# vi /etc/yum.repos.d/logstash.repo

[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Save and exit the file, then run the following rpm command to import the signing key:

~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Install Java OpenJDK on both nodes using the dnf command:

~]# dnf install java-openjdk -y

Run the following dnf command on both nodes to install Logstash:

[root@linuxtechi ~]# dnf install logstash -y

Now configure Logstash. Perform the following steps on both Logstash nodes to create a Logstash configuration file. First, copy the Logstash sample file into the conf.d directory:

# cd /etc/logstash/
# cp logstash-sample.conf conf.d/logstash.conf

Edit the configuration file and update the following content:

# vi conf.d/logstash.conf

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch1.linuxtechi.local:9200", "http://elasticsearch2.linuxtechi.local:9200", "http://elasticsearch3.linuxtechi.local:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}

Under the output section, set the hosts parameter to the FQDNs of all three Elasticsearch nodes; the other parameters can be left as they are.

Use the firewall-cmd command to allow the Logstash port '5044' in the OS firewall:

~]# firewall-cmd --permanent --add-port=5044/tcp
~]# firewall-cmd --reload

Now run the following systemctl commands on both nodes to start and enable the Logstash service:

~]# systemctl start logstash
~]# systemctl enable logstash

Use the ss command to verify that the Logstash service is listening on port 5044:

[root@linuxtechi ~]# ss -tunlp | grep 5044
tcp LISTEN 0 128 *:5044 *:* users:(("java",pid=2416,fd=96))
[root@linuxtechi ~]#

The above output indicates that Logstash has been installed and configured successfully.
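As an optional sanity check, the pipeline file can also be validated with Logstash's built-in config test before relying on it. This is a minimal sketch; the binary path shown is the default install location for the RPM package, and the test takes a moment because it starts a JVM.

# Parse /etc/logstash/conf.d/logstash.conf and exit without starting the pipeline
[root@linuxtechi ~]# /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/logstash.conf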
Now let's move on to the Kibana installation.

Install and configure Kibana

Log in to the Kibana node and use the hostnamectl command to set the hostname:

[root@linuxtechi ~]# hostnamectl set-hostname "kibana.linuxtechi.local"
[root@linuxtechi ~]# exec bash

Edit the /etc/hosts file and add the following lines:

192.168.56.40 elasticsearch1.linuxtechi.local
192.168.56.50 elasticsearch2.linuxtechi.local
192.168.56.60 elasticsearch3.linuxtechi.local

Set up the Kibana repository using the following commands:

[root@linuxtechi ~]# vi /etc/yum.repos.d/kibana.repo

[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

[root@linuxtechi ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Run the following yum command to install Kibana:

[root@linuxtechi ~]# yum install kibana -y

Configure Kibana by editing the file /etc/kibana/kibana.yml:

[root@linuxtechi ~]# vim /etc/kibana/kibana.yml
…………
server.host: "kibana.linuxtechi.local"
server.name: "kibana.linuxtechi.local"
elasticsearch.hosts: ["http://elasticsearch1.linuxtechi.local:9200", "http://elasticsearch2.linuxtechi.local:9200", "http://elasticsearch3.linuxtechi.local:9200"]
…………

Start and enable the Kibana service:

[root@linuxtechi ~]# systemctl start kibana
[root@linuxtechi ~]# systemctl enable kibana

Allow the Kibana port '5601' on the system firewall:

[root@linuxtechi ~]# firewall-cmd --permanent --add-port=5601/tcp
success
[root@linuxtechi ~]# firewall-cmd --reload
success
[root@linuxtechi ~]#

Access the Kibana interface using the following URL: http://kibana.linuxtechi.local:5601

From the dashboard, we can check the status of the Elastic Stack cluster. This proves that we have successfully installed and set up a multi-node Elastic Stack cluster on RHEL 8 / CentOS 8.

Now let's send some logs from another Linux server to the Logstash nodes through Filebeat. In my case, the client is a CentOS 7 server.

Log in to the CentOS 7 server and use the rpm command to install the Filebeat package:

[root@linuxtechi ~]# rpm -ivh https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.1-x86_64.rpm
Retrieving https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.1-x86_64.rpm
Preparing... ################################# [100%]
Updating / installing...
1:filebeat-7.3.1-1 ################################### [100%]
[root@linuxtechi ~]#

Edit the /etc/hosts file and add the following entries:

192.168.56.20 logstash1.linuxtechi.local
192.168.56.30 logstash2.linuxtechi.local

Now configure Filebeat so that it sends logs to the Logstash nodes with load balancing. In the file /etc/filebeat/filebeat.yml, add the following parameters:

[root@linuxtechi ~]# vi /etc/filebeat/filebeat.yml

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/messages
    - /var/log/dmesg
    - /var/log/maillog
    - /var/log/boot.log

#output.elasticsearch:
#  hosts: ["localhost:9200"]

output.logstash:
  hosts: ["logstash1.linuxtechi.local:5044", "logstash2.linuxtechi.local:5044"]
  loadbalance: true

Use the following two systemctl commands to start and enable the Filebeat service:

[root@linuxtechi ~]# systemctl start filebeat
[root@linuxtechi ~]# systemctl enable filebeat

Now go to the Kibana UI and verify that the new index is visible. Select the Management option from the left sidebar and click Index Management under Elasticsearch. As we can see, the index is now visible; let's create the index pattern next.

Click "Index Patterns" in the Kibana section; it will prompt us to create a new pattern. Click "Create Index Pattern" and specify the pattern name as "filebeat".

Click Next, select Timestamp as the time filter for the index pattern, and click "Create index pattern".

Now open Discover to view the real-time Filebeat index data. This indicates that the Filebeat agent has been configured successfully and we can see real-time logs on the Kibana dashboard.
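If the filebeat index does not show up in Kibana, Filebeat's built-in test subcommands and the Elasticsearch _cat/indices API are handy for narrowing down where the chain breaks. A minimal sketch, run on the CentOS 7 client and against any Elasticsearch node respectively; the exact output depends on your environment.

# Validate the syntax of /etc/filebeat/filebeat.yml
[root@linuxtechi ~]# filebeat test config

# Check connectivity from the client to the configured Logstash outputs on port 5044
[root@linuxtechi ~]# filebeat test output

# From any node, confirm that a filebeat-* index has been created in Elasticsearch
[root@linuxtechi ~]# curl -X GET "http://elasticsearch1.linuxtechi.local:9200/_cat/indices?v" | grep filebeat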
That's all for this article. Don't hesitate to share your feedback and comments if these steps helped you set up an Elastic Stack cluster on RHEL 8 / CentOS 8 systems.

via: https://www.linuxtechi.com/setup-multinode-elastic-stack-cluster-rhel8-centos8/