Best Practices for Deploying ELK7.3.0 Log Collection Service with Docker

A Note Before You Start

This article covers ELK 7.3.0 deployment only!

Deployment environment:

System: CentOS 7
Docker: version 19.03.5
CPU: 2 cores
Memory: 2.5 GB
Disk: 30 GB (recommended; insufficient disk space may cause Elasticsearch to report errors)
Filebeat: v7.3.0, single node
Elasticsearch: v7.3.0, two nodes
Kibana: v7.3.0, single node
Logstash: v7.3.1, single node

ELK distributed cluster deployment solution

By default, the Linux kernel's limit on virtual memory areas is too low for Elasticsearch; at least 262144 is required, otherwise Elasticsearch fails with the error (max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]). So modify the system configuration first.

# Modify the configuration sysctl.conf
vi /etc/sysctl.conf
# Add the following configuration:
vm.max_map_count=262144
# Reload:
sysctl -p
# Finally, restart elasticsearch to start successfully.
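To confirm the kernel setting took effect before restarting Elasticsearch, you can read the live value back from /proc. Here is a minimal Python sketch; the helper name is made up, and the hard-coded minimum (262144) mirrors the error message above:

```python
import os

ES_MIN_MAP_COUNT = 262144  # minimum required by Elasticsearch, per the error above

def map_count_ok(raw: str, minimum: int = ES_MIN_MAP_COUNT) -> bool:
    """Return True if a sysctl value string meets the required minimum."""
    return int(raw.strip()) >= minimum

# On a Linux host, read back the live value to confirm `sysctl -p` took effect
path = "/proc/sys/vm/max_map_count"
if os.path.exists(path):
    with open(path) as f:
        print("vm.max_map_count OK:", map_count_ok(f.read()))
```

Equivalently, `sysctl vm.max_map_count` on the host shows the same value.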

The environment is deployed using Docker. In order to use Docker commands more conveniently, we install the bash-completion auto-completion plug-in:

# Install the dependent tool bash-complete
yum install -y bash-completion
source /usr/share/bash-completion/completions/docker
source /usr/share/bash-completion/bash_completion

Deployment order: ES --> Kibana --> Logstash --> Filebeat

ElasticSearch 7.3.0 deployment

Master node deployment

Create configuration files and data storage directories

mkdir -p /mnt/es1/master/{conf,data,logs}
vim /mnt/es1/master/conf/es-master.yml

es-master.yml configuration

# Cluster name
cluster.name: es-cluster
# Node name
node.name: es-master
# Whether the node can become a master node
node.master: true
# Whether the node is allowed to store data (enabled by default)
node.data: false
# Network binding
network.host: 0.0.0.0
# HTTP port for external services
http.port: 9200
# TCP port for communication between nodes
transport.port: 9300
# Cluster discovery
discovery.seed_hosts:
 - 172.17.0.2:9300
 - 172.17.0.3:9301
# Manually specify the names or IP addresses of all master-eligible nodes; these are used in the first election
cluster.initial_master_nodes:
 - 172.17.0.2
# Allow cross-origin access
http.cors.enabled: true
http.cors.allow-origin: "*"
# Security authentication
xpack.security.enabled: false
#http.cors.allow-headers: "Authorization"
bootstrap.memory_lock: false
bootstrap.system_call_filter: false

# Solve cross-origin issues
#http.cors.enabled: true
#http.cors.allow-origin: "*"
#http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE
#http.cors.allow-headers: "X-Requested-With, Content-Type, Content-Length, X-User"

It will be a little slow when pulling the image, so be patient!

# Pull the image (you can skip this step and build the container directly)
docker pull elasticsearch:7.3.0

# Build the container
# Map port 5601 here to reserve it for Kibana
docker run -d -e ES_JAVA_OPTS="-Xms256m -Xmx256m" \
-p 9200:9200 -p 9300:9300 -p 5601:5601 \
-v /mnt/es1/master/conf/es-master.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v /mnt/es1/master/data:/usr/share/elasticsearch/data \
-v /mnt/es1/master/logs:/usr/share/elasticsearch/logs \
-v /etc/localtime:/etc/localtime \
--name es-master elasticsearch:7.3.0

Mounting /etc/localtime:/etc/localtime keeps the host and container time synchronized.

Slave node deployment

Create configuration files and data storage directories

mkdir -p /mnt/es1/slave1/{conf,data,logs}
vim /mnt/es1/slave1/conf/es-slave1.yml

es-slave1.yml configuration

# Cluster name
cluster.name: es-cluster
# Node name
node.name: es-slave1
# Whether the node can become a master node
node.master: true
# Whether the node is allowed to store data (enabled by default)
node.data: true
# Network binding
network.host: 0.0.0.0
# HTTP port for external services
http.port: 9201
# TCP port for communication between nodes
transport.port: 9301
# Cluster discovery
discovery.seed_hosts:
 - 172.17.0.2:9300
 - 172.17.0.3:9301
# Manually specify the names or IP addresses of all master-eligible nodes; these are used in the first election
cluster.initial_master_nodes:
 - 172.17.0.2
# Allow cross-origin access
http.cors.enabled: true
http.cors.allow-origin: "*"
# Security authentication
xpack.security.enabled: false
#http.cors.allow-headers: "Authorization"
bootstrap.memory_lock: false
bootstrap.system_call_filter: false

It will be a little slow when pulling the image, so be patient!

# Pull the image (you can skip this step and build the container directly)
docker pull elasticsearch:7.3.0

# Build the container
docker run -d -e ES_JAVA_OPTS="-Xms256m -Xmx256m" \
-p 9201:9200 -p 9301:9300 \
-v /mnt/es1/slave1/conf/es-slave1.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v /mnt/es1/slave1/data:/usr/share/elasticsearch/data \
-v /mnt/es1/slave1/logs:/usr/share/elasticsearch/logs \
-v /etc/localtime:/etc/localtime \
--name es-slave1 elasticsearch:7.3.0

Modify the configuration and restart the container

# View the master and slave container IP
docker inspect es-master
docker inspect es-slave1 

Modify the discovery.seed_hosts and cluster.initial_master_nodes in the ES configuration files es-master.yml and es-slave1.yml to the corresponding IP! Restart the container:

docker restart es-master

docker restart es-slave1

# View es logs
docker logs -f --tail 100 es-master

Visit http://IP:9200/_cat/nodes to check the ES cluster information; if both the master and slave nodes are listed, they were deployed successfully.

Commonly used APIs for node deployment:

API                            Function
http://IP:9200                 View ES version information
http://IP:9200/_cat/nodes      View all nodes
http://IP:9200/_cat/indices    View all indexes
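The `_cat/nodes` endpoint returns plain-text columns rather than JSON. As a rough sketch of how to pick out the node name and elected-master marker from that output (the sample response lines below are fabricated for illustration, using the default column order ip, heap.percent, ram.percent, cpu, load_1m, load_5m, load_15m, node.role, master, name):

```python
# Sketch: parse the default plain-text output of GET /_cat/nodes.
# The sample lines below are made up for illustration.

def parse_cat_nodes(text: str):
    nodes = []
    for line in text.strip().splitlines():
        cols = line.split()
        nodes.append({
            "ip": cols[0],
            "node.role": cols[-3],
            "is_master": cols[-2] == "*",  # "*" marks the elected master
            "name": cols[-1],
        })
    return nodes

sample = """172.17.0.2 40 95 2 0.10 0.12 0.08 dim * es-master
172.17.0.3 35 95 1 0.10 0.12 0.08 dim - es-slave1"""

for n in parse_cat_nodes(sample):
    print(n["name"], "elected master" if n["is_master"] else "-")
```

In a real check you would feed the body of `curl http://IP:9200/_cat/nodes` into `parse_cat_nodes` instead of the sample string.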

Kibana 7.3.0 deployment

Create Kibana Configuration File

vim /mnt/kibana.yml
#
## ** THIS IS AN AUTO-GENERATED FILE **
##
#
## Default Kibana configuration for docker target
server.name: kibana
# Configure Kibana's remote access
server.host: "0.0.0.0"
# Configure the es access address
elasticsearch.hosts: [ "http://127.0.0.1:9200" ]
# Chinese interface
i18n.locale: "zh-CN"

#xpack.monitoring.ui.container.elasticsearch.enabled: true

View the es-master container ID

docker ps | grep es-master 

Deploy Kibana

Note: change 40eff5876ffd in the command below to your es-master container ID. Pulling the image may take a while, so be patient!

# Pull the image (you can skip this step and build the container directly)
docker pull docker.elastic.co/kibana/kibana:7.3.0

# Build the container
# --network=container means sharing another container's network
docker run -it -d \
-v /mnt/kibana.yml:/usr/share/kibana/config/kibana.yml \
-v /etc/localtime:/etc/localtime \
-e ELASTICSEARCH_URL=http://172.17.0.2:9200 \
--network=container:40eff5876ffd \
--name kibana docker.elastic.co/kibana/kibana:7.3.0

Check the Kibana container log; if it shows that the Kibana server is running, the startup was successful.

docker logs -f --tail 100 kibana

Visit http://IP:5601. A 503 may appear at first; wait a moment and try again. If you can reach the Kibana console, Kibana is installed and has established a connection with es-master.

Logstash 7.3.1 deployment

Writing the Logstash Configuration File

vim /mnt/logstash-filebeat.conf

input {
  # Source: beats
  beats {
    # Port
    port => "5044"
  }
}
# Analysis and filtering plugins (multiple filters are possible)
filter {
  grok {
    # Where grok expressions are stored
    patterns_dir => "/grok"

    # grok expression rewrite
    # match => {"message" => "%{SYSLOGBASE} %{DATA:message}"}

    # Delete the native message field
    overwrite => ["message"]

    # Define your own format
    match => {
      "message" => "%{URIPATH:request} %{IP:clientip} %{NUMBER:response:int} \"%{WORD:sources}\" (?:%{URI:referrer}|-) \[%{GREEDYDATA:agent}\] \{%{GREEDYDATA:params}\}"
    }
  }
  # Query classification plugin
  geoip {
    source => "message"
  }
}
output {
  # Output to elasticsearch
  elasticsearch {
    # es cluster
    hosts => ["http://172.17.0.2:9200"]
    #username => "root"
    #password => "123456"

    # Index format
    index => "omc-block-server-%{[@metadata][version]}-%{+YYYY.MM.dd}"

    # Set to true so that a custom template named logstash overwrites the default logstash template
    template_overwrite => true
  }
}
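Grok patterns compile down to regular expressions, so the match above can be sanity-checked locally. The sketch below is a simplified Python stand-in for the grok aliases (URIPATH, IP, NUMBER, WORD, URI, GREEDYDATA), not the exact grok-compiled regex, and the sample log line is invented for illustration:

```python
import re

# Simplified stand-ins for the grok aliases used in the pipeline above.
# This is an approximation for local testing, not the exact grok-compiled regex.
LOG_RE = re.compile(
    r'^(?P<request>/\S*) '
    r'(?P<clientip>\d{1,3}(?:\.\d{1,3}){3}) '
    r'(?P<response>\d+) '
    r'"(?P<sources>\w+)" '
    r'(?P<referrer>\S+) '
    r'\[(?P<agent>[^\]]*)\] '
    r'\{(?P<params>.*)\}$'
)

# Fabricated sample line matching the pattern's shape
sample = '/api/order 172.17.0.9 200 "WEB" http://example.com/home [Mozilla/5.0] {id=42}'

m = LOG_RE.match(sample)
if m:
    print(m.group("request"), m.group("clientip"), m.group("response"))
```

Testing the expression this way before deploying avoids restarting Logstash repeatedly just to debug a grok mismatch.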

Deploy Logstash

# Pull the image (you can skip this step and build the container directly)
docker pull logstash:7.3.1

# Build the container
# xpack.monitoring.enabled turns on X-Pack's security and monitoring services
# xpack.monitoring.elasticsearch.hosts sets the ES address; 172.17.0.2 is the es-master container IP
# Docker allows commands to run when the container starts: -f means running logstash with the specified
# configuration file; /usr/share/logstash/config/logstash-sample.conf is a file inside the container
docker run -p 5044:5044 -d \
-v /mnt/logstash-filebeat.conf:/usr/share/logstash/config/logstash-sample.conf \
-v /etc/localtime:/etc/localtime \
-e elasticsearch.hosts=http://172.17.0.2:9200 \
-e xpack.monitoring.enabled=true \
-e xpack.monitoring.elasticsearch.hosts=http://172.17.0.2:9200 \
--name logstash logstash:7.3.1 -f /usr/share/logstash/config/logstash-sample.conf

Pay attention to the ES cluster address here: we only configure the es-master IP (172.17.0.2). For detailed Logstash configuration, refer to the official documentation. If the container log shows that the Logstash pipeline has started, the installation succeeded.

Filebeat 7.3.0 deployment

Filebeat is not a necessary component. We can also use Logstash to transfer logs.

For example, to merge all logs that do not begin with "20", you can use the following Logstash configuration:

input {
  # Source: beats
  beats {
    # Port
    port => "5044"
  }
  file {
    type => "server-log"
    path => "/logs/*.log"
    start_position => "beginning"
    codec => multiline {
        # Regular expression matching log lines with the prefix "20". If your logs have a prefix like "[2020-06-15", replace it with "^["
        pattern => "^20"
        # Whether to negate the pattern
        negate => true
        # previous means merge into the previous line, next means merge into the next line
        what => "previous"
    }
  }
}
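The multiline rule above (negate the ^20 pattern, merge into previous) can be mimicked in a few lines of Python to see how a stack trace collapses into a single event. The sample log lines are invented for illustration:

```python
import re

# Mimic the multiline codec above: lines NOT starting with "20" (negate => true)
# are merged into the previous event (what => "previous").
PATTERN = re.compile(r"^20")

def merge_multiline(lines):
    events = []
    for line in lines:
        if PATTERN.match(line) or not events:
            events.append(line)            # a new log event starts here
        else:
            events[-1] += "\n" + line      # continuation: append to the previous event
    return events

# Fabricated sample: one normal line, then an exception with a stack trace
sample = [
    "2020-06-15 10:00:01 INFO starting job",
    "2020-06-15 10:00:02 ERROR job failed",
    "java.lang.NullPointerException",
    "    at com.example.Job.run(Job.java:42)",
]

events = merge_multiline(sample)
print(len(events))  # prints 2: the stack trace collapsed into the ERROR event
```

Without this merging, each stack-trace line would become its own document in ES, which makes exception logs nearly unreadable in Kibana.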

Note that Filebeat must be deployed on the same server as the application. Here the application is deployed with Docker, and /mnt/omc-dev/logs is the host directory mapped to the application's log files. If you also deploy your service with Docker, remember to map the log files out via [-v /mnt/omc-dev/logs:/app/logs]!

Create a Filebeat configuration file

## /mnt/omc-dev/logs is the application log directory; the application's log directory must be mapped out
mkdir -p {/mnt/omc-dev/logs,/mnt/filebeat/logs,/mnt/filebeat/data}
vim /mnt/filebeat/filebeat.yml
filebeat.inputs:
- type: log
 enabled: true
 paths:
  # All .log files in the application log directory
  - /home/project/spring-boot-elasticsearch/logs/*.log
 multiline.pattern: '^20'
 multiline.negate: true
 multiline.match: previous

logging.level: debug

filebeat.config.modules:
 path: ${path.config}/modules.d/*.yml
 reload.enabled: false

setup.template.settings:
 index.number_of_shards: 1

setup.dashboards.enabled: false

setup.kibana:
 host: "http://172.17.0.2:5601"

# Not directly transferred to ES
#output.elasticsearch:
# hosts: ["http://es-master:9200"]
# index: "filebeat-%{[beat.version]}-%{+yyyy.MM.dd}"

output.logstash:
 hosts: ["172.17.0.5:5044"]

#scan_frequency: 1s
close_inactive: 12h
backoff: 1s
max_backoff: 1s
backoff_factor: 1
flush.timeout: 1s

processors:
 - add_host_metadata: ~
 - add_cloud_metadata: ~

Note that you need to modify the Logstash IP and port.

# Pull the image (you can skip this step and build the container directly)
docker pull docker.elastic.co/beats/filebeat:7.3.0

# Build the container
# --link logstash connects this container to the logstash container by name (an alias can be set),
# avoiding broken connections when container IPs change dynamically; logstash is the container name
docker run -d -v /mnt/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml \
-v /mnt/omc-dev/logs:/home/project/spring-boot-elasticsearch/logs \
-v /mnt/filebeat/logs:/usr/share/filebeat/logs \
-v /mnt/filebeat/data:/usr/share/filebeat/data \
-v /etc/localtime:/etc/localtime \
--link logstash --name filebeat docker.elastic.co/beats/filebeat:7.3.0

Check the logs. We set the log level of Filebeat to debug in the configuration file, so all the collected information will be displayed.

docker logs -f --tail 100 filebeat

Querying the ES indexes now shows three new indexes. Because we configured daily index splitting and my environment has been running for three days, there are three omc service indexes (omc is a scheduled task service; you can also write a simple scheduled task for testing).
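The index name in the Logstash output above appends the event date, so one index is created per day. A quick sketch of how that naming rolls over (the service name comes from the pipeline config above; the version string and dates are illustrative):

```python
from datetime import date, timedelta

# Sketch: how the daily index naming in the Logstash output rolls over.
# "omc-block-server" is from the pipeline config above; the version and dates are illustrative.
def index_name(service: str, version: str, day: date) -> str:
    return f"{service}-{version}-{day.strftime('%Y.%m.%d')}"

start = date(2020, 6, 15)
for i in range(3):  # three days of uptime -> three daily indexes
    print(index_name("omc-block-server", "7.3.0", start + timedelta(days=i)))
```

Daily indexes keep each index small and make it easy to drop old log data by deleting whole indexes.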

Next, we create a Kibana index pattern and perform log queries:

After the index is created, you can query logs by index pattern in the Discover view.

The article ends here. If you have other services to onboard, you only need to mount their logs to the specified directory. Of course, if a service runs on another server, you will need to deploy Filebeat there and ensure network connectivity between the servers.

Finally, here is an open source ELK automated Docker deployment project: https://github.com/deviantony/docker-elk.git

--------------------------------------------------------

Updated on June 28, 2020

Recently, a physical memory surge problem caused by Logstash occurred.

Let me briefly explain the main issues:

Currently, the daily log volume of a single service is about 2.2 GB. Since Logstash memory was not limited early on, Logstash consumed memory and I/O aggressively when large amounts of data came in.

Recently, the application service traffic on the same server has increased, which eventually led to insufficient memory and an OutOfMemoryError problem.

Later, the legacy issues were resolved by tuning the JVM memory (I won't go into details; there is plenty of information online) and adding a memory limit to the Logstash configuration.

Finally, add Logstash monitoring to Kibana (you can also ship Logstash's own logs to ES):

https://blog.csdn.net/QiaoRui_/article/details/97667293
