Docker Compose one-click ELK deployment method implementation


Install

Filebeat has replaced Logstash-Forwarder as the new generation of log shipper because it is lighter and more secure. The deployment solution here is based on Filebeat + ELK; the original architecture diagram is not reproduced, but the data flow it depicts is: Filebeat (on each application server) ships logs over SSL to Logstash, which parses them and writes them to the Elasticsearch cluster, which Kibana then visualizes.

Software versions:

Service           Version
CentOS            7.6
Docker            18.09.5
Docker Compose    1.25.0
ELK               7.5.1
Filebeat          7.5.1

docker-compose.yml

version: "3"
services:
 es-master:
  container_name: es-master
  hostname: es-master
  image: elasticsearch:7.5.1
  restart: always
  ports:
   - 9200:9200
   - 9300:9300
  volumes:
   - ./elasticsearch/master/conf/es-master.yml:/usr/share/elasticsearch/config/elasticsearch.yml
   - ./elasticsearch/master/data:/usr/share/elasticsearch/data
   - ./elasticsearch/master/logs:/usr/share/elasticsearch/logs
  environment:
   - "ES_JAVA_OPTS=-Xms512m -Xmx512m"

 es-slave1:
  container_name: es-slave1
  image: elasticsearch:7.5.1
  restart: always
  ports:
   - 9201:9200
   - 9301:9300
  volumes:
   - ./elasticsearch/slave1/conf/es-slave1.yml:/usr/share/elasticsearch/config/elasticsearch.yml
   - ./elasticsearch/slave1/data:/usr/share/elasticsearch/data
   - ./elasticsearch/slave1/logs:/usr/share/elasticsearch/logs
  environment:
   - "ES_JAVA_OPTS=-Xms512m -Xmx512m"

 es-slave2:
  container_name: es-slave2
  image: elasticsearch:7.5.1
  restart: always
  ports:
   - 9202:9200
   - 9302:9300
  volumes:
   - ./elasticsearch/slave2/conf/es-slave2.yml:/usr/share/elasticsearch/config/elasticsearch.yml
   - ./elasticsearch/slave2/data:/usr/share/elasticsearch/data
   - ./elasticsearch/slave2/logs:/usr/share/elasticsearch/logs
  environment:
   - "ES_JAVA_OPTS=-Xms512m -Xmx512m"

 kibana:
  container_name: kibana
  hostname: kibana
  image: kibana:7.5.1
  restart: always
  ports:
   - 5601:5601
  volumes:
   - ./kibana/conf/kibana.yml:/usr/share/kibana/config/kibana.yml
  environment:
   - ELASTICSEARCH_HOSTS=http://es-master:9200
  depends_on:
   - es-master
   - es-slave1
   - es-slave2

 # filebeat:
 #  container_name: filebeat
 #  hostname: filebeat
 #  image: docker.elastic.co/beats/filebeat:7.5.1
 #  restart: always
 #  volumes:
 #   - ./filebeat/conf/filebeat.yml:/usr/share/filebeat/filebeat.yml
 #   # Mapped into the container as the data source
 #   - ./logs:/home/project/spring-boot-elasticsearch/logs
 #   - ./filebeat/logs:/usr/share/filebeat/logs
 #   - ./filebeat/data:/usr/share/filebeat/data
 #  # Link to the logstash container; the alias avoids connection failures when container IPs change
 #  links:
 #   - logstash
 #  # Service dependencies [optional]
 #  depends_on:
 #   - es-master
 #   - es-slave1
 #   - es-slave2

 logstash:
  container_name: logstash
  hostname: logstash
  image: logstash:7.5.1
  command: logstash -f ./conf/logstash-filebeat.conf
  restart: always
  volumes:
   # Map the pipeline config into the container
   - ./logstash/conf/logstash-filebeat.conf:/usr/share/logstash/conf/logstash-filebeat.conf
   - ./logstash/ssl:/usr/share/logstash/ssl
  environment:
   # Fixes the Logstash x-pack monitoring connection error
   - XPACK_MONITORING_ELASTICSEARCH_HOSTS=http://es-master:9200
  ports:
   - 5044:5044
  depends_on:
   - es-master
   - es-slave1
   - es-slave2
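
If the Elasticsearch containers exit during startup with the bootstrap error "max virtual memory areas vm.max_map_count [65530] is too low" (a common requirement when running Elasticsearch in Docker), raise the limit on the Docker host:

$ sysctl -w vm.max_map_count=262144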

Filebeat is commented out here because it is planned to be deployed separately on each server whose logs need to be collected.

Remember to chmod 777 the Elasticsearch data and logs directories, so the containers (which run Elasticsearch as a non-root user) can write to them.
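
For example, assuming the host directory layout mounted in the compose file above:

$ mkdir -p elasticsearch/{master,slave1,slave2}/{data,logs}
$ chmod -R 777 elasticsearch/master elasticsearch/slave1 elasticsearch/slave2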

es-master.yml

# Cluster name
cluster.name: es-cluster
# Node name
node.name: es-master
# Whether this node can be elected master
node.master: true
# Whether this node stores data (enabled by default)
node.data: false
# Network binding
network.host: 0.0.0.0
# HTTP port for external access
http.port: 9200
# TCP port for inter-node communication
transport.port: 9300
# Cluster discovery
discovery.seed_hosts:
 - es-master
 - es-slave1
 - es-slave2
# Names or IP addresses of all master-eligible nodes, used in the first election
cluster.initial_master_nodes:
 - es-master
# Allow cross-origin access
http.cors.enabled: true
http.cors.allow-origin: "*"
# Security authentication
xpack.security.enabled: false
#http.cors.allow-headers: "Authorization"

es-slave1.yml

# Cluster name
cluster.name: es-cluster
# Node name
node.name: es-slave1
# Whether this node can be elected master
node.master: true
# Whether this node stores data (enabled by default)
node.data: true
# Network binding
network.host: 0.0.0.0
# HTTP port inside the container (docker-compose maps it to host port 9201)
http.port: 9200
# TCP port for inter-node communication
#transport.port: 9301
# Cluster discovery
discovery.seed_hosts:
 - es-master
 - es-slave1
 - es-slave2
# Names or IP addresses of all master-eligible nodes, used in the first election
cluster.initial_master_nodes:
 - es-master
# Allow cross-origin access
http.cors.enabled: true
http.cors.allow-origin: "*"
# Security authentication
xpack.security.enabled: false
#http.cors.allow-headers: "Authorization"

es-slave2.yml

# Cluster name
cluster.name: es-cluster
# Node name
node.name: es-slave2
# Whether this node can be elected master
node.master: true
# Whether this node stores data (enabled by default)
node.data: true
# Network binding
network.host: 0.0.0.0
# HTTP port inside the container (docker-compose maps it to host port 9202)
http.port: 9200
# TCP port for inter-node communication
#transport.port: 9302
# Cluster discovery
discovery.seed_hosts:
 - es-master
 - es-slave1
 - es-slave2
# Names or IP addresses of all master-eligible nodes, used in the first election
cluster.initial_master_nodes:
 - es-master
# Allow cross-origin access
http.cors.enabled: true
http.cors.allow-origin: "*"
# Security authentication
xpack.security.enabled: false
#http.cors.allow-headers: "Authorization"

logstash-filebeat.conf

input {
  # Source: beats
  beats {
    # Port
    port => "5044"
    ssl_certificate_authorities => ["/usr/share/logstash/ssl/ca.crt"]
    ssl_certificate => "/usr/share/logstash/ssl/server.crt"
    ssl_key => "/usr/share/logstash/ssl/server.key"
    # force_peer requires the client (Filebeat) to present a valid certificate
    ssl_verify_mode => "force_peer"
  }
}
# Parsing and filtering plugins; multiple filters are possible
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  geoip {
    source => "clientip"
  }
}
output {
  # Output to elasticsearch
  elasticsearch {
    hosts => ["http://es-master:9200"]
    index => "%{[fields][service]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
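
The COMBINEDAPACHELOG pattern matches the standard Apache/Nginx combined access-log format, and geoip then looks up the clientip field that grok extracts. For reference, a line that this pipeline would parse looks like (illustrative values):

127.0.0.1 - - [15/Jan/2020:10:00:00 +0800] "GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0"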

filebeat.yml

filebeat.inputs:
  - type: log
    enabled: true
    paths:
      # All .log files under this directory
      - /root/tmp/logs/*.log
    fields:
      service: "our31-java"
    # Lines that do not start with "[" are appended to the previous line (e.g. Java stack traces)
    multiline.pattern: ^\[
    multiline.negate: true
    multiline.match: after
  - type: log
    enabled: true
    paths:
      # All .log files under this directory
      - /root/tmp/log/*.log
    fields:
      service: "our31-nginx"

filebeat.config.modules:
 path: ${path.config}/modules.d/*.yml
 reload.enabled: false

# setup.template.settings:
# index.number_of_shards: 1

# setup.dashboards.enabled: false

# setup.kibana:
# host: "http://localhost:5601"

# Not directly transferred to ES
#output.elasticsearch:
# hosts: ["http://es-master:9200"]
# index: "filebeat-%{[beat.version]}-%{+yyyy.MM.dd}"

setup.ilm.enabled: false

output.logstash:
 hosts: ["logstash.server.com:5044"]
 
 # Optional SSL. By default is off.
 # List of root certificates for HTTPS server verifications
 ssl.certificate_authorities: "./ssl/ca.crt"
 # Certificate for SSL client authentication
 ssl.certificate: "./ssl/client.crt"
 # Client Certificate Key
 ssl.key: "./ssl/client.key"

# processors:
# - add_host_metadata: ~
# - add_cloud_metadata: ~

Notice

Generate certificates and configure SSL to establish a mutually authenticated connection between Filebeat and Logstash.

# Generate the CA private key
openssl genrsa 2048 > ca.key

# Use the CA private key to create the CA certificate
openssl req -new -x509 -nodes -key ca.key -subj /CN=elkCA\ CA/OU=Development\ group/O=HomeIT\ SIA/DC=elk/DC=com > ca.crt

# Generate the server CSR (certificate signing request)
openssl req -newkey rsa:2048 -nodes -keyout server.key -subj /CN=logstash.server.com/OU=Development\ group/O=Home\ SIA/DC=elk/DC=com > server.csr

# Issue the server certificate with the CA certificate and private key
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 > server.crt

# Generate the client CSR (certificate signing request)
openssl req -newkey rsa:2048 -nodes -keyout client.key -subj /CN=filebeat.client.com/OU=Development\ group/O=Home\ SIA/DC=elk/DC=com > client.csr

# Issue the client certificate with the CA certificate and private key
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -set_serial 01 > client.crt

Remember to put the certificates in the corresponding directories: the server files under ./logstash/ssl (mounted into the Logstash container) and the client files under ./ssl next to the Filebeat binary.

The domain name configured in output.logstash.hosts in Filebeat must match the CN of the server certificate (logstash.server.com above).
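
You can check which CN a certificate was issued for with standard openssl:

$ openssl x509 -in server.crt -noout -subject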

Dynamically generate indexes based on different servers, different services, and different dates

In the filebeat.yml above, custom attributes are added under fields (for example, service: "our31-java"). These attributes are passed along to Logstash, which uses them to build the Elasticsearch index name dynamically via the index setting in logstash-filebeat.conf.
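
With that index pattern, a log event whose fields.service is our31-java, shipped by Filebeat 7.5.1 on 15 January 2020 (date illustrative), would be written to an index named:

our31-java-7.5.1-2020.01.15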

For a detailed introduction, see the official documentation on the fields option, the @metadata field, and dynamic index naming.

I originally wanted to use the indices option to generate the indexes dynamically here, but following the official configuration it did not work. Can anyone tell me why?

Use Nginx HTTP Basic Authentication to require login for Kibana

First install httpd-tools, which provides the htpasswd utility for generating user credentials:

$ yum -y install httpd-tools

Create a new password file:
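
A minimal example (the path matches the auth_basic_user_file directive below; kibanauser is a placeholder username):

$ htpasswd -c /usr/local/nginx/pwd/kibana/passwd kibanauser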

Append additional users (omit -c so the existing file is not overwritten):
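
For example (anotheruser is a placeholder):

$ htpasswd /usr/local/nginx/pwd/kibana/passwd anotheruser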

Finally, configure Nginx:

server {
  ......
  
  auth_basic "Kibana Auth";
  auth_basic_user_file /usr/local/nginx/pwd/kibana/passwd;
  
  ......
}
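
The "......" above stands for your existing server configuration. A minimal sketch of the proxy section that usually accompanies it, assuming Kibana is reachable on localhost:5601 as in the compose file:

location / {
  proxy_pass http://localhost:5601;
  proxy_set_header Host $host;
  proxy_set_header X-Real-IP $remote_addr;
}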

How to start Filebeat separately

$ nohup ./filebeat 2>&1 &
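
With nohup, output is appended to nohup.out in the working directory. While debugging, you can instead run Filebeat in the foreground with logging to stderr:

$ ./filebeat -e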

Start Docker Compose

Execute in the directory where docker-compose.yml is located:

$ docker-compose up --build -d
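
Once the containers are up, you can verify that all three Elasticsearch nodes joined the cluster using the standard cluster health API:

$ curl http://localhost:9200/_cluster/health?pretty

The response should show "number_of_nodes" : 3.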

This concludes the implementation of one-click ELK deployment with Docker Compose.
