Use Docker Compose to quickly deploy ELK (tested and effective)


1. Overview

1.1 Definition

For large-scale logs, centralized management is required. ELK provides a complete set of solutions, all of them open source, that work well together and efficiently meet the needs of many applications. ELK is the abbreviation of three products: Elasticsearch, Logstash, and Kibana, which together can be used as the logging framework of a project.

1.2 Functional Description

Elasticsearch is an open source distributed search engine that provides three major functions: storing, searching, and analyzing data.

Logstash is mainly a tool for collecting, analyzing, and filtering logs, and supports a large number of data acquisition methods.

Kibana is also an open source, free tool. It provides a log-analysis-friendly web interface for Logstash and Elasticsearch, helping to aggregate, analyze, and search important log data.

Their roles in the pipeline are as follows:

In simple terms, application services produce logs, which are generated and written out through a Logger; Logstash receives the logs from the application services over TCP; Elasticsearch provides full-text search over the logs; and Kibana provides a graphical interface on top of Elasticsearch.

2. Deploy ELK

This article is deployed on Linux and uses /opt as the root directory.

2.1 Create directories and files

1) Create a docker_elk directory, then create the files and other directories inside it

mkdir /opt/docker_elk

2) Create a logstash configuration file

mkdir /opt/docker_elk/logstash
touch /opt/docker_elk/logstash/logstash.conf

3) Configure logstash.conf, the content is as follows

input {
  tcp {
    mode => "server"
    host => "0.0.0.0"
    port => 4560
    codec => json
  }
}
output {
  elasticsearch {
    hosts => "es:9200"
    index => "logstash-%{+YYYY.MM.dd}"
  }
}

The port for incoming logs is specified as 4560 here, so the port exposed by the container must also be 4560.
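Once the whole stack is running (section 2.2), this input can be sanity-checked by hand by sending a JSON event to port 4560. A minimal sketch, assuming netcat (nc) is available and run on the Docker host itself (the test message is made up):

# Send a hand-crafted JSON event to the Logstash TCP input
echo '{"message": "hello from netcat", "level": "INFO"}' | nc localhost 4560

# The event should then be searchable in Elasticsearch under the logstash-* indices
curl 'http://localhost:9200/logstash-*/_search?q=message:hello&pretty'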

4) Create a docker-compose.yml file

touch /opt/docker_elk/docker-compose.yml

2.2 Configure docker-compose and start

Open docker-compose.yml,

cd /opt/docker_elk
vi docker-compose.yml

The configuration content is as follows:

version: '3.7'
services:
  elasticsearch:
    image: elasticsearch:7.6.2
    container_name: elasticsearch
    privileged: true
    user: root
    environment:
      # Set the cluster name to elasticsearch
      - cluster.name=elasticsearch
      # Start in single-node mode
      - discovery.type=single-node
      # Set the JVM memory size
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - /opt/docker_elk/elasticsearch/plugins:/usr/share/elasticsearch/plugins
      - /opt/docker_elk/elasticsearch/data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300

  logstash:
    image: logstash:7.6.2
    container_name: logstash
    ports:
      - 4560:4560
    privileged: true
    environment:
      - TZ=Asia/Shanghai
    volumes:
      # Mount the logstash configuration file
      - /opt/docker_elk/logstash/logstash.conf:/usr/share/logstash/pipeline/logstash.conf
    depends_on:
      - elasticsearch
    links:
      # The elasticsearch service can be reached under the hostname "es"
      - elasticsearch:es

  kibana:
    image: kibana:7.6.2
    container_name: kibana
    ports:
      - 5601:5601
    privileged: true
    links:
      # The elasticsearch service can be reached under the hostname "es"
      - elasticsearch:es
    depends_on:
      - elasticsearch
    environment:
      # Set the address used to access elasticsearch
      - elasticsearch.hosts=http://es:9200

Setting privileged to true here gives the containers root privileges. Then start the stack:

docker-compose up -d

At startup, if Elasticsearch reports an error saying that the files under /usr/share/elasticsearch/data have no permissions, grant read and write permissions on the host directory:

chmod 777 /opt/docker_elk/elasticsearch/data
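A less permissive alternative, assuming the Elasticsearch container runs as uid and gid 1000 (the default for the official image), is to hand the data directory to that user:

chown -R 1000:1000 /opt/docker_elk/elasticsearch/data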

If an error occurs during startup, you need to stop and remove the containers before restarting them. The command to stop and remove them is:

docker-compose down
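Once the stack starts cleanly, it is worth verifying that all three services are reachable before moving on. A quick check, assuming the default ports on the local host:

# All three containers should show a state of "Up"
docker-compose ps

# Elasticsearch should answer with its cluster metadata
curl http://localhost:9200

# Kibana's status API should report available once startup has finished
curl http://localhost:5601/api/status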

2.3 Open Kibana

1) Access the Kibana web interface at http://192.168.0.150:5601 (substitute the IP of your own host). Click Settings on the left to enter the Management interface.

2) Click Index Patterns, then click Create index pattern.

3) Enter logstash-* as the index pattern name (if no logstash-* index is offered, see the check after this list).

4) In the next step, select @timestamp as the time filter field.

5) After creation is complete, click Discover and select the index pattern you just created.
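If no logstash-* index is offered in step 3, no log events have reached Elasticsearch yet, because the index is only created on the first write. The existing indices can be listed directly, assuming Elasticsearch on its default port:

# A logstash-yyyy.MM.dd entry should appear here after the first log event arrives
curl 'http://localhost:9200/_cat/indices?v'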

3. Collect logs

This article uses a Spring Boot application to record log information to logstash.

3.1 Environment Preparation

Create a new Spring Boot project and import the web dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>

In addition, you need to import the logstash dependency:

<!--Integrate logstash-->
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>6.6</version>
</dependency>

3.2 Logging with logback

Logback is Spring Boot's default logging framework, so it can be used as soon as the web dependency is imported.

1) Create a new test class and test method under the test package

import org.junit.jupiter.api.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.test.context.SpringBootTest;

@SpringBootTest
public class AppTest {

    // Create the log object (SLF4J is the facade in front of logback)
    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    @Test
    public void test1() {
        logger.info("logback's log information is coming");
        logger.error("Error message from logback is coming");
    }
}

2) Create a new logback-spring.xml in the resources directory

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE configuration>
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
    <include resource="org/springframework/boot/logging/logback/console-appender.xml"/>
    <!--Application Name-->
    <property name="APP_NAME" value="springboot-logback-elk-demo"/>
    <!--Log file save path-->
    <property name="LOG_FILE_PATH" value="${LOG_FILE:-${LOG_PATH:-${LOG_TEMP:-${java.io.tmpdir:-/tmp}}}/logs}"/>
    <contextName>${APP_NAME}</contextName>
    <!--Record logs to file appender every day-->
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_FILE_PATH}/${APP_NAME}-%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>30</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>${FILE_LOG_PATTERN}</pattern>
        </encoder>
    </appender>
    <!--Output to logstash appender-->
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <!--Accessible logstash log collection port-->
        <destination>192.168.86.128:4560</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>
    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="FILE"/>
        <appender-ref ref="LOGSTASH"/>
    </root>
</configuration>
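If extra static fields should be attached to every event (for example the application name), logstash-logback-encoder supports a customFields setting on the encoder. A hedged sketch, reusing the APP_NAME property defined above:

<appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <destination>192.168.86.128:4560</destination>
    <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder">
        <!-- Attaches a constant app_name field to every event sent to logstash -->
        <customFields>{"app_name":"${APP_NAME}"}</customFields>
    </encoder>
</appender>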

3) Run the test method and view the log information in Kibana

When viewing the documents, it is recommended to select the "message" field under Available fields on the left; the "thread_name" field is optional. Selected fields are listed on the left, and the document view on the right becomes much clearer.

It should be noted that in these logs, the time is the time at which Logstash collected the entry, not the recording time of the original log.
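If the event time in Elasticsearch should follow the original log timestamp instead, one option is a Logstash date filter that rewrites @timestamp from a field carried in the event. A sketch, assuming the incoming JSON carries an ISO-8601 field named timestamp (the field name is illustrative):

filter {
  date {
    # Parse the original log time and use it as the event's @timestamp
    match => ["timestamp", "ISO8601"]
    target => "@timestamp"
  }
}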

3.3 Logging with log4j2

To use log4j2, you must exclude the default logging that comes with Spring Boot.

1) Exclude logback and import dependencies

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter</artifactId>
    <exclusions>
        <!-- When introducing log4j logs, you need to remove the default logback -->
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-logging</artifactId>
        </exclusion>
    </exclusions>
</dependency>

<!-- Log management log4j2 -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-log4j2</artifactId>
    <version>2.1.0.RELEASE</version>
</dependency>

2) Create a new log4j2.xml in the resource directory

<?xml version="1.0" encoding="UTF-8"?>

<configuration status="info">
    <Properties>
        <!-- Declare the directory where the log files are stored -->
        <Property name="LOG_HOME">E:\logs</Property>
        <Property name="LOG_PATTERN">%date{yyyy-MM-dd HH:mm:ss.SSS} %-5level [%thread][%class{36}:%line] - %msg%n</Property>
    </Properties>

    <Appenders>
        <!--Output console configuration-->
        <Console name="Console" target="SYSTEM_OUT">
            <!--The console outputs only messages at this level and above (onMatch); all others are rejected (onMismatch)-->
            <ThresholdFilter level="info" onMatch="ACCEPT" onMismatch="DENY"/>
            <!-- Output log format -->
            <PatternLayout pattern="${LOG_PATTERN}"/>
        </Console>

        <!--Configuration for writing logs to files: whenever a file exceeds the configured size, it is rolled over into a folder named by year-month and a new file is started.-->
        <RollingFile name="RollingFile" fileName="${LOG_HOME}\app_${date:yyyy-MM-dd}.log"
                     filePattern="${LOG_HOME}\${date:yyyy-MM}\app_%d{yyyy-MM-dd}_%i.log">
            <ThresholdFilter level="info" onMatch="ACCEPT" onMismatch="DENY"/>
            <!-- Output log format -->
            <PatternLayout pattern="${LOG_PATTERN}"/>
            <!-- Log file size -->
            <SizeBasedTriggeringPolicy size="20MB"/>
            <!-- Maximum number of files to keep -->
            <DefaultRolloverStrategy max="30"/>
        </RollingFile>

        <!--Output to logstash appender-->
        <Socket name="Socket" host="192.168.86.128" port="4560" protocol="TCP">
            <!--Log format output to logstash-->
            <PatternLayout pattern="${LOG_PATTERN}"/>
        </Socket>
    </Appenders>

    <!--Define the Loggers. An Appender only takes effect if a Logger references it. The level on Root sets the overall log level; additional loggers with other levels can be configured.-->
    <Loggers>
        <Root level="info">
            <AppenderRef ref="Console"/>
            <AppenderRef ref="RollingFile"/>
            <AppenderRef ref="Socket"/>
        </Root>
    </Loggers>

</configuration>

The Socket appender above must point at the IP address of the logstash service and the port configured for collecting logs. Note that the Logstash input from section 2.1 uses codec => json, while PatternLayout sends plain text; if events arrive in Kibana tagged with _jsonparsefailure, consider switching the appender to a JSON layout, as sketched below.
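A hedged sketch of that JSON alternative, assuming Jackson is on the classpath (which it is in a typical Spring Boot web project), since Log4j2's JsonLayout depends on it:

<Socket name="Socket" host="192.168.86.128" port="4560" protocol="TCP">
    <!-- One compact JSON object per line, which the Logstash json codec can parse -->
    <JsonLayout compact="true" eventEol="true"/>
</Socket>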

3) Create a new test method in the test class

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;

@SpringBootTest
public class AppTest {

    // Create the log object
    private final Logger logger = LogManager.getLogger(this.getClass());

    ...

    @Test
    public void test2() {
        logger.info("I am the log information of log4j2");
        logger.error("I am the error message of log4j2");
    }
}

4) Run the test method and view the log information in Kibana

When viewing the documents, it is again recommended to select the "message" field under Available fields on the left ("thread_name" is optional). Selected fields are listed on the left, and the view on the right is clearer; this time the entries include the log's own timestamp. That completes the logging configuration.

This is the end of this article about using Docker Compose to quickly deploy ELK (tested and effective). For more information about Docker Compose deployment of ELK, please search for previous articles on 123WORDPRESS.COM or continue to browse the following related articles. I hope you will support 123WORDPRESS.COM in the future!

You may also be interested in:
  • Docker Compose one-click ELK deployment method implementation
  • How to use Docker-compose to build an ELK cluster
  • Sample code for deploying ELK using Docker-compose
  • How to build an elk system using docker compose
