How to install ELK in Docker and implement JSON format log analysis

What is ELK?

ELK is a complete set of log collection and front-end display solutions provided by Elastic. It is the acronym of three products, namely ElasticSearch, Logstash and Kibana.

Logstash is responsible for log processing, such as log filtering and log formatting; ElasticSearch has powerful text search capabilities, so it serves as a log storage container; and Kibana is responsible for front-end display.

The ELK architecture is as follows:

Filebeat is added to collect logs from different clients and then pass them to Logstash for unified processing.

ELK Setup

Because ELK consists of three products, you can choose to install these three products in sequence.

Here we choose to install ELK using Docker.

When installing ELK with Docker you could also pull and run the images of the three products separately, but this time we use the ELK three-in-one image and install it directly.

Therefore, first of all, you need to ensure that you have a Docker operating environment. For the construction of the Docker operating environment, please refer to: https://blog.csdn.net/qq13112...

Pull the image

After having the Docker environment, run the command on the server:

docker pull sebp/elk

This command downloads the ELK three-in-one image from the Docker repository. The total size is more than 2 GB. If you find that the download speed is too slow, you can replace the Docker repository source with a domestic mirror.
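
For example, one common way to do this is to configure a registry mirror in /etc/docker/daemon.json and restart Docker. This is only a rough sketch: the mirror URL is a placeholder for whichever mirror you use, and the restart command assumes a systemd-managed host.

{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}

sudo systemctl restart docker   # assumes systemd; restart Docker so the mirror takes effect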

After the download is complete, check the image:

docker images

Logstash Configuration

Create a new beats-input.conf in the /usr/config/logstash directory for log input:

input {
 beats {
  port => 5044
 }
}

Create a new output.conf for outputting logs from Logstash to ElasticSearch:

output {
 elasticsearch {
  hosts => ["localhost"]
  manage_template => false
  index => "%{[@metadata][beat]}"
 }
}

The index here is the name of the index under which the logs are stored in ElasticSearch.
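
If you would rather have one index per day (which makes it easier to manage and clean up old data), a common variant of this setting, not used in this article, is to append the date:

index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"

Here %{+YYYY.MM.dd} is Logstash's date placeholder, so Filebeat data would land in indexes such as filebeat-2019.10.22.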

Running the container

After you have the image, you can start it directly:

docker run -d -p 5044:5044 -p 5601:5601 -p 9203:9200 -p 9303:9300 -v /var/data/elk:/var/lib/elasticsearch -v /usr/config/logstash:/etc/logstash/conf.d --name=elk sebp/elk

-d means running the container in the background;

-p means host port: container port, that is, mapping the port used in the container to a port on the host. The default ports of ElasticSearch are 9200 and 9300. Since there are already 3 ElasticSearch instances running on my machine, the mapping port is modified here.

-v means host file|folder:container file|folder. Here, the elasticsearch data in the container is mounted to the host's /var/data/elk to prevent data loss after the container is restarted; and the logstash configuration file is mounted to the host's /usr/config/logstash directory.

--name means to name the container. Naming is to make it easier to operate the container later.

If you have set up ElasticSearch by hand before, you will know that all kinds of errors can crop up during the process; none of them appear when building ELK with this Docker image.

Check the container after running:

docker ps

View the container logs:

docker logs -f elk

Enter the container:

docker exec -it elk /bin/bash

Restart the container after modifying the configuration:

docker restart elk

View Kibana

Enter http://my_host:5601/ in your browser and you will see the Kibana interface. At this point there is no data in ElasticSearch yet; Filebeat needs to be installed to collect data into ELK.

Filebeat setup

Filebeat is used to collect data and report it to Logstash or ElasticSearch. Download Filebeat on the server where logs need to be collected and decompress it for use.

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.1-linux-x86_64.tar.gz

tar -zxvf filebeat-6.2.1-linux-x86_64.tar.gz

Modify the configuration file

Enter the filebeat directory and modify filebeat.yml.

filebeat.prospectors:
- type: log
 # Must be set to true for this prospector to take effect
 enabled: true
 paths:
  # The log paths to be collected
  - /var/log/*.log
 # You can add tags and use them for classification later
 tags: ["my_tag"]
 # Corresponds to the ElasticSearch type
 document_type: my_type
setup.kibana:
 # The IP and port of Kibana, i.e. kibana:5601
 host: ""
output.logstash:
 # The IP and port of Logstash, i.e. logstash:5044
 hosts: [""]
 # Must be set to true, otherwise the output does not take effect
 enabled: true
# If you want to collect data directly from Filebeat into ElasticSearch, configure output.elasticsearch instead
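
For reference, a minimal sketch of that direct-to-ElasticSearch output (not used in this article; my_host is a placeholder, and 9203 is the ElasticSearch HTTP port mapped in the docker run command above):

output.elasticsearch:
 # Placeholder host; point this at the ElasticSearch HTTP port (9203 as mapped above)
 hosts: ["my_host:9203"]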

Run Filebeat

Run:

./filebeat -e -c filebeat.yml -d "publish"

At this point, you can see that Filebeat sends the logs under the configured paths to Logstash; inside ELK, Logstash then processes the data and sends it on to ElasticSearch. But what we want is to analyze the data through ELK, so the data imported into ElasticSearch must be in JSON format.

This was the format of a single log line before:

 2019-10-22 10:44:03.441 INFO rmjk.interceptors.IPInterceptor Line:248 - {"clientType":"1","deCode":"0fbd93a286533d071","eaType":2,"eaid":191970823383420928,"ip":"xx.xx.xx.xx","model":"HONOR STF-AL10","osType":"9","path":"/applicationEnter","result":5,"session":"ef0a5c4bca424194b29e2ff31632ee5c","timestamp":1571712242326,"uid":"130605789659402240","v":"2.2.4"}

It was difficult to analyze after importing, so I first thought of using grok in Logstash's filter to turn the logs into JSON before importing them into ElasticSearch. However, since the parameters in my logs were not fixed, that turned out to be too cumbersome, so I switched to having Logback format the logs as JSON directly and then ship them with Filebeat.
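
For reference, the abandoned grok approach would have looked roughly like the sketch below; the field names (log_time, json_body, etc.) are only illustrative, and the pattern matches the fixed prefix of the sample line above:

filter {
 grok {
  # Capture the prefix "2019-10-22 10:44:03.441 INFO rmjk.interceptors.IPInterceptor Line:248 - " and keep the rest as json_body
  match => { "message" => "%{TIMESTAMP_ISO8601:log_time}\s+%{LOGLEVEL:level}\s+%{JAVACLASS:class}\s+Line:%{NUMBER:line}\s+-\s+%{GREEDYDATA:json_body}" }
 }
 json {
  # Parse the captured JSON payload into fields
  source => "json_body"
 }
}

This roughly handles the sample line, but because the parameters in my logs were not fixed, maintaining such patterns proved too cumbersome, which is why I moved the JSON formatting into Logback instead.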

Logback Configuration

My project uses Spring Boot; add the dependency to the project:

<dependency>
 <groupId>net.logstash.logback</groupId>
 <artifactId>logstash-logback-encoder</artifactId>
 <version>5.2</version>
</dependency>

Then add logback.xml to the resources directory of the project:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <!--
    Notes:
    1. Logs are recorded by level, and the level corresponds to the log file name. Log information of different levels is recorded in different log files. For example, the error level is recorded in log_error_xxx.log or log_error.log (this file is the current log file), and log_error_xxx.log is an archived log.
      Log files are rolled by date. If the log file reaches or exceeds 2M within the same day, it is additionally numbered 0, 1, 2, ..., for example log-level-2013-12-21.0.log.
      The same applies to the other log levels.
    2. For development and testing, run the project in Eclipse and the logs folder is created relative to the Eclipse installation path, i.e. the relative path ../logs.
      When deployed under Tomcat, the logs are written to the logs folder under Tomcat.
    3. Appenders:
      FILEERROR corresponds to the error level; files are named in the form log-error-xxx.log.
      FILEWARN corresponds to the warn level; files are named in the form log-warn-xxx.log.
      FILEINFO corresponds to the info level; files are named in the form log-info-xxx.log.
      FILEDEBUG corresponds to the debug level; files are named in the form log-debug-xxx.log.
      stdout outputs log information to the console for the convenience of development and testing. -->
  <contextName>service</contextName>
  <property name="LOG_PATH" value="logs"/>
  <!--Set the system log directory-->
  <property name="APPDIR" value="doctor"/>

  <!-- Logger, date rolling record-->
  <appender name="FILEERROR" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <!-- The path and file name of the log file being recorded-->
    <file>${LOG_PATH}/${APPDIR}/log_error.log</file>
    <!-- The rolling strategy of the logger, record by date and size -->
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <!-- The path of the archived log file. For example, today is the 2013-12-21 log. The path of the currently written log file is specified by the file node. You can set this file and the file path specified by file to different paths, so that the current log file or the archived log file is placed in different directories.
      The log file for 2013-12-21 is specified by fileNamePattern. %d{yyyy-MM-dd} specifies the date format, %i specifies the index -->
      <fileNamePattern>${LOG_PATH}/${APPDIR}/error/log-error-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
      <!-- Besides rolling by date, the log file is also limited to 2M. Once it exceeds 2M, the files are numbered starting from index 0,
      for example log-error-2013-12-21.0.log -->
      <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
        <maxFileSize>2MB</maxFileSize>
      </timeBasedFileNamingAndTriggeringPolicy>
    </rollingPolicy>
    <!-- Append logs -->
    <append>true</append>
    <!-- Log file format -->
    <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
      <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level %logger Line:%-3L - %msg%n</pattern>
      <charset>utf-8</charset>
    </encoder>
    <!-- This log file only records the error level -->
    <filter class="ch.qos.logback.classic.filter.LevelFilter">
      <level>error</level>
      <onMatch>ACCEPT</onMatch>
      <onMismatch>DENY</onMismatch>
    </filter>
  </appender>

  <!-- Logger, date rolling record-->
  <appender name="FILEWARN" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <!-- The path and file name of the log file being recorded-->
    <file>${LOG_PATH}/${APPDIR}/log_warn.log</file>
    <!-- The rolling strategy of the logger, record by date and size -->
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <!-- The path of the archived log file. For example, today is the 2013-12-21 log. The path of the currently written log file is specified by the file node. You can set this file and the file path specified by file to different paths, so that the current log file or the archived log file is placed in different directories.
      The log file for 2013-12-21 is specified by fileNamePattern. %d{yyyy-MM-dd} specifies the date format, %i specifies the index -->
      <fileNamePattern>${LOG_PATH}/${APPDIR}/warn/log-warn-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
      <!-- Besides rolling by date, the log file is also limited to 2M. Once it exceeds 2M, the files are numbered starting from index 0,
      for example log-warn-2013-12-21.0.log -->
      <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
        <maxFileSize>2MB</maxFileSize>
      </timeBasedFileNamingAndTriggeringPolicy>
    </rollingPolicy>
    <!-- Append logs -->
    <append>true</append>
    <!-- Log file format -->
    <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
      <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level %logger Line:%-3L - %msg%n</pattern>
      <charset>utf-8</charset>
    </encoder>
    <!-- This log file only records the warn level -->
    <filter class="ch.qos.logback.classic.filter.LevelFilter">
      <level>warn</level>
      <onMatch>ACCEPT</onMatch>
      <onMismatch>DENY</onMismatch>
    </filter>
  </appender>

  <!-- Logger, date rolling record-->
  <appender name="FILEINFO" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <!-- The path and file name of the log file being recorded-->
    <file>${LOG_PATH}/${APPDIR}/log_info.log</file>
    <!-- The rolling strategy of the logger, record by date and size -->
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <!-- The path of the archived log file. For example, today is the 2013-12-21 log. The path of the currently written log file is specified by the file node. You can set this file and the file path specified by file to different paths, so that the current log file or the archived log file is placed in different directories.
      The log file for 2013-12-21 is specified by fileNamePattern. %d{yyyy-MM-dd} specifies the date format, %i specifies the index -->
      <fileNamePattern>${LOG_PATH}/${APPDIR}/info/log-info-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
      <!-- Besides rolling by date, the log file is also limited to 2M. Once it exceeds 2M, the files are numbered starting from index 0,
      for example log-info-2013-12-21.0.log -->
      <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
        <maxFileSize>2MB</maxFileSize>
      </timeBasedFileNamingAndTriggeringPolicy>
    </rollingPolicy>
    <!-- Append logs -->
    <append>true</append>
    <!-- Log file format -->
    <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
      <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level %logger Line:%-3L - %msg%n</pattern>
      <charset>utf-8</charset>
    </encoder>
    <!-- This log file only records info level -->
    <filter class="ch.qos.logback.classic.filter.LevelFilter">
      <level>info</level>
      <onMatch>ACCEPT</onMatch>
      <onMismatch>DENY</onMismatch>
    </filter>
  </appender>

  <appender name="jsonLog" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <!-- The path and file name of the log file being recorded-->
    <file>${LOG_PATH}/${APPDIR}/log_IPInterceptor.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>${LOG_PATH}/${APPDIR}/log_IPInterceptor.%d{yyyy-MM-dd}.log</fileNamePattern>
    </rollingPolicy>
    <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
      <jsonFactoryDecorator class="net.logstash.logback.decorate.CharacterEscapesJsonFactoryDecorator">
        <escape>
          <targetCharacterCode>10</targetCharacterCode>
          <escapeSequence>\u2028</escapeSequence>
        </escape>
      </jsonFactoryDecorator>
      <providers>
        <pattern>
          <pattern>
            {
            "timestamp":"%date{ISO8601}",
            "uid":"%mdc{uid}",
            "requestIp":"%mdc{ip}",
            "id":"%mdc{id}",
            "clientType":"%mdc{clientType}",
            "v":"%mdc{v}",
            "deCode":"%mdc{deCode}",
            "dataId":"%mdc{dataId}",
            "dataType":"%mdc{dataType}",
            "vid":"%mdc{vid}",
            "did":"%mdc{did}",
            "cid":"%mdc{cid}",
            "tagId":"%mdc{tagId}"
            }
          </pattern>
        </pattern>
      </providers>
    </encoder>
  </appender>
  <!-- Color Log -->
  <!-- Rendering class that color logs depend on -->
  <conversionRule conversionWord="clr" converterClass="org.springframework.boot.logging.logback.ColorConverter"/>
  <conversionRule conversionWord="wex"
          converterClass="org.springframework.boot.logging.logback.WhitespaceThrowableProxyConverter"/>
  <conversionRule conversionWord="wEx"
          converterClass="org.springframework.boot.logging.logback.ExtendedWhitespaceThrowableProxyConverter"/>
  <!-- Color log format -->
  <property name="CONSOLE_LOG_PATTERN"
       value="${CONSOLE_LOG_PATTERN:-%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}}"/>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <!--encoder default configuration is PatternLayoutEncoder-->
    <encoder>
      <pattern>${CONSOLE_LOG_PATTERN}</pattern>
      <charset>utf-8</charset>
    </encoder>
    <!--This appender is for development use. Only the lowest level is configured here, and the console outputs log information at or above this level-->
    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
      <level>debug</level>
    </filter>
  </appender>

  <!-- Specify the logging level for a package in the project when there is a log operation behavior-->
  <!-- rmjk.dao.mapper is the root package, that is, all logging under this package is at the DEBUG level -->
  <!-- The levels are [from high to low]: FATAL > ERROR > WARN > INFO > DEBUG > TRACE -->
  <logger name="rmjk.dao.mapper" level="DEBUG"/>
  <logger name="rmjk.service" level="DEBUG"/>
  <!--Show log-->
  <logger name="org.springframework.jdbc.core" additivity="false" level="DEBUG">
    <appender-ref ref="STDOUT"/>
    <appender-ref ref="FILEINFO"/>
  </logger>
  <!-- Print json log -->
  <logger name="IPInterceptor" level="info" additivity="false">
    <appender-ref ref="jsonLog"/>
  </logger>

  <!-- In a production environment, configure this level to an appropriate level to avoid too many log files or affecting program performance-->
  <root level="INFO">
    <appender-ref ref="FILEERROR"/>
    <appender-ref ref="FILEWARN"/>
    <appender-ref ref="FILEINFO"/>

    <!-- Please remove stdout and testfile in the production environment -->
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>

The key points are:

<logger name="IPInterceptor" level="info" additivity="false">
   <appender-ref ref="jsonLog"/>
</logger>

In the class that needs to print these logs, get an SLF4J logger by name (the name must match the <logger name="IPInterceptor"> configured above):

 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;

 private static final Logger LOG = LoggerFactory.getLogger("IPInterceptor");

Put the information to be printed into the MDC (org.slf4j.MDC):

MDC.put("ip", ipAddress);
MDC.put("path", servletPath);
MDC.put("uid", paramMap.get("uid") == null ? "" : paramMap.get("uid").toString());

If LOG.info("msg") is called at this point, the printed content is recorded as the log's message.
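
Based on the encoder pattern above, a single entry in log_IPInterceptor.log would then look roughly like this (the values are placeholders taken from the earlier sample line, and fields without an MDC value are simply empty):

{"timestamp":"2019-10-22 10:44:03,441","uid":"130605789659402240","requestIp":"xx.xx.xx.xx","id":"","clientType":"1","v":"2.2.4","deCode":"0fbd93a286533d071","dataId":"","dataType":"","vid":"","did":"","cid":"","tagId":""}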

Modify Logstash configuration

Modify beats-input.conf in the /usr/config/logstash directory:

input {
 beats {
  port => 5044
  codec => "json"
 }
}

Only codec => "json" is added, which tells Logstash to parse the incoming content as JSON.

Because the configuration has been modified, restart elk:

docker restart elk

In this way, as soon as our logs are generated, Filebeat imports them into ELK, and we can then use Kibana to analyze them.

The above is the full content of this article. I hope it will be helpful for everyone’s study. I also hope that everyone will support 123WORDPRESS.COM.

