Docker supports many logging drivers; the default is json-file, and only when json-file is used can sudo docker logs -f display output. Enter the following command to see which logging driver is configured:

```
$ sudo docker info | grep Logging
```

Let me explain: while a container is running, Docker creates a file for that container on the host machine and writes the logs the container produces into it. The docker logs -f command simply reads this file and streams its contents to the terminal.

We all know that docker logs -f outputs all of a service's logs to the terminal, no matter which node the service is deployed on. So here is my question: does the container log file on each node keep a complete copy of the service's logs, or only the logs produced by the containers running on that node? This matters because if every node uses filebeat to watch the host's container log files, a complete copy on every node would mean duplicated log entries, while per-node logs would not. The answer is that each node keeps only the logs of the containers running on that node; the docker logs -f command merely runs a protocol layer on top of the overlay network model to aggregate the same service's container logs from the other nodes.

By default, Docker's json-file driver is used. First, configure the daemon:

```
$ sudo dockerd \
  --log-driver=json-file \
  --log-opt labels=servicename
```

When starting the container, you need to add the label:

```
$ sudo docker service update --label servicename=test
```

Or declare it directly in docker-compose.yml:

```
version: "3"
services:
  go-gin-demo:
    image: chenghuizhang/go-gin-demo:v3
    ports:
      - 8081:8081
    networks:
      - overlay
    deploy:
      mode: replicated
      replicas: 3
      labels:
        servicename: go-gin-demoxxxxxxx
    logging:
      options:
        labels: "servicename"
networks:
  overlay:
```

Install filebeat on each node and configure filebeat.yml as follows:

```
filebeat.prospectors:
- type: log
  paths:
    # Container log directory
    - /var/lib/docker/containers/*/*.log
  # Because Docker's log driver is json-file, the collected logs are in JSON format.
  # After setting this to true, filebeat will json-decode each log line.
  json.keys_under_root: true
  tail_files: true
output.logstash:
  hosts: ["172.17.10.114:5044"]
```

Configure the index in logstash.conf:

```
output {
  elasticsearch {
    action => "index"
    hosts => ["172.17.10.114:9200"]
    # Use the log label as the index name
    index => "%{attrs.servicename}-%{+YYYY.MM.dd}"
  }
}
```

The project's Dockerfile needs to send the logs the application produces to stdout and stderr. Otherwise, the json-file log driver will not collect the logs written inside the container, and sudo docker logs -f will not display anything in the terminal.
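As a quick way to confirm that a container's output is actually reaching the host-side json file (and is therefore visible to filebeat), you can ask Docker where that file lives and tail it. This is a small sketch added for illustration; the container name is a placeholder for whatever docker ps shows on the node, and the printed path is only an example of the usual json-file layout.

```
# Ask Docker where the json-file driver writes this container's log
# (<container> is a placeholder; pick an ID or name from `docker ps`)
$ sudo docker inspect --format '{{.LogPath}}' <container>
/var/lib/docker/containers/<container-id>/<container-id>-json.log

# Watch new lines arrive; each line should be a JSON object
# with "log", "stream", "attrs" and "time" fields
$ sudo tail -f /var/lib/docker/containers/<container-id>/<container-id>-json.log
```

Note that this path matches the /var/lib/docker/containers/*/*.log glob that filebeat watches. If nothing shows up here, the application inside the container is not writing to stdout/stderr, which is exactly the situation the Dockerfile change below addresses.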
The following needs to be added to the Dockerfile, linking the application's info log file to stdout and its error log file to stderr (the /xx/xx.log paths are the author's placeholders for the project's actual log files):

```
RUN ln -sf /dev/stdout /xx/xx.log \
    && ln -sf /dev/stderr /xx/xx.log
```

Or output to the console in the project's log4j configuration:

```
<Appenders>
  <Console name="Console" target="SYSTEM_OUT">
    <PatternLayout pattern="[%d{DEFAULT}]%m"/>
  </Console>
</Appenders>
```

If the log also needs to record the container ID, container name and image name, you can add the following parameter when running the container:

```
--log-opt tag="{{.ImageName}}/{{.Name}}/{{.ID}}"
```

Finally, the json-file log driver writes the logs that the container prints to the console into the local log file in the following format:

```
{
  "log": "[GIN-debug] [WARNING] Now Gin requires Go 1.6 or later and Go 1.7 will be required soon.",
  "stream": "stderr",
  "attrs": {
    "tag": "chenghuizhang/go-gin-demo:v3@sha256:e6c0419d64e5eda510056a38cfb803750e4ac2f0f4862d153f7c4501f576798b/mygo.2.jhqptjugfti2t4emf55sehamo/647eaa4b3913",
    "servicename": "test"
  },
  "time": "2019-01-29T10:08:59.780161908Z"
}
```

Format the logs in logstash:

```
filter {
  grok {
    patterns_dir => "/etc/logstash/conf.d/patterns"
    match => { "message" => "%{TIMESTAMP_ISO8601:time}%{SERVICENAME:attr.servicename}%{DOCKER_TAG:attr.tag}" }
  }
}
```
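The grok filter above points at a patterns_dir and uses two custom pattern names, SERVICENAME and DOCKER_TAG, whose definitions are not shown in the article. Purely as an illustration of what such a patterns file might contain (the actual regular expressions depend on how the message field is assembled before it reaches logstash), a file under /etc/logstash/conf.d/patterns could look like this:

```
# Hypothetical pattern definitions; not from the original article.
# SERVICENAME should match label values such as "test" or "go-gin-demoxxxxxxx".
SERVICENAME [A-Za-z0-9._-]+
# DOCKER_TAG should match the image/name/id tag shown in the JSON example above.
DOCKER_TAG [^\s]+
```

Grok loads every file in patterns_dir, so the file name itself does not matter as long as each line consists of a pattern name followed by its regular expression.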