Detailed tutorial on installing Prometheus with Docker

Instructions for deploying Prometheus with Docker:

Install on the monitoring server:
Prometheus Server (the main Prometheus monitoring server)
Node Exporter (collects host hardware and operating-system metrics)
cAdvisor (collects information about containers running on the host)
Grafana (displays Prometheus monitoring dashboards)

Install on each monitored host:
Node Exporter (collects host hardware and operating-system metrics)
cAdvisor (collects information about containers running on the host)

1. Install Node Exporter

  • Install on every server
  • Node Exporter collects system metrics such as CPU, memory, disk usage, and disk I/O
  • --net=host, so that Prometheus Server can communicate directly with Node Exporter
docker run -d -p 9100:9100 \
-v "/proc:/host/proc" \
-v "/sys:/host/sys" \
-v "/:/rootfs" \
-v "/etc/localtime:/etc/localtime" \
--net=host \
prom/node-exporter \
--path.procfs /host/proc \
--path.sysfs /host/sys \
--collector.filesystem.ignored-mount-points "^/(sys|proc|dev|host|etc)($|/)"

[root@k8s-m1 ~]# docker ps|grep exporter
ee30add8d207 prom/node-exporter "/bin/node_exporter …" About a minute ago Up About a minute condescending_shirley
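
Besides checking docker ps, you can confirm the exporter is actually serving data by querying its metrics endpoint directly (a quick sanity check; 9100 is the default port used above):

# Should print the first few Prometheus-format metric lines
curl -s http://localhost:9100/metrics | head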

2. Install cAdvisor

  • Install on every server
  • cAdvisor collects Docker information to display each container's CPU, memory, and network upload/download statistics
  • --net=host, so that Prometheus Server can communicate directly with cAdvisor
docker run -d \
-v "/etc/localtime:/etc/localtime" \
--volume=/:/rootfs:ro \
--volume=/var/run:/var/run:rw \
--volume=/sys:/sys:ro \
--volume=/var/lib/docker/:/var/lib/docker:ro \
--volume=/dev/disk/:/dev/disk:ro \
--publish=18104:8080 \
--detach=true \
--name=cadvisor \
--privileged=true \
google/cadvisor:latest

[root@k8s-m1 ~]# docker ps|grep cadvisor
cf6af6118055 google/cadvisor:latest "/usr/bin/cadvisor -…" 38 seconds ago Up 37 seconds 0.0.0.0:18104->8080/tcp cadvisor
You can also enter the container to look around:
[root@agent ~]# sudo docker exec -it <container_id> /bin/sh
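
As with Node Exporter, you can sanity-check the endpoint from the host; 18104 is the host port published above:

# cAdvisor serves its web UI at / and Prometheus metrics at /metrics
curl -s http://localhost:18104/metrics | head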

3. Install Prometheus Server

Monitoring terminal installation

1) Edit the configuration file

  • First, create prometheus.yml locally; this is the Prometheus configuration file
  • Write the following content into the file
  • Change the listening addresses to your own
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
    # Listening addresses
    - targets: ['localhost:9090','172.23.0.241:8088','172.23.0.241:9090']

2) Start the container

1> prometheus.yml configuration file

The externally reachable IP must be configured in prometheus.yml; except for the local machine, internal-network IPs cannot be reached from Grafana!

# my global configuration
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
    # Listening addresses (here, the servers' intranet IPs)
    - targets: ['10.27.158.33:9090','10.27.158.33:9100','10.27.158.33:18104']
    - targets: ['10.29.46.54:9100','10.29.46.54:18104']
    - targets: ['10.27.163.172:9100','10.27.163.172:18104']

# - job_name: 'GitLab'
# metrics_path: '/-/metrics'
# static_configs:
# - targets: ['172.23.0.241:10101']

  - job_name: 'jenkins'
    metrics_path: '/prometheus/'
    scheme: http
    bearer_token: bearer_token   # placeholder: replace with your real Jenkins token
    static_configs:
    - targets: ['172.23.0.242:8080']

  - job_name: "Nginx"
    metrics_path: '/status/format/prometheus'
    static_configs:
    - targets: ['172.23.0.242:8088']
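
Before starting the container, it can help to validate the file with promtool, which ships inside the prom/prometheus image (a sketch, assuming the file lives at /root/Prometheus/prometheus.yml as in the start command below):

docker run --rm \
-v /root/Prometheus/prometheus.yml:/etc/prometheus/prometheus.yml \
--entrypoint promtool \
prom/prometheus:latest check config /etc/prometheus/prometheus.yml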

2> Start command

--net=host, so that Prometheus Server can communicate directly with the exporters and Grafana

docker run -d -p 9090:9090 \
-v /root/Prometheus/prometheus.yml:/etc/prometheus/prometheus.yml \
-v "/etc/localtime:/etc/localtime" \
--name prometheus \
--net=host \
prom/prometheus:latest

# Access Prometheus after the container starts successfully
# PS: the server must open port 9090 (bound to 0.0.0.0) on its external interface (eth0) before a browser can reach it
106.15.0.11:9090
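
Once it is up, the targets API shows whether each scrape job is reachable (run on the Prometheus host itself):

# Every target should report "health":"up"
curl -s http://localhost:9090/api/v1/targets | grep -o '"health":"[a-z]*"'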

4. Create and run Grafana

  • Monitoring server installation
  • For graphical display
docker run -d -i -p 3000:3000 \
-v "/etc/localtime:/etc/localtime" \
-e "GF_SERVER_ROOT_URL=http://grafana.server.name" \
-e "GF_SECURITY_ADMIN_PASSWORD=admin8888" \
--net=host \
grafana/grafana

# PS: the server must open port 3000 (bound to 0.0.0.0) on its external interface (eth0) before a browser can reach it
After Grafana starts, open 172.23.0.241:3000 in the browser and log in:
	Username: admin
	Password: admin8888

1) Add Prometheus server

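The data source can also be added through Grafana's HTTP API instead of the UI; a minimal sketch, assuming Grafana at localhost:3000 with the admin password set above and Prometheus at localhost:9090:

curl -s -u admin:admin8888 -H 'Content-Type: application/json' \
-X POST http://localhost:3000/api/datasources \
-d '{"name":"Prometheus","type":"prometheus","url":"http://localhost:9090","access":"proxy","isDefault":true}'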

Then make a graphical display for the added data source

5. Add monitoring template

  • Building a dashboard by hand is a bit of work; you can lean on open source instead. Visit the monitoring template gallery at https://grafana.com/grafana/dashboards and you will find many dashboards for monitoring Docker (download as many templates as you need).
  • Some dashboards can be imported directly after downloading, while others need to be modified before importing; check each dashboard's overview page.
  • Final effect

From here you can tailor the chosen template so that it pulls values from Prometheus and renders them in Grafana. That's it. Very handy!

6. Key-value query

Through the metric io_namespace_http_requests_total we can:

Query the application's total request count:
	sum(io_namespace_http_requests_total)
Query the number of HTTP requests per second:
	sum(rate(io_namespace_http_requests_total[5m]))
Query the top N URIs by current request count:
	topk(10, sum(io_namespace_http_requests_total) by (path))

Configure Prometheus to monitor Nginx

1. Before Prometheus can monitor Nginx, two modules must be compiled into it: nginx-module-vts and geoip.

2. Approach: whether Nginx was compiled from source or installed with yum, download the source tarball of the same version, add the two module options above to the original configure options, compile and install it to replace the original nginx, then move the original configuration (the nginx.conf file, the conf.d directory, etc.) into the newly compiled nginx directory, and finally start nginx.

Here is the official source installation:
1) Configure the official source

[root@web01 ~]# vim /etc/yum.repos.d/nginx.repo
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true

2) Install dependencies

yum install -y gcc gcc-c++ autoconf pcre pcre-devel make automake wget httpd-tools vim tree

3) Install nginx

[root@web01 ~]# yum install -y nginx

4) Configure nginx

[root@web01 ~]# vim /etc/nginx/nginx.conf
user www;

5) Start the service

1. Method 1: start directly. If it errors out => usually port 80 is occupied => find the service holding the port (typically httpd), stop it, and start nginx again
[root@web01 ~]# systemctl start nginx
2. Method 2:
[root@web01 ~]# nginx

1. View the current Nginx installation options

[root@db01 nginx-1.12.2]# nginx -V
[root@db01 nginx-1.12.2]# ./configure --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/run/nginx.pid --lock-path=/run/lock/subsys/nginx --user=nginx --group=nginx --with-compat --with-debug --with-file-aio --with-google_perftools_module --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_degradation_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module=dynamic --with-http_mp4_module --with-http_perl_module=dynamic --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-http_xslt_module=dynamic --with-mail=dynamic --with-mail_ssl_module --with-pcre --with-pcre-jit --with-stream=dynamic --with-stream_ssl_module --with-stream_ssl_preread_module --with-threads --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic' --with-ld-opt='-Wl,-z,relro -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -Wl,-E'
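
nginx -V prints to stderr, so a pipe needs the 2>&1 redirection; a small convenience (not part of the original steps) to capture just the configure arguments for reuse later:

# Save the existing build options so the two new ones can be appended
nginx -V 2>&1 | grep -o 'configure arguments:.*'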

2. Prepare the module

# Download and unpack the new source package
[root@k8s-n1 packages]# wget http://nginx.org/download/nginx-1.16.1.tar.gz
[root@k8s-n1 packages]# tar xf nginx-1.16.1.tar.gz

# Clone the nginx-module-vts module
[root@k8s-n1 packages]# git clone https://github.com/vozlt/nginx-module-vts

# Install the GeoIP module
[root@k8s-n1 packages]# yum -y install epel-release geoip-devel

3. Stop Nginx service

# Stop the nginx service
[root@k8s-n1 packages]# nginx -s stop

# Back up the original nginx binary
[root@k8s-n1 packages]# which nginx
/usr/sbin/nginx
[root@k8s-n1 packages]# mv /usr/sbin/nginx /usr/sbin/nginx.bak

# Back up the original nginx directory
[root@k8s-n1 packages]# mv /etc/nginx nginx-1.12.2.bak

4. Compile and install

1> Install required dependencies

During compilation you may hit the error `make: *** No rule to make target 'build', needed by 'default'. Stop.`; it is caused by missing dependencies. Install them all up front, otherwise you will have to re-run ./configure after installing them. ~
yum install -y gcc gcc-c++ bash-completion vim lrzsz wget expect net-tools nc nmap tree dos2unix htop iftop iotop unzip telnet sl psmisc nethogs glances bc pcre-devel zlib zlib-devel openssl openssl-devel libxml2 libxml2-devel libxslt-devel gd gd-devel perl-devel perl-ExtUtils-Embed GeoIP GeoIP-devel GeoIP-data

2> Compile and install

  • Enter the nginx source directory you just unpacked, then compile and install
  • Keep the original configure parameters and append these two at the end

--add-module=/root/packages/nginx-module-vts
--with-http_geoip_module

[root@db01 nginx-1.12.2]# ./configure --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/run/nginx.pid --lock-path=/run/lock/subsys/nginx --user=nginx --group=nginx --with-compat --with-debug --with-file-aio --with-google_perftools_module --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_degradation_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module=dynamic --with-http_mp4_module --with-http_perl_module=dynamic --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-http_xslt_module=dynamic --with-mail=dynamic --with-mail_ssl_module --with-pcre --with-pcre-jit --with-stream=dynamic --with-stream_ssl_module --with-stream_ssl_preread_module --with-threads --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic' --with-ld-opt='-Wl,-z,relro -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -Wl,-E' --add-module=/root/packages/nginx-module-vts --with-http_geoip_module
# Compile and install
# -j enables multi-core compilation (not recommended on low-spec machines; it can hang~)
[root@k8s-n1 nginx-1.12.2]# make -j && make install
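
A quick check that the replacement binary really contains the two new modules before wiring up the configuration (a sanity-check sketch):

# Both options should appear in the configure arguments
nginx -V 2>&1 | grep -oE 'nginx-module-vts|with-http_geoip_module'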

5. Configure Nginx

[root@k8s-n1 packages]# cp -r nginx-1.12.2.bak/conf.d/ /etc/nginx/
[root@k8s-n1 packages]# cp -r nginx-1.12.2.bak/nginx.conf /etc/nginx/
[root@k8s-n1 packages]# rm -f /etc/nginx/conf.d/default.conf

Edit the Nginx configuration file

HTTP layer

Server layer

	···
http {
	···
    include /etc/nginx/conf.d/*.conf;

    ##################### 1. http layer: add these three lines #####################
    vhost_traffic_status_zone;
    vhost_traffic_status_filter_by_host on;
    geoip_country /usr/share/GeoIP/GeoIP.dat;

    ##################### 2. server layer: pick a port for the status server (8088 is recommended); if nothing conflicts, just copy and paste #####################
    server {
        listen 8088;
        server_name localhost;

        # The vhost_traffic_status configuration goes inside this location block
        location /status {
            vhost_traffic_status on;                  # traffic status; on by default, this line can be omitted
            vhost_traffic_status_display;
            vhost_traffic_status_display_format html;
            vhost_traffic_status_filter_by_set_key $uri uri::$server_name;                    # visits per URI
            vhost_traffic_status_filter_by_set_key $geoip_country_code country::$server_name; # requests per country/region
            vhost_traffic_status_filter_by_set_key $status $server_name;                      # HTTP status-code statistics
            vhost_traffic_status_filter_by_set_key $upstream_addr upstream::backend;          # backend forwarding statistics
            vhost_traffic_status_filter_by_set_key $remote_port client::ports::$server_name;  # request-port statistics
            vhost_traffic_status_filter_by_set_key $remote_addr client::addr::$server_name;   # request-IP statistics

            location ~ ^/storage/(.+)/.*$ {
                set $volume $1;
                vhost_traffic_status_filter_by_set_key $volume storage::$server_name;         # request-path statistics
            }
        }
    }
    ##################### server layer: you can create a new server block or modify an existing one #####################
}

6. Start Nginx

[root@k8s-n1 packages]# nginx
[root@k8s-n1 packages]# netstat -lntp|grep nginx
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 62214/nginx: master 
tcp 0 0 0.0.0.0:8088 0.0.0.0:* LISTEN 62214/nginx: master 

Browser access:
	172.23.0.243:80          # nginx default welcome page
	172.23.0.243:8088/status # nginx monitoring page

7. Monitoring with Prometheus

  • Configure prometheus.yml on the Prometheus server and restart the Prometheus container
  • metrics_path defines the metrics endpoint path; the default is /metrics
  • That is, when we enter only ip+port, the /metrics suffix is appended automatically
[root@k8s-m1 ~]# vim prometheus.yml
···
scrape_configs:
  - job_name: "Nginx"
    metrics_path: '/status/format/prometheus'
    static_configs:
    - targets: ['172.23.0.243:8088']
···
[root@k8s-m1 ~]# docker restart prometheus

# At this time, enter the prometheus management page to query the monitoring items of nginx
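
Instead of a full container restart, Prometheus also reloads its configuration on SIGHUP, which avoids a gap in scraping (an alternative to the restart above):

# Ask the running Prometheus process to re-read prometheus.yml
docker kill --signal=HUP prometheus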

8. The meaning of each monitoring item

Nginx-module-vts provides a variety of monitoring items. Understanding the meaning of monitoring items will help you generate the required charts.

# HELP nginx_vts_info Nginx info
# TYPE nginx_vts_info gauge
nginx_vts_info{hostname="hbhly_21_205",version="1.16.1"} 1
# HELP nginx_vts_start_time_seconds Nginx start time
# TYPE nginx_vts_start_time_seconds gauge
nginx_vts_start_time_seconds 1584268136.439
# HELP nginx_vts_main_connections Nginx connections
# TYPE nginx_vts_main_connections gauge

# Number of nginx connections, by status
nginx_vts_main_connections{status="accepted"} 9271
nginx_vts_main_connections{status="active"} 7
nginx_vts_main_connections{status="handled"} 9271
nginx_vts_main_connections{status="reading"} 0
nginx_vts_main_connections{status="requests"} 438850
nginx_vts_main_connections{status="waiting"} 6
nginx_vts_main_connections{status="writing"} 1
# HELP nginx_vts_main_shm_usage_bytes Shared memory [ngx_http_vhost_traffic_status] info
# TYPE nginx_vts_main_shm_usage_bytes gauge

# Shared-memory usage
nginx_vts_main_shm_usage_bytes{shared="max_size"} 1048575
nginx_vts_main_shm_usage_bytes{shared="used_size"} 24689
nginx_vts_main_shm_usage_bytes{shared="used_node"} 7
# HELP nginx_vts_server_bytes_total The request/response bytes
# TYPE nginx_vts_server_bytes_total counter
# HELP nginx_vts_server_requests_total The requests counter
# TYPE nginx_vts_server_requests_total counter
# HELP nginx_vts_server_request_seconds_total The request processing time in seconds
# TYPE nginx_vts_server_request_seconds_total counter
# HELP nginx_vts_server_request_seconds The average of request processing times in seconds
# TYPE nginx_vts_server_request_seconds gauge
# HELP nginx_vts_server_request_duration_seconds The histogram of request processing time
# TYPE nginx_vts_server_request_duration_seconds histogram
# HELP nginx_vts_server_cache_total The requests cache counter
# TYPE nginx_vts_server_cache_total counter

# Per-host inbound and outbound traffic
nginx_vts_server_bytes_total{host="10.160.21.205",direction="in"} 22921464
nginx_vts_server_bytes_total{host="10.160.21.205",direction="out"} 1098196005

# Request count by status code (1xx 2xx 3xx 4xx 5xx)
nginx_vts_server_requests_total{host="10.160.21.205",code="1xx"} 0
nginx_vts_server_requests_total{host="10.160.21.205",code="2xx"} 86809
nginx_vts_server_requests_total{host="10.160.21.205",code="3xx"} 0
nginx_vts_server_requests_total{host="10.160.21.205",code="4xx"} 2
nginx_vts_server_requests_total{host="10.160.21.205",code="5xx"} 0
nginx_vts_server_requests_total{host="10.160.21.205",code="total"} 86811

# Response time
nginx_vts_server_request_seconds_total{host="10.160.21.205"} 0.000
nginx_vts_server_request_seconds{host="10.160.21.205"} 0.000

# Cache statistics, by status
nginx_vts_server_cache_total{host="10.160.21.205",status="miss"} 0
nginx_vts_server_cache_total{host="10.160.21.205",status="bypass"} 0
nginx_vts_server_cache_total{host="10.160.21.205",status="expired"} 0
nginx_vts_server_cache_total{host="10.160.21.205",status="stale"} 0
nginx_vts_server_cache_total{host="10.160.21.205",status="updating"} 0
nginx_vts_server_cache_total{host="10.160.21.205",status="revalidated"} 0
nginx_vts_server_cache_total{host="10.160.21.205",status="hit"} 0
nginx_vts_server_cache_total{host="10.160.21.205",status="scarce"} 0
nginx_vts_server_bytes_total{host="devapi.feedback.test",direction="in"} 3044526
nginx_vts_server_bytes_total{host="devapi.feedback.test",direction="out"} 41257028

# Request count by status code
nginx_vts_server_requests_total{host="devapi.feedback.test",code="1xx"} 0
nginx_vts_server_requests_total{host="devapi.feedback.test",code="2xx"} 3983
nginx_vts_server_requests_total{host="devapi.feedback.test",code="3xx"} 0
nginx_vts_server_requests_total{host="devapi.feedback.test",code="4xx"} 24
nginx_vts_server_requests_total{host="devapi.feedback.test",code="5xx"} 11
nginx_vts_server_requests_total{host="devapi.feedback.test",code="total"} 4018
nginx_vts_server_request_seconds_total{host="devapi.feedback.test"} 327.173
nginx_vts_server_request_seconds{host="devapi.feedback.test"} 0.000

# nginx cache counters, broken down by status
nginx_vts_server_cache_total{host="devapi.feedback.test",status="miss"} 0
nginx_vts_server_cache_total{host="devapi.feedback.test",status="bypass"} 0
nginx_vts_server_cache_total{host="devapi.feedback.test",status="expired"} 0
nginx_vts_server_cache_total{host="devapi.feedback.test",status="stale"} 0
nginx_vts_server_cache_total{host="devapi.feedback.test",status="updating"} 0
nginx_vts_server_cache_total{host="devapi.feedback.test",status="revalidated"} 0
nginx_vts_server_cache_total{host="devapi.feedback.test",status="hit"} 0
nginx_vts_server_cache_total{host="devapi.feedback.test",status="scarce"} 0
nginx_vts_server_bytes_total{host="testapi.feedback.test",direction="in"} 55553573
nginx_vts_server_bytes_total{host="testapi.feedback.test",direction="out"} 9667561188
nginx_vts_server_requests_total{host="testapi.feedback.test",code="1xx"} 0
nginx_vts_server_requests_total{host="testapi.feedback.test",code="2xx"} 347949
nginx_vts_server_requests_total{host="testapi.feedback.test",code="3xx"} 31
nginx_vts_server_requests_total{host="testapi.feedback.test",code="4xx"} 7
nginx_vts_server_requests_total{host="testapi.feedback.test",code="5xx"} 33
nginx_vts_server_requests_total{host="testapi.feedback.test",code="total"} 348020
nginx_vts_server_request_seconds_total{host="testapi.feedback.test"} 2185.177
nginx_vts_server_request_seconds{host="testapi.feedback.test"} 0.001
nginx_vts_server_cache_total{host="testapi.feedback.test",status="miss"} 0
nginx_vts_server_cache_total{host="testapi.feedback.test",status="bypass"} 0
nginx_vts_server_cache_total{host="testapi.feedback.test",status="expired"} 0
nginx_vts_server_cache_total{host="testapi.feedback.test",status="stale"} 0
nginx_vts_server_cache_total{host="testapi.feedback.test",status="updating"} 0
nginx_vts_server_cache_total{host="testapi.feedback.test",status="revalidated"} 0
nginx_vts_server_cache_total{host="testapi.feedback.test",status="hit"} 0
nginx_vts_server_cache_total{host="testapi.feedback.test",status="scarce"} 0
nginx_vts_server_bytes_total{host="*",direction="in"} 81519563
nginx_vts_server_bytes_total{host="*",direction="out"} 10807014221

# Request statistics for all hosts combined (host="*")
nginx_vts_server_requests_total{host="*",code="1xx"} 0
nginx_vts_server_requests_total{host="*",code="2xx"} 438741
nginx_vts_server_requests_total{host="*",code="3xx"} 31
nginx_vts_server_requests_total{host="*",code="4xx"} 33
nginx_vts_server_requests_total{host="*",code="5xx"} 44
nginx_vts_server_requests_total{host="*",code="total"} 438849
nginx_vts_server_request_seconds_total{host="*"} 2512.350
nginx_vts_server_request_seconds{host="*"} 0.007

# Cache statistics for all hosts combined
nginx_vts_server_cache_total{host="*",status="miss"} 0
nginx_vts_server_cache_total{host="*",status="bypass"} 0
nginx_vts_server_cache_total{host="*",status="expired"} 0
nginx_vts_server_cache_total{host="*",status="stale"} 0
nginx_vts_server_cache_total{host="*",status="updating"} 0
nginx_vts_server_cache_total{host="*",status="revalidated"} 0
nginx_vts_server_cache_total{host="*",status="hit"} 0
nginx_vts_server_cache_total{host="*",status="scarce"} 0
# HELP nginx_vts_upstream_bytes_total The request/response bytes
# TYPE nginx_vts_upstream_bytes_total counter
# HELP nginx_vts_upstream_requests_total The upstream requests counter
# TYPE nginx_vts_upstream_requests_total counter
# HELP nginx_vts_upstream_request_seconds_total The request Processing time including upstream in seconds
# TYPE nginx_vts_upstream_request_seconds_total counter
# HELP nginx_vts_upstream_request_seconds The average of request processing times including upstream in seconds
# TYPE nginx_vts_upstream_request_seconds gauge
# HELP nginx_vts_upstream_response_seconds_total The only upstream response processing time in seconds
# TYPE nginx_vts_upstream_response_seconds_total counter
# HELP nginx_vts_upstream_response_seconds The average of only upstream response processing times in seconds
# TYPE nginx_vts_upstream_response_seconds gauge
# HELP nginx_vts_upstream_request_duration_seconds The histogram of request processing time including upstream
# TYPE nginx_vts_upstream_request_duration_seconds histogram
# HELP nginx_vts_upstream_response_duration_seconds The histogram of only upstream response processing time
# TYPE nginx_vts_upstream_response_duration_seconds histogram

# Upstream traffic statistics
nginx_vts_upstream_bytes_total{upstream="::nogroups",backend="10.144.227.162:80",direction="in"} 12296
nginx_vts_upstream_bytes_total{upstream="::nogroups",backend="10.144.227.162:80",direction="out"} 13582924
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.144.227.162:80",code="1xx"} 0
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.144.227.162:80",code="2xx"} 25
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.144.227.162:80",code="3xx"} 0
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.144.227.162:80",code="4xx"} 0
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.144.227.162:80",code="5xx"} 0
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.144.227.162:80",code="total"} 25
nginx_vts_upstream_request_seconds_total{upstream="::nogroups",backend="10.144.227.162:80"} 1.483
nginx_vts_upstream_request_seconds{upstream="::nogroups",backend="10.144.227.162:80"} 0.000
nginx_vts_upstream_response_seconds_total{upstream="::nogroups",backend="10.144.227.162:80"} 1.484
nginx_vts_upstream_response_seconds{upstream="::nogroups",backend="10.144.227.162:80"} 0.000
nginx_vts_upstream_bytes_total{upstream="::nogroups",backend="10.152.218.149:80",direction="in"} 12471
nginx_vts_upstream_bytes_total{upstream="::nogroups",backend="10.152.218.149:80",direction="out"} 11790508
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.152.218.149:80",code="1xx"} 0
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.152.218.149:80",code="2xx"} 24
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.152.218.149:80",code="3xx"} 0
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.152.218.149:80",code="4xx"} 0
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.152.218.149:80",code="5xx"} 0
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.152.218.149:80",code="total"} 24
nginx_vts_upstream_request_seconds_total{upstream="::nogroups",backend="10.152.218.149:80"} 1.169
nginx_vts_upstream_request_seconds{upstream="::nogroups",backend="10.152.218.149:80"} 0.000
nginx_vts_upstream_response_seconds_total{upstream="::nogroups",backend="10.152.218.149:80"} 1.168
nginx_vts_upstream_response_seconds{upstream="::nogroups",backend="10.152.218.149:80"} 0.000
nginx_vts_upstream_bytes_total{upstream="::nogroups",backend="10.160.21.205:8081",direction="in"} 3036924
nginx_vts_upstream_bytes_total{upstream="::nogroups",backend="10.160.21.205:8081",direction="out"} 33355357
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.160.21.205:8081",code="1xx"} 0
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.160.21.205:8081",code="2xx"} 3971
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.160.21.205:8081",code="3xx"} 0
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.160.21.205:8081",code="4xx"} 24
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.160.21.205:8081",code="5xx"} 11
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.160.21.205:8081",code="total"} 4006
nginx_vts_upstream_request_seconds_total{upstream="::nogroups",backend="10.160.21.205:8081"} 326.427
nginx_vts_upstream_request_seconds{upstream="::nogroups",backend="10.160.21.205:8081"} 0.000
nginx_vts_upstream_response_seconds_total{upstream="::nogroups",backend="10.160.21.205:8081"} 300.722
nginx_vts_upstream_response_seconds{upstream="::nogroups",backend="10.160.21.205:8081"} 0.000
nginx_vts_upstream_bytes_total{upstream="::nogroups",backend="10.160.21.205:8082",direction="in"} 55536408
nginx_vts_upstream_bytes_total{upstream="::nogroups",backend="10.160.21.205:8082",direction="out"} 9650089427
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.160.21.205:8082",code="1xx"} 0
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.160.21.205:8082",code="2xx"} 347912
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.160.21.205:8082",code="3xx"} 31
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.160.21.205:8082",code="4xx"} 7
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.160.21.205:8082",code="5xx"} 33
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.160.21.205:8082",code="total"} 347983
nginx_vts_upstream_request_seconds_total{upstream="::nogroups",backend="10.160.21.205:8082"} 2183.271
nginx_vts_upstream_request_seconds{upstream="::nogroups",backend="10.160.21.205:8082"} 0.001
nginx_vts_upstream_response_seconds_total{upstream="::nogroups",backend="10.160.21.205:8082"} 2180.893
nginx_vts_upstream_response_seconds{upstream="::nogroups",backend="10.160.21.205:8082"} 0.001
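
These counters compose naturally in PromQL. For example, a per-host 5xx error ratio over the last five minutes (a query sketch built from the metrics above):

sum(rate(nginx_vts_server_requests_total{code="5xx"}[5m])) by (host)
/ sum(rate(nginx_vts_server_requests_total{code="total"}[5m])) by (host)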

9. Target expression query in Prometheus UI

1) Typical monitoring indicators obtained from cAdvisor

Indicator name                           Type      Meaning
container_cpu_load_average_10s           gauge     Average CPU load of the container over the past 10 seconds
container_cpu_usage_seconds_total        counter   Cumulative CPU time used by the container on each core (seconds)
container_cpu_system_seconds_total       counter   Cumulative system CPU time (seconds)
container_cpu_user_seconds_total         counter   Cumulative user CPU time (seconds)
container_fs_usage_bytes                 gauge     File-system usage inside the container (bytes)
container_network_receive_bytes_total    counter   Total bytes received over the container network
container_network_transmit_bytes_total   counter   Total bytes transmitted over the container network

2) Container related

# Container CPU usage
sum(irate(container_cpu_usage_seconds_total{image!=""}[1m])) without (cpu)

# Container memory usage (unit: bytes)
container_memory_usage_bytes{image!=""}

# Container network receiving rate (unit: bytes/second)
sum(rate(container_network_receive_bytes_total{image!=""}[1m])) without (interface)

# Container network transmission rate (unit: bytes/second)
sum(rate(container_network_transmit_bytes_total{image!=""}[1m])) without (interface)

# Container file-system read rate (unit: bytes/second)
sum(rate(container_fs_reads_bytes_total{image!=""}[1m])) without (device)

# Container file system write rate (unit: bytes/second)
sum(rate(container_fs_writes_bytes_total{image!=""}[1m])) without (device)

3) HTTP related

# Total number of HTTP requests
prometheus_http_requests_total

# HTTP request duration histogram buckets (seconds)
prometheus_http_request_duration_seconds_bucket

# HTTP request duration count (seconds)
prometheus_http_request_duration_seconds_count

# Sum of HTTP request durations (seconds)
prometheus_http_request_duration_seconds_sum

# HTTP response size histogram buckets (bytes)
prometheus_http_response_size_bytes_bucket

# HTTP response size count
prometheus_http_response_size_bytes_count

# Sum of HTTP response sizes (bytes)
prometheus_http_response_size_bytes_sum

4) Nginx related

# Nginx VTS filter bytes total
nginx_vts_filter_bytes_total

# Nginx VTS filter cache total
nginx_vts_filter_cache_total

# Nginx VTS filter request seconds
nginx_vts_filter_request_seconds

# Nginx VTS filter request seconds total
nginx_vts_filter_request_seconds_total

# Nginx VTS filter requests total
nginx_vts_filter_requests_total

# Nginx info
nginx_vts_info

# Nginx VTS main connections
nginx_vts_main_connections

# Nginx VTS main shared-memory usage bytes
nginx_vts_main_shm_usage_bytes

# Nginx VTS server bytes total
nginx_vts_server_bytes_total

# Nginx VTS server cache total
nginx_vts_server_cache_total

# Nginx VTS server request seconds
nginx_vts_server_request_seconds

# Nginx VTS server request seconds total
nginx_vts_server_request_seconds_total

# Nginx VTS server requests total
nginx_vts_server_requests_total

# Nginx VTS start time (seconds)
nginx_vts_start_time_seconds

10. Install blackbox_exporter

  • Blackbox collects service-status information, for example checking whether an HTTP request returns 200 and alerting if not
  • blackbox_exporter is one of the exporters officially provided by Prometheus; it collects monitoring data over http, dns, tcp, and icmp
Functions:
	HTTP probe: define request headers; check HTTP status / HTTP response headers / HTTP body content
	TCP probe: monitor the port status of business components; define and monitor application-layer protocols
	ICMP probe: host reachability checks
	POST probe: interface connectivity
	SSL: certificate expiry time

# Download and unpack
[root@11 Prometheus]# wget https://github.com/prometheus/blackbox_exporter/releases/download/v0.14.0/blackbox_exporter-0.14.0.linux-amd64.tar.gz
[root@11 Prometheus]# tar -xvf blackbox_exporter-0.14.0.linux-amd64.tar.gz
[root@11 Prometheus]# mv blackbox_exporter-0.14.0.linux-amd64 /usr/local/blackbox_exporter

# Check whether the installation succeeded
[root@11 Prometheus]# /usr/local/blackbox_exporter/blackbox_exporter --version
blackbox_exporter, version 0.14.0 (branch: HEAD, revision: bba7ef76193948a333a5868a1ab38b864f7d968a)
  build user: root@63d11aa5b6c6
  build date: 20190315-13:32:31
  go version: go1.11.5

# Add to systemd management
[root@11 Prometheus]# cat /usr/lib/systemd/system/blackbox_exporter.service
[Unit]
Description=blackbox_exporter
 
[Service]
User=root
Type=simple
ExecStart=/usr/local/blackbox_exporter/blackbox_exporter --config.file=/usr/local/blackbox_exporter/blackbox.yml
Restart=on-failure
[root@11 Prometheus]# 

# Start
[root@11 Prometheus]# systemctl daemon-reload
[root@11 Prometheus]# systemctl enable --now blackbox_exporter
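
blackbox_exporter is scraped through its /probe endpoint, with the real target passed as a URL parameter. A minimal prometheus.yml job sketch (http_2xx is the stock HTTP module from the default blackbox.yml; the probed URL and exporter address below are assumptions):

scrape_configs:
  - job_name: 'blackbox-http'
    metrics_path: /probe
    params:
      module: [http_2xx]
    static_configs:
    - targets: ['http://172.23.0.241:9100/metrics']  # URL to probe (example)
    relabel_configs:
    - source_labels: [__address__]
      target_label: __param_target
    - source_labels: [__param_target]
      target_label: instance
    - target_label: __address__
      replacement: 127.0.0.1:9115  # address of blackbox_exporter itself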

11. Deploying the nginx-module-vts module with Docker

Since the nginx installed by yum does not have the nginx-module-vts module by default, you need to download the corresponding nginx source code and recompile it.

Building a Consul cluster with Docker (unfinished)

1. Start the first consul service: consul1

docker run --name consul1 -d -p 8500:8500 -p 8300:8300 -p 8301:8301 -p 8302:8302 -p 8600:8600 --restart=always consul:latest agent -server -bootstrap-expect 2 -ui -bind=0.0.0.0 -client=0.0.0.0

# Get the IP address of consul server1
docker inspect --format '{{ .NetworkSettings.IPAddress }}' consul1
172.17.0.2

# PS:
    8500: HTTP port, used for the HTTP API and web UI
    8300: server RPC port; consul servers in the same datacenter communicate over this port
    8301: serf LAN port; consul clients in the same datacenter communicate over this port
    8302: serf WAN port; consul servers in different datacenters communicate over this port
    8600: DNS port, used for service discovery
    -bootstrap-expect 2: the cluster needs at least two servers before it can elect a leader
    -ui: serve the web console
    -bind: address to listen on; 0.0.0.0 means all interfaces (defaults to 127.0.0.1 if unspecified, in which case the container cannot communicate)
    -client: restrict which addresses may access the client interfaces

2. Start the second consul service: consul2, and join consul1 (using the join command)

docker run -d --name consul2 -p 8501:8500 consul agent -server -ui -bind=0.0.0.0 -client=0.0.0.0 -join 172.17.0.2

Or, with persistent data and configuration volumes:

docker run -d -p 8501:8500 --restart=always -v /XiLife/consul/data/server2:/consul/data -v /XiLife/consul/conf/server2:/consul/config -e CONSUL_BIND_INTERFACE='eth0' --privileged=true --name=consul2 consul agent -server -ui -node=consul2 -client='0.0.0.0' -datacenter=xdp_dc -data-dir /consul/data -config-dir /consul/config -join=172.17.0.2

3. Start the third consul service: consul3, and join consul1

docker run --name consul3 -d -p 8502:8500 consul agent -server -ui -bind=0.0.0.0 -client=0.0.0.0 -join 172.17.0.2

4. View the running container (consul cluster status)

[root@k8s-m1 consul]# docker exec -it consul1 consul members
Node Address Status Type Build Protocol DC Segment
013a4a7e74d2 172.17.0.4:8301 alive server 1.10.0 2 dc1 <all>
3c118fa83d47 172.17.0.3:8301 alive server 1.10.0 2 dc1 <all>
4b5123c97c2b 172.17.0.5:8301 alive server 1.10.0 2 dc1 <all>
a7d272ad157a 172.17.0.2:8301 alive server 1.10.0 2 dc1 <all>

5. Service registration and removal

  • Next, we register services with Consul through its standard HTTP API
  • Register a test service first. The test data is the local node-exporter service; the address and port are node-exporter's default metrics endpoint. Run the following command:

# Register the node-exporter service information of 172.23.0.241
curl -X PUT -d '{"id": "node-exporter","name": "node-exporter-172.23.0.241","address": "172.23.0.241","port": 9100,"tags": ["prometheus"],"checks": [{"http": "http://172.23.0.241:9100/metrics", "interval": "5s"}]}' http://172.23.0.241:8500/v1/agent/service/register

# To register the node-exporter of 172.23.0.242, change every IP above to 242 and keep the port unchanged

To deregister a service, use the following API call; for example, to deregister the node-exporter service added above:

curl -X PUT http://172.23.0.241:8500/v1/agent/service/deregister/node-exporter
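
The payoff of registering services is that Prometheus can discover them from Consul instead of maintaining static target lists. A minimal consul_sd_configs job sketch matching the registration above (the "prometheus" tag is the one set during registration):

scrape_configs:
  - job_name: 'consul-node-exporter'
    consul_sd_configs:
    - server: '172.23.0.241:8500'
      tags: ['prometheus']
    relabel_configs:
    - source_labels: [__meta_consul_service]
      target_label: service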

Appendix: Upgrading the Centos6 kernel

rpm -Uvh https://hkg.mirror.rackspace.com/elrepo/kernel/el6/x86_64/RPMS/elrepo-release-6-12.el6.elrepo.noarch.rpm

Yum source error ("cannot find the mirror source") solution:
cd /etc/yum.repos.d
mv CentOS-Base.repo CentOS-Base.repo.backup
wget http://mirrors.163.com/.help/CentOS6-Base-163.repo
mv CentOS6-Base-163.repo CentOS-Base.repo
yum clean all
wget -O /etc/yum.repos.d/CentOS-Base.repo http://file.kangle.odata.cc/repo/Centos-6.repo
wget -O /etc/yum.repos.d/epel.repo http://file.kangle.odata.cc/repo/epel-6.repo
yum makecache

This concludes this article on deploying Prometheus with Docker.