Monitoring showed that a long-running Docker container had almost exhausted its memory, yet the container was never restarted. Checking on the host, the processes inside did not actually occupy much memory. Our memory metrics come from cadvisor and are calculated the way the cadvisor page calculates them, so I decided to dig into how Docker computes container memory.

docker version:

Client:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.4
 Git commit:   78d1802
 Built:        Tue Jan 10 20:20:01 2017
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.4
 Git commit:   78d1802
 Built:        Tue Jan 10 20:20:01 2017
 OS/Arch:      linux/amd64

kubernetes version:

Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2+coreos.0", GitCommit:"4c0769e81ab01f47eec6f34d7f1bb80873ae5c2b", GitTreeState:"clean", BuildDate:"2017-10-25T16:24:46Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2+coreos.0", GitCommit:"4c0769e81ab01f47eec6f34d7f1bb80873ae5c2b", GitTreeState:"clean", BuildDate:"2017-10-25T16:24:46Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

1. Create a pod yaml file. Use the busybox image for testing and set a limit of 2 CPU cores and 2Gi of memory for the container:

[docker@k8s busybox]$ cat busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: registry.dcos:8021/public/busybox:latest
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
    resources:
      limits:
        cpu: "2"
        memory: 2Gi
      requests:
        cpu: 100m
        memory: 64Mi
  restartPolicy: Always

2. Create the busybox pod with kubectl (the original text says "service", but the yaml defines a pod):

[docker@k8s busybox]$ kubectl create -f busybox.yaml
pod "busybox" created
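Before the next step you need a shell inside the container. A minimal way in (the pod name busybox comes from the yaml above; docker exec against the container id works just as well):

[docker@k8s busybox]$ kubectl exec -it busybox -- sh
/ # cd /sys/fs/cgroup/memory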
3. Enter the container, go to its /sys/fs/cgroup/memory directory, and list the files with ls:

-rw-r--r-- 1 root root 0 May 31 03:18 cgroup.clone_children
--w--w--w- 1 root root 0 May 31 03:18 cgroup.event_control
-rw-r--r-- 1 root root 0 May 31 03:18 cgroup.procs
-rw-r--r-- 1 root root 0 May 31 03:18 memory.failcnt
--w------- 1 root root 0 May 31 03:18 memory.force_empty
-rw-r--r-- 1 root root 0 May 31 03:18 memory.kmem.failcnt
-rw-r--r-- 1 root root 0 May 31 03:18 memory.kmem.limit_in_bytes
-rw-r--r-- 1 root root 0 May 31 03:18 memory.kmem.max_usage_in_bytes
-r--r--r-- 1 root root 0 May 31 03:18 memory.kmem.slabinfo
-rw-r--r-- 1 root root 0 May 31 03:18 memory.kmem.tcp.failcnt
-rw-r--r-- 1 root root 0 May 31 03:18 memory.kmem.tcp.limit_in_bytes
-rw-r--r-- 1 root root 0 May 31 03:18 memory.kmem.tcp.max_usage_in_bytes
-r--r--r-- 1 root root 0 May 31 03:18 memory.kmem.tcp.usage_in_bytes
-r--r--r-- 1 root root 0 May 31 03:18 memory.kmem.usage_in_bytes
-rw-r--r-- 1 root root 0 May 31 03:18 memory.limit_in_bytes
-rw-r--r-- 1 root root 0 May 31 03:18 memory.max_usage_in_bytes
-rw-r--r-- 1 root root 0 May 31 03:18 memory.memsw.failcnt
-rw-r--r-- 1 root root 0 May 31 03:18 memory.memsw.limit_in_bytes
-rw-r--r-- 1 root root 0 May 31 03:18 memory.memsw.max_usage_in_bytes
-r--r--r-- 1 root root 0 May 31 03:18 memory.memsw.usage_in_bytes
-rw-r--r-- 1 root root 0 May 31 03:18 memory.move_charge_at_immigrate
-r--r--r-- 1 root root 0 May 31 03:18 memory.numa_stat
-rw-r--r-- 1 root root 0 May 31 03:18 memory.oom_control
---------- 1 root root 0 May 31 03:18 memory.pressure_level
-rw-r--r-- 1 root root 0 May 31 03:18 memory.soft_limit_in_bytes
-r--r--r-- 1 root root 0 May 31 03:18 memory.stat
-rw-r--r-- 1 root root 0 May 31 03:18 memory.swappiness
-r--r--r-- 1 root root 0 May 31 03:18 memory.usage_in_bytes
-rw-r--r-- 1 root root 0 May 31 03:18 memory.use_hierarchy
-rw-r--r-- 1 root root 0 May 31 03:18 notify_on_release
-rw-r--r-- 1 root root 0 May 31 03:18 tasks

We mainly focus on a few of these files.
The first is memory.stat, whose contents break the memory charged to the cgroup down by type: cache, rss, mapped_file, swap, and the active/inactive anonymous and file page counters (a full dump is shown in step 4 below). The other two are memory.limit_in_bytes and memory.usage_in_bytes.
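As a quick way to see how much of the charge is page cache versus process RSS at any moment, here is a minimal sketch run inside the container, assuming the cgroup v1 memory controller is mounted at /sys/fs/cgroup/memory as in the listing above:

cd /sys/fs/cgroup/memory
# Print only the fields relevant to the cache-vs-rss question
# (field names as they appear in the memory.stat dump in step 4).
awk '$1 == "cache" || $1 == "rss" || $1 == "active_file" || $1 == "inactive_file" {
    printf "%-14s %d bytes\n", $1, $2
}' memory.stat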
View the memory.limit_in_bytes file:

/sys/fs/cgroup/memory # cat memory.limit_in_bytes
2147483648

The container's memory limit is 2G, matching the limit defined in the yaml file.

View the memory.usage_in_bytes file:

/sys/fs/cgroup/memory # cat memory.usage_in_bytes
2739376

Running docker stats <container id> shows a memory figure consistent with memory.usage_in_bytes.

4. Use the dd command to quickly generate a 1.5G file:

~ # dd if=/dev/zero of=test bs=1M count=1500
1500+0 records in
1500+0 records out
1572864000 bytes (1.5GB) copied, 1.279989 seconds, 1.1GB/s

Run docker stats <container id> again to view the memory occupied by the container, and check memory.usage_in_bytes:

/sys/fs/cgroup/memory # cat memory.usage_in_bytes
1619329024

The memory charged to the container has jumped to about 1.5G. Check memory.stat:

/sys/fs/cgroup/memory # cat memory.stat
cache 1572868096
rss 147456
rss_huge 0
mapped_file 0
dirty 1572868096
writeback 0
swap 0
pgpgin 384470
pgpgout 433
pgfault 607
pgmajfault 0
inactive_anon 77824
active_anon 12288
inactive_file 1572864000
active_file 4096
unevictable 0
hierarchical_memory_limit 2147483648
hierarchical_memsw_limit 4294967296
total_cache 1572868096
total_rss 147456
total_rss_huge 0
total_mapped_file 0
total_dirty 1572868096
total_writeback 0
total_swap 0
total_pgpgin 384470
total_pgpgout 433
total_pgfault 607
total_pgmajfault 0
total_inactive_anon 77824
total_active_anon 12288
total_inactive_file 1572864000
total_active_file 4096
total_unevictable 0

The cache field in memory.stat has grown by 1.5G, and inactive_file is 1.5G, so the file cache generated by dd is accounted under inactive_file. This is what makes container memory monitoring read high: the cache is reclaimable and does not reflect the memory actually occupied by the processes.

In general, the monitored memory can be broken down as follows:

active_anon + inactive_anon = anonymous memory + file cache for tmpfs + swap cache
(therefore active_anon + inactive_anon ≠ rss, because rss does not include tmpfs)

active_file + inactive_file = cache - size of tmpfs

So the actual memory usage should be calculated as:

real_used = memory.usage_in_bytes - (active_file + inactive_file)

5. Stress testing

(1) Prepare the tomcat image and the jmeter stress testing tool. The tomcat yaml file is as follows:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tomcat-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: registy.dcos:8021/public/tomcat:8
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: "1"
            memory: 300Mi
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: tomcat
  name: tomcat
  namespace: default
spec:
  ports:
  - name: tomcat
    port: 8080
    protocol: TCP
    targetPort: 8080
  type: NodePort
  selector:
    app: tomcat

The yaml limits the tomcat container to 300Mi of memory. Create the deployment and service from this file with kubectl, then use docker stats to check the tomcat container's memory usage while it is idle.

(2) Extract the nodePort of the tomcat service:

[docker@ecs-5f72-0006 ~]$ kubectl get svc tomcat -o=custom-columns=nodePort:.spec.ports[0].nodePort
nodePort
31401

(3) Download the stress testing tool from the jmeter official website. Run jmeter on Windows (launch it from the bin directory), configure the test plan to target the node address and the nodePort above, and click the Start button to begin the stress test.
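While the stress test runs, the formula above can be applied directly inside a container to separate real process memory from page cache. A minimal sketch, assuming the cgroup v1 memory controller is mounted at /sys/fs/cgroup/memory as in the busybox example:

#!/bin/sh
# Hedged sketch of the formula from step 4:
#   real_used = memory.usage_in_bytes - (active_file + inactive_file)
CG=/sys/fs/cgroup/memory
usage=$(cat "$CG/memory.usage_in_bytes")
active_file=$(awk '$1 == "active_file" {print $2}' "$CG/memory.stat")
inactive_file=$(awk '$1 == "inactive_file" {print $2}' "$CG/memory.stat")
real_used=$((usage - active_file - inactive_file))
echo "usage_in_bytes : $usage"
echo "file cache     : $((active_file + inactive_file))"
echo "real_used      : $real_used"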
Checking the container's memory usage with docker stats shows that it has reached the limit, and kubectl get pods shows that the tomcat pod was killed because its memory exceeded the limit.

Summary

docker stats memory monitoring has long been a source of confusion: Docker included cache/buffer in its memory figure, which is misleading. Docker calculates memory the same way Linux reports memory usage, cache/buffer included. But cache is reclaimable; it mostly serves I/O, keeping data that may be accessed again in memory to improve performance. Many issues about this memory accounting were filed on Docker's official GitHub, and it was not until Docker 17.06 that docker stats stopped counting the cache. However, that only fixes the docker stats display: if you check memory usage inside the container, or consume the data collected by cadvisor directly, the cache is still included.

The stress test showed that the pod is only restarted when the program's real memory reaches the limit. That also explains why, in our monitoring, a container reporting 99%+ memory usage is not restarted: a considerable part of that memory is cache.
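One way to convince yourself that the cache portion really is reclaimable is to force the cgroup to drop it and watch usage fall. A rough sketch using the memory.force_empty knob from the listing in step 3 (cgroup v1 only, run as root inside the container; whether the file is writable there depends on how the cgroup filesystem is mounted):

#!/bin/sh
# Hedged sketch: ask the cgroup to reclaim whatever it can (mostly page cache)
# and compare memory.usage_in_bytes before and after.
CG=/sys/fs/cgroup/memory
echo "before: $(cat "$CG/memory.usage_in_bytes") bytes"
echo 0 > "$CG/memory.force_empty"
echo "after:  $(cat "$CG/memory.usage_in_bytes") bytes"

If most of the charge is page cache, as in the busybox container after the dd test, the usage figure should drop sharply, which is exactly the point above: that part of the memory never belonged to the processes.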