1) Introduction to the cache mechanism

In a Linux system, in order to improve file system performance, the kernel uses a portion of physical memory as a cache for system operations and file data. When the kernel receives a read or write request, it first checks whether the requested data is already in the cache. If it is, the data is returned directly; if not, the kernel goes to the disk through the driver.

CPU context switching: the CPU gives each process a certain amount of service time. When a time slice is used up, the kernel reclaims the processor from the running process, saves the process's current running state and then loads the next task. This procedure is called a context switch. In essence, it is the switch between the process being suspended and the process about to run.

2) Checking cache and memory usage

[root@localhost ~]# free -m
             total       used       free     shared    buffers     cached
Mem:          7866       7725        141         19         74       6897
-/+ buffers/cache:        752       7113
Swap:        16382         32      16350

From the output above, the total memory is 8 GB, 7725 MB appears to be used and only 141 MB appears to be free. Many people read it that way, but the real figures are:

Free memory = free (141) + buffers (74) + cached (6897)
Used memory = total (7866) - free memory

That gives about 7112 MB of free memory and about 754 MB of memory actually in use, which is the real usage. You can also simply read the -/+ buffers/cache line, which shows the same corrected figures.

3) The difference between buffers and cached

The kernel allocates the buffer cache while making sure that the system can still use physical memory and read and write data normally.

buffers caches metadata and pages and can be understood as the system cache; it is used, for example, when vi opens a file.

cached caches file data and can be understood as the data block cache. For example, when dd if=/dev/zero of=/tmp/test count=1 bs=1G writes a file, the data is kept in the page cache (cached), so the next time the test file is accessed the operation is noticeably faster.

4) Swap usage

Swap is the swap partition, the "virtual memory" we usually talk about, which is a partition carved out of the hard disk. When physical memory runs short, the kernel releases data in the buffers/cache that has not been used for a long time and temporarily moves it into swap. In other words, swap is only used when physical memory and cache memory are not enough.

Clearing swap: swapoff -a && swapon -a
Note: there is a prerequisite for this operation: the free physical memory must be larger than the swap space already in use.

5) How to release cache memory

a) Clear pagecache
# echo 1 > /proc/sys/vm/drop_caches
or
# sysctl -w vm.drop_caches=1

b) Clear dentries (the directory cache) and inodes
# echo 2 > /proc/sys/vm/drop_caches
or
# sysctl -w vm.drop_caches=2

c) Clear pagecache, dentries and inodes
# echo 3 > /proc/sys/vm/drop_caches
or
# sysctl -w vm.drop_caches=3

The three methods above release the cache only once. To make the setting permanent, configure vm.drop_caches=1/2/3 in the /etc/sysctl.conf file and then run sysctl -p for it to take effect.

In addition, the sync command can be used to flush the file system cache; it also cleans up zombie objects and the memory they occupy.
# sync

In most cases these operations do no harm to the system; they only help free up unused memory.
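For example, a typical one-off cleanup, which simply combines the commands above (a sketch; the exact numbers shown by free will differ from system to system), looks like this:

# free -m                               # note the buffers and cached values
# sync                                  # flush dirty data to disk first
# echo 3 > /proc/sys/vm/drop_caches     # drop pagecache, dentries and inodes
# free -m                               # buffers and cached should now be much smaller

Comparing the two free -m outputs, the buffers and cached columns shrink and the free column grows accordingly.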
But if data is being written while these operations are performed, the data may be cleared from the file cache before it reaches the disk, which can have adverse effects. How can this be avoided? This is where the file /proc/sys/vm/vfs_cache_pressure comes in: it tells the kernel what priority to use when reclaiming the inode/dentry cache.
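For reference, the current value can be read and changed like this (a sketch; 100 is the usual kernel default, values above 100 make the kernel reclaim dentries and inodes more aggressively, and lower values make it keep them longer; 200 below is only an example value):

# cat /proc/sys/vm/vfs_cache_pressure       # show the current setting (typically 100)
# sysctl -w vm.vfs_cache_pressure=200       # reclaim the dentry/inode cache more aggressively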
Before releasing memory, run the sync command to synchronize first: this preserves the integrity of the file system by writing all unwritten system buffers to disk, including modified inodes, delayed block I/O and read-write mapped files. Otherwise, unsaved data may be lost while the cache is being released.

/proc is a virtual file system that serves as a communication channel with the kernel: reading and writing the files under it adjusts the behavior of the running kernel. That is why we can free up memory by writing to /proc/sys/vm/drop_caches. The value can be a number from 0 to 3 with different meanings: 0 means do not release anything (the system default), while 1, 2 and 3 correspond to the three cases listed above.
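As a quick check that sync has done its job before dropping the caches, the amount of data still waiting to be written can be read from /proc/meminfo (a sketch; the Dirty field is a standard /proc/meminfo entry and should be at or near zero right after sync):

# sync
# grep Dirty /proc/meminfo                  # shows how many kB of dirty pages are still unwritten

If the Dirty value is close to 0 kB, it is safe to write to /proc/sys/vm/drop_caches.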
The above is all there is to know about clearing the cache on a Linux system. Thank you for reading and for your support of 123WORDPRESS.COM.