Deploying a Varnish Cache Server on CentOS 7.5

1. Introduction to Varnish

Varnish is a high-performance, open-source reverse proxy and HTTP cache server. Its role is similar to Squid's: you can install Varnish in front of any web server and configure it to cache content. Compared with traditional Squid, Varnish offers higher performance, faster response, and easier management; some companies already run it in production as a replacement for older Squid versions, getting better caching from the same hardware. Varnish is also a common choice for CDN cache nodes.

The main features of Varnish are as follows:

Cache location: either memory or disk can be used; if you use disks, SSDs in RAID 1 are recommended;

Log storage: logs are also kept in memory, in a fixed-size buffer that is reused circularly;

Supports the use of virtual memory;

Precise time management, i.e. per-object control of cache lifetime (TTL);

State engine architecture: different stages of cache and proxy processing are handled by different engines. Through a dedicated configuration language (VCL) you can write control statements that decide how and where data is cached, and how requests are processed by specific rules at specific stages;

Cache management: cached objects are managed in a binary heap so that expired data is evicted promptly.

Compared with Squid: both are open-source reverse proxy cache servers. Varnish is very stable and fast, because Squid reads cached data from the hard disk while Varnish keeps data in memory and reads it directly, avoiding frequent swapping of files between memory and disk; Varnish is therefore more efficient. Varnish also supports more concurrent connections, since it releases TCP connections faster than Squid. In addition, Varnish can purge parts of the cache in batches through its management port using regular expressions, which Squid cannot do. Finally, Squid runs as a single process on a single CPU core, whereas Varnish's worker child process runs many threads, so all cores can be used to handle requests.
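The regex-based batch purge mentioned above can be pictured as filtering cached URLs against a pattern. A minimal Python sketch of the idea (the cache contents and the `ban` function are made up for illustration; the real feature is driven through Varnish's management port):

```python
import re

def ban(cache, pattern):
    """Remove every cached entry whose URL matches the regex pattern,
    which is conceptually what a batch purge over the management port does."""
    rx = re.compile(pattern)
    return {url: body for url, body in cache.items() if not rx.search(url)}

cache = {"/logo.png": "...", "/style.css": "...", "/banner.png": "..."}
cache = ban(cache, r"\.png$")     # drop every cached .png object at once
print(sorted(cache))              # ['/style.css']
```

Squid's purge interface, by contrast, only removes one exact URL at a time.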

The above lists Varnish's many advantages, but Varnish is not perfect. Its main disadvantages are as follows:

1. Once the Varnish process crashes or restarts, all cached data is released from memory, and every request is sent to the backend servers, which puts great pressure on them under high concurrency;

2. When requests for a single URL pass through a load balancer such as HAProxy/F5, each request may land on a different Varnish server, so requests penetrate to the backend; the same content is also cached on several servers, wasting Varnish's cache space and degrading performance;

Solutions to Varnish Disadvantages:

Regarding disadvantage 1: under heavy traffic it is recommended to use Varnish's memory cache mode backed by several Squid/nginx servers behind it. This second layer mainly absorbs the flood of requests that would otherwise penetrate Varnish when a Varnish service or server restarts; Squid/nginx act as a second-layer cache and make up for the fact that Varnish's in-memory cache is lost on restart.

For disadvantage 2: configure URL hashing on the load balancer so that requests for a given URL always go to the same Varnish server;
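The URL-hash fix can be sketched in a few lines of Python (node names here are hypothetical): because the hash of a URL is stable, every request for that URL maps to the same Varnish node, so each object is cached exactly once.

```python
import hashlib

def pick_cache_node(url, nodes):
    """Hash the URL and map it to one fixed cache node (URL-hash scheduling)."""
    digest = int(hashlib.md5(url.encode()).hexdigest(), 16)
    return nodes[digest % len(nodes)]

nodes = ["varnish-a", "varnish-b", "varnish-c"]   # hypothetical node names
# The same URL always lands on the same node, so it is cached only once:
assert pick_cache_node("/index.html", nodes) == pick_cache_node("/index.html", nodes)
```

Real load balancers implement the same idea natively (e.g. a URI-hash balancing mode), often with consistent hashing so that adding or removing a node remaps as few URLs as possible.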

2. How Varnish works

Varnish's master (management) process is responsible for startup: it reads the configuration file, creates storage space of the specified size (for example, 2 GB of memory allocated by the administrator), and creates and manages the child process.

The child process then handles the actual work. It allocates threads for different tasks, such as accepting HTTP requests, allocating storage space for cache objects, clearing expired cache objects, freeing space, and defragmentation.

HTTP requests are processed as follows:

1. A dedicated thread listens on the request port. When a request arrives, it hands it to a worker thread. The worker thread parses the URI of the HTTP request to learn what is wanted, then looks for that object in the cache. If the object is found, it is returned to the user directly. If not, the request is forwarded to the backend server and the worker waits for the result. After receiving the result from the backend, the worker first stores the content as a cache object in the cache space (to serve the next request for the same object quickly), and then returns the content to the user.
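A rough Python sketch of that worker-thread logic (the function and variable names are invented for illustration):

```python
cache = {}

def fetch_from_backend(uri):
    # Hypothetical stand-in for the real request to the backend web server.
    return "content of " + uri

def handle_request(uri):
    """Serve from cache on a hit; otherwise fetch, store, then return."""
    if uri in cache:
        return cache[uri], "HIT"          # object found: return it directly
    body = fetch_from_backend(uri)        # miss: ask the backend and wait
    cache[uri] = body                     # store as a cache object first
    return body, "MISS"                   # then return the content to the user

print(handle_request("/index.html")[1])   # MISS on the first request
print(handle_request("/index.html")[1])   # HIT on the second
```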

The cache allocation process is as follows:

When an object needs to be cached, Varnish searches the free cache area for the smallest free block that fits the object's size. Once found, the object is placed in it; if the object does not fill the block, the remaining space becomes a new free block. If there is no room left in the free cache area, part of the cache must be evicted to make space, following a least-recently-used (LRU) policy.
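A minimal Python sketch of this best-fit allocation, assuming free blocks are tracked simply as sizes:

```python
def best_fit_alloc(free_blocks, size):
    """Pick the smallest free block that fits `size`;
    any leftover space becomes a new free block."""
    candidates = [b for b in free_blocks if b >= size]
    if not candidates:
        return None, free_blocks       # no room: caller must evict (LRU) first
    chosen = min(candidates)           # best fit = smallest block that fits
    remaining = list(free_blocks)
    remaining.remove(chosen)
    if chosen > size:
        remaining.append(chosen - size)  # leftover joins the free list
    return chosen, remaining

block, free = best_fit_alloc([100, 40, 60], 50)
print(block, sorted(free))   # 60 [10, 40, 100]
```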

The process of releasing the cache is as follows:

A dedicated thread releases cache space. It periodically checks the lifetime of every object in the cache; if an object has not been accessed within the specified period, the thread deletes it and frees the space it occupied. After freeing the space, it checks whether the adjacent memory is also free and, if so, merges the pieces into one larger free block to reduce fragmentation.
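The merging of adjacent free space can be sketched like this, assuming free blocks are tracked as (start, length) pairs:

```python
def coalesce(free_blocks):
    """Merge adjacent (start, length) free blocks into larger ones."""
    merged = []
    for start, length in sorted(free_blocks):
        if merged and merged[-1][0] + merged[-1][1] == start:
            prev_start, prev_len = merged.pop()        # block ends exactly where
            merged.append((prev_start, prev_len + length))  # the next one starts: merge
        else:
            merged.append((start, length))
    return merged

print(coalesce([(20, 3), (0, 10), (10, 5)]))  # [(0, 15), (20, 3)]
```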

For more Varnish features, please visit the Varnish official website.

3. Deploy Varnish Cache Server

Environment preparation:

Three CentOS 7.5 servers, with IP addresses 192.168.20.5, 192.168.20.4, and 192.168.20.3;

192.168.20.5 is the Varnish cache server; the other two are backend web servers. Prepare a different web page on each (here I set each page's content to its own IP) so the cache effect can be verified;

Download the Varnish source package I provided and upload it to the Varnish server.

1. Start deploying and installing Varnish:

[root@varnish ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo 
# ...... (dependency installation and source compilation steps omitted)
[root@varnish varnish]# vim /usr/local/var/varnish/example.vcl
#
# This is an example VCL file for Varnish.
#
# It does not do anything by default, delegating control to the
# builtin VCL. The builtin VCL is called when there is no explicit
# return statement.
#
# See the VCL chapters in the Users Guide at https://www.varnish-cache.org/docs/
# and http://varnish-cache.org/trac/wiki/VCLExamples for more examples.

# Marker to tell the VCL compiler that this VCL has been adapted to the
# new 4.0 format.
vcl 4.0;
import directors;
import std;
# Default backend definition. Set this to point to your content server.
probe backend_healthcheck {
    .url = "/";                    # Path requested on the backend for health checks
    .interval = 5s;                # Interval between check requests
    .timeout = 1s;                 # Timeout for each check request
    .window = 5;                   # Number of recent checks to evaluate
    .threshold = 3;                # At least 3 of the last 5 checks must succeed, or the backend is considered down
}
backend web1 {                     # Define a backend server
    .host = "192.168.20.4";        # IP address (or domain name) of the backend host
    .port = "80";                  # Port of the backend server
    .probe = backend_healthcheck;  # Attach the health check defined above
}
backend web2 {
.host = "192.168.20.3";   
.port = "80";
.probe = backend_healthcheck;
}
acl purgers {  # Define an access control list for purge requests
    "127.0.0.1";
    "localhost";
    "192.168.20.0/24";
    !"192.168.20.4";
}
sub vcl_init {  # vcl_init runs at VCL load time; create the backend host group (director) here
    new web_cluster = directors.round_robin();  # Create a director object using the round-robin algorithm
    web_cluster.add_backend(web1);              # Add the backend server nodes
    web_cluster.add_backend(web2);
}
sub vcl_recv {
    set req.backend_hint = web_cluster.backend();  # Send requests to the backend nodes defined in web_cluster
    if (req.method == "PURGE") {                   # Is the client's request method PURGE?
        if (!client.ip ~ purgers) {                # If so, check whether the client IP is in the ACL
            return (synth(405, "Not Allowed."));   # Not in the ACL: return 405 and the defined page
        }
        return (purge);                            # Allowed by the ACL: hand the request to vcl_purge
    }
if (req.method != "GET" &&
    req.method != "HEAD" &&
    req.method != "PUT" &&
    req.method != "POST" &&
    req.method != "TRACE" &&
    req.method != "OPTIONS" &&
    req.method != "PATCH" &&
    req.method != "DELETE") {  # Any other request method is piped straight to the backend
        return (pipe);
    }
if (req.method != "GET" && req.method != "HEAD") {
    return (pass); #If it is not GET or HEAD, pass it.
}
if (req.url ~ "\.(php|asp|aspx|jsp|do|ashx|shtml)($|\?)") {
    return (pass); # Dynamic pages (.php, .asp, .jsp, etc.) are passed to the backend and not cached.
}
if (req.http.Authorization) {
    return (pass); # Requests that carry authentication are passed, not cached.
}
if (req.http.Accept-Encoding) {
    if (req.url ~ "\.(bmp|png|gif|jpg|jpeg|ico|gz|tgz|bz2|tbz|zip|rar|mp3|mp4|ogg|swf|flv)$") {
        unset req.http.Accept-Encoding;           # Already-compressed content: drop the client's Accept-Encoding
    } elseif (req.http.Accept-Encoding ~ "gzip") {
        set req.http.Accept-Encoding = "gzip";    # Normalize to gzip if the client supports it
    } elseif (req.http.Accept-Encoding ~ "deflate") {
        set req.http.Accept-Encoding = "deflate"; # Otherwise fall back to deflate
    } else {
        unset req.http.Accept-Encoding;           # Anything else: drop the header
    }
}
if (req.url ~ "\.(css|js|html|htm|bmp|png|gif|jpg|jpeg|ico|gz|tgz|bz2|tbz|zip|rar|mp3|mp4|ogg|swf|flv)($|\?)") {
    unset req.http.cookie; # Strip cookies from static-file requests.
    return (hash);         # Look the request up in the local cache.
}
if (req.restarts == 0) {                # Only on the first pass through vcl_recv
    if (req.http.X-Forwarded-For) {     # Record the client IP: append to an existing X-Forwarded-For header
        set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip;
    } else {
        set req.http.X-Forwarded-For = client.ip;
    }
}
return (hash);
}
sub vcl_hash {
    hash_data(req.url);            # Hash on the URL requested by the client
    if (req.http.host) {
        hash_data(req.http.host);  # Plus the Host header if present
    } else {
        hash_data(server.ip);      # Otherwise the server IP
    }
    return (lookup);
}
sub vcl_hit {
    if (req.method == "PURGE") { #If it is HIT and the client request type is PURGE, return the 200 status code and return the corresponding page.
        return (synth(200, "Purged."));
    }
    return (deliver);
}

sub vcl_miss {
  if (req.method == "PURGE") {
        return (synth(404, "Purged.")); #If it is a miss, return 404
    }
    return (fetch);
}
sub vcl_deliver {
    if (obj.hits > 0) {
        set resp.http.CXK = "HIT-from-varnish";  # Custom response header marking a cache hit
        set resp.http.X-Cache-Hits = obj.hits;   # Number of times this object has been served from cache
    } else {
        set resp.http.X-Cache = "MISS";
    }
    unset resp.http.X-Powered-By;    # Hide the web platform version
    unset resp.http.Server;          # Hide the server software
    unset resp.http.X-Drupal-Cache;  # Hide the caching framework
    unset resp.http.Via;             # Hide the proxy chain information
    unset resp.http.Link;            # Hide HTML link headers
    unset resp.http.X-Varnish;       # Hide the Varnish transaction ID
    set resp.http.xx_restarts_count = req.restarts;  # Number of request restarts
    set resp.http.xx_Age = resp.http.Age;            # Age of the cached object
    #set resp.http.hit_count = obj.hits;             # (optional) show the number of cache hits
    #unset resp.http.Age;
    return (deliver);
}
sub vcl_pass {
    return (fetch);  # Fetch from the backend; passed responses are not cached.
}
sub vcl_backend_response {
    set beresp.grace = 5m;  # Allow serving stale content for up to 5 extra minutes
    if (beresp.status == 499 || beresp.status == 404 || beresp.status == 502) {
        set beresp.uncacheable = true;  # Do not cache responses with these error status codes
    }
    if (bereq.url ~ "\.(php|jsp)(\?|$)") {
        set beresp.uncacheable = true;  # Dynamic pages (.php, .jsp) are never cached
    } else {
        if (bereq.url ~ "\.(css|js|html|htm|bmp|png|gif|jpg|jpeg|ico)($|\?)") {
            set beresp.ttl = 15m;       # Cache page assets for 15 minutes
            unset beresp.http.Set-Cookie;
        } elseif (bereq.url ~ "\.(gz|tgz|bz2|tbz|zip|rar|mp3|mp4|ogg|swf|flv)($|\?)") {
            set beresp.ttl = 30m;       # Cache archives and media for 30 minutes
            unset beresp.http.Set-Cookie;
        } else {
            set beresp.ttl = 10m;       # Everything else: 10-minute lifetime
            unset beresp.http.Set-Cookie;
        }
    }
    return (deliver);
}
sub vcl_purge {
    return (synth(200,"success"));
}
sub vcl_backend_error {
    if (beresp.status == 500 ||
        beresp.status == 501 ||
        beresp.status == 502 ||
        beresp.status == 503 ||
        beresp.status == 504) {
        return (retry);  # Retry the backend request on these 5xx errors
    }
}
sub vcl_fini {
    return (ok);
}
#After editing, save and exit.
[root@varnish varnish]# varnishd -f /usr/local/var/varnish/example.vcl -s malloc,200M -a 0.0.0.0:80
# Start Varnish, listening on port 80 of every local IP address; -f specifies the VCL file, -s the cache storage type and size.
[root@varnish ~]# varnishlog
# After Varnish starts, run this command to watch its logs.

Test from a client (press "F12" in Google Chrome to open the developer tools before visiting):

Press "F5" to refresh:

The response now carries the custom header information set in our configuration file, and the status code is 304.

Verify the ACL clear cache configuration:

Try to clear the cache from host 192.168.20.4 (an IP that the Varnish ACL does not allow to purge):

[root@localhost ~]# curl -X "PURGE" 192.168.20.5 #Clear Varnish cache

The request is rejected with the 405 "Not Allowed." error defined in the VCL.

Clear the cache from an IP that Varnish allows (host 192.168.20.3), and the purge succeeds with the 200 "success" response defined in vcl_purge.

Additional:

The same configuration file, complete and without comments, is as follows:

vcl 4.0;
import directors;
import std;
probe backend_healthcheck {
.url="/"; 
.interval = 5s;
.timeout = 1s;
.window = 5; 
.threshold = 3; 
}
backend web1 { 
.host = "192.168.20.4"; 
.port = "80"; 
.probe = backend_healthcheck; 
}
backend web2 {
.host = "192.168.20.3";   
.port = "80";
.probe = backend_healthcheck;
}
acl purgers { 
    "127.0.0.1";
    "localhost";
    "192.168.20.0/24";
    !"192.168.20.4";
}
sub vcl_init { 
    new web_cluster = directors.round_robin();
    web_cluster.add_backend(web1); 
    web_cluster.add_backend(web2);
}
sub vcl_recv {
    set req.backend_hint = web_cluster.backend();
    if (req.method == "PURGE") {
        if (!client.ip ~ purgers) {
            return (synth(405, "Not Allowed."));
        }
        return (purge);
    }
if (req.method != "GET" &&
    req.method != "HEAD" &&
    req.method != "PUT" &&
    req.method != "POST" &&
    req.method != "TRACE" &&
    req.method != "OPTIONS" &&
    req.method != "PATCH" &&
    req.method != "DELETE") {  
        return (pipe);
    }
if (req.method != "GET" && req.method != "HEAD") {
    return (pass);
}
if (req.url ~ "\.(php|asp|aspx|jsp|do|ashx|shtml)($|\?)") {
    return (pass); 
}
if (req.http.Authorization) {
    return (pass); 
}
if (req.http.Accept-Encoding) {
    if (req.url ~ "\.(bmp|png|gif|jpg|jpeg|ico|gz|tgz|bz2|tbz|zip|rar|mp3|mp4|ogg|swf|flv)$") {
    unset req.http.Accept-Encoding; 
    } elseif (req.http.Accept-Encoding ~ "gzip") {
        set req.http.Accept-Encoding = "gzip"; 
    } elseif (req.http.Accept-Encoding ~ "deflate") {
        set req.http.Accept-Encoding = "deflate";
    } else {
    unset req.http.Accept-Encoding;
    }
   }
if (req.url ~ "\.(css|js|html|htm|bmp|png|gif|jpg|jpeg|ico|gz|tgz|bz2|tbz|zip|rar|mp3|mp4|ogg|swf|flv)($|\?)") {
    unset req.http.cookie; 
    return (hash);  
}
if (req.restarts == 0) { 
    if (req.http.X-Forwarded-For) {  
        set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip;
    } else {
    set req.http.X-Forwarded-For = client.ip;
    }
}
return (hash);
}
sub vcl_hash {
    hash_data(req.url); 
    if (req.http.host) {
        hash_data(req.http.host); 
    } else {
        hash_data(server.ip); 
    }
    return (lookup);
}
sub vcl_hit {
    if (req.method == "PURGE") { 
        return (synth(200, "Purged."));
    }
    return (deliver);
}

sub vcl_miss {
  if (req.method == "PURGE") {
        return (synth(404, "Purged."));
    }
    return (fetch);
}
sub vcl_deliver {
    if (obj.hits > 0) {
        set resp.http.CXK = "HIT-from-varnish";
        set resp.http.X-Cache-Hits = obj.hits; 
    } else {
    set resp.http.X-Cache = "MISS";
    }
    unset resp.http.X-Powered-By; 
    unset resp.http.Server;  
    unset resp.http.X-Drupal-Cache; 
    unset resp.http.Via; 
    unset resp.http.Link; 
    unset resp.http.X-Varnish; 
    set resp.http.xx_restarts_count = req.restarts; 
    set resp.http.xx_Age = resp.http.Age; 
    #set resp.http.hit_count = obj.hits; 
    #unset resp.http.Age;
    return (deliver);
}
sub vcl_pass {
    return (fetch); 
}
sub vcl_backend_response {
    set beresp.grace = 5m; 
    if (beresp.status == 499 || beresp.status == 404 || beresp.status == 502) {
        set beresp.uncacheable = true;
    }
    if (bereq.url ~ "\.(php|jsp)(\?|$)") {
        set beresp.uncacheable = true; 
    } else {
        if (bereq.url ~ "\.(css|js|html|htm|bmp|png|gif|jpg|jpeg|ico)($|\?)") {
        set beresp.ttl = 15m; 
        unset beresp.http.Set-Cookie;
        } elseif (bereq.url ~ "\.(gz|tgz|bz2|tbz|zip|rar|mp3|mp4|ogg|swf|flv)($|\?)") {
            set beresp.ttl = 30m;
            unset beresp.http.Set-Cookie;
        } else {
            set beresp.ttl = 10m;
            unset beresp.http.Set-Cookie;
        }
    }
    return (deliver);
}
sub vcl_purge {
    return (synth(200,"success"));
}
sub vcl_backend_error {
    if (beresp.status == 500 ||
        beresp.status == 501 ||
        beresp.status == 502 ||
        beresp.status == 503 ||
        beresp.status == 504) {
        return (retry); 
    }
}
sub vcl_fini {
    return (ok);
}

In fact, to enable Varnish's basic caching function, the following minimal definition in the example.vcl file is sufficient:

vcl 4.0;

import directors;
probe backend_healthcheck {
    .url = "/";
    .timeout = 1s;
    .interval = 5s;
    .window = 5;
    .threshold = 3;
}
backend web1 {
    .host = "192.168.20.3";
    .port = "80";
    .probe = backend_healthcheck;
}
backend web2 {
    .host = "192.168.20.4";
    .port = "80";
    .probe = backend_healthcheck;
}
sub vcl_init {
    new web_cluster = directors.round_robin();
    web_cluster.add_backend(web1);
    web_cluster.add_backend(web2);
}
sub vcl_recv {
    set req.backend_hint = web_cluster.backend();
}

Summary

The above is an introduction to deploying the Varnish cache server function on CentOS 7.5. I hope it is helpful to you. If you have any questions, please leave me a message and I will reply in time. I would also like to thank everyone for their support of the 123WORDPRESS.COM website!
If you find this article helpful, please feel free to reprint it and please indicate the source. Thank you!

