I believe the Internet has become an increasingly indispensable part of people's lives. Rich-client technologies such as Ajax and Flex let people "happily" enjoy many functions that used to be available only in client/server applications; Google, for example, has already moved the most basic office applications onto the web. Convenient as this is, it inevitably makes pages slower and slower.

I am a front-end developer. Where performance is concerned, a Yahoo survey found that the back end accounts for only about 5% of the time, while the front end accounts for as much as 95%, of which 88% can be optimized.

Today I listened to a lecture by Xiao Ma of Taobao on the Yahoo development team's research into web performance. I felt I gained a lot from it and wanted to share it on my blog. The lecture included a diagram of the life cycle of a Web 2.0 page, which the engineer vividly divided into four stages: "pregnancy, birth, graduation, and marriage." If we stay aware of that whole process when we click a link, instead of treating it as a simple request and response, we can dig out many details that improve performance. I believe many people have heard of the 14 rules for optimizing website performance; more information can be found at developer.yahoo.com.

Rule 1. Make Fewer HTTP Requests

HTTP requests are time-consuming, so finding ways to reduce their number naturally speeds up a page. Commonly used techniques include merging files (combining a page's CSS files into one and its JS files into one), image maps, and CSS sprites. Of course, CSS and JS are often split into multiple files for reasons of structure and sharing. The approach of Alibaba's Chinese site at the time was to develop the files separately and merge the JS and CSS in the background at publish time: the browser still makes a single request, while developers keep multiple files that are easy to manage and reference repeatedly. Yahoo even recommends writing the home page's CSS and JS directly into the page file instead of referencing them externally, because the home page receives so many visits that this saves another two requests; in fact, many domestic portals do the same.

CSS sprites simply merge the background images on a page into one image, from which each element picks out its own background via the value of the CSS background-position property. Taobao and Alibaba's Chinese site both do this today; if you are interested, take a look at their background images. http://www.csssprites.com/ is a tool site that automatically merges the images you upload, gives you the corresponding background-position coordinates, and outputs the result in PNG and GIF formats.
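To make the sprite idea concrete, here is a minimal sketch; the class names and the icons.png file are invented for illustration, not taken from Taobao or Alibaba:

```html
<!-- icons.png is assumed to be a 32x16 image holding two 16x16 icons
     side by side. One HTTP request fetches both icons, and
     background-position slides the right part into view. -->
<style type="text/css">
.icon {
  display: inline-block;
  width: 16px;
  height: 16px;
  background: url(icons.png) no-repeat;
}
.icon-home { background-position: 0 0; }      /* left half of the sprite */
.icon-mail { background-position: -16px 0; }  /* right half of the sprite */
</style>

<span class="icon icon-home"></span>
<span class="icon icon-mail"></span>
```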
Rule 2. Use a Content Delivery Network

To be honest, I don't know much about CDNs. Simply put, a CDN adds a new layer of network architecture to the existing Internet and publishes the site's content to the cache servers closest to its users. DNS load balancing determines where a user is coming from and directs them to the nearest cache server for the content they need: users in Hangzhou fetch content from servers near Hangzhou, and users in Beijing from servers near Beijing. This effectively reduces the time data spends traveling over the network and increases speed. For more detail, refer to the explanation of CDN on Baidu Encyclopedia. Yahoo! distributes its static content to a CDN and reduced users' response time by 20% or more.

[Figure: CDN network diagram]

Rule 3. Add an Expires Header

More and more images, scripts, CSS, and Flash are embedded in today's pages, and visiting them inevitably means many HTTP requests. We can make these files cacheable by setting an Expires header, which uses a response header to tell the browser how long a given type of file may be cached, e.g. `Expires: Thu, 15 Apr 2027 20:00:00 GMT` in a typical HTTP/1.1 response. Most images and Flash files never need to change after release; once they are cached, the browser no longer downloads them from the server but reads them straight from its cache, which greatly speeds up visiting the page again.

Cache-Control and Expires can be set from a server-side script. For example, to expire after 30 days in PHP:

```php
<?php
// Cache this response for 30 days.
header("Cache-Control: must-revalidate");
$offset = 60 * 60 * 24 * 30;
$ExpStr = "Expires: " . gmdate("D, d M Y H:i:s", time() + $offset) . " GMT";
header($ExpStr);
?>
```

It can also be done in the server's own configuration; I am not very clear about that, haha. If you want to know more, refer to http://www.web-caching.com/. As far as I know, the current Expires time on Alibaba's Chinese site is 30 days. There have been problems with this, though: the expiration time of scripts in particular needs careful thought, otherwise it may take a long time for clients to "perceive" changes after the corresponding script is updated. I ran into exactly this problem on the suggest project. So consider carefully what should be cached and what should not.
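One common remedy for the stale-script problem just described (my own aside, not something from the lecture) is to change the URL whenever the file changes, since a new URL is a new cache entry. The file name and version parameter below are hypothetical:

```html
<!-- The far-future Expires header keeps suggest.js cached for 30 days.
     Bumping the version query string on each release forces every
     client to fetch the updated script immediately. -->
<script type="text/javascript" src="/js/suggest.js?v=20080115"></script>
```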
Rule 4. Gzip Components

The idea of gzip is to compress a file on the server side before transmitting it, which can significantly reduce the size of the transfer; once the transfer completes, the browser decompresses the content and executes it. Current browsers support gzip "well" — and not only browsers: the major crawlers recognize it too, so SEO people can rest assured. The compression ratio is very large: a 100K page on the server side can typically be compressed to about 25K before being sent to the client. For the details of how gzip works, see the article "Gzip Compression Algorithm" on CSDN. Yahoo specifically emphasizes that all text content should be gzip-compressed: HTML (PHP), JS, CSS, XML, TXT... Our site does a good job here and scores an A. Our home page did not score an A before, because it carried a lot of JS placed by advertising code, and the JS served from those advertisers' sites was not gzip-compressed, which dragged our site down as well.

The three points above mostly concern the server side, and I have only a superficial understanding of them, so please point out any mistakes I have made.

Rule 5. Put Stylesheets at the Top

Why put CSS at the top of the page? Because browsers such as IE and Firefox will not render anything until all the CSS has been transmitted. The reason is as simple as Xiao Ma said. CSS stands for Cascading Style Sheets: "cascading" means that later CSS can override earlier CSS, and higher-priority CSS can override lower-priority CSS. (I touched briefly on this hierarchy at the bottom of an earlier CSS article; here it is enough to know that CSS can be overridden.) Since earlier rules can be overridden by later ones, it is quite reasonable for the browser to wait until the CSS is completely loaded before rendering.

In many browsers, IE among them, the problem with putting stylesheets at the bottom of the page is that it prevents the progressive display of content: the browser blocks display to avoid having to redraw page elements, so the user sees only a blank page. Firefox does not block display, but that means some page elements may need to be repainted after the stylesheet downloads, which causes flickering. So we should load CSS as early as possible.

Following this idea, there are more areas that can be optimized if we look closely. For example, this site includes two CSS files: <link rel="stylesheet" rev="stylesheet" href="http://www.space007.com/themes/google/style/google.css" type="text/css" media="screen" /> and <link rel="stylesheet" rev="stylesheet" href="http://www.space007.com/css/print.css" type="text/css" media="print" />. The media attribute shows that the first stylesheet is for the screen and the second is the print style. Since printing a page can only happen after the page is displayed, a better approach would be to add the print CSS to the page dynamically after it has loaded, which can speed things up a little. (Ha ha.)
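Here is a rough sketch of that dynamic-loading idea, reusing the print.css URL from the example above; treat it as one possible implementation rather than what this site actually does:

```html
<script type="text/javascript">
// Attach the print stylesheet only after the page has finished
// loading, so it can never delay the first screen rendering.
window.onload = function () {
  var link = document.createElement('link');
  link.rel = 'stylesheet';
  link.type = 'text/css';
  link.media = 'print';
  link.href = 'http://www.space007.com/css/print.css';
  document.getElementsByTagName('head')[0].appendChild(link);
};
</script>
```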
Rule 6. Put Scripts at the Bottom

There are two reasons for placing scripts at the bottom of the page.

First, it prevents script execution from blocking the download of the page. While a page loads, the moment the browser reads a JS statement it interprets and executes it completely before reading any further content. If you don't believe it, write a JS infinite loop and see whether anything below it on the page ever appears. (setTimeout and setInterval behave a little like multithreading: rendering of the following content continues until the corresponding timer fires.) The logic behind this is that the script might call location.href or some other function that could interrupt the page entirely at any moment, so the browser has to wait for it to finish before loading on. Placing scripts at the end of the page therefore effectively shortens the load time of the page's visible elements.

Second, scripts block parallel downloads. The HTTP/1.1 specification recommends that a browser make no more than two parallel downloads per hostname (IE allows only 2; other browsers such as Firefox default to 2, though the new IE8 can go up to 6), so if you distribute image files across several hostnames you can achieve more than two parallel downloads. But while a script file is downloading, the browser will not start any other parallel download. (I saw a good discussion of this not long ago in the article "Internet Explorer and Connection Limits" on the IEBlog.)

Of course, for any given site the feasibility of loading scripts at the bottom is still questionable. Take the pages of Alibaba's Chinese site: there is inline JS in many places, and the display of the page depends heavily on it. I admit this is a long way from the idea of unobtrusive scripting, but many "historical problems" are not so easy to solve.
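A minimal page skeleton showing the rule; the file names are made up:

```html
<html>
<head>
  <title>Scripts at the bottom</title>
  <!-- stylesheet first, so rendering can start as early as possible -->
  <link rel="stylesheet" type="text/css" href="style.css" media="screen" />
</head>
<body>
  <div id="content">Visible content renders without waiting for any script...</div>

  <!-- scripts last: they block neither rendering nor parallel downloads
       of the content above -->
  <script type="text/javascript" src="behavior.js"></script>
</body>
</html>
```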
Rule 7. Avoid CSS Expressions

CSS expressions are an IE-only way of setting style properties dynamically, and their problem is that they are re-evaluated far more often than you might expect, for example on every mouse movement. One workaround (such as emulating min-width with extra wrapper elements) avoids the expression, but it adds two layers of meaningless nesting, which is definitely not good. A better way is needed.

Rule 8. Make JavaScript and CSS External

I think this is easy to understand, and it should be done not only for performance but also for code maintainability. Writing CSS and JS into the page content saves two requests but increases the size of the page; and once the external css and js files are cached, those two requests cost nothing anyway. Of course, as I said before, developers of some special pages (such as heavily visited home pages) will still choose to inline their CSS and JS.

Rule 9. Reduce DNS Lookups

On the Internet, domain names correspond to IP addresses. A domain name (kuqin.com) is easy for people to remember, but computers do not recognize it; for computers to "recognize" each other, it must be converted into an IP address, and every computer on the network has a unique one. Converting a domain name into an IP address is called domain name resolution, also known as a DNS query. One DNS resolution takes 20-120 milliseconds, and until the query completes, the browser will not download anything under that domain. Reducing DNS query time therefore speeds up page loading. Yahoo recommends limiting the number of hostnames on a page to 2-4, which requires planning the page as a whole. We are not doing well here at the moment; the various advertising delivery systems are holding us back.

Rule 10. Minify JavaScript

The benefit of compressing JS and CSS is obvious: it reduces the number of bytes per page, and the smaller the payload, the faster the page loads. Besides reducing size, minification also provides a certain degree of obfuscation. We have done a good job in this regard. Commonly used tools include JSMin and the YUI Compressor, and http://dean.edwards.name/packer/ offers a very convenient online compressor. You can see the size difference between the compressed and uncompressed JS files on jQuery's download page.

Of course, one drawback of minification is that the code loses its readability. I believe many front-end developers have run into this: Google's effects look very cool, but the source is a heap of characters squeezed together, with even the function names replaced — scary! Wouldn't maintaining your own code be very inconvenient if it looked like that? The practice now adopted across Alibaba's Chinese site is to compress the JS and CSS on the server side at publish time, which keeps our own code very convenient to maintain.
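To make the size difference concrete, here is a hypothetical function before and after minification (minified by hand for illustration; tools like JSMin or the YUI Compressor do this automatically):

```html
<script type="text/javascript">
// Before minification: comments, whitespace, and descriptive names.
function calculateTotalPrice(itemPrice, itemCount) {
  // multiply unit price by quantity
  var totalPrice = itemPrice * itemCount;
  return totalPrice;
}

// After minification: identical behavior in a fraction of the bytes,
// with even the function name replaced.
function calcTP(a,b){return a*b}
</script>
```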
Rule 11. Avoid Redirects

For example, when you enter http://www.kuqin.com without the trailing slash, the server automatically answers with a 301 redirect to http://www.kuqin.com/ — you can see it in the browser's address bar. This redirection naturally takes time. This is just one example; there are many other causes of redirects, but what stays constant is that every additional redirect adds a web request, so redirects should be minimized.

Rule 12. Remove Duplicate Scripts

I think you know this without being told, not only from a performance perspective but also from a code-standards one. But we have to admit that we often add possibly duplicated code just to satisfy a momentary need; perhaps a unified CSS framework and JS framework would solve the problem better. Xiaozhu's point of view is exactly right: it is important not only to avoid duplication, but also to make things reusable.

Rule 13. Configure ETags

I don't understand this one either, haha. I found a fairly detailed explanation on InfoQ, "Using ETags to Reduce Web Application Bandwidth and Load"; interested readers can take a look.

Rule 14. Make Ajax Cacheable

Does Ajax need caching too? When making an Ajax request, we often deliberately append a timestamp to avoid caching. But it's important to remember that "asynchronous" does not imply "instantaneous": even if Ajax responses are dynamically generated and matter to only one user, they can still be cached.
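A small sketch of that difference; the endpoint and parameter names are hypothetical:

```html
<script type="text/javascript">
// The URL decides cacheability: a timestamped URL is unique on every
// call and can never be served from cache, while a stable URL (whose
// response the server marks with an Expires header) can be reused.
function fetchMailCount(userId) {
  // Anti-cache pattern, shown only for contrast:
  //   '/mailcount?user=123&t=1200000000000'
  var noCacheUrl = '/mailcount?user=' + userId + '&t=' + new Date().getTime();

  // Cache-friendly pattern: the same URL for the same question.
  var cacheableUrl = '/mailcount?user=' + userId;

  var xhr = new XMLHttpRequest();
  xhr.open('GET', cacheableUrl, true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      // use xhr.responseText here
    }
  };
  xhr.send(null);
}
</script>
```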