HTTP 1.x learning notes: an authoritative guide to Web performance

itread01 2021-02-23 00:37:15

The optimization strategy for HTTP 1.0 is simple and can be summed up in one sentence: upgrade to HTTP 1.1. Done!

Improving HTTP performance was one of the important goals of the HTTP 1.1 working group, and the release introduced many features to that end. Some of the best known are:

  • Persistent connections, to enable connection reuse;

  • Chunked transfer encoding, to enable streaming responses;

  • Request pipelining, to enable parallel request processing;

  • Byte-range requests, to enable range-based resource fetching;

  • Improved and much better caching mechanisms.
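Of these, chunked transfer encoding is the easiest to illustrate on the wire. The sketch below is a hypothetical response (header values and chunk contents invented for illustration): each chunk is a hexadecimal size line followed by that many bytes of data, terminated by a zero-length chunk, with CRLF line endings implied.

```http
HTTP/1.1 200 OK
Content-Type: text/plain
Transfer-Encoding: chunked

7
Mozilla
9
Developer
0

```

Because the total body size does not need to be known up front, the server can begin streaming the response while it is still being generated.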

Of course, this is only a partial list; a comprehensive discussion of all the HTTP 1.1 enhancements would require a book of its own. Once again, I recommend keeping a copy of "HTTP: The Definitive Guide" (by David Gourley and Brian Totty) close at hand. And speaking of good reference books, Steve Souders' "High Performance Web Sites" distills its advice into 14 rules, half of which are network optimizations:

Reduce DNS lookups

Every hostname resolution requires a network round trip, which adds latency to the request and blocks the request while the lookup is in progress.

Make fewer HTTP requests

No request is as fast as no request at all, so eliminate unnecessary resources on your pages.


Use a content delivery network

Locating the data geographically closer to the client can significantly reduce the network latency of every TCP connection and improve throughput.

Add an Expires header and configure ETags

Relevant resources should be cached to avoid re-requesting the same bytes on every page. An Expires header specifies a cache lifetime during which the resource can be fetched directly from the cache, avoiding the HTTP request entirely. The ETag and Last-Modified headers provide a revalidation mechanism: in effect, a fingerprint or timestamp of the last update.
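As an illustrative sketch (the URL, date, and ETag value are hypothetical), the first response below makes the resource cacheable, and the later conditional request revalidates it without retransmitting the body:

```http
HTTP/1.1 200 OK
Content-Type: text/css
Expires: Wed, 23 Feb 2022 00:37:15 GMT
ETag: "x1323ddx"

GET /styles.css HTTP/1.1
Host: example.com
If-None-Match: "x1323ddx"

HTTP/1.1 304 Not Modified
ETag: "x1323ddx"

```

The 304 response carries headers only, so an unchanged resource costs one round trip instead of a full download.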

Gzip your resources

All text-based resources should be compressed with Gzip when transferred between client and server. Gzip typically reduces file size by 60%~80%, which makes it one of the simpler measures to apply (often just a server configuration option) and one of the most effective.
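The effect is easy to observe with any repetitive text payload. A minimal Python sketch (the HTML string is synthetic, standing in for a real page):

```python
import gzip

# Hypothetical text payload standing in for an HTML resource; real pages
# compress similarly well because markup is highly repetitive.
page = ("<html><body>" + "<p>Hello, performance!</p>" * 200 + "</body></html>").encode("utf-8")

compressed = gzip.compress(page)
savings = 1 - len(compressed) / len(page)
print(f"original {len(page)} bytes, gzipped {len(compressed)} bytes, saved {savings:.0%}")
```

In production this is done by the server (e.g. a gzip/deflate option in the web server configuration) rather than in application code.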

Avoid HTTP redirects

HTTP redirects can be extremely costly, especially when they send the client to a completely different hostname, which incurs an extra DNS lookup, TCP connection latency, and so on.

Every one of these recommendations has stood the test of time; they were as applicable when the book was published in 2007 as they are today. That is no coincidence: all of them reflect two fundamental principles: eliminate or reduce unnecessary network latency, and minimize the number of bytes transferred. These two problems are always at the core of optimization and apply to any application.

However, the same cannot be said of all HTTP 1.1 features and best practices. Some HTTP 1.1 features, such as request pipelining, were effectively stillborn due to lack of support, while other protocol limitations, such as head-of-line blocking of responses, created further problems. In response, the web development community (ever resourceful) created and popularized a long list of homegrown optimizations: domain sharding, file concatenation, image spriting, resource inlining, and dozens more.

To most web developers, all of these are practical optimization tools: familiar, necessary, and ubiquitous. Still, we should see them for what they are: workarounds for the limitations of the current HTTP 1.1 protocol. We should not have to worry about concatenating files, spriting images, sharding domains, or inlining resources. Unfortunately, "should not" is not a pragmatic attitude: these optimizations exist for good reasons, and until the underlying problems are fixed in a future version of HTTP, we have to rely on them.

The advantages of persistent connections

One of the major improvements of HTTP 1.1 was the introduction of persistent HTTP connections. Let's demonstrate why this feature is so important to our optimization strategy.

For simplicity, we will restrict ourselves to at most one TCP connection and fetch just two small files (each <4 KB): one HTML file and one CSS file, with different server response times (40 ms and 20 ms, respectively).

  Let's assume a one-way fiber latency of 28 ms between New York and London.

Every TCP connection begins with a three-way handshake, which takes a full round trip between client and server. After that, the HTTP request and response exchange requires at least one more round trip. Finally, add the server processing time to get the total time for each request.

Server processing time is unpredictable, since it varies by resource and backend hardware. The key point, however, is that the total time for an HTTP request on a new TCP connection is at least two network round trips: one for the handshake, one for the request and response. This is a fixed cost of every non-persistent HTTP session.

The faster the server, the larger the share of each request's total time taken up by this fixed latency! To see this, vary the round-trip and server processing times in the previous example.

In fact, the simplest optimization at this point is to reuse the underlying connection! With HTTP 1.1's support for persistent connections, we can avoid the second TCP three-way handshake, eliminate another TCP slow-start round trip, and save a full round trip of network latency.

Fetching the HTML and CSS files over a persistent TCP connection

In our two-request example, the savings amount to just one round trip in total. However, the more common case is a single TCP connection carrying N HTTP requests, in which case:

  • Without persistent connections, each request incurs two round trips of latency;

  • With persistent connections, only the first request incurs two round trips; each subsequent request incurs just one.

With persistent connections enabled, the total latency saved for N requests is (N-1)×RTT. Recall from earlier that in modern web applications the average value of N is 90 and continues to grow. The time saved by persistent connections therefore quickly adds up to whole seconds! This makes persistent HTTP a critical optimization for every web application.
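Plugging in the numbers from our running example (28 ms one-way New York to London latency, and the HTTP Archive average of 90 requests per page) makes the scale of the savings concrete:

```python
# Latency saved by persistent connections: (N - 1) * RTT.
# Assumed values: 28 ms one-way latency, N = 90 requests per page.
RTT_MS = 2 * 28          # 56 ms round trip
N = 90

saved_ms = (N - 1) * RTT_MS
print(f"{saved_ms} ms saved, i.e. about {saved_ms / 1000:.0f} seconds")  # 4984 ms
```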

HTTP pipelining

Persistent HTTP lets us reuse an existing connection for multiple application requests, but those requests must follow a strict first-in, first-out (FIFO) order: dispatch a request, wait for the full response, then dispatch the next request from the client queue. HTTP pipelining is a small but important optimization of this workflow: it lets us migrate the FIFO queue from the client (request queue) to the server (response queue).

To understand the benefit, let's revisit the figure of fetching the HTML and CSS files over a persistent TCP connection. First, after the server processes the first request, a full round trip elapses: the response travels back, and then the second request travels out. During this time the server is idle. What if the server could begin processing the second request as soon as it finished the first? Or even process the two requests in parallel, on multiple threads or with multiple workers?

By dispatching the requests early, without blocking on each response, we eliminate another full network round trip: from two round trips per request in the non-persistent case, down to just two network round trips for the entire request queue!


Let's pause for a moment to review our performance gains. Initially, with a separate TCP connection for each request, the total latency was 284 ms. With persistent connections, one handshake round trip was avoided, reducing the total to 228 ms. Finally, HTTP pipelining eliminated another round trip between the two requests, bringing the total down to 172 ms. From 284 ms to 172 ms: a 40% improvement, purely from simple protocol optimization.
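The arithmetic behind those three totals can be reproduced directly from the example's assumptions (56 ms RTT, 40 ms and 20 ms server processing):

```python
# Reproducing the latency arithmetic of the running example.
RTT = 56              # round trip: 2 * 28 ms one-way latency
PROCESSING = 40 + 20  # HTML + CSS server processing time

# A fresh TCP connection per request: handshake RTT + request/response RTT, twice.
separate_connections = 2 * (RTT + RTT) + PROCESSING   # 284 ms
# Persistent connection: one handshake, then one round trip per request.
persistent = RTT + 2 * RTT + PROCESSING               # 228 ms
# Pipelined: one handshake, one round trip for both requests (responses serialized).
pipelined = RTT + RTT + PROCESSING                    # 172 ms

print(separate_connections, persistent, pipelined)    # 284 228 172
```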

Moreover, this 40% improvement is not a fixed number. It depends on the network latency we chose and on our two example requests. I encourage you to try a few different scenarios yourself, for example with higher latency or more requests. You may be surprised at how much larger the savings become. In fact, the higher the network latency and the more requests there are, the more time is saved. It is worth verifying this for yourself: the larger the application, the greater the impact of network optimization.

But we're not done yet. The observant reader may have noticed that we could also process the requests in parallel on the server. In theory, nothing prevents the server from processing the pipelined requests concurrently, shaving another 20 ms off the total latency.

Unfortunately, as soon as we try to apply this optimization, we run into a limitation of the HTTP 1.x protocol: HTTP 1.x can only return responses in strict serial order. Specifically, HTTP 1.x does not allow response data on a single connection to arrive interleaved (multiplexed); one response must be returned in full before transmission of the next can begin. To illustrate, let's look at what happens when the server processes the requests in parallel (figure below).


The figure above illustrates the following:

  • The HTML and CSS requests arrive at the same time, but the HTML request is processed first;

  • The server processes the two requests in parallel: 40 ms for the HTML, 20 ms for the CSS;

  • The CSS request finishes first, but its response is buffered while the HTML response is transmitted;

  • Once the HTML response has been delivered, the CSS response is sent from the server's buffer.

Even though the client dispatches both requests at once, and the CSS resource is ready first, the server must send the complete HTML response before delivering the CSS. This situation is commonly known as head-of-line blocking, and it often results in suboptimal delivery: underutilized network connections, server buffering overhead, and, worse, unpredictable client latency. What if the first request hangs indefinitely, or simply takes a long time to generate? In HTTP 1.1, all subsequent requests are blocked waiting for it to complete.

In fact, because multiplexing is impossible, HTTP pipelining introduces many subtle and poorly documented implications for HTTP servers, proxies, and clients:

  • A single slow response blocks all requests behind it;

  • When processing requests in parallel, the server must buffer pipelined responses, which consumes server resources; a very large response can become an attack vector against the server;

  • A failed response may terminate the TCP connection, forcing the client to re-request all subsequent resources on the page and causing duplicate processing;

  • Because intermediaries may be present, reliably detecting pipelining compatibility is essential;

  • If an intermediate proxy does not support pipelining, it may drop the connection, or it may serialize all the requests.

Because of these and similar problems, none of which are addressed by the HTTP 1.1 standard, HTTP pipelining has seen very limited adoption despite its undeniable benefits. Today, the few browsers that support pipelining usually offer it as an advanced configuration option, and most leave it disabled. In other words, as long as the browser is the primary delivery vehicle for your web application, it is hard to count on HTTP pipelining for performance.

Using multiple TCP connections

In the absence of multiplexing in HTTP 1.x, the browser could naively queue all HTTP requests on the client and send them one after another over a single persistent connection. In practice, however, that is far too slow. Browser vendors had little choice but to open multiple TCP sessions in parallel. How many? In reality, most modern browsers, desktop and mobile alike, open up to six connections per host.
Before going further, it is worth considering what opening multiple TCP connections implies. There are both upsides and downsides. Taking the maximum of six independent connections per host as an example:

  • The client can dispatch up to six requests in parallel;

  • The server can process up to six requests in parallel;

  • The cumulative number of packets that can be sent in the first round trip (TCP cwnd) is six times greater.

Without pipelining, the maximum number of in-flight requests equals the number of open connections. Correspondingly, the effective TCP congestion window is also multiplied by the number of open connections, allowing the client to sidestep the packet limit imposed by TCP slow-start. So far this looks like a convenient solution. Now let's look at the costs:

  • Additional sockets consume resources on the client, server, and intermediaries, including memory buffers and CPU cycles;

  • Parallel TCP streams compete for shared bandwidth;

  • Handling multiple sockets makes the implementation more complex;

  • Even with parallel TCP streams, the application's effective parallelism is limited.

In practice, the CPU and memory costs are non-trivial: resource usage rises on both client and server, and with it the cost of operations. Likewise, the added complexity of the client implementation raises development costs. Finally, the approach delivers only limited application parallelism. It is not a long-term solution. Knowing all this, the reasons it is still used today come down to three things:

  • as a stopgap to work around limitations of the application protocol (HTTP);

  • as a stopgap to work around small initial TCP congestion windows;

  • as a stopgap for clients that cannot use TCP window scaling.

The last two problems, TCP window scaling and cwnd, are best addressed by upgrading to the latest OS kernel: the initial cwnd value has recently been raised to 10 segments, and all the latest platforms reliably support TCP window scaling. That is the good news. The bad news is that there is no better way to work around the lack of multiplexing in HTTP 1.x.

As long as we must support HTTP 1.x clients, we are stuck juggling multiple TCP streams. Which raises an obvious question: why six connections per host? As some readers may have guessed, the number is the result of a multi-party balancing act: the higher the number, the more resources the client and server consume, but also the higher the request parallelism. Six connections per host is simply a number everyone settled on as reasonably safe. For some sites it is enough; for others it is not.

Domain sharding

A gap in the HTTP 1.x protocol has forced browser vendors to introduce and maintain connection pools of up to six TCP streams per host. The good news is that the browser manages these connections for you; as an application developer, you do not have to modify your application at all. The bad news is that six parallel connections may not be enough for your application.

According to the HTTP Archive, the average page currently includes 90+ individual resources. If all of them were served from the same host, the result would still be significant queuing (see figure below). So why restrict ourselves to a single host? Instead of serving all resources from one origin, we can manually distribute them across multiple subdomains: {shard1, ..., shardn}. Because the hostnames differ, we break through the browser's connection limit and achieve higher parallelism. The more shards we use, the more parallelism!
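The manual distribution step can be sketched in a few lines. This is a hypothetical example (the shard hostnames are placeholders): the key property is that each resource maps deterministically to one shard, so the same asset always comes from the same origin and is cached only once.

```python
from hashlib import md5

# Placeholder shard hostnames; a real site would use its own subdomains.
SHARDS = ["shard1.example.com", "shard2.example.com", "shard3.example.com"]

def shard_url(path: str) -> str:
    # Hash the resource path to a stable shard index.
    digest = int(md5(path.encode("utf-8")).hexdigest(), 16)
    return f"https://{SHARDS[digest % len(SHARDS)]}{path}"

print(shard_url("/img/logo.png"))
```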

Staggered resource fetches caused by the limit of six connections per host

Of course, there is no free lunch, and domain sharding is no exception: every new hostname requires an extra DNS lookup, every additional socket consumes more resources on both ends, and, worst of all, site authors must manually split their resources across multiple hosts.

In practice, domain sharding is frequently overused, resulting in dozens of underutilized TCP streams, many of which never escape TCP slow-start; in the worst case it actually reduces performance. Further, if you are using HTTPS, the extra network round trips of the TLS handshake make the costs above even higher. A few things to keep in mind:

  • First, make the most of TCP itself;

  • The browser will open six connections per host on your behalf;

  • The number, size, and response time of the resources all affect the optimal number of shards;

  • Client latency and bandwidth also affect the optimal number of shards;

  • Domain sharding hurts performance through extra DNS lookups and TCP slow-start.

Domain sharding is a reasonable but imperfect optimization. Start with zero shards (no sharding), then add shards one at a time while measuring their impact on your application. In reality, few sites genuinely benefit from more than a dozen simultaneous connections; if you end up with a high shard count, you will likely find that reducing the number of resources, or consolidating them into fewer requests, yields bigger gains.

The extra overhead of DNS lookups and TCP slow-start affects high-latency clients the most. In other words, mobile (3G, 4G) clients are usually the ones hurt most by aggressive domain sharding!

Measuring and controlling protocol overhead

HTTP 0.9 started as a simple one-line ASCII request for a hypertext document, with minimal overhead. HTTP 1.0 added request and response headers so that both parties could exchange metadata about the request and the response. Finally, HTTP 1.1 made this format a standard: both server and client can freely extend the header set, and headers are always transmitted as plain text to remain compatible with earlier HTTP versions.

Today, every HTTP request initiated by a browser carries an extra 500~800 bytes of HTTP metadata: the user-agent string, rarely changing accept and transfer headers, caching directives, and so on. And sometimes 500~800 bytes is an underestimate, because it excludes the biggest offender: HTTP cookies, which modern applications widely use for session management, personalization, and analytics. Combined, all of this uncompressed HTTP metadata often adds several kilobytes of protocol overhead to every single HTTP request.

The growing list of HTTP headers is not a bad thing in itself, since most headers serve a specific purpose. However, because all HTTP headers are transmitted as plain text (without any compression), they impose a high per-request cost that can become a serious performance problem for some applications. For example, more and more API-driven web applications communicate through frequent, small serialized messages (such as JSON). In these applications, the extra HTTP overhead is often an order of magnitude larger than the actual data payload:

$> curl --trace-ascii - -d'{"msg":"hello"}' http://www.igvita.com/api

Which produces the following trace:

== Info: Connected to www.igvita.com
=> Send header, 218 bytes
POST /api HTTP/1.1
User-Agent: curl/7.24.0 (x86_64-apple-darwin12.0) libcurl/7.24.0 ...
Host: www.igvita.com
Accept: */*
Content-Length: 15
Content-Type: application/x-www-form-urlencoded
=> Send data, 15 bytes (0xf)
{"msg":"hello"}
<= Recv header, 134 bytes
HTTP/1.1 204 No Content
Server: nginx/1.0.11
Via: HTTP/1.1 GWA
Date: Thu, 20 Sep 2012 05:41:30 GMT
Cache-Control: max-age=0, no-cache

  1. HTTP request headers: 218 bytes

  2. Application payload: 15 bytes ({"msg":"hello"})

  3. The server's 204 response: 134 bytes

In the example above, a mere 15-character JSON message is wrapped in 352 bytes of HTTP headers, all transmitted as plain text: a 96% protocol byte overhead, and that is the best case, without any cookies. Reducing the transmitted header data, which is highly repetitive and uncompressed, could save latency equivalent to whole round trips and significantly improve the performance of many web applications.
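The overhead figure follows directly from the byte counts in the trace:

```python
# Overhead arithmetic from the curl trace above.
request_headers = 218    # bytes of request headers sent
response_headers = 134   # bytes of response headers received
payload = 15             # bytes of actual JSON data

header_bytes = request_headers + response_headers       # 352
overhead = header_bytes / (header_bytes + payload)      # ~0.96
print(f"{header_bytes} header bytes for {payload} payload bytes: {overhead:.0%} overhead")
```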

Cookies are a common performance bottleneck in many applications; many developers overlook the extra burden they add to every request.

Concatenation and spriting

The fastest request is the one not made. No matter the protocol or the type of application, reducing the number of requests is always the best performance optimization. But if you cannot eliminate a request, then for HTTP 1.x, consider bundling multiple resources together and fetching them in a single network request:

  • Concatenation: multiple JavaScript or CSS files are combined into a single file.

  • Spriting: multiple images are combined into a single, larger composite image.
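A build step for the first technique can be sketched in a few lines of Python (the file names and contents are hypothetical; the point is that source order is preserved, which matters for script execution):

```python
import tempfile
from pathlib import Path

def concatenate(paths, out_path):
    # Join the files in their original order; the ';' separator guards
    # against scripts that omit a trailing semicolon.
    bundle = "\n;\n".join(Path(p).read_text() for p in paths)
    Path(out_path).write_text(bundle)
    return out_path

# Demo with throwaway files (hypothetical names and contents).
tmp = Path(tempfile.mkdtemp())
(tmp / "vendor.js").write_text("var lib = {}")
(tmp / "app.js").write_text("lib.start = function () {}")

out = concatenate([tmp / "vendor.js", tmp / "app.js"], tmp / "bundle.js")
print(Path(out).read_text())
```

Real build tools (minifiers, bundlers) do considerably more, but the core transformation is this simple.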

For JavaScript and CSS, as long as order is preserved, multiple files can be concatenated without affecting the behavior or execution of the code. Similarly, multiple images can be combined into an "image sprite," with CSS used to select the appropriate part of the larger image for display in the browser. Both techniques offer two benefits.

  • Reduced protocol overhead: by combining files into a single resource, the per-file protocol overhead is eliminated. As noted earlier, each file can easily incur kilobytes of uncompressed metadata.

  • Application-layer pipelining: in terms of bytes transferred, the net effect of both techniques is similar to enabling HTTP pipelining: data from multiple responses streams back-to-back, eliminating extra network latency. In effect, pipelining has been moved up a layer, into the application.
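For the second technique, selecting a sub-image from the sprite is done in CSS via background positioning. A minimal sketch (the sprite file name and offsets are hypothetical):

```css
.icon {
  background-image: url("icons-sprite.png"); /* assumed composite image */
  width: 16px;
  height: 16px;
}
/* Each icon selects a 16x16 region of the sprite by offset. */
.icon-search { background-position: 0 0; }
.icon-close  { background-position: -16px 0; }
```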

Both concatenation and spriting are content-aware, application-layer optimizations, and by reducing network round trips they can deliver significant performance gains. However, implementing them requires extra processing, deployment, and authoring effort (for example, the CSS code to select a sub-image from the sprite), which adds complexity to the application. Further, bundling multiple resources together can hurt caching and even slow down page execution.

To understand why these techniques can hurt performance, consider a scenario that is not at all uncommon: an application with a dozen or so individual JavaScript and CSS files, all merged in production into one CSS file and one JavaScript file.

  • All resources of the same type live under a single URL (and cache key).

  • The bundle may contain content that the current page does not need.

  • Updating any single file in the bundle forces a re-download of the entire bundle, resulting in high byte overhead.

  • JavaScript and CSS can only be parsed and executed once the entire file has been transferred, which slows down application execution.

In practice, most web applications are not a single page but a collection of views, each with its own resources and some overlap between them: shared CSS, JavaScript, and images. In reality, merging all resources into a single file often forces the client to process and load bytes it does not need. You can view this as a form of prefetching, but at the cost of a slower initial startup.

For many applications, updates make things worse. A change to a single spot in an image sprite or a combined JavaScript file can force a retransmission of hundreds of kilobytes. Because modularity and cache granularity are sacrificed, if the bundled resources change too frequently, especially when the bundle is large, the costs quickly outweigh the benefits. If your application does get to that point, consider moving the "stable core," such as frameworks and libraries, into a separate bundle.

Memory footprint can also become a problem. For image sprites, the browser must decode the entire image and keep the whole thing in memory, regardless of how small a portion is actually displayed. The browser does not evict the unshown parts from memory!

Finally, why is execution speed affected? As we know, browsers parse HTML incrementally, but JavaScript and CSS can only be parsed and executed once the entire file has been downloaded; neither JavaScript nor CSS processing allows incremental execution.

CSS and JavaScript file size vs. performance

The larger the CSS file, the longer the browser is blocked building the CSSOM, delaying the first paint of the page. Similarly, the larger the JavaScript file, the greater the impact on execution speed; smaller files enable "incremental" execution. So how big should a bundled file be? Unfortunately, there is no single ideal size. However, tests by Google's PageSpeed team suggest that 30~50 KB (compressed) is a good range for JavaScript file size: large enough to amortize the network overhead of many small files, while still allowing incremental and staged execution. The exact results will vary with the application type and the number of scripts.

In short, concatenation and spriting are application-layer optimizations that are practical under HTTP 1.x's protocol limitations (no universal pipelining support, high per-request cost). Applied well, these techniques can deliver clear performance gains; the cost is added application complexity, plus potential problems with caching, updates, execution speed, and even page rendering. When applying these two optimizations, measure the results and weigh the following questions against your situation.

  • Is your application blocked on downloading many small, individual resources?

  • Would your application benefit from selectively combining some requests?

  • Will the loss of cache granularity negatively affect your users?

  • Will combined images consume too much memory?

  • Will the first render suffer from delayed execution?

Striking the right balance among the answers to these questions is an art.

Resource inlining

Inlining resources is another popular optimization: embedding resources inside the document itself reduces the number of requests. For example, JavaScript and CSS code can be placed directly on the page in appropriate script and style blocks, while images and even audio or PDF files can be inlined via data URIs (data:[mediatype][;base64],data):

<img src="data:image/gif;base64,R0lGODlhAQABAIAAAAAAAAAAACH5BAAAAAAALAAAAAABAAEAAAICTAEAOw==" alt="1x1 transparent (GIF) pixel" />

Data URIs are best suited to very small resources, ideally ones used only once. A resource inlined into a page becomes part of that page and cannot be cached as an individual resource by the browser, CDNs, or other caching proxies. In other words, if the same resource is inlined into multiple pages, it will be fetched again with each of those pages, inflating the total size of every one. Further, if the inlined resource is updated, every page containing it is invalidated and must be fetched from the server again.

Finally, although text-based resources such as CSS and JavaScript can be inlined directly into a page with no extra overhead, non-text resources must be base64-encoded, which adds significant overhead: the encoded resource is 33% larger than the original!

base64 encodes any byte stream into an ASCII string using 64 ASCII characters plus whitespace. In the process, base64 expands the data to 4/3 of its original size, a 33% byte overhead.
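The 4/3 expansion is easy to verify. A quick Python sketch (the binary blob is synthetic, standing in for a tiny image):

```python
import base64

# A hypothetical 300-byte binary blob standing in for a small image.
blob = bytes(range(256)) + b"\x00" * 44   # 300 bytes, divisible by 3
encoded = base64.b64encode(blob)

# Every 3 input bytes become 4 output characters: a fixed 4/3 expansion.
print(len(blob), "->", len(encoded))      # 300 -> 400
```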

In practice, a common rule of thumb is to consider inlining only for resources under 1~2 KB, since resources below that threshold often incur more HTTP overhead than the resource itself. However, if the inlined resource changes frequently, it will also drive up the cache invalidation rate of its host document. Inlining is not a perfect technique. If your application uses very small, individual files, consider the following guidelines when deciding whether to inline:

  • If the files are small and limited to specific pages, consider inlining;

  • If the small files are reused across many pages, consider bundling them instead;

  • If the small files are updated frequently, do not inline them;

  • Minimize protocol overhead by reducing the size of HTTP cookies.


Other articles in this HTTP series:

  • HTTP overview

  • TCP three-way handshake and four-way teardown (finite state machine)

  • From typing a URL to seeing the page: what happens in between

  • HTTPS in depth (detailed edition)

  • On HTTP connections

  • On HTTP performance optimization

  • A brief introduction to the HTTP message format

  • In depth: HTTP/2


Reference book:

Ilya Grigorik, High Performance Browser Networking (Turing Programming Series)

