The optimization strategy for HTTP 1.0 is very simple: upgrade to HTTP 1.1. Done!

Improving HTTP performance was an important goal of the HTTP 1.1 working group, and the release introduced a number of important performance features, some of which are well known:

  • Persistent connections, which allow connection reuse;

  • Chunked transfer encoding, which allows streaming responses;

  • Request pipelining, which allows parallel request processing;

  • Byte-range requests, which allow range-based access to resources;

  • Improved and better-specified caching mechanisms.

Of course, these are only some of the highlights; a comprehensive discussion of all the enhancements in HTTP 1.1 would take a book of its own. Once again, I recommend keeping a copy of HTTP: The Definitive Guide (by David Gourley and Brian Totty) close at hand. Speaking of good reference books, Steve Souders's High Performance Web Sites distills its advice into 14 rules, half of which are about network optimization:

Reduce DNS lookups

Every domain name resolution costs a network round trip, adding latency to the request and blocking it while the lookup is in progress.

Reduce HTTP requests

No request is as fast as no request at all, so eliminate unnecessary resources from your pages.


Use a CDN (content delivery network)

Locating data geographically closer to the client can significantly reduce the network latency of every TCP connection and improve throughput.

Add an Expires header and configure ETags

Relevant resources should be cached to avoid re-requesting the same bytes on every page. An Expires header specifies how long a resource may be served directly from cache, entirely avoiding the HTTP request within that window. ETag and Last-Modified headers provide a validation mechanism, effectively a fingerprint or timestamp of the last update.
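As an illustration (all values here are invented), a cacheable response might carry headers like these:

```http
HTTP/1.1 200 OK
Content-Type: text/css
Cache-Control: max-age=86400
Expires: Thu, 31 Dec 2015 23:59:59 GMT
ETag: "x1845ab"
Last-Modified: Tue, 01 Dec 2015 10:00:00 GMT
```

Within the max-age window the browser serves the file from cache without any request; afterward it can revalidate with If-None-Match: "x1845ab" and receive a compact 304 Not Modified if the resource is unchanged.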

Gzip assets

All text-based assets should be compressed with Gzip when transferred between client and server. Gzip typically reduces file size by 60%–80%, and it is one of the simpler optimizations to apply (often just a server configuration flag) with a high payoff.
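To see the effect, here is a small sketch using Python's standard gzip module on a repetitive, hypothetical HTML payload; real-world compression ratios will vary with the content:

```python
import gzip

# A hypothetical text payload: repetitive markup compresses extremely well.
html = ("<div class='item'><span>Hello, world!</span></div>\n" * 200).encode("utf-8")
compressed = gzip.compress(html)

# Fraction of bytes saved by compression.
ratio = 1 - len(compressed) / len(html)
print(f"original: {len(html)} bytes, gzipped: {len(compressed)} bytes, saved {ratio:.0%}")
```

Because the sample is highly repetitive, the savings here exceed the typical 60%–80% range quoted above.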

Avoid HTTP redirects

HTTP redirects can be extremely costly, especially when they send the client to a completely different domain name, which can incur an additional DNS lookup, TCP connection latency, and so on.

Each of the above recommendations has stood the test of time: it applied when the book was published in 2007, and it applies today. That is no coincidence, because all of these recommendations reflect two fundamental principles: eliminate or reduce unnecessary network latency, and minimize the number of bytes transferred. These two principles are always at the core of optimization, and they hold for any application.

However, we cannot say the same for all HTTP 1.1 features and best practices. Some HTTP 1.1 features, such as request pipelining, are effectively stillborn due to lack of support, while other protocol limitations, such as head-of-line blocking of responses, cause further problems. In response, the web development community (ever the most inventive) has created and deployed a long list of home-grown optimizations: domain sharding, file concatenation, image spriting, resource inlining, and dozens more.

To most web developers, all of these are practical optimization tools: familiar, necessary, and universal. In reality, though, these techniques deserve a more accurate characterization: they are workarounds for the limitations of the HTTP 1.1 protocol. We should not have to worry about concatenating files, spriting images, sharding domains, or inlining resources. Unfortunately, "should not" is not a pragmatic attitude: these optimizations exist for good reasons, and until the underlying problems are fixed in a future version of HTTP, we have to rely on them.

Advantages of persistent connections

One of the major improvements of HTTP 1.1 was the introduction of persistent HTTP connections. Let's now look at why this feature is so important to our optimization strategy.

For simplicity, we will limit ourselves to at most one TCP connection and fetch just two small files (each < 4 KB): one HTML file and one CSS file, with different server response times (40 ms and 20 ms, respectively).

Let's assume a one-way fiber latency of 28 ms between New York and London.

Every TCP connection begins with a three-way handshake, which takes a full round trip between client and server. After that, the HTTP request and response require at least one more round trip. Finally, add the server processing time to get the total time for each request.

Server processing time is unpredictable, since it varies with the resource and the back-end hardware. The key point, however, is that an HTTP request on a new TCP connection costs at least two network round trips: one for the handshake, and one for the request and response. This is a fixed cost of every non-persistent HTTP session.

The faster the server, the greater the share of each request's total time taken up by this fixed latency! To verify this, try varying the round-trip and server processing times in the previous example.

In fact, the simplest optimization is to reuse the underlying connection! Adding support for persistent HTTP connections avoids the three-way handshake of a second TCP connection, eliminates another round of TCP slow-start, and saves a full round trip of network latency.

Fetching the HTML and CSS files over a persistent TCP connection

In our two-request example, this saves just one round trip in total. The more common case, however, is a single TCP connection serving N HTTP requests, in which case:

  • Without persistent connections, each request incurs two round trips of latency;

  • With persistent connections, only the first request incurs two round trips; each subsequent request incurs just one.

With persistent connections enabled, the total latency saved across N requests is (N − 1) × RTT. Recall from earlier that for a modern web application the average value of N is 90, and it keeps growing. The time saved by persistent connections is therefore quickly measured in seconds! This makes persistent HTTP a critical optimization for every web application.
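The arithmetic can be sketched quickly; the RTT and N values below are assumptions taken from the surrounding text (2 × 28 ms one-way latency, ~90 resources per page):

```python
# Back-of-the-envelope check of the (N - 1) x RTT savings claim.
RTT_MS = 56   # assumed round-trip time: 2 x 28 ms one-way
N = 90        # assumed average number of resources per page

without_keepalive = N * 2 * RTT_MS               # two round trips per request
with_keepalive = 2 * RTT_MS + (N - 1) * RTT_MS   # only the first pays the handshake
saved_ms = without_keepalive - with_keepalive

print(saved_ms)  # 4984 ms, i.e. roughly five seconds
```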

HTTP pipelining

Persistent HTTP lets us reuse an existing connection for multiple application requests, but those requests must follow a strict first-in, first-out (FIFO) order: send a request, wait for the full response, then send the next request in the client's queue. HTTP pipelining is a small but important optimization of this workflow: it migrates the FIFO queue from the client (request queue) to the server (response queue).

To understand the benefit, let's revisit the diagram of fetching the HTML and CSS files over a persistent TCP connection. Notice that after the server finishes processing the first request, a full round trip elapses: the response travels back, and then the second request travels out. The server is idle during this time. What if the server could begin processing the second request as soon as it finished the first? Better yet, what if it could process both requests in parallel, using multiple threads or worker processes?

By dispatching requests early, without blocking on each response, we eliminate another network round trip. This takes us from two round trips per request in the non-persistent case to just two network round trips for the entire request queue!

Now let's pause for a moment to review our performance gains. Initially, with two TCP connections and one request each, the total latency was 284 ms. Using a persistent connection avoided one handshake round trip, bringing the total down to 228 ms. Finally, HTTP pipelining eliminated one more round trip between the two requests, reducing the total to 172 ms. From 284 ms to 172 ms: a 40% improvement from simple protocol optimization.
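These totals can be reproduced with simple arithmetic, using the example's assumptions (28 ms one-way latency, so a 56 ms RTT, and 40 ms / 20 ms server processing times):

```python
# Reproducing the chapter's latency totals. All names are illustrative.
RTT = 56              # 2 x 28 ms one-way New York - London
HTML_MS, CSS_MS = 40, 20  # server processing times

# Two separate connections: each pays handshake + request/response + processing.
separate = (RTT + RTT + HTML_MS) + (RTT + RTT + CSS_MS)
# Persistent connection: the second request skips the handshake.
persistent = (RTT + RTT + HTML_MS) + (RTT + CSS_MS)
# Pipelined: both requests share a single request/response round trip.
pipelined = RTT + RTT + HTML_MS + CSS_MS

print(separate, persistent, pipelined)  # 284 228 172
```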

Moreover, this 40% improvement is not fixed; it depends on the network latency and the two requests we chose. I encourage readers to try different scenarios, such as higher latency and more requests; you may be surprised how much larger the gains become. In fact, the higher the network latency and the more requests there are, the more time is saved, and it is worth verifying this yourself. In short, the larger the application, the greater the impact of network optimization.

But that's not all. A perceptive reader may have noticed that we could also process the requests in parallel on the server. In theory, nothing prevents the server from processing pipelined requests simultaneously, shaving off another 20 ms of latency.

Unfortunately, when we try to apply this optimization, we run into some limitations of the HTTP 1.x protocol. HTTP 1.x can only return responses in strict serial order. In particular, HTTP 1.x does not allow the data of multiple responses to be interleaved (multiplexed) on one connection, so one response must complete fully before the next can begin transmitting. To illustrate, let's look at how the server handles the two requests in parallel (see the figure below).

The figure illustrates the following:

  • The HTML and CSS requests arrive at the same time, but the HTML request is processed first;

  • The server processes both requests in parallel: 40 ms for the HTML, 20 ms for the CSS;

  • The CSS request finishes first but is buffered while the HTML response is transmitted;

  • Once the HTML response has been sent, the buffered CSS response follows.

Even if the client sends both requests at once, and the CSS resource is ready first, the server sends the complete HTML response before delivering the CSS. This situation is commonly known as head-of-line blocking, and it often leads to suboptimal delivery: underutilized network connections, server buffering overhead, and, worse, unpredictable client latency. What if the first request hangs indefinitely, or simply takes a very long time to complete? With HTTP 1.1, all subsequent requests are blocked, waiting for it to finish.

In practice, because multiplexing is not possible, HTTP pipelining gives rise to many subtle and undocumented differences in behavior among HTTP servers, proxies, and clients:

  • A single slow response blocks all requests behind it;

  • When processing requests in parallel, the server must buffer pipelined responses, tying up server resources; a very large response can even become an attack vector against the server;

  • A failed response may terminate the TCP connection, forcing the client to re-request all subsequent resources from the page and causing duplicate processing;

  • Because intermediaries may be in the path, reliably detecting pipelining compatibility is both important and nontrivial;

  • Some intermediaries that do not support pipelining may simply break the connection, while others may serialize all the requests.

Because of these and similar problems, none of which are addressed by the HTTP 1.1 standard, HTTP pipelining has seen very limited adoption, despite its undoubted benefits. Today, the few browsers that support pipelining usually hide it behind an advanced configuration option, and most disable it. In other words, as long as the browser is the primary delivery vehicle for your web application, it is hard to count on HTTP pipelining for better performance.

Using multiple TCP connections

Since HTTP 1.x does not support multiplexing, a browser could naively queue all HTTP requests on the client and send them one after another over a single persistent connection. In practice, however, this is far too slow. Browser vendors therefore had no choice but to open multiple TCP sessions in parallel. How many? Most modern browsers, desktop and mobile alike, open up to six connections per host.

Before we go further, it is worth considering what opening multiple TCP connections implies. There are upsides and downsides. Taking six independent connections per host as the example:

  • The client can dispatch up to six requests in parallel;

  • The server can process up to six requests in parallel;

  • The cumulative number of packets that can be sent in the first round trip (TCP cwnd) grows to six times the original.

Without pipelining, the maximum number of in-flight requests equals the number of open connections. Accordingly, the effective TCP congestion window is also multiplied by the number of open connections, allowing the client to sidestep the packet limit imposed by TCP slow-start. So far, this looks like a convenient workaround. Now let's look at the costs:
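A rough sketch of the congestion-window effect; the initial cwnd of 10 segments (mentioned later in this text) and the ~1460-byte maximum segment size are assumptions for illustration:

```python
# Why six connections multiply the effective initial congestion window.
INIT_CWND_SEGMENTS = 10   # assumed initial cwnd, in segments
MSS_BYTES = 1460          # assumed maximum segment size
CONNECTIONS = 6           # per-host connection limit in modern browsers

first_rtt_one_conn = INIT_CWND_SEGMENTS * MSS_BYTES
first_rtt_six_conns = CONNECTIONS * first_rtt_one_conn

print(first_rtt_one_conn, first_rtt_six_conns)  # 14600 87600
```

In other words, under these assumptions the client can push roughly 87 KB, instead of about 14 KB, in the first round trip after the handshakes.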

  • Additional sockets consume resources on the client, server, and any intermediaries, including memory buffers and CPU cycles;

  • Parallel TCP streams compete for shared bandwidth;

  • Handling multiple sockets makes the implementation more complex;

  • Even with parallel TCP streams, application parallelism remains limited.

In practice, the CPU and memory costs are nontrivial, raising resource consumption on both client and server and increasing operational costs. Likewise, the added complexity of the client implementation raises development costs. And finally, the approach delivers only limited application parallelism. It is not a long-term solution. Knowing all this, the reason it is used today comes down to three things:

  • As a stopgap for the limitations of the application protocol (HTTP);

  • As a stopgap for the low starting value of the TCP congestion window;

  • As a stopgap for clients that cannot use TCP window scaling.

The last two problems (window scaling and cwnd) are best solved by upgrading to the latest OS kernel. The initial cwnd value has recently been raised to 10 segments, and all the latest platforms reliably support TCP window scaling. That is the good news. The bad news is that there is no better way around the lack of multiplexing in HTTP 1.x.

As long as we must support HTTP 1.x clients, we have to live with juggling multiple TCP streams. Which raises an obvious question: why six connections per host? As some readers may have guessed, the number is the result of a multi-party balancing act: the higher the number, the more resources the client and server consume, but also the higher the request parallelism. Six connections per host is simply a number everyone settled on as relatively safe. For some sites that is enough; for others it may not meet demand.

Domain sharding

A gap in the HTTP 1.x protocol forced browser vendors to introduce and maintain connection pools of up to six TCP streams per host. The good news is that the browser manages these connections for you; as an application developer, you don't have to modify your application at all. The bad news is that six parallel connections may not be enough for your application.

According to HTTP Archive statistics, an average page currently includes 90-odd individual resources, and if they all come from the same host, noticeable queuing still results (see the figure below). But why limit ourselves to a single host? Instead of serving all resources through one host, we can manually distribute them across multiple subdomains, {shard1, ..., shardn}: because the hostnames differ, the browser's per-host connection limit no longer applies, giving us higher parallelism. The more shards we use, the more parallelism we get!

Staggered resource downloads due to the six-connection-per-host limit
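The manual splitting described above can be sketched as a deterministic shard-assignment function, so that each asset keeps a single, stable URL (and cache key). The hostnames and paths below are invented for illustration:

```python
import hashlib

# Hypothetical shard hostnames; real deployments would point these at the
# same origin content.
SHARDS = ["shard1.example.com", "shard2.example.com", "shard3.example.com"]

def shard_for(path: str) -> str:
    # Hash the path so the same asset always maps to the same shard,
    # keeping it cacheable under one URL.
    digest = hashlib.md5(path.encode("utf-8")).digest()
    return SHARDS[digest[0] % len(SHARDS)]

for asset in ("/css/site.css", "/js/app.js", "/img/logo.png"):
    print(asset, "->", shard_for(asset))
```

Deterministic assignment matters: if an asset moved between shards across page loads, its cache entry would be useless.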

Of course, there is no free lunch, and domain sharding is no exception: every new hostname requires an additional DNS lookup, every additional socket consumes more resources on both ends, and, worst of all, the site author must manually split the resources and host them across multiple hosts.

In practice, domain sharding is frequently overused, resulting in dozens of underutilized TCP streams, many of which never escape TCP slow-start; in the worst case, it actually degrades performance. Moreover, if you are using HTTPS, the extra network round trips required by the TLS handshakes make these costs even higher. A few things to keep in mind:

  • First, make the most of each TCP connection;

  • The browser will open six connections per host for you automatically;

  • The number, size, and response time of resources all affect the optimal number of shards;

  • Client latency and bandwidth also affect the optimal number of shards;

  • Domain sharding can hurt performance through extra DNS lookups and TCP slow-start.

Domain sharding is a legitimate but imperfect optimization. Always start with the minimum number of shards (none), then add shards one at a time and measure their impact on your application. In reality, few sites actually benefit from more than a dozen simultaneous connections, and if you end up using a large number of shards, you will find that reducing the number of resources, or consolidating them into fewer requests, yields a bigger win.

The extra cost of DNS lookups and TCP slow-start hits high-latency clients the hardest. In other words, mobile (3G, 4G) clients are often the ones hurt most by excessive domain sharding!

Measuring and controlling protocol overhead

HTTP 0.9 started as a simple one-line ASCII request for a hypertext document, with minimal overhead. HTTP 1.0 added request and response headers so that both sides could exchange metadata about the request and the response. Finally, HTTP 1.1 made this format a standard: both server and client can easily extend the headers, which are always sent in plain text to remain compatible with earlier HTTP versions.

Today, every browser-initiated HTTP request carries an additional 500–800 bytes of HTTP metadata: a user-agent string, accept and transfer headers that rarely change, caching directives, and so on. And 500–800 bytes is often an underestimate, because it excludes the largest contributor: HTTP cookies, which modern applications routinely use for session management, personalization, and analytics. Combined, all of this uncompressed HTTP metadata frequently adds several kilobytes of protocol overhead to every HTTP request.

The growing list of HTTP headers is not in itself a bad thing, since most headers serve a specific purpose. However, because all HTTP headers are transferred in plain text (without any compression), they impose a high per-request cost, which can become a serious performance problem in some applications. For example, the rise of API-driven web applications means frequent communication via serialized messages (such as JSON), and in these applications the extra HTTP overhead is often an order of magnitude larger than the actual data payload:

$> curl --trace-ascii - -

Which produces the following output:

== Info: Connected to
=> Send header, 218 bytes *
POST /api HTTP/1.1
User-Agent: curl/7.24.0 (x86_64-apple-darwin12.0) libcurl/7.24.0 ...
Accept: */*
Content-Length: 15 *
Content-Type: application/x-www-form-urlencoded
=> Send data, 15 bytes (0xf)
{"msg":"hello"}
<= Recv header, 134 bytes *
HTTP/1.1 204 No Content
Server: nginx/1.0.11
Via: HTTP/1.1 GWA
Date: Thu, 20 Sep 2012 05:41:30 GMT
Cache-Control: max-age=0, no-cache
  1. HTTP request headers: 218 bytes

  2. Application payload: 15 bytes ({"msg":"hello"})

  3. The server's 204 response: 134 bytes

In the example above, a mere 15-character JSON message is wrapped in 352 bytes of HTTP headers, all transferred as plain text: a 96% protocol byte overhead, and that is without any cookies, which would make it worse. Reducing the amount of header data transferred (highly repetitive and uncompressed) can save the equivalent of whole round trips of latency and significantly improve the performance of many web applications.
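The percentage can be checked directly from the byte counts in the trace:

```python
# Verifying the overhead figures from the curl trace above:
# 218 bytes of request headers, 134 bytes of response headers, 15-byte payload.
request_headers = 218
response_headers = 134
payload = 15

header_bytes = request_headers + response_headers
overhead = header_bytes / (header_bytes + payload)

print(header_bytes, f"{overhead:.0%}")  # 352 96%
```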

Cookies are a common performance bottleneck in many applications; many developers forget the extra burden they add to every request.

Concatenation and spriting

The fastest request is the request not made. Whatever the protocol, whatever the application, reducing the number of requests is always the single best performance optimization. However, if you cannot eliminate a request, then for HTTP 1.x consider bundling multiple resources together and fetching them in a single network request:

  • Concatenation: combining multiple JavaScript or CSS files into a single file.

  • Spriting: combining multiple images into a single, larger composite image.

For JavaScript and CSS, as long as order is preserved, multiple files can be concatenated without affecting the behavior or execution of the code. Similarly, multiple images can be combined into an "image sprite," and CSS can then select the appropriate portion of the larger image for display in the browser. Both techniques offer two benefits.

  • Reduced protocol overhead: combining files into a single resource eliminates the per-file protocol overhead, which, as noted earlier, can easily run to kilobytes of uncompressed data per file.

  • Application-layer pipelining: in terms of bytes transferred, the net effect of both techniques resembles enabling HTTP pipelining: data from multiple responses streams back one after another, eliminating extra network latency. In effect, pipelining has been lifted one level up, into the application.
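A minimal sketch of the concatenation idea, assuming a simple build step (file names and contents invented for illustration):

```python
# Join several script files in their original order, so one request
# replaces many. Order preservation is what keeps execution semantics intact.
def concatenate(sources: dict[str, str]) -> str:
    # Python dicts preserve insertion order, so execution order is kept.
    return "\n".join(f"/* {name} */\n{body}" for name, body in sources.items())

bundle = concatenate({
    "vendor.js": "var lib = {};",
    "app.js": "lib.start = function () {};",
})
print(bundle)
```

Real build tools add source maps, minification, and dependency ordering on top of this, but the core transformation is the same.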

Both concatenation and spriting are content-aware application-layer optimizations, and by reducing the cost of network round trips they can yield significant performance improvements. However, implementing them requires extra processing, deployment, and coding effort (such as the CSS code for selecting sprite subimages), and hence adds complexity to the application. Furthermore, bundling multiple resources together can burden the cache and delay page execution.

To understand why these techniques can hurt performance, consider a not-uncommon case: an application with a dozen or so JavaScript and CSS files that are merged, in production, into one CSS file and one JavaScript file.

  • All resources of a given type live under a single URL (and cache key).

  • A bundle may contain content that the current page does not need.

  • Updating any single file in a bundle invalidates the entire bundle and forces it to be downloaded again, at a high byte cost.

  • JavaScript and CSS can be parsed and executed only after the whole file has been transferred, which slows down application execution.

In practice, most web applications are not a single page but a collection of views, each with its own resources and with partial overlap between them: shared CSS, JavaScript, and images. Combining all resources into a single bundle therefore often means processing and loading unneeded bytes; you can view this as a form of prefetching, but at the cost of slower initial startup.

For many applications, updates make matters worse. Changing one spot in an image sprite or in a combined JavaScript file can force hundreds of kilobytes of data to be retransmitted. We sacrifice modularity and cache granularity, and if the bundle changes too often, especially a large one, the costs soon outweigh the benefits. If your application has reached that point, consider moving the "stable core," such as frameworks and libraries, into separate bundles.

Memory footprint can also become a problem. With image sprites, the browser must decode the entire image and keep all of it in memory, even if only a small portion is ever displayed. The browser does not evict the undisplayed parts from memory!

Finally, why is execution speed affected as well? We know that the browser parses HTML incrementally, but the parsing and execution of JavaScript and CSS must wait until the entire file has been downloaded: neither JavaScript nor CSS processors allow incremental execution.

CSS and JavaScript file size and performance

The larger the CSS file, the longer the browser is blocked constructing the CSSOM, delaying the first paint of the page. Similarly, the larger a JavaScript file, the greater its impact on execution speed; smaller files allow "incremental" execution. So how big should bundled files be? Unfortunately, there is no single ideal size. However, tests by Google's PageSpeed team suggest 30–50 KB (compressed) as a good range per JavaScript file: large enough to amortize the network overhead of many small files, yet small enough to allow incremental, layered execution. Exact results will vary with the type of application and the number of scripts.

In short, concatenation and spriting are application-layer optimizations that are viable in practice given the limitations of the HTTP 1.x protocol (pipelining lacks universal support, and each request carries a high cost). Used well, both techniques can deliver significant performance gains; the price is added application complexity, plus potential issues with caching, updates, execution speed, and even page rendering. When applying these two optimizations, measure the results and weigh the following questions against your actual situation.

  • Is your application blocked while downloading many small resources?

  • Would it benefit from selectively combining certain requests?

  • Will the loss of cache granularity hurt your users?

  • Will combined images consume too much memory?

  • Will delayed execution hold up the first render?

Striking the right balance among the answers to these questions is an art.

Inlining resources

Inlining resources is another popular optimization that reduces the number of requests by embedding resources inside the document itself. JavaScript and CSS code, for example, can be placed directly in the page inside appropriate script and style blocks, while images and even audio or PDF files can be inlined via data URIs (data:[mediatype][;base64],data):

<img src="data:image/gif;base64,R0lGODlhAQABAIAAAAA..."
     alt="1x1 transparent (GIF) pixel" />

Data URIs are best suited to small, ideally single-use resources. A resource inlined in a page becomes part of that page and cannot be cached separately by the browser, a CDN, or other caching proxies. In other words, if the same resource is inlined in multiple pages, it is reloaded with every one of those pages, inflating the size of each. Moreover, if an inlined resource is updated, every page that embeds it is invalidated and must be fetched from the server again.

Finally, while text-based resources such as CSS and JavaScript can be inlined directly into a page with no extra overhead, non-text resources must be base64-encoded, which increases their size considerably: the encoded resource is 33% larger than the original!

Base64 encodes any byte stream into a string drawn from 64 ASCII symbols plus whitespace. In the process, base64 expands the data to 4/3 of its original size, a 33% byte overhead.
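The 4/3 expansion is easy to verify with Python's standard base64 module; the 300-byte payload stands in for a hypothetical small binary asset:

```python
import base64

# base64 maps every 3 input bytes to 4 ASCII characters: a ~33% size
# increase for any inlined binary resource.
raw = bytes(300)                  # stand-in for a 300-byte binary asset
encoded = base64.b64encode(raw)

print(len(raw), len(encoded))     # 300 400

# An inlined asset would then be embedded as a data URI, e.g.:
data_uri = "data:image/gif;base64," + encoded.decode("ascii")
```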

In practice, a common rule of thumb is to consider inlining only resources of 1–2 KB or less, since below that size the per-request HTTP overhead often exceeds the resource itself. However, inlined resources that change frequently also raise the invalidation rate of the host document. Inlining is not a perfect technique. If your application uses very small, individual files, consider the following guidelines when deciding whether to inline:

  • If a file is small and needed by only a few pages, consider inlining it;

  • If a file is small but reused across many pages, consider bundling it instead;

  • If a small file needs frequent updates, do not inline it;

  • Minimize protocol overhead by keeping HTTP cookies small.

Reference:

Ilya Grigorik, High Performance Browser Networking (O'Reilly; Chinese edition in the Turing Programming Series).

