Recently, I spent some time summarizing common nginx skills and knowledge points, connecting many small pieces of knowledge through practical cases.
First, enter the nginx directory and run the startup script to initialize the environment:
After a successful start, you can verify access with curl.
First, check the machine's address information with `ip addr` (I am using a virtual machine):
Then test with curl on that virtual machine:
If access fails, first check whether the firewall is enabled; if it is, you can disable it temporarily to simplify the following demos:
Once the request succeeds, you will see the nginx welcome page:
After starting nginx, you can view the related process information:
master process: the main process; mainly responsible for reading the configuration, managing logs, and hot-reloading.
worker process: the working processes; they handle client connections and process requests.
nginx logs
Under nginx's log folder, each new visit to the server appends a new entry to the access log.
nginx locates this log file through its file handle.
nginx's high performance is largely attributed to its IO-multiplexing model. When requests arrive, the client HTTP connections are all registered with a selector (a select-style queue dedicated to storing client connection information). A loop then polls this selector to check whether any client has sent request data; the polling does not block on any single connection waiting for data to arrive. Once data is received, the selector dispatches the connection for the specified processing.
When nginx starts, there are two kinds of processes: the master process and the worker processes. The master does not serve client requests itself; it manages the workers, which accept and handle the client connections.
In essence, the worker processes are forked from the master: the master first creates the listening socket and then forks multiple workers. The workers (in a multi-core CPU environment) compete for the accept_mutex lock; the process that grabs the lock registers the corresponding accept event and handles the connection. The advantage of handling events in independent processes is that it avoids the locking overhead of a threaded design, and when one worker hits a bug or exception, the other processes are unaffected, reducing risk.
How does nginx handle high concurrency? Each worker has only one main thread, so isn't its concurrency very limited? When concurrency is high, does nginx need to create thousands of threads to process requests?
nginx's design is brilliant here. Its mechanism is similar to Linux's epoll-style event notification. There is only a single processing thread (call it A); when a request's event is ready, A is notified, so there is no blocking wait on any request. This design spares A the cost of actively polling and waiting for events to complete.
Compare the traditional Apache server: for every connection, Apache creates a process with a single thread, and can create at most 256 processes by default. For a site under heavy load, 256 processes means 256 threads, and each thread handles a request in synchronous blocking mode: after receiving the request, it waits on file IO (synchronous), executes the business logic, and returns the result to the client; only when all of that completes can it take the next request (blocking). Once the server hits the 256-process limit, further requests must queue, which is why some such servers sit at a low load yet still keep clients waiting.
Each worker process is single-threaded.
Each IO connection is handled with an asynchronous, non-blocking model.
The number of worker processes can be set to match the number of server CPU cores, which acts as a tuning knob.
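As a sketch, the worker count and per-worker connection limit are set at the top of nginx.conf (the values here are illustrative; tune them to your hardware):

```nginx
worker_processes auto;            # spawn one worker per CPU core

events {
    use epoll;                    # event mechanism on Linux
    worker_connections 1024;      # max simultaneous connections per worker
}
```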
When nginx processes a request, the request passes through a chain, and each module on that chain handles a different concern, for example decompressing or compressing the request, or SSL handling. In general, nginx's basic modules can be summarized as follows:
event module: the event-handling framework; it provides the processing skeleton for the various events. Which event mechanism nginx uses depends on the operating system and compile-time settings, for example ngx_event_core_module and ngx_epoll_module.
phase handler: mainly responsible for processing the client request and generating the content to respond with. ngx_http_static_module is one of them: it reads data from disk for the request and produces the response. For example, when a URI ends with /, the index module completes the full path name and then invokes this module through an internal call.
output filter: can apply specified modifications to the response output, for example replacing particular URLs in the page.
upstream: the reverse-proxy module; it forwards requests to backend servers.
load-balancer: mainly responsible for load balancing, selecting one server in the cluster to process the request.
First, let's look at the most basic nginx configuration:
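A minimal configuration of this shape might look like the following (server name and paths are examples, close to the stock default nginx.conf):

```nginx
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    server {
        listen      80;
        server_name localhost;

        location / {
            root  html;                    # relative to the nginx prefix
            index index.html index.htm;
        }
    }
}
```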
PS: the configuration is divided into many blocks; a space is required between each block name and its braces, otherwise it will be reported as invalid.
For example, we create an html page under the /usr/local/www/ folder and want to access it through nginx. When matching the server's location paths, nginx first tries the full path, matching from left to right, then matches from right to left.
Note that the root directive is inherited: a location without its own root inherits the root defined outside it. If a location defines its own root, that one takes effect; the inner scope overrides the outer, so a location-level root overrides the server-level root, which in turn overrides the root outside the server block.
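A sketch of this inheritance rule (paths and server name are examples):

```nginx
server {
    listen      80;
    server_name www.idea.com;
    root        /usr/local/www;       # server-level root

    location /html/ {
        # no root here, so the server-level root is inherited:
        # /html/index.html -> /usr/local/www/html/index.html
    }

    location /download/ {
        root /data;                   # overrides the server-level root:
                                      # /download/a.zip -> /data/download/a.zip
    }
}
```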
So-called dynamic-static separation essentially means separating, to some extent, the dynamic requests and the static files in the nginx configuration. For example, consider the following configuration:
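A configuration matching this description might look like the following (the backend address and paths are examples; note that the /static location as written runs into the resolution problem discussed below):

```nginx
server {
    listen      80;
    server_name www.idea.com;

    # dynamic requests are forwarded to a backend application server
    location /api/ {
        proxy_pass http://127.0.0.1:8080;
    }

    # static files are served straight from disk
    location /static/ {
        root /usr/local/static;
    }
}
```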
PS: for this configuration, I also modified the hosts file (so that the test domain resolves to this machine).
The picture is actually stored at:
/usr/local/static/img/logo.jpg
According to the configuration above, it would be accessed as:
http://www.idea.com/static/img/logo.jpg
But in this case, accessing the picture through nginx will not succeed. The reason is that nginx resolves the address as the root directory plus the full URI, so the /static prefix is appended to the root again and the final lookup path becomes:
/usr/local/static/static
To avoid this, the alias directive is usually used for the match instead; the specific configuration is as follows:
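A sketch of the alias-based fix (trailing slashes matter with alias):

```nginx
location /static/ {
    # alias replaces the matched prefix instead of appending the URI to root:
    # /static/img/logo.jpg -> /usr/local/static/img/logo.jpg
    alias /usr/local/static/;
}
```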
This time, visiting http://www.idea.com/static/img/logo.jpg again succeeds.
Similarly, we can build more complex mappings on top of aliases:
Suppose we need to access the contents under the static folder without the URL carrying the real directory name; we can directly access
http://www.idea.com/static/css/test.css to reach the content under /usr/local/www/static/css/
What if there are many types of static files to map? Then regular-expression locations can be introduced:
After adding the regular-expression location (~* \.(gif|png|css|js)$), accessing the path
http://www.idea.com/css/test.css serves the files under /usr/local/static/css/
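The regex location from the example, written out in full (root path taken from the example above):

```nginx
location ~* \.(gif|png|css|js)$ {
    # /css/test.css -> /usr/local/static/css/test.css
    root /usr/local/static;
}
```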
nginx also provides a very flexible proxying mechanism, letting a location be served through a proxy.
Configure an exact-match proxy in nginx for a page jump:
proxy_pass forwards the proxied request: when we visit http://www.idea.com/idea-serach, the match proxies us to Baidu's page.
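A sketch of that exact-match proxy (the /idea-serach path is taken from the example above):

```nginx
location = /idea-serach {
    proxy_pass http://www.baidu.com;
}
```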
In summary, nginx commonly supports the following ways of reaching a location:
full-path (exact) match on the location
prefix match on a keyword such as static
regular-expression match on the location
access to the location through a reverse proxy
You can restrict certain links so they are only accessible from a fixed site, preventing other domains (or direct-IP visitors) from hotlinking this site's resource files. This is the anti-leech (hotlink-protection) feature; the specific configuration is as follows:
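A typical hotlink-protection block looks like this (the domain and file extensions are examples):

```nginx
location ~* \.(gif|jpg|jpeg|png)$ {
    # allow requests with no Referer, or with a Referer from our own site
    valid_referers none blocked www.idea.com;
    if ($invalid_referer) {
        return 403;
    }
}
```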
Configuring a blacklist is relatively simple: first create the blacklist file, then include it in the http block.
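A minimal sketch (the file name blockip.conf is an example):

```nginx
http {
    # blockip.conf contains one rule per line, e.g.:
    #   deny 192.168.1.10;
    include blockip.conf;
}
```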
Remember to reload nginx after changing the configuration so it takes effect. Also check whether the response status is 304: if so, the response was served from cache.
When we need to inspect the details of the data sent by clients, we need nginx to keep an access log. The optional log parameters are as follows:
The log configuration is usually placed inside the http block, for example:
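For example, a log_format plus access_log pair inside the http block (the format name "main" is conventional):

```nginx
http {
    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent"';

    access_log logs/access.log main;
}
```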
Forward proxy
A proxy machine is inserted between the client and the server; when a request occurs, it reaches the server through the proxy. The most common forward-proxy cases are: ***, and internet-access client tools inside a LAN.
For example:
When we need requests matching $host:$port/baidu.html to be proxied to Baidu's page, we can configure the following:
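A sketch of that exact-match jump (the server_name is an example):

```nginx
server {
    listen      80;
    server_name idea.com;

    location = /baidu.html {
        proxy_pass http://www.baidu.com;
    }
}
```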
Reverse proxy
When the client requests the server, a proxy machine in the server's access layer actually forwards the request; this layer of forwarding is transparent to the client.
The reverse proxy forwards requests to internal servers. Although the configuration looks similar to a forward proxy, the function is different. A configuration case follows:
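A sketch of a reverse proxy to an internal application server (the backend address is an example):

```nginx
server {
    listen      80;
    server_name www.idea.com;

    location / {
        proxy_pass http://127.0.0.1:8080;        # internal app server
        proxy_set_header Host      $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```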
nginx has a module called upstream, dedicated to load-balancing configuration. upstream provides the following parameters:
server: address (and port) of a backend server
weight: the server's weight
max_fails: after this many failures, the host is considered down and is kicked out
fail_timeout: how long a removed server waits before being retried; while the service is down, reconnection is attempted after this interval
backup: backup server (requests go here only when all the primary servers are down)
max_conns: maximum number of allowed connections
slow_start: when a node recovers, it is not added back immediately; it waits slow_start before rejoining the pool (available in the commercial NGINX Plus)
The specific configuration of these parameters looks like this:
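A sketch of an upstream pool using these parameters (addresses and values are examples):

```nginx
upstream backend {
    server 192.168.1.10:8080 weight=2 max_fails=3 fail_timeout=30s;
    server 192.168.1.11:8080 weight=1 max_conns=100;
    server 192.168.1.12:8080 backup;    # used only when the others are down
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
```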
By default, nginx supports round-robin and weighted round-robin load balancing. Besides these, nginx supports several additional strategies:
rr+weight: weighted round robin (default)
A drawback: if access to one machine is too slow, requests can easily pile up on it.
ip_hash: based on a hash calculation; often used to keep sessions sticky
The client IP is hashed, and the request is routed to the server chosen by that hash.
(For session consistency in a distributed setup, the best approach is usually to store sessions in a third-party store.)
Concretely, the IP is hashed first, then the value is taken modulo the number of servers.
url_hash: static-resource caching; saves storage and speeds up access
The request is routed to a specific server according to the URL (for example, a picture's URL), which is easy to understand.
least_conn: minimum number of connections
Each request goes to the server with the fewest active client connections.
least_time: minimum response time
Computes each node's average response time, then gives the fastest responders a higher weight.
Routing requests to the backend servers by hashing the client IP:
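An ip_hash pool looks like this (addresses are examples):

```nginx
upstream backend {
    ip_hash;
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
}
```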
Of course, beyond these common features, nginx provides a very rich set of other configuration options; please refer to the official nginx documentation: http://nginx.org/en/docs/