There is a lot of hands-on Nginx material out there; in this article, let's look at the principles behind how Nginx works.
Before introducing Nginx itself, let's first explain some common terms, which will help us better understand Nginx's process model. As a web server, Nginx was designed from the start to handle as many client requests as possible. Broadly speaking, there are three ways for a server to process requests in parallel: multi-process, multi-threaded, and asynchronous.
Multi-process: each time the server receives a client request, the main process spawns a child process that establishes a connection with the client and interacts with it; the child process exits only when the connection closes.
The advantage is that child processes are independent of one another, so client requests do not interfere with each other.
The drawback is that spawning a child process requires a memory copy, which incurs extra cost in resources and time. Under heavy request load, this puts real pressure on system resources.
Multi-threading is very similar to multi-processing: each time the server receives a client request, it spawns a thread to interact with that client. Since creating a thread costs far less than creating a process, multi-threading reduces a web server's demand on system resources to some extent.
The drawback is that threads share memory, so they can interfere with one another.
The asynchronous approach is completely different from the two above. Asynchrony involves a few more concepts, synchronous vs. asynchronous and blocking vs. non-blocking, so let's explain them here.
Synchronous vs. asynchronous is easy to understand. With a synchronous mechanism, after the sender issues a request, it must wait for the receiver's response before it can send the next request. With an asynchronous mechanism, the sender issues a request and, without waiting for the response to that request, simply goes on to send the next one.
Blocking vs. non-blocking mainly refers to how data is read from and written to a socket; socket operations are, in essence, I/O operations. Every TCP socket has a send buffer and a receive buffer in the kernel. In blocking mode, if the receive buffer is empty, the thread calling the socket's read method blocks until data arrives in the receive buffer. Likewise for writes: if the data to be sent is larger than the free space in the send buffer, the write call blocks.
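To make the blocking/non-blocking distinction concrete, here is a minimal Python sketch (Python is used only for illustration; nginx itself is written in C). It reads from a socket whose receive buffer is empty, first in non-blocking mode, then again after data has arrived:

```python
import socket

def demo_nonblocking_read():
    # socketpair() returns two already-connected sockets; we read from
    # the side whose kernel receive buffer is still empty
    a, b = socket.socketpair()
    b.setblocking(False)              # switch b into non-blocking mode
    events = []
    try:
        b.recv(1024)                  # receive buffer empty: cannot be satisfied
    except BlockingIOError:
        events.append("would block")  # the call returned immediately instead
                                      # of suspending the thread
    a.sendall(b"ping")                # put data into b's receive buffer
    events.append(b.recv(1024))       # the same call now succeeds
    a.close()
    b.close()
    return events

if __name__ == "__main__":
    print(demo_nonblocking_read())
```

In blocking mode, that first `recv` would have suspended the thread until data arrived; in non-blocking mode it fails fast, which is what makes event-driven designs possible.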
At first glance these four concepts can make your head spin, and it is often claimed that synchronous/asynchronous is equivalent to blocking/non-blocking. In fact they are easy to tell apart: synchronous/asynchronous and blocking/non-blocking describe different parties.
Synchronous vs. asynchronous describes the caller: after issuing a request, waiting the whole time for the callee's reply is synchronous; going off to do other things instead of waiting is asynchronous.
Blocking vs. non-blocking describes the callee: replying only after the requested task has completed is blocking; replying immediately upon receiving the request is non-blocking.
Non-blocking mode achieves its goal through event triggering. You can think of it as an I/O scheduling thread underneath the NIO layer that continuously scans each socket's buffers. When it finds a send buffer empty, it raises a writable event for that socket, and the program can then write data to it; if the write cannot complete in one go, the program simply waits for the next writable-event notification. Conversely, when it finds data in a receive buffer, it raises a readable event, and upon receiving that notification the program can read data from the socket.
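The event-triggered model described above can be sketched with Python's `selectors` module (again purely illustrative; nginx uses epoll/kqueue directly in C). The selector reports a read event only once data is actually in the socket's receive buffer:

```python
import selectors
import socket

def demo_readiness_event():
    # register a socket with a selector and wait for a "readable" event,
    # which fires only once data is actually in the receive buffer
    a, b = socket.socketpair()
    b.setblocking(False)
    sel = selectors.DefaultSelector()
    sel.register(b, selectors.EVENT_READ)

    before = sel.select(timeout=0)    # nothing readable yet -> empty list
    a.sendall(b"ready")               # data arrives in b's receive buffer
    events = sel.select(timeout=1)    # now the read event is reported
    key, _mask = events[0]
    data = key.fileobj.recv(1024)

    sel.close()
    a.close()
    b.close()
    return before, data
```

The program never sits blocked on a single socket; it asks the selector which sockets are ready and acts only on those, which is the core idea behind nginx's event loop.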
Combining these concepts gives four modes: synchronous blocking, synchronous non-blocking, asynchronous blocking, and asynchronous non-blocking.
Synchronous blocking: after the sender issues a request, it waits for the receiver's response. If the I/O operation needed to process the request cannot produce a result immediately, the receiver waits for the result before responding; both sides are blocked the whole time.
Synchronous non-blocking: after the sender issues a request, it waits for the response. While performing the I/O operation, the receiver can do other things instead of waiting; but since there is no result yet, the sender keeps waiting. Once the receiver has the I/O result, it responds to the sender and moves on to the next request.
Asynchronous blocking: after the sender issues a request, it does not wait for a response and goes on to other work. If the receiver's I/O operation cannot produce a result immediately, the receiver waits for the result before responding.
Asynchronous non-blocking: after the sender issues a request, it does not wait for a response and continues with other work. If the receiver's I/O operation cannot produce a result immediately, the receiver does not wait either; it does other things. When the I/O operation completes, the receiver is notified of the result and responds to the sender.
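A small Python sketch can contrast the two extremes, synchronous blocking vs. asynchronous non-blocking, using a thread pool to stand in for a slow I/O operation (an illustration only; the names `slow_io` and the 50 ms delay are made up for the demo):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_io():
    time.sleep(0.05)          # stand-in for a slow I/O operation
    return "result"

def demo_sync_vs_async():
    log = []
    with ThreadPoolExecutor(max_workers=1) as pool:
        # asynchronous non-blocking: submit the request and keep working;
        # a callback delivers the result when the I/O finishes
        fut = pool.submit(slow_io)
        fut.add_done_callback(lambda f: log.append("notified: " + f.result()))
        log.append("kept working")    # runs long before the I/O completes
        # synchronous blocking: result() suspends the caller until done
        fut.result()
    # pool shutdown joins the worker thread, so the callback has run by now
    return log
```

The log shows the caller getting work done before the "I/O" finished, then being notified of the result, which is exactly the asynchronous non-blocking pattern described above.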
Nginx serves clients by combining the multi-process mechanism with the asynchronous mechanism. After the Nginx service starts, it spawns one master process and multiple worker processes.
The master process mainly manages the worker processes: it receives signals from the outside world, forwards signals to the workers, monitors their running state, and automatically starts a new worker when one exits abnormally.
Basic network events, on the other hand, are handled in the worker processes. The workers are peers: they compete on equal terms for client requests, and the processes are independent of each other. A given request is handled by exactly one worker, and a worker cannot handle another process's requests. The number of worker processes is configurable, and is generally set to match the machine's CPU core count.
What does the master process do?
It reads and validates the configuration file nginx.conf, and it manages the worker processes.
What do the worker processes do?
Each worker maintains a single thread (avoiding thread-switching overhead) to handle connections and requests. Note that the number of workers is determined by the configuration file and is generally tied to the number of CPU cores (which is friendly to process scheduling): however many are configured, that many workers run.
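In nginx.conf this looks like the following minimal fragment (the directives are standard; `auto` asks nginx to detect the core count itself):

```nginx
# one worker per CPU core; "auto" has nginx detect the core count
worker_processes auto;

events {
    # maximum simultaneous connections per worker
    worker_connections 1024;
}
```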
Since the master manages the workers, we only need to communicate with the master process. The master receives signals from the outside world and acts according to the signal. Take, for example, the command we often use:
./sbin/nginx -c conf/nginx.conf -s reload
When you execute this command, a new Nginx process is started. After parsing the reload argument, that new process knows its job is to make Nginx reload its configuration file, so it sends a signal to the master process. The master then reloads the configuration, starts new worker processes, and signals all the old workers that they can retire. Once the new workers are up, they serve new requests under the new configuration. This is the principle behind hot deployment.
Now we basically know what happens inside nginx when we operate it. So how does a worker process handle requests? In Nginx, all workers are equal: each process has the same chance of handling any given request. When we serve HTTP on port 80 and a connection request arrives, any worker may handle that connection.
Worker processes are forked from the master process. The master first creates the socket that needs to be listened on, then forks multiple workers, so when a new connection arrives any worker can handle it. To avoid the thundering-herd effect, a worker must first grab accept_mutex, a mutual-exclusion lock, before accepting the request; only after successfully acquiring the lock does it parse and process the request, then return the response to the client.
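The pre-fork idea, creating the listening socket before forking so every worker inherits the same descriptor, can be sketched in Python on a Unix system (a simplified illustration with one worker; real nginx is in C and wraps this in the accept mutex and an event loop):

```python
import os
import socket

def run_prefork_demo():
    # master creates the listening socket *before* forking, so the
    # worker inherits the same descriptor and can accept() on it
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", 0))          # ephemeral port for the demo
    listener.listen(8)
    port = listener.getsockname()[1]

    pid = os.fork()
    if pid == 0:                             # worker: inherited listener
        conn, _addr = listener.accept()
        data = conn.recv(1024)
        conn.sendall(b"echo:" + data)        # handle one request, then exit
        conn.close()
        listener.close()
        os._exit(0)

    # master side: act as a client so the demo is self-contained
    client = socket.create_connection(("127.0.0.1", port))
    client.sendall(b"hi")
    reply = client.recv(1024)
    client.close()
    listener.close()
    os.waitpid(pid, 0)                       # reap the worker
    return reply

if __name__ == "__main__":
    print(run_prefork_demo())
```

With several forked workers all inheriting the listener, every one of them could call `accept()` on the same socket, which is exactly why nginx needs accept_mutex to decide who actually takes the connection.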
This process model brings several benefits: the processes are independent, so if one worker exits unexpectedly, the other workers are unaffected. Independent processes also avoid many unnecessary lock operations, which improves processing efficiency and makes development and debugging easier.
Workers compete to listen for client connection requests, and this can cause a problem: one worker may win most of the competition while the other processes sit idle. The busy process, unable to respond to connections in time, may end up discarding requests it could otherwise have handled. This kind of imbalance needs to be avoided, especially in a high-reliability web server environment.
To address this, Nginx uses a flag, ngx_accept_disabled, to control whether a worker competes for the accept_mutex lock and thereby for accept events.
The value of ngx_accept_disabled is one eighth of a single nginx process's total connection count, minus the number of free connections remaining.
When ngx_accept_disabled is greater than 0, the worker does not try to acquire the accept_mutex lock and decrements ngx_accept_disabled by 1; so every time execution reaches this point, the counter drops by 1, until it falls below 0 again. Not acquiring the lock means giving up the chance to accept new connections. Clearly, the fewer free connections a worker has left, the larger ngx_accept_disabled becomes, so the more opportunities it gives up and the more chances the other processes get to acquire the lock. By having busy workers hold back from accepting, the connection pools of the other processes get used, and nginx thereby keeps connections balanced across its processes.
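The arithmetic above is simple enough to write out (the formula mirrors the one in the nginx source; the Python function names here are just for the illustration):

```python
def ngx_accept_disabled(connection_n, free_connection_n):
    # mirrors the nginx source:
    #   ngx_accept_disabled = ngx_cycle->connection_n / 8
    #                         - ngx_cycle->free_connection_n;
    return connection_n // 8 - free_connection_n

def will_compete_for_accept_mutex(disabled):
    # while the counter is positive the worker skips the mutex
    # (decrementing the counter once per event-loop iteration)
    return disabled <= 0
```

For example, with 1024 total connections and 1000 still free, the value is 128 - 1000 = -872, so the worker competes for the lock as usual; with only 10 free connections the value is 118, so the busy worker sits out for the next 118 event-loop iterations.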
OK, that's it for this article. If you have any questions, feel free to leave a comment!