Nginx Principles Explained in Detail

Bobo roast duck 2020-11-13 05:00:06


There is plenty of content available about using Nginx in practice. In this article, let's look at how Nginx works under the hood.

Analysis of the Nginx Process Model

Before introducing Nginx, let's first explain some common terms; this will help us better understand Nginx's process model. As a web server, Nginx was designed from the start to handle as many client requests as possible. Generally speaking, there are three ways to process requests in parallel: multi-process, multi-threaded, and asynchronous.

Multi-process mode

Each time the server receives a client request, the main process spawns a child process to establish a connection and interact with that client; the child process ends when the connection closes.
Advantage: the child processes are independent of each other, so client requests do not interfere with one another.
Disadvantage: spawning a child process requires a memory copy, which incurs extra cost in resources and time. Under heavy request load, this puts pressure on system resources.


Multi-threaded mode

Multi-threading is very similar to multi-processing: each time the server receives a client request, it spawns a thread to interact with that client. Since creating a thread costs far less than creating a process, multi-threading reduces the web server's demand on system resources to some extent.
Disadvantage: the threads share memory and can interfere with each other.

Asynchronous mode

Asynchronous mode is completely different from the two approaches above. To discuss it, we first need to clarify two pairs of concepts: synchronous vs. asynchronous, and blocking vs. non-blocking.
Synchronous vs. asynchronous is easy to understand. Under a synchronous mechanism, after the sender issues a request, it must wait for the receiver's response before sending the next request. Under an asynchronous mechanism, the sender issues a request and, without waiting for the response, simply goes on to send the next one.

Blocking vs. non-blocking mainly refers to how data is read from and written to a socket; socket operations are, in essence, I/O operations. Every TCP socket has a send buffer and a receive buffer in the kernel. In blocking mode, if the receive buffer is empty, the thread calling the socket's read method blocks until data arrives in the receive buffer. Likewise, when writing to a socket, if the data to be sent is longer than the free space in the send buffer, the write call blocks.
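To make the blocking/non-blocking distinction concrete, here is a minimal Python sketch (an illustration, not nginx's C code) using a connected socket pair. In non-blocking mode, reading from an empty receive buffer fails immediately instead of suspending the thread:

```python
import socket

# Create a connected socket pair; both ends start in blocking mode.
a, b = socket.socketpair()

# Switch one end to non-blocking: reading from an empty receive buffer
# now raises BlockingIOError immediately instead of suspending the thread.
a.setblocking(False)
try:
    a.recv(1024)
    result = "data"
except BlockingIOError:
    result = "would block"

# Once data arrives in the receive buffer, the same call succeeds.
b.send(b"hello")
assert a.recv(1024) == b"hello"

a.close()
b.close()
print(result)  # → would block
```

In blocking mode the first `recv` would have hung forever, which is exactly why a single-threaded server cannot afford blocking reads.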

At first glance these four concepts can be overwhelming, and it is often claimed that synchronous/asynchronous is equivalent to blocking/non-blocking. In fact, they are easy to tell apart.

The key difference is that synchronous/asynchronous and blocking/non-blocking describe different parties.

Synchronous vs. asynchronous describes the caller. After making a request, a caller that keeps waiting for the callee's reply is synchronous; one that goes off to do other things instead of waiting is asynchronous.

Blocking vs. non-blocking describes the callee. A callee that replies only after the requested task has finished is blocking; one that replies immediately upon receiving the request is non-blocking.

Non-blocking mode is driven by event notification. We can think of NIO as having an underlying I/O scheduling thread that continuously scans each socket's buffers. When it finds a send buffer with free space, it raises a socket-writable event, at which point the program can write data to the socket; if the write cannot complete in one pass, it simply waits for the next writable event. Conversely, when it finds data in a receive buffer, it raises a socket-readable event, and on receiving this notification the program can read the data from the socket.
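The readable-event mechanism described above can be sketched with Python's `selectors` module, which wraps the same epoll/kqueue facilities nginx uses internally (this is an illustrative sketch, not nginx code):

```python
import selectors
import socket

sel = selectors.DefaultSelector()
a, b = socket.socketpair()
a.setblocking(False)

# Register interest in "readable" events on socket `a`; the selector
# (epoll/kqueue under the hood) reports when its buffer holds data.
sel.register(a, selectors.EVENT_READ)

# Nothing has been sent yet, so polling with a zero timeout finds no events.
assert sel.select(timeout=0) == []

# After the peer writes, the receive buffer is non-empty and a read
# event fires; only now do we call recv(), so it can never block.
b.send(b"ping")
for key, mask in sel.select(timeout=1):
    data = key.fileobj.recv(1024)

print(data)  # → b'ping'
sel.close()
a.close()
b.close()
```

The program only touches a socket after being told it is ready, so one thread can multiplex thousands of connections.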
Combining these concepts gives four modes: synchronous blocking, synchronous non-blocking, asynchronous blocking, and asynchronous non-blocking.

Synchronous blocking: after sending a request, the sender waits for the receiver's response. If the receiver's I/O operation cannot produce a result immediately, the receiver also waits for the result before responding; both sides block the whole time.

Synchronous non-blocking: after sending a request, the sender waits for the response. While performing the I/O operation, the receiver can do other work instead of waiting; since there is no result yet, the sender keeps waiting. Once the I/O completes, the receiver returns the result to the sender and moves on to its next request.
Asynchronous blocking: after sending a request, the sender does not wait for the response and can carry on with other work. If the receiver's I/O operation cannot produce a result immediately, the receiver blocks until the result returns, then responds to the sender.

Asynchronous non-blocking: after sending a request, the sender does not wait and continues with other work. If the receiver's I/O cannot produce a result immediately, the receiver does not wait either, but also does other work. When the I/O completes, the receiver is notified of the result and responds to the sender.


How the Nginx server handles requests

Nginx serves clients by combining a multi-process mechanism with an asynchronous mechanism.
After the Nginx service starts, it spawns one master process and several worker processes.

The master process mainly manages the worker processes: it receives signals from the outside world, forwards signals to the workers, monitors their running state, and automatically starts a new worker when one exits abnormally.
Basic network events are handled in the worker processes. The workers are peers: they compete on equal terms for client requests and are independent of each other. A given request is handled by exactly one worker, and a worker cannot take over another process's requests. The number of workers is configurable and is usually set to match the machine's number of CPU cores.

What does the master process do?
It reads and validates the configuration file nginx.conf, and it manages the worker processes.
What do the worker processes do?
Each worker maintains a single thread (avoiding thread-switching overhead) to handle connections and requests. Note that the number of workers is set in the configuration file and is generally tied to the number of CPU cores (which is good for process scheduling): however many you configure is how many workers run.
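The master/worker startup described above can be sketched in a few lines of Python (`os.fork` standing in for nginx's C implementation; the worker body is a stub, not real request handling):

```python
import os

def worker_loop(worker_id):
    # A real worker would run its event loop here (accept connections,
    # handle requests); this stub just identifies itself.
    return f"worker {worker_id} (pid {os.getpid()}) ready"

# The master usually forks one worker per CPU core.
n_workers = os.cpu_count() or 1
children = []
for i in range(n_workers):
    pid = os.fork()
    if pid == 0:
        # Child branch: become a worker, then exit without ever
        # returning to the master's management code below.
        print(worker_loop(i))
        os._exit(0)
    children.append(pid)

# Master branch: supervise the workers. Real nginx would monitor them
# and re-fork on abnormal exit; here we simply wait for them to finish.
for pid in children:
    os.waitpid(pid, 0)
```

Because every worker is forked from the master, they all inherit the same listening sockets, which is what lets them compete for the same incoming connections.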


Hot reload (hot deployment)

Since the master manages the workers, we only need to communicate with the master process. The master receives signals from the outside world and acts according to the signal. For example, we often run:

./sbin/nginx -c conf/nginx.conf -s reload

When this command is executed, a new Nginx process is started. After parsing the reload argument, it knows our intent is to make Nginx reload its configuration file, so it sends a signal to the master process. The master then reloads the configuration file, starts new worker processes, and signals all the old workers that they can retire. Once the new workers are up, they serve new requests with the new configuration. This is the principle behind hot reloading.
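A simplified sketch of the signal flow, in Python rather than nginx's C (the nginx master reconfigures on SIGHUP, which is what `-s reload` ultimately delivers; the one-line handler here is a stand-in for the real re-read/spawn/retire sequence):

```python
import os
import signal

config = {"generation": 1}

def handle_reload(signum, frame):
    # On reload the real master re-reads nginx.conf, starts new workers
    # with the fresh config, and tells the old workers to finish and exit.
    # Here we just bump a generation counter to mark the reconfiguration.
    config["generation"] += 1

# The master installs a handler for SIGHUP; `nginx -s reload` works by
# starting a short-lived process that sends this signal to the master.
signal.signal(signal.SIGHUP, handle_reload)

# Simulate `./sbin/nginx -c conf/nginx.conf -s reload` by signalling
# our own process.
os.kill(os.getpid(), signal.SIGHUP)
print(config["generation"])  # → 2
```

The key point is that the running service is never stopped: the old workers keep serving in-flight requests until they drain, while new connections go to workers born under the new configuration.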


How do worker processes handle requests?

We now roughly know what Nginx does internally when we operate it; so how does a worker process actually handle a request? In Nginx, all workers are equal, and each has the same chance to handle each request. When we serve HTTP on port 80 and a connection request arrives, any worker may handle it.
The workers are forked from the master process. The master first creates the socket that needs to be listened on, then forks multiple workers, so a new incoming connection can be handled by any of them. To avoid the thundering-herd effect, a worker must grab accept_mutex (a mutual-exclusion lock) before processing the request; only the worker that acquires the lock parses and handles the request, then returns the response to the client.
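The accept_mutex competition can be sketched as follows. This is a deliberately simplified Python model using threads instead of nginx's processes and shared-memory lock, and the barriers only exist to make the demo deterministic:

```python
import threading

accept_mutex = threading.Lock()
winners = []

def worker(worker_id, start, done):
    start.wait()  # a new connection arrives; every idle worker wakes up
    # Try to take accept_mutex without blocking. Only the holder may
    # call accept(); the losers return to their event loops immediately
    # instead of all being woken for nothing (the thundering herd).
    got_it = accept_mutex.acquire(blocking=False)
    if got_it:
        winners.append(worker_id)  # this worker "accepts" the connection
    done.wait()  # demo only: hold the lock until every worker has tried
    if got_it:
        accept_mutex.release()

start, done = threading.Barrier(4), threading.Barrier(4)
threads = [threading.Thread(target=worker, args=(i, start, done))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(winners))  # → 1: exactly one worker accepted the connection
```

Whichever worker wins varies from run to run, but the count is always one, which is the whole point of the lock.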

This process model brings several benefits: the processes are independent, so if one worker exits unexpectedly, the others are unaffected. Moreover, independent processes avoid many unnecessary lock operations, which improves processing efficiency and makes development and debugging easier.

Because workers compete to accept client connections, a problem can arise: one worker may win most of the competitions while the others sit idle, and the busy process may end up discarding connections it cannot respond to in time, requests that an idle worker could have handled. This kind of imbalance needs to be avoided, especially in high-reliability web server environments.

To address this, Nginx uses a value, ngx_accept_disabled, to control whether a given worker competes for the accept_mutex lock at all, and hence whether it tries to obtain accept events.

The value of ngx_accept_disabled is one eighth of a single worker's total connection capacity, minus the number of free connections it has left.
When ngx_accept_disabled is greater than 0, the worker does not try to acquire the accept_mutex lock; instead, it decrements ngx_accept_disabled by 1. So each time execution reaches this point, the value drops by 1 until it is no longer positive. Not acquiring the lock means giving up the chance to accept new connections. Clearly, the fewer free connections a worker has, the larger ngx_accept_disabled is, and the more opportunities it gives up, leaving the other processes more chances to acquire the lock. By having loaded workers refrain from accepting, the connection pools of the other processes get used, and in this way Nginx keeps connections balanced across its processes.
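The arithmetic above is easy to check with a short sketch (Python mirroring the formula, with illustrative numbers, not actual nginx code):

```python
def accept_disabled(total_connections, free_connections):
    # nginx: ngx_accept_disabled = total / 8 - free. A positive value
    # means the worker is running low on free connections.
    return total_connections // 8 - free_connections

def should_try_accept_mutex(state):
    # While disabled > 0, skip the accept_mutex race this cycle and
    # decrement, yielding accept opportunities to less-loaded workers.
    if state["disabled"] > 0:
        state["disabled"] -= 1
        return False
    return True

# A worker configured for 1024 connections with only 100 still free:
state = {"disabled": accept_disabled(1024, 100)}
print(state["disabled"])  # 1024 // 8 - 100 = 28

skipped = 0
while not should_try_accept_mutex(state):
    skipped += 1
print(skipped)  # → 28: the worker sits out 28 rounds, then competes again
```

A worker with plenty of free connections gets a zero or negative value and competes every round; a nearly saturated one backs off in proportion to its load.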

Okay, that's it for this article. If you have any questions, feel free to leave a comment.

Copyright notice
This article was created by [Bobo roast duck]. Please include a link to the original when reposting. Thanks!
