High-performance Nginx HTTPS tuning - how to speed up HTTPS by about 30%

Author | Mr. Carla

Source | https://kalasearch.cn/blog/high-performance-nginx-tls-tuning/

Why optimize Nginx HTTPS latency


Nginx is one of the most common servers, frequently used as a load balancer, reverse proxy, and gateway. A properly configured single Nginx instance should be able to handle roughly 50K to 80K requests per second while keeping CPU load under control.

In many cases, however, throughput is not the first thing to optimize. For Kala Search, for example, we want users to experience search as instantaneous: every search request must return end-to-end within 100ms - 200ms, so that searching never feels like "lagging" or "loading". For us, optimizing request latency is therefore the most important direction.

In this article, we first look at which TLS settings in Nginx may affect request latency and how to adjust them for maximum speed-up. Then, using the tuned Nginx server of Kala Search as an example, we share how we adjusted the Nginx TLS/SSL settings and sped up first searches by about 30%. We discuss in detail what we optimized at each step, the motivation behind each change, and its effect. Hopefully this helps other engineers facing similar problems.

As usual, the Nginx configuration file used in this article is available on GitHub, and you are welcome to use it: High-performance Nginx HTTPS tuning (https://github.com/Kalasearch/high-performance-nginx-tls-tuning)

TLS handshakes and latency


Developers often assume that, unless you absolutely care about performance, there is no need to understand low-level details and optimizations. That is fair in many situations, because complex low-level logic usually has to be wrapped up so that the complexity of higher-level application development stays manageable. For example, if you just need to build an app or a website, you probably do not need to pay attention to assembly details or to how the compiler optimizes your code; on Apple or Android platforms, much of that optimization already happens underneath.

So what does understanding low-level TLS have to do with latency optimization at the Nginx application layer?

The answer is that, most of the time, optimizing network latency really means reducing the number of round trips between the user and the server. Due to physical limits, even at the speed of light, data traveling from Beijing to Yunnan takes roughly 20 milliseconds; if your data has to travel back and forth between Beijing and Yunnan many times, that latency is unavoidable.

So if you need to optimize request latency, knowing a bit about what happens at the network layer helps a lot, and often makes it easy to see why a particular optimization works. In this article we will not go too deep into the details of TCP or TLS; if you are interested, the book High Performance Browser Networking is a good reference.

For example, the figure below shows what has to travel over the wire before any application data is transferred when your service has HTTPS enabled.


As you can see, before your user receives the data they actually need, the underlying packets have already made three round trips between the user and your server.

Assuming each round trip takes 28 milliseconds, the user has already waited 224 milliseconds before receiving any data.

And 28 milliseconds per round trip is actually a very optimistic assumption; with China Telecom, China Unicom, China Mobile and all kinds of complex network conditions in between, the latency between user and server is far less predictable. On top of that, a typical web page needs dozens of requests, and those requests may not all run in parallel, so dozens of requests times 224 milliseconds can easily mean several seconds before the page opens.

So, in principle, we want to minimize the number of round trips between the user and the server whenever possible. For each setting below, we will discuss why it may help reduce round trips.

TLS settings in Nginx


So, in the Nginx configuration, which parameters can we adjust to reduce latency?

Enable HTTP/2

The HTTP/2 standard evolved from Google's SPDY and brings many performance improvements over HTTP 1.1, especially a significant latency reduction when multiple requests need to run in parallel. On today's web, an average page needs dozens of requests. In the HTTP 1.1 era, all a browser could do was open a few extra connections (usually 6) and issue requests in parallel, whereas HTTP/2 can multiplex parallel requests over a single connection. Because HTTP/2 natively supports parallel requests, it greatly reduces the round trips spent on sequentially executed requests, so it is the first thing to consider enabling.

If you want to see the speed difference between HTTP 1.1 and HTTP/2 for yourself, try https://www.httpvshttps.com/. In my network test, HTTP/2 was 66% faster than HTTP 1.1.


Enabling HTTP/2 in Nginx is simple: just add the http2 flag.

listen 443 ssl;

#  Change it to
listen 443 ssl http2;
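
Note: on newer Nginx releases (roughly 1.25.1 and later), the http2 flag on listen is deprecated in favour of a standalone directive. If you are running a recent version, the same setting can be expressed in the newer syntax:

listen 443 ssl;
# enable HTTP/2 for this server block (newer syntax)
http2 on;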

If you are worried about users on older clients, such as Python's requests library, which does not support HTTP/2 for now, don't be. If a client does not support HTTP/2, the connection automatically falls back to HTTP 1.1, which remains backward compatible. Users on old clients are therefore unaffected, while new clients get to enjoy the HTTP/2 features.

How to confirm that your website or API has HTTP/2 enabled

Open the developer tools in Chrome and enable the Protocol column; you can then see which protocol each request used. If the value in the Protocol column is h2, the request went over HTTP/2.


Alternatively, you can use curl directly; if the returned status line starts with HTTP/2, then HTTP/2 is enabled.

~ curl --http2 -I https://kalasearch.cn
HTTP/2 403
server: Tengine
content-type: application/xml
content-length: 264
date: Tue, 22 Dec 2020 18:38:46 GMT
x-oss-request-id: 5FE23D363ADDB93430197043
x-oss-cdn-auth: success
x-oss-server-time: 0
x-alicdn-da-ups-status: endOs,0,403
via: cache13.l2et2[148,0], cache10.l2ot7[291,0], cache4.us13[360,0]
timing-allow-origin: *
eagleid: 2ff6169816086623266688093e

Adjust cipher priority

Try to prefer ciphers that are newer and faster; this helps reduce latency:

# manually specify the cipher list
ssl_prefer_server_ciphers on;  # prefer the server's cipher list, to avoid old and slow ciphers
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';

Enable OCSP Stapling

In China, this is probably the latency optimization with the biggest impact for services and websites that use Let's Encrypt certificates. If OCSP stapling is not enabled, the client sometimes needs to verify the certificate's revocation status when connecting to your server. For reasons that are not entirely clear, Let's Encrypt's validation servers are not always reachable smoothly, which can add a delay of several seconds or even more than ten seconds; the problem is especially severe on iOS devices.

There are two ways to solve this problem:

  1. Stop using Let's Encrypt; for example, you can switch to the free DV certificate offered by Alibaba Cloud
  2. Enable OCSP stapling

With OCSP stapling enabled, the client can skip the certificate-verification step. Saving one round trip, especially one whose network conditions you cannot control, may reduce your latency substantially.

Enabling OCSP stapling in Nginx is very simple; just set:

ssl_stapling on;                  # staple the OCSP response to the TLS handshake
ssl_stapling_verify on;           # have Nginx verify the OCSP response it fetches
ssl_trusted_certificate /path/to/full_chain.pem;  # CA chain used to verify the OCSP response
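
One detail worth noting (not shown in the original snippet): Nginx fetches the OCSP response from the certificate authority's responder itself, so it needs a DNS resolver for that lookup. A minimal sketch, using public DNS servers as placeholders (swap in whatever resolvers suit your environment):

# DNS servers Nginx uses to resolve the OCSP responder's hostname
resolver 223.5.5.5 114.114.114.114 valid=300s;
resolver_timeout 5s;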

How to check whether OCSP stapling is enabled

You can test it with the following command:

openssl s_client -connect test.kalasearch.cn:443 -servername kalasearch.cn -status -tlsextdebug < /dev/null 2>&1 | grep -i "OCSP response"

If the result looks like this:

OCSP response:
OCSP Response Data:
    OCSP Response Status: successful (0x0)
    Response Type: Basic OCSP Response

then OCSP stapling is already enabled. For background, see the article on why HTTPS is slow on iPhone.

Adjust ssl_buffer_size

ssl_buffer_size controls the buffer size used when sending data over TLS; the default is 16k. The smaller the value, the lower the latency, but the per-record overhead of headers and so on becomes proportionally larger; the larger the value, the higher the latency and the smaller the relative overhead.

So if your service is a REST API or a website, lowering this value can reduce latency and TTFB; if your server mainly transfers large files, keeping it at 16k is fine. For a discussion of this value and of TLS record size in general, see: Best value for nginx's ssl_buffer_size option

For a website or REST API, the recommended value is 4k, but the optimal value obviously varies with your data, so try different values between 2k and 16k. Adjusting this value in Nginx is also very easy:

ssl_buffer_size 4k;

Enable SSL Session cache

Enabling the SSL session cache greatly reduces repeated TLS verification and cuts the round trips of the TLS handshake for returning connections. Although the session cache takes some memory, 1MB can cache roughly 4000 connections, which is extremely cost-effective. At the same time, for most websites and services, reaching 4000 simultaneous connections already requires a very large user base, so it is safe to turn on.

Here ssl_session_cache is set to use 50MB of memory, and ssl_session_timeout sets a 4-hour session expiry:

# enable the SSL session cache to speed up returning visitors
ssl_session_cache   shared:SSL:50m;  # 1m ~= 4000 connections
ssl_session_timeout 4h;
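
Putting the settings above together, here is a minimal sketch of a complete server block with all of them applied. The domain, certificate paths, resolver, and upstream are placeholders; adjust them to your own deployment:

server {
    listen 443 ssl http2;                 # TLS + HTTP/2
    server_name example.com;              # placeholder domain

    ssl_certificate     /path/to/full_chain.pem;   # placeholder certificate paths
    ssl_certificate_key /path/to/private.key;

    # prefer newer, faster ciphers
    ssl_prefer_server_ciphers on;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';

    # OCSP stapling: skip the client-side certificate-status round trip
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /path/to/full_chain.pem;
    resolver 223.5.5.5 valid=300s;

    # smaller TLS records for lower TTFB (suits APIs and websites)
    ssl_buffer_size 4k;

    # session cache and timeout for returning visitors
    ssl_session_cache   shared:SSL:50m;
    ssl_session_timeout 4h;

    location / {
        proxy_pass http://backend;       # placeholder upstream
    }
}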

How Kala Search reduced request latency by 30%


Kala Search is a Chinese counterpart to Algolia, dedicated to helping developers quickly build instant search (search-as-you-type) and to being the fastest and easiest search-as-a-service in China.

Once a developer integrates, all search requests go through the Kala API and are returned directly to end users. To give users an instant search experience, we need to return results within a very short time after each keystroke (usually 100ms to 200ms). So each search needs to complete within about 50 milliseconds of engine processing time and 200 milliseconds end to end.

We built a movie-search demo using Douban movie data; if you are curious, you are welcome to try the instant search, for example searching for "Infernal Affairs" or "A Chinese Odyssey" to get a feel for the speed and relevance: https://movies-demo.kalasearch.cn/

With a latency budget of only 100 to 200 milliseconds per request, we have to account for every source of delay.

Simplifying a bit, the latency each search request experiences is:


Total latency = user request reaching the server (T1) + reverse proxy processing (Nginx, T2) + data center latency (T3) + server processing (Kala engine, T4) + response returning to the user (T3 + T1)

Of these, T1 depends only on the physical distance between the user and the server, and T3 is tiny (see Jeff Dean's latency numbers) and can be ignored.

So what we can actually control is basically T2 and T4: the Nginx processing time and the Kala engine's processing time.

Here Nginx acts as a reverse proxy, handling security, rate limiting, and TLS, while the Kala engine is an inverted-index engine based on Lucene.

The first possibility we considered: is the latency coming from the Kala engine?

In the Grafana dashboard below, we saw that, apart from a few occasional slow queries, the P95 server-side processing latency of searches was below 20 milliseconds. Compared with an Elasticsearch benchmark on the same data set, whose P95 search latency was around 200 milliseconds, we could rule out the engine as the bottleneck.


Meanwhile, in Alibaba Cloud monitoring, we set up probes across the country that send search requests to the Kala servers. We eventually found that the SSL handling time often exceeded 300 milliseconds; in other words, in the T2 step, just handling the TLS handshake and related work had already used up the entire latency budget for the request.

We also noticed that searches were especially slow on Apple devices, particularly on a device's first visit, so we made a rough guess that the problem was the Let's Encrypt certificate we were using.

We adjusted the Nginx settings following the steps above, summarized those steps, and wrote this article. After tuning the Nginx TLS settings, the SSL time dropped from about 140ms on average to about 110ms (measured from China Unicom and China Mobile test points across all provinces), and the problem of slow first visits on Apple devices disappeared.


After the adjustments, the search latency measured from test points nationwide dropped to around 150 milliseconds.

Summary


Tuning the TLS settings in Nginx has a very large impact on the latency of HTTPS services and websites. This article summarized the TLS-related settings in Nginx, discussed in detail how each setting may affect latency, and gave tuning suggestions. In a follow-up we will continue with the concrete improvements of HTTP/2 over HTTP 1.x, and the pros and cons of using HTTP/2 for REST APIs, so stay tuned.
