Biography of a tadpole: a front-end rookie's practice and principles for speeding up an interface by 60%

The first tadpole 2020-11-10 08:47:49


Background

I haven't written anything in a long time. I went silent for half a year.

Persistent malaise, with intermittent flare-ups.

Moody every single day, spending each one in confusion and anxiety.

I have to admit it: I'm kind of a waste.

As a junior front-end engineer,

I recently had to deal with an interface that is more than ten years old.

It has inherited layer upon layer of supremely complex logic.

Legend has it that a single call once sent the CPU load of the service soaring to 90%.

It is exactly the kind of senile old patient that needs special care.

Let's appreciate how long this interface takes:

the average call is over 3s,

which leaves the page stuck on a spinning loader.

After all kinds of in-depth analysis and professional consultation,

the conclusion was: beyond treatment.

Lu Xun once said in A Madman's Diary: "The only things that can beat me are women and alcohol, never a bug."

Whenever things look dark,

this sentence always shows me the light.

So this time we have to get tough.

I decided to build a Node.js broker layer

and optimize the interface in the following three ways:

  • Load on demand -> graphQL
  • Data caching -> redis
  • Polling for updates -> schedule

Code: github

Load on demand -> graphQL

The old "Tianxiu" interface (roughly, "the heavenly show-off") has a problem: every request returns 1000 records, each record carries hundreds of fields, and the front end actually uses only about 10 of them.

So how do we extract an arbitrary n fields out of 100+? That's where graphQL comes in.

graphQL loads data on demand in just three steps:

  • Define the data pool (root)
  • Describe the data structure of the pool (schema)
  • Customize the query (query)

Define the data pool

For the classic scenario of a loser chasing a goddess, we define a data pool as follows:

// The data pool
var root = {
  girls: [
    {
      id: 1,
      name: 'Goddess No. 1',
      iphone: '12345678910', // stored as a string: the number exceeds GraphQL's 32-bit Int range
      weixin: 'xixixixi',
      height: 175,
      school: 'University of Cambridge',
      wheel: [
        { name: 'Spare tire No. 1', money: '240,000 yuan' },
        { name: 'Spare tire No. 2', money: '260,000 yuan' }
      ]
    },
    {
      id: 2,
      name: 'Goddess No. 2',
      iphone: '12345678910',
      weixin: 'hahahahah',
      height: 168,
      school: 'Harvard University',
      wheel: [
        { name: 'Spare tire No. 3', money: '800,000 yuan' },
        { name: 'Spare tire No. 4', money: '2,000,000 yuan' }
      ]
    }
  ]
}

It holds all the information about the two goddesses: name, phone, WeChat, height, school, spare-tire lineup, and so on.

Next we describe this data structure.

Describe the data structure in the data pool

const { buildSchema } = require('graphql');

// Describe the data structure: the schema
var schema = buildSchema(`
  type Wheel {
    name: String
    money: String
  }
  type Info {
    id: Int
    name: String
    iphone: String
    weixin: String
    height: Int
    school: String
    wheel: [Wheel]
  }
  type Query {
    girls: [Info]
  }
`);

The code above is the schema describing the goddess data.

First, type Query defines a query for goddess information: it contains girls, a list of Info records, hence the type [Info].

Then type Info describes every dimension of a girl's information: name (name), phone (iphone), WeChat (weixin), height (height), school (school), and the spare-tire lineup (wheel).

Define query rules

With the description (schema) of the goddess in hand, we can request any custom combination of her information.

For example, to get to know a goddess I only need her name (name) and WeChat id (weixin). The query code is as follows:

const { graphql } = require('graphql');

// Define the query: only the fields we need (name and weixin)
const query = `
  {
    girls {
      name
      weixin
    }
  }
`;

// Query the data pool (await needs an async context; graphql v15-style positional arguments)
(async () => {
  const result = await graphql(schema, query, root);
  console.log(result.data);
})();

The filtered result looks like this:
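A minimal sketch of the expected shape, based on the data pool above (result.data keeps only the name and weixin of each girl):

{
  girls: [
    { name: 'Goddess No. 1', weixin: 'xixixixi' },
    { name: 'Goddess No. 2', weixin: 'hahahahah' }
  ]
}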

Another example: to take things further with the goddess, I need her spare-tire information, to compare my own family assets against her spare tires' (wheel) net worth (money) and analyze whether I can win priority in mate selection. The query code is as follows:

const { graphql } = require('graphql');

// Define the query: each girl's name plus the money of her spare tires
const query = `
  {
    girls {
      name
      wheel {
        money
      }
    }
  }
`;

// Query the data pool (await needs an async context; graphql v15-style positional arguments)
(async () => {
  const result = await graphql(schema, query, root);
  console.log(result.data);
})();

The filtered result looks like this:
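A minimal sketch of the expected shape, based on the data pool above (each girl keeps only her name and the money of each spare tire):

{
  girls: [
    { name: 'Goddess No. 1', wheel: [{ money: '240,000 yuan' }, { money: '260,000 yuan' }] },
    { name: 'Goddess No. 2', wheel: [{ money: '800,000 yuan' }, { money: '2,000,000 yuan' }] }
  ]
}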

The goddess example shows how graphQL loads data on demand.

Mapping this onto our actual business: every record returned by the Tianxiu interface carries 100+ fields; we configure the schema so that only the 10 fields we need are fetched, which avoids transferring the other 90 unnecessary fields.

Another advantage of graphQL is flexible configuration: this page needs 10 fields, another page needs 5, and the nth page needs an extra x fields.

The traditional approach would force us to build n separate interfaces; now a single interface with differently configured schema queries covers every case, as sketched below.
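As a rough illustration (not from the original post) of how the Node broker layer might expose this single configurable endpoint, here is a minimal sketch assuming express and express-graphql; the named export graphqlHTTP applies to recent express-graphql versions:

const express = require('express');
const { graphqlHTTP } = require('express-graphql'); // named export in recent express-graphql versions

const app = express();

// One endpoint; each page sends whatever query it needs (10 fields, 5 fields, ...)
app.use('/graphql', graphqlHTTP({
  schema,          // the schema built above with buildSchema
  rootValue: root, // the data pool defined above
  graphiql: true   // in-browser query UI, handy for debugging
}));

app.listen(4000);

Each client then decides its own field list in the query body, and the broker itself never has to change.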

Sentiment

In real life, us simps badly lack graphQL's load-on-demand mindset.

Scumbags and scum-women: each takes only what they need.

Your true feelings count for nothing in front of a goddess.

You have to learn to serve up exactly what she likes.

Flash the car key as your opening move; no car, no charm.

"I have an heirloom chromosome I'd like to share with you tonight."

Just go for it; if it doesn't work, move on to the next one.

Straight to the point, simple and crude.

Data caching -> redis

The second optimization: a redis cache.

The old Tianxiu interface calls three other old interfaces, and calls them serially. It is extremely time- and resource-consuming; just reading the code makes your scalp tingle.

We use redis to cache the aggregated data of the Tianxiu interface. The next call reads straight from the cache and skips the whole expensive chain. The simplified code is as follows:

const redis = require("redis");
const { promisify } = require("util");

// Connect to the redis service (node_redis v3 style: port, host)
const client = redis.createClient(6379, '127.0.0.1');

// Promisify the redis methods so we can use async/await
const getAsync = promisify(client.get).bind(client);
const setAsync = promisify(client.set).bind(client);

async function list() {
  // Read the cache first; if it is empty, fall back to the Tianxiu interface
  let result = await getAsync('cache');
  if (!result) {
    // Pull the slow legacy interface (placeholder name for the real call)
    const data = await fetchTianxiuInterface();
    result = data;
    // Write the result back into the cache
    await setAsync('cache', data);
  }
  return result;
}

list();

getAsync first reads the cached data from redis. If data is there, we return it directly and skip the interface call; if not, we call the Tianxiu interface and then setAsync writes the result into the cache for the next call. Because redis stores strings, setting the cache actually needs JSON.stringify(data) (and JSON.parse on the way out); it is omitted above for readability, and the detailed code is on github.
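For completeness, a minimal sketch of the serialization step mentioned above, reusing the same getAsync/setAsync helpers and the placeholder fetchTianxiuInterface:

async function listWithSerialization() {
  const cached = await getAsync('cache');
  if (cached) {
    // redis stores strings, so parse back into an object
    return JSON.parse(cached);
  }
  const data = await fetchTianxiuInterface(); // placeholder for the legacy aggregate call
  // stringify before writing, because redis values are strings
  await setAsync('cache', JSON.stringify(data));
  return data;
}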

Putting the data in a redis cache has several benefits:

it can be reused across interfaces and shared across machines.

This is the legendary cloud backup.

The success rate of chasing one goddess is 1%.

Chase 100 goddesses at the same time and the probability of landing at least one is 1 - 0.99^100, about 63%; a simp, of course, rounds that up to 100%.

Lu Xun also said in A Madman's Diary: "Lick one and you're a simp; lick a hundred and you're a wolf."

So, do you want to be a simp or a wolf?

Come on, get the cache going and put redis to work.

Polling for updates -> schedule

The last optimization: polling updates with schedule.

A goddess's spare tires wear out after long use, so she swaps in a fresh batch at regular intervals to let new blood in and discover new joy.

The cache is the same: it needs to be refreshed regularly to stay consistent with the data source. The code is as follows:

const schedule = require('node-schedule');

// Refresh the cache at minute 0 of every hour
// (node-schedule cron format: second minute hour day month dayOfWeek)
schedule.scheduleJob('0 0 * * * *', async () => {
  const data = await fetchTianxiuInterface(); // placeholder for the legacy aggregate call
  // Write the fresh data into the redis cache
  await setAsync('cache', data);
});

We use the node-schedule library to poll and refresh the cache. The expression '0 0 * * * *' (node-schedule puts seconds first) means the update logic runs at minute 0 of every hour: the freshly fetched data is written into the cache, so other interfaces and machines that read the cache always get reasonably recent data. That is the benefit of a shared cache plus polling updates.
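If the six-field cron string feels error-prone, node-schedule also accepts a RecurrenceRule object; a minimal equivalent sketch, reusing the same placeholder helpers as above:

const schedule = require('node-schedule');

// Fire at minute 0 of every hour, same as the cron string above
const rule = new schedule.RecurrenceRule();
rule.minute = 0;

schedule.scheduleJob(rule, async () => {
  const data = await fetchTianxiuInterface(); // placeholder for the legacy aggregate call
  await setAsync('cache', JSON.stringify(data));
});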

Back in my early simp years, I pushed the polling mechanism to its limits.

Every day I went down my whitelist of goddesses and polled them with messages on a fixed schedule.

The infinite-loop, cloud-kneeling three-piece set:

  • "Hey baby, have you missed me lately?"
  • "Good morning, baby."
  • "Good night baby, mua."

The goddess still never noticed me,

but I stayed on call for her anyway!

Ending

After the three optimizations above,

the interface request time dropped from over 3s to 860ms.

The code here is the logic abstracted and simplified out of the real business.

Real business scenarios are far more complex: sharded data storage, master-slave synchronization with read/write splitting, high-concurrency synchronization strategies, and so on.

Every one of those modules is obscure,

just as every goddess is out of reach.

A loser can defeat every bug, but never conquer her heart.

Wounded, all he can do is drink alone in the middle of the night.

But every time, I dream of the goddess opening the page I built,

amazed by how buttery smooth the experience is,

her soul sublimating at the peak of the moment.

In that moment,

I feel like I can go again.

( End )

Code: github

FFCreator is a lightweight, flexible short-video processing library built by our team. Just add a few images or some text and you can quickly generate a cool, TikTok-style short video. Github: https://github.com/tnfe/FFCreator . Stars are welcome.
Copyright notice
This article was created by [The first tadpole]. Please include a link to the original when reposting. Thank you.
