Understanding TCP, HTTP, Sockets, and Socket Connection Pools

Web Front-End Learning Circle, 2021-02-23 10:36:23


Preface

As developers, we constantly hear the terms HTTP protocol, TCP/IP protocol, UDP protocol, Socket, Socket long connection, and Socket connection pool, but not everyone can explain the relationships, differences, and principles behind them. This article starts from the basics of network protocols and works step by step up to Socket connection pools, explaining how they all relate.

Seven layer network model

Let's start with the layered model of network communication: the seven-layer model, also known as the OSI (Open System Interconnection) model. From bottom to top, the layers are: physical layer, data link layer, network layer, transport layer, session layer, presentation layer, and application layer. All of them serve communication. The figure below shows some of the protocols and hardware corresponding to each layer.

From the figure above you can see that the IP protocol corresponds to the network layer, the TCP and UDP protocols correspond to the transport layer, and the HTTP protocol corresponds to the application layer. The OSI model has no Socket layer at all; what a Socket actually is will be explained in detail later with code.

TCP and UDP connections

You may have heard many claims about TCP and UDP at the transport layer: some people say TCP is safe and UDP is not, or that UDP transmits faster than TCP. Why is that? Let's first analyze the process of establishing a TCP connection, and then explain the differences between UDP and TCP.

TCP's three-way handshake and four-way teardown

We know that establishing a TCP connection takes a three-way handshake and closing one takes a four-way teardown. What do those handshakes and teardown steps actually do, and how do they do it?

First handshake: establishing the connection. The client sends a connection-request segment with the SYN flag set to 1 and Sequence Number x; the client then enters the SYN_SEND state and waits for the server's confirmation.

Second handshake: the server receives the client's SYN segment and must acknowledge it, setting Acknowledgment Number to x+1 (Sequence Number + 1). At the same time, it sends its own SYN request, with the SYN flag set to 1 and Sequence Number y. The server puts all of this into a single segment (the SYN+ACK segment) and sends it to the client; the server then enters the SYN_RECV state.

Third handshake: the client receives the server's SYN+ACK segment, sets Acknowledgment Number to y+1, and sends an ACK segment to the server. Once this segment is sent, both client and server enter the ESTABLISHED state, completing the TCP three-way handshake.

With the three-way handshake finished, the client and server can begin transmitting data. That is a general overview of the TCP three-way handshake. When communication ends, the client and server disconnect, which takes four teardown confirmations.

First teardown: host 1 (which can be the client or the server) sets Sequence Number and Acknowledgment Number and sends a FIN segment to host 2; host 1 then enters the FIN_WAIT_1 state. This means host 1 has no more data to send to host 2.

Second teardown: host 2 receives the FIN segment from host 1 and replies with an ACK segment whose Acknowledgment Number is the received Sequence Number plus 1; host 1 enters the FIN_WAIT_2 state. Host 2 is telling host 1 that it "agrees" to the close request.

Third teardown: host 2 sends a FIN segment to host 1 requesting to close the connection, and at the same time enters the LAST_ACK state.

Fourth teardown: host 1 receives host 2's FIN segment, sends host 2 an ACK segment, and then enters the TIME_WAIT state. Once host 2 receives host 1's ACK segment, it closes the connection. If host 1 waits 2MSL and still receives nothing, it concludes that the server side closed normally, and host 1 can then close the connection too.

So establishing and closing a single TCP connection takes at least 7 exchanges, not counting data transfer itself, whereas UDP needs no three-way handshake or four-way teardown.

The differences between TCP and UDP

1. TCP is connection-oriented. Although the insecure, unstable nature of networks means that no number of handshakes can fully guarantee a reliable connection, TCP's three-way handshake guarantees reliability to a minimum (and in practice a large) extent. UDP, by contrast, is connectionless: it does not establish a connection with the peer before transmitting data, the receiver sends no acknowledgment for received data, and the sender has no idea whether the data arrived correctly, let alone any retransmission. UDP is therefore a connectionless, unreliable transport protocol.

2. Precisely because of the characteristics in point 1, UDP has less overhead and a higher data transfer rate: since sending and receiving require no acknowledgments, UDP has better real-time behavior. Knowing the difference between TCP and UDP, it is easy to see why file transfer over MSN, which uses TCP, is slower than over QQ, which uses UDP. That doesn't mean QQ's communication is unsafe, because programmers can verify UDP data delivery manually, for example by numbering each packet on the sending side and verifying it on the receiving side. Even so, because UDP does not build anything like TCP's "three-way handshake" into the underlying protocol, it achieves a transmission efficiency that TCP cannot match.

Common questions

Here are some questions we often hear about the transport layer:

1. What is the maximum number of concurrent TCP connections a server can handle?

There is a common misconception about the maximum number of concurrent TCP connections on a server: "because the maximum port number is 65535, a server can theoretically carry at most 65535 concurrent TCP connections." First, understand what identifies a TCP connection: client IP, client port, server IP, server port. So for a TCP server process, the number of clients it can serve simultaneously is not limited by available port numbers; in theory, a single server port can accept a number of connections equal to (number of global IP addresses) × (number of ports per machine). The actual number of concurrent connections is limited by the number of open files Linux allows, which is configurable and can be very large, so in practice the limit is system performance. Check the maximum number of file handles with `ulimit -n`, and change it with `ulimit -n xxx`, where xxx is the number you want. You can also change the system configuration:

#vi /etc/security/limits.conf
*  soft  nofile  65536
*  hard  nofile  65536

2. Why must the TIME_WAIT state wait 2MSL before returning to CLOSED?

Since both sides have agreed to close the connection and all four teardown segments have been coordinated and sent, you might think the socket could go straight back to CLOSED (just as it goes from SYN_SEND to ESTABLISHED). But because we must assume the network is unreliable, there is no guarantee that the final ACK you sent was received. The peer's socket in the LAST_ACK state may time out waiting for that ACK and retransmit its FIN, so the purpose of the TIME_WAIT state is to be able to resend an ACK that may have been lost.

3. What problems does waiting 2MSL in TIME_WAIT before returning to CLOSED cause?

After two parties establish a TCP connection, the party that actively closes it enters the TIME_WAIT state, which lasts two MSLs, i.e. roughly 1 to 4 minutes (4 minutes on Windows). The side that enters TIME_WAIT is usually the client, and each connection in TIME_WAIT occupies a local port. A machine has at most 65536 port numbers, so if you run a stress test on a single machine, simulating tens of thousands of client requests in a loop of short-lived connections to the server, the machine will accumulate a large number of TIME_WAIT sockets, and subsequent short connections will fail with an "address already in use: connect" error. If you use Nginx as a reverse proxy, you also need to consider TIME_WAIT. When the system shows a large number of connections in the TIME_WAIT state, you can mitigate this by tuning kernel parameters:

vi /etc/sysctl.conf

Edit the file and add the following:

net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 30

Then run `/sbin/sysctl -p` to make the parameters take effect.

net.ipv4.tcp_syncookies = 1 enables SYN cookies: when the SYN backlog queue overflows, cookies are used to handle new connections, which protects against small-scale SYN flood attacks. The default is 0 (disabled).

net.ipv4.tcp_tw_reuse = 1 enables reuse, allowing sockets in TIME-WAIT to be reused for new TCP connections. The default is 0 (disabled).

net.ipv4.tcp_tw_recycle = 1 enables fast recycling of TIME-WAIT sockets. The default is 0 (disabled).

net.ipv4.tcp_fin_timeout changes the system's default FIN timeout.

The HTTP protocol

On the relationship between the TCP/IP and HTTP protocols, there is a fairly accessible explanation online: "When we transmit data, we can use just the (transport-layer) TCP/IP protocol, but without an application layer we cannot interpret the content of the data. To make sense of the data being transmitted, you must use an application-layer protocol. There are many of them, such as HTTP, FTP, and TELNET, and you can also define your own application-layer protocol."

HTTP is the Hypertext Transfer Protocol, the foundation of the Web and one of the most common protocols used by mobile clients. The Web uses HTTP as its application-layer protocol to encapsulate text information, then uses TCP/IP as the transport-layer protocol to send it onto the network.

Because an HTTP connection is actively released after each request, HTTP connections are "short connections". To keep a client program "online", it must keep issuing connection requests to the server. A common pattern is for the client, even when it needs no data right now, to send a "keep-alive" request to the server at fixed intervals; the server replies to each one, acknowledging that it knows the client is online. If the server receives no request from the client for a long time, the client is considered "offline"; if the client receives no reply from the server for a long time, the network is considered down.

Here is a simple HTTP POST request with application/json content:

POST / HTTP/1.1
Host: 127.0.0.1:9017
Content-Type: application/json
Cache-Control: no-cache

{"a":"a"}

About Sockets

By now we know that TCP/IP is just a protocol stack. Like an operating system's internals, it must have a concrete implementation and also expose interfaces for external use. Just as an operating system provides standard programming interfaces, such as the Win32 API, the TCP/IP stack must also provide a programming interface, and that is the Socket. Note that Sockets have no necessary tie to TCP/IP: the Socket programming interface was designed to accommodate other network protocols as well. So a Socket is simply a convenient abstraction over the TCP/IP stack, exposing a handful of basic function interfaces such as create, listen, accept, connect, read, and write.

Every language has libraries for creating Socket servers and clients. The following example shows how to create a server and a client in Node.js:

Server side :

const net = require('net');
const server = net.createServer();
server.on('connection', (client) => {
  client.write('Hi!\n'); // the server writes to the client with write()
  client.write('Bye!\n');
  //client.end(); // the server could end the session here
});
server.listen(9000);

The service listens on port 9000.

Now send an HTTP request with curl and connect with telnet from the command line:

$ curl http://127.0.0.1:9000
Bye!

$telnet 127.0.0.1 9000
Trying 192.168.1.21...
Connected to 192.168.1.21.
Escape character is '^]'.
Hi!
Bye!
Connection closed by foreign host.

Notice that curl only processed one of the messages.

Client:

const net = require('net');
const client = new net.Socket();
client.connect(9000, '127.0.0.1', function () {
});
client.on('data', (chunk) => {
  console.log('data', chunk.toString())
  //data Hi!
  //Bye!
});

Socket long connections

A so-called long connection means multiple packets can be sent consecutively over one TCP connection. While the TCP connection is held open, if no packets are being sent, both sides need to send probe packets (heartbeats) to maintain the connection, and you generally need to maintain this online state yourself. A short connection means that when the two parties have data to exchange, they establish a TCP connection, and once the data is sent, they tear the connection down. HTTP works this way: connect, request, close, and the whole process is short; if the server receives no request within some period, it can close the connection. A long connection is "long" only relative to a short connection: it means keeping the client and server connected for an extended time.

A typical short-connection flow is:
connect → transfer data → close the connection;

while a long connection is usually:
connect → transfer data → keep alive (heartbeat) → transfer data → keep alive (heartbeat) → … → close the connection;

When should you use long connections versus short connections?

Long connections are mostly used for frequent, point-to-point communication where the number of connections cannot be too large. Every TCP connection requires a three-way handshake, which takes time; if every operation connected first and then did its work, processing would be much slower. With a long connection, you keep the connection open after each operation and send packets directly for the next one, without establishing a new TCP connection. For example, database connections are long connections; using frequent short connections for that kind of communication causes socket errors, and frequent Socket creation is itself a waste of resources.

What is a heartbeat packet, and why is it needed?

A heartbeat packet is a self-defined command word with which the client and server periodically inform each other of their own status. It is sent at fixed intervals, like a heartbeat, hence the name. Sending and receiving data on the network are both done through a Socket, but if the socket has already been broken (for example, one side dropped off the network), sending and receiving will inevitably fail. How do you judge whether the socket is still usable? You need a heartbeat mechanism. TCP actually has a built-in mechanism called keepalive: if you enable it, TCP will send the configured number of probes (say, 2) within the interval you set (say, 3 seconds), and these probes do not interfere with your own application protocol. You can also define your own: a so-called "heartbeat" sends a custom structure (heartbeat packet or frame) on schedule so that the peer knows you are "online", ensuring the validity of the link.

Implementation:
Server side:

const net = require('net');

let clientList = [];
const heartbeat = 'HEARTBEAT'; // heartbeat payload; choose content that cannot conflict with normal data

const server = net.createServer();
server.on('connection', (client) => {
  client.name = client.remoteAddress + ':' + client.remotePort; // used when logging removals below
  console.log('The client establishes a connection:', client.name);
  clientList.push(client);
  client.on('data', (chunk) => {
    let content = chunk.toString();
    if (content === heartbeat) {
      console.log('Received a heartbeat packet from the client');
    } else {
      console.log('Received data from client:', content);
      client.write('Data on the server: ' + content);
    }
  });
  client.on('end', () => {
    console.log('Client received end');
    clientList.splice(clientList.indexOf(client), 1);
  });
  client.on('error', () => {
    clientList.splice(clientList.indexOf(client), 1);
  });
});
server.listen(9000);
setInterval(broadcast, 10000); // send heartbeat packets periodically
function broadcast() {
  console.log('broadcast heartbeat', clientList.length);
  let cleanup = [];
  for (let i = 0; i < clientList.length; i += 1) {
    if (clientList[i].writable) { // first check whether the socket is writable
      clientList[i].write(heartbeat);
    } else {
      console.log('An invalid client');
      cleanup.push(clientList[i]); // not writable: collect it, then destroy it with socket.destroy()
      clientList[i].destroy();
    }
  }
  // remove dead sockets outside the write loop to avoid trashing the loop index
  for (let i = 0; i < cleanup.length; i += 1) {
    console.log('Remove invalid clients:', cleanup[i].name);
    clientList.splice(clientList.indexOf(cleanup[i]), 1);
  }
}

Server output:

The client establishes a connection: ::ffff:127.0.0.1:57125
broadcast heartbeat 1
Received data from client: Thu, 29 Mar 2018 03:45:15 GMT
Received a heartbeat packet from the client
Received data from client: Thu, 29 Mar 2018 03:45:20 GMT
broadcast heartbeat 1
Received data from client: Thu, 29 Mar 2018 03:45:25 GMT
Received a heartbeat packet from the client
The client establishes a connection: ::ffff:127.0.0.1:57129
Received a heartbeat packet from the client
Received data from client: Thu, 29 Mar 2018 03:46:00 GMT
Received data from client: Thu, 29 Mar 2018 03:46:04 GMT
broadcast heartbeat 2
Received data from client: Thu, 29 Mar 2018 03:46:05 GMT
Received a heartbeat packet from the client

Client code :

const net = require('net');

const heartbeat = 'HEARTBEAT';
const client = new net.Socket();
client.connect(9000, '127.0.0.1', () => {});
client.on('data', (chunk) => {
  let content = chunk.toString();
  if (content === heartbeat) {
    console.log('Got the heartbeat packet:', content);
  } else {
    console.log('Receive the data:', content);
  }
});

// send data periodically
setInterval(() => {
  console.log('send data', new Date().toUTCString());
  client.write(new Date().toUTCString());
}, 5000);

//  Send heartbeat packets regularly
setInterval(function () {
  client.write(heartbeat);
}, 10000);

Client output:

send data Thu, 29 Mar 2018 03:46:04 GMT
Receive the data: Data on the server: Thu, 29 Mar 2018 03:46:04 GMT
Got the heartbeat packet: HEARTBEAT
send data Thu, 29 Mar 2018 03:46:09 GMT
Receive the data: Data on the server: Thu, 29 Mar 2018 03:46:09 GMT
send data Thu, 29 Mar 2018 03:46:14 GMT
Receive the data: Data on the server: Thu, 29 Mar 2018 03:46:14 GMT
Got the heartbeat packet: HEARTBEAT
send data Thu, 29 Mar 2018 03:46:19 GMT
Receive the data: Data on the server: Thu, 29 Mar 2018 03:46:19 GMT
send data Thu, 29 Mar 2018 03:46:24 GMT
Receive the data: Data on the server: Thu, 29 Mar 2018 03:46:24 GMT
Got the heartbeat packet: HEARTBEAT

Define your own protocol

If you want to make sense of the data being transmitted, you must use an application-layer protocol such as HTTP, MQTT, or Dubbo. To build your own application-layer protocol on top of TCP, several problems must be solved:

  1. The format and handling of heartbeat packets
  2. The definition of a message header: when sending data, send the header first, so the receiver can parse out the length of the data that follows
  3. The format of the packets you send, e.g. JSON or some other serialization

Let's define our own protocol and write the server and client calls. Header format: `length:000000000xxxx`, i.e. the 7-character prefix "length:" followed by the data length zero-padded to 13 digits, for a total header length of 20 bytes. (The example is deliberately simplified, not rigorous.)

Data format: JSON
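Before the full code, the header convention can be illustrated on its own. These two helper names are hypothetical, written just for this demo:

```javascript
// Hypothetical helpers for the fixed 20-byte header described above:
// "length:" (7 chars) + data length zero-padded to 13 digits.
const encodeHeader = (len) => 'length:' + String(len).padStart(13, '0');
const parseHeader = (str) => parseInt(str.slice(7, 20), 10);

const header = encodeHeader(31);
console.log(header);              // length:0000000000031
console.log(header.length);       // 20
console.log(parseHeader(header)); // 31
```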

Server side :

const net = require('net');
const server = net.createServer();
let clientList = [];
const heartBeat = 'HeartBeat'; // heartbeat payload; choose content that cannot conflict with normal data
const getHeader = (num) => {
  return 'length:' + (Array(13).join(0) + num).slice(-13);
}
server.on('connection', (client) => {
  client.name = client.remoteAddress + ':' + client.remotePort;
  // client.write('Hi ' + client.name + '!\n');
  console.log('The client establishes a connection', client.name);

  clientList.push(client);
  let chunks = [];
  let length = 0;
  client.on('data', (chunk) => {
    let content = chunk.toString();
    console.log("content:", content, content.length);
    if (content === heartBeat) {
      console.log('Received a heartbeat packet from the client');
    } else {
      if (content.indexOf('length:') === 0) {
        length = parseInt(content.substring(7, 20), 10);
        console.log('length', length);
        chunks = [chunk.slice(20, chunk.length)];
      } else {
        chunks.push(chunk);
      }
      let heap = Buffer.concat(chunks);
      console.log('heap.length', heap.length);
      if (heap.length >= length) {
        try {
          console.log('Receive the data', JSON.parse(heap.toString()));
          let data = 'Data on the server side: ' + heap.toString();
          let dataBuff = Buffer.from(JSON.stringify(data));
          let header = getHeader(dataBuff.length);
          client.write(header);
          client.write(dataBuff);
        } catch (err) {
          console.log('Data parsing failure');
        }
      }
    }
  });

  client.on('end', () => {
    console.log('Client received end');
    clientList.splice(clientList.indexOf(client), 1);
  });
  client.on('error', () => {
    clientList.splice(clientList.indexOf(client), 1);
  });
});
server.listen(9000);
setInterval(broadcast, 10000); // periodically check clients and send heartbeat packets
function broadcast() {
  console.log('broadcast heartbeat', clientList.length);
  let cleanup = [];
  for (var i = 0; i < clientList.length; i += 1) {
    if (clientList[i].writable) { // first check whether the socket is writable
      // clientList[i].write(heartBeat); // send heartbeat data
    } else {
      console.log('An invalid client');
      cleanup.push(clientList[i]); // not writable: collect it, then destroy it with socket.destroy()
      clientList[i].destroy();
    }
  }
  // remove invalid clients
  for (i = 0; i < cleanup.length; i += 1) {
    console.log('Remove invalid clients:', cleanup[i].name);
    clientList.splice(clientList.indexOf(cleanup[i]), 1);
  }
}

Server log:

The client establishes a connection ::ffff:127.0.0.1:50178
content: length:0000000000031 20
length 31
heap.length 0
content: "Tue, 03 Apr 2018 06:12:37 GMT" 31
heap.length 31
Receive the data Tue, 03 Apr 2018 06:12:37 GMT
broadcast heartbeat 1
content: HeartBeat 9
Received a heartbeat packet from the client
content: length:0000000000031"Tue, 03 Apr 2018 06:12:42 GMT" 51
length 31
heap.length 31
Receive the data Tue, 03 Apr 2018 06:12:42 GMT

Client:

const net = require('net');
const client = new net.Socket();
const heartBeat = 'HeartBeat'; // heartbeat payload; choose content that cannot conflict with normal data
const getHeader = (num) => {
  return 'length:' + (Array(13).join(0) + num).slice(-13);
}
client.connect(9000, '127.0.0.1', function () {});
let chunks = [];
let length = 0;
client.on('data', (chunk) => {
  let content = chunk.toString();
  console.log("content:", content, content.length);
  if (content === heartBeat) {
    console.log('Received a heartbeat packet from the server');
  } else {
    if (content.indexOf('length:') === 0) {
      length = parseInt(content.substring(7, 20), 10);
      console.log('length', length);
      chunks = [chunk.slice(20, chunk.length)];
    } else {
      chunks.push(chunk);
    }
    let heap = Buffer.concat(chunks);
    console.log('heap.length', heap.length);
    if (heap.length >= length) {
      try {
        console.log('Receive the data', JSON.parse(heap.toString()));
      } catch (err) {
        console.log('Data parsing failure');
      }
    }
  }
});
// send data periodically
setInterval(function () {
  let data = new Date().toUTCString();
  let dataBuff = Buffer.from(JSON.stringify(data));
  let header = getHeader(dataBuff.length);
  client.write(header);
  client.write(dataBuff);
}, 5000);
// send heartbeat packets periodically
setInterval(function () {
  client.write(heartBeat);
}, 10000);

Client log:

content: length:0000000000060 20
length 60
heap.length 0
content: "Data on the server side: \"Tue, 03 Apr 2018 06:12:37 GMT\"" 44
heap.length 60
Receive the data Data on the server side: "Tue, 03 Apr 2018 06:12:37 GMT"
content: length:0000000000060"Data on the server side: \"Tue, 03 Apr 2018 06:12:42 GMT\"" 64
length 60
heap.length 60
Receive the data Data on the server side: "Tue, 03 Apr 2018 06:12:42 GMT"

The client periodically sends custom-protocol data to the server: first the header, then the payload. A second timer sends heartbeat data. The server checks whether incoming data is a heartbeat, then whether it is header data, then treats it as payload, and finally parses it and replies to the client. As the logs show, although the client writes the header and the payload separately, the server may receive both in a single data event.

So far, a client handles one request at a time. But imagine this scenario: if the same client issues multiple requests to the server concurrently, sending several headers and payloads interleaved, the server's data events can no longer tell which bytes belong to which request. For example, if two headers arrive at the server back to back, the server will ignore one of them, and the payload that follows does not necessarily correspond to the surviving header. So if you want to reuse long connections and serve highly concurrent requests, you need a connection pool.

Socket connection pools

What is a Socket connection pool? The notion of a pool suggests a collection of resources, so a Socket connection pool is a managed set of Socket long connections. It can automatically check the validity of those long connections, weed out the invalid ones, and create new ones to keep the pool at its configured size. At the code level, it is simply a class implementing this behavior. A connection pool generally has the following parts:

  1. A queue of idle, available long connections
  2. A queue of long connections currently in use
  3. A queue of requests waiting for an idle long connection
  4. Eviction of invalid long connections
  5. Configuration of the pool's size
  6. Creation of new long connection resources

Scenario: a request comes in and first asks the pool for a long connection. If the idle queue has one, the request gets that Socket, and the Socket moves to the in-use queue. If the idle queue is empty and the in-use queue is shorter than the configured pool size, a new long connection is created and added to the in-use queue; if the in-use queue has reached the configured size, the request joins the waiting queue. When an in-use Socket finishes its request, it moves from the in-use queue back to the idle queue and, if any requests are waiting, triggers the waiting queue to hand out the freed resource.
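The scenario above can be sketched as a toy class. This is hypothetical illustration code, far simpler than the real generic-pool module introduced next (no validation, eviction, or timeouts):

```javascript
// A minimal, hypothetical sketch of the acquire/release flow described above.
class TinyPool {
  constructor(max, create) {
    this.max = max;       // configured pool size
    this.create = create; // factory for new connections
    this.idle = [];       // idle connection queue
    this.busy = [];       // in-use connection queue
    this.waiting = [];    // requests waiting for a free connection
  }
  acquire() {
    if (this.idle.length > 0) {          // idle connection available
      const conn = this.idle.pop();
      this.busy.push(conn);
      return Promise.resolve(conn);
    }
    if (this.busy.length < this.max) {   // room to grow the pool
      const conn = this.create();
      this.busy.push(conn);
      return Promise.resolve(conn);
    }
    // Pool exhausted: queue the request until release() frees a connection.
    return new Promise((resolve) => this.waiting.push(resolve));
  }
  release(conn) {
    this.busy.splice(this.busy.indexOf(conn), 1);
    const next = this.waiting.shift();
    if (next) {           // hand the freed connection straight to a waiter
      this.busy.push(conn);
      next(conn);
    } else {
      this.idle.push(conn);
    }
  }
}

// Quick demo: with max = 1, the second acquire() waits for release().
const demoPool = new TinyPool(1, () => ({ id: Date.now() }));
demoPool.acquire().then((conn) => {
  demoPool.acquire().then(() => console.log('second request got the freed connection'));
  demoPool.release(conn); // hands conn to the waiting request
});
```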

Below is a brief look at the source of generic-pool, a generic connection pool module for Node.js.

https://github.com/coopernurse/node-pool

Main file directory structure

.
|————lib  -------------------------  The code base
| |————DefaultEvictor.js ---------- 
| |————Deferred.js ---------------- 
| |————Deque.js ------------------- 
| |————DequeIterator.js ----------- 
| |————DoublyLinkedList.js -------- 
| |————DoublyLinkedListIterator.js- 
| |————factoryValidator.js -------- 
| |————Pool.js --------------------  Connection pool main code
| |————PoolDefaults.js ------------ 
| |————PooledResource.js ---------- 
| |————Queue.js -------------------  queue
| |————ResourceLoan.js ------------ 
| |————ResourceRequest.js --------- 
| |————utils.js -------------------  Tools
|————test -------------------------  Test directory
|————README.md  -------------------  Project description file
|————.eslintrc  ------------------- eslint static-analysis configuration
|————.eslintignore  --------------- files ignored by eslint
|————package.json ----------------- npm package and dependency configuration

Here's how to use the library :

Initialize connection pool

'use strict';
const net = require('net');
const genericPool = require('generic-pool');

function createPool(conifg) {
  let options = Object.assign({
    fifo: true,                             //  Whether to give priority to old resources
    priorityRange: 1,                       //  priority
    testOnBorrow: true,                     //  Whether to turn on get validation
    // acquireTimeoutMillis: 10 * 1000,     //  The timeout for getting
    autostart: true,                        //  Automatic initialization and release scheduling enabled
    min10,                                //  Initialize the minimum number of long connections maintained by the connection pool
    max0,                                 //  The number of long connections maintained by the maximum connection pool
    evictionRunIntervalMillis: 0,           //  Resource release check interval check   Set the following parameters to work
    numTestsPerEvictionRun: 3,              //  The number of resources released each time
    softIdleTimeoutMillis: -1,              //  More than the smallest available min  And free time   Reach release
    idleTimeoutMillis: 30000                //  Forced release
    // maxWaitingClients: 50                //  Maximum waiting
  }, conifg.options);
  const factory = {

    createfunction () {
      return new Promise((resolve, reject) => {
        let socket = new net.Socket();
        socket.setKeepAlive(true);
        socket.connect(conifg.port, conifg.host);
        // TODO  Heartbeat packet processing logic
        socket.on('connect', () => {
          console.log('socket_pool', conifg.host, conifg.port, 'connect' );
          resolve(socket);
        });
        socket.on('close', (err) => { //  First end  Event revisited close event
          console.log('socket_pool', conifg.host, conifg.port, 'close', err);
        });
        socket.on('error', (err) => {
          console.log('socket_pool', conifg.host, conifg.port, 'error', err);
          reject(err);
        });
      });
    },
    // Destroy a connection
    destroy: function (socket) {
      return new Promise((resolve) => {
        socket.destroy(); // does not fire 'end'; fires 'close' once, and 'error' if data is still pending
        resolve();
      });
    },
    validate: function (socket) { // called when borrowing from the pool to verify the resource is still valid
      return new Promise((resolve) => {
        // console.log('socket.destroyed:', socket.destroyed, 'socket.readable:', socket.readable, 'socket.writable:', socket.writable);
        if (socket.destroyed || !socket.readable || !socket.writable) {
          return resolve(false);
        } else {
          return resolve(true);
        }
      });
    }
  };
  const pool = genericPool.createPool(factory, options);
  pool.on('factoryCreateError', (err) => { // a new long connection failed to be created; surface the error to the waiting request directly
    const clientResourceRequest = pool._waitingClientsQueue.dequeue();
    if (clientResourceRequest) {
      clientResourceRequest.reject(err);
    }
  });
  return pool;
};

let pool = createPool({
  port: 9000,
  host: '127.0.0.1',
  options: {min: 0, max: 10}
});

Using the connection pool

Below is an example of using the connection pool; the wire protocol is the custom length-prefixed protocol described earlier.

let pool = createPool({
  port: 9000,
  host: '127.0.0.1',
  options: {min: 0, max: 10}
});
const getHeader = (num) => {
  return 'length:' + (Array(13).join(0) + num).slice(-13);
}
const request = async (requestDataBuff) => {
  let client;
  try {
    client = await pool.acquire();
  } catch (e) {
    console.log('acquire socket client failed: ', e);
    throw e;
  }
  let timeout = 10000;
  return new Promise((resolve, reject) => {
    let chunks = [];
    let length = 0;
    client.setTimeout(timeout);
    client.removeAllListeners('error');
    client.on('error', (err) => {
      client.removeAllListeners('error');
      client.removeAllListeners('data');
      client.removeAllListeners('timeout');
      pool.destroy(client); // destroy, not release: the socket is in an error state
      reject(err);
    });
    client.on('timeout', () => {
      client.removeAllListeners('error');
      client.removeAllListeners('data');
      client.removeAllListeners('timeout');
      // destroy here, otherwise a late response could be delivered to the next request's 'data' listener
      pool.destroy(client);
      // pool.release(client);
      reject(new Error(`socket connect timeout set ${timeout}`));
    });
    let header = getHeader(requestDataBuff.length);
    client.write(header);
    client.write(requestDataBuff);
    client.on('data', (chunk) => {
      let content = chunk.toString();
      console.log('content', content, content.length);
      // TODO: filter out heartbeat packets
      if (content.indexOf('length:') === 0) {
        length = parseInt(content.substring(7, 20), 10);
        console.log('length', length);
        chunks = [chunk.slice(20, chunk.length)];
      } else {
        chunks.push(chunk);
      }
      let heap = Buffer.concat(chunks);
      console.log('heap.length', heap.length);
      if (heap.length >= length) {
        pool.release(client);
        client.removeAllListeners('error');
        client.removeAllListeners('data');
        client.removeAllListeners('timeout');
        try {
          // console.log('received data', JSON.parse(heap.toString()));
          resolve(JSON.parse(heap.toString()));
        } catch (err) {
          reject(err);
          console.log('Failed to parse data');
        }
      }
    });
  });
}
request(Buffer.from(JSON.stringify({a: 'a'})))
  .then((data) => {
    console.log('Data received from the server', data);
  }).catch(err => {
    console.log(err);
  });

request(Buffer.from(JSON.stringify({b: 'b'})))
  .then((data) => {
    console.log('Data received from the server', data);
  }).catch(err => {
    console.log(err);
  });

setTimeout(function () { // check whether the pooled sockets get reused or new connections are created
  request(Buffer.from(JSON.stringify({c: 'c'})))
    .then((data) => {
      console.log('Data received from the server', data);
    }).catch(err => {
      console.log(err);
    });

  request(Buffer.from(JSON.stringify({d: 'd'})))
    .then((data) => {
      console.log('Data received from the server', data);
    }).catch(err => {
      console.log(err);
    });
}, 1000);

Log output:

 socket_pool 127.0.0.1 9000 connect
 socket_pool 127.0.0.1 9000 connect
 content length:0000000000040"Server-side data: {\"a\":\"a\"}" 44
 length 40
 heap.length 40
 Data received from the server  Server-side data: {"a":"a"}
 content length:0000000000040"Server-side data: {\"b\":\"b\"}" 44
 length 40
 heap.length 40
 Data received from the server  Server-side data: {"b":"b"}
 content length:0000000000040 20
 length 40
 heap.length 0
 content "Server-side data: {\"c\":\"c\"}" 24
 heap.length 40
 Data received from the server  Server-side data: {"c":"c"}
 content length:0000000000040"Server-side data: {\"d\":\"d\"}" 44
 length 40
 heap.length 40
 Data received from the server  Server-side data: {"d":"d"}

From the log you can see that the first two requests each created a new Socket connection (socket_pool 127.0.0.1 9000 connect is printed twice), while the two requests fired after the timer did not create new connections: they took Socket resources straight from the connection pool. Notice also that the third response arrived split across two TCP chunks (heap.length 0, then 40), which is exactly why the length-prefixed header is needed to reassemble it.
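The split response for request c is worth dwelling on: TCP is a byte stream, so one server write can arrive as several 'data' chunks, and several writes can arrive glued together. The decoder below is a standalone sketch of the same length-prefix framing; encodeFrame and makeDecoder are our illustrative names, not part of the article's code.

```javascript
// Standalone sketch of the article's framing: 'length:' + 13-digit length + payload.
const encodeFrame = (payloadBuf) => {
  const header = 'length:' + (Array(13).join('0') + payloadBuf.length).slice(-13);
  return Buffer.concat([Buffer.from(header), payloadBuf]);
};

// Stateful decoder: feed it chunks split any way; it emits complete payloads.
const makeDecoder = (onMessage) => {
  let buffered = Buffer.alloc(0);
  return (chunk) => {
    buffered = Buffer.concat([buffered, chunk]);
    while (buffered.length >= 20) {                 // 7-byte 'length:' marker + 13 digits
      const expected = parseInt(buffered.toString('utf8', 7, 20), 10);
      if (buffered.length < 20 + expected) break;   // payload incomplete, wait for more data
      onMessage(buffered.slice(20, 20 + expected));
      buffered = buffered.slice(20 + expected);     // keep any bytes of the next frame
    }
  };
};
```

Fed the 20-byte header first and the payload in a later chunk, exactly like the log above, the decoder still yields one complete message; two frames glued into a single chunk also come out as two messages.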

Source code analysis

The main code lives in Pool.js inside the lib folder.
Constructor:
lib/Pool.js

  /**
   * Generate an Object pool with a specified `factory` and `config`.
   *
   * @param {typeof DefaultEvictor} Evictor
   * @param {typeof Deque} Deque
   * @param {typeof PriorityQueue} PriorityQueue
   * @param {Object} factory
   *   Factory to be used for generating and destroying the items.
   * @param {Function} factory.create
   *   Should create the item to be acquired,
   *   and call it's first callback argument with the generated item as it's argument.
   * @param {Function} factory.destroy
   *   Should gently close any resources that the item is using.
   *   Called before the items is destroyed.
   * @param {Function} factory.validate
   *   Test if a resource is still valid .Should return a promise that resolves to a boolean, true if resource is still valid and false
   *   If it should be removed from pool.
   * @param {Object} options
   */
  constructor(Evictor, Deque, PriorityQueue, factory, options) {
    super();
    factoryValidator(factory); // validates the factory we passed in: its create, destroy and validate methods
    this._config = new PoolOptions(options); // connection-pool configuration
    // TODO: fix up this ugly glue-ing
    this._Promise = this._config.Promise;

    this._factory = factory;
    this._draining = false;
    this._started = false;
    /**
     * Holds waiting clients
     * @type {PriorityQueue}
     */
    this._waitingClientsQueue = new PriorityQueue(this._config.priorityRange); // queue managing pending requests; initialized with size 1: { _size: 1, _slots: [ Queue { _list: [Object] } ] }
    /**
     * Collection of promises for resource creation calls made by the pool to factory.create
     * @type {Set}
     */
    this._factoryCreateOperations = new Set(); // long connections currently being created

    /**
     * Collection of promises for resource destruction calls made by the pool to factory.destroy
     * @type {Set}
     */
    this._factoryDestroyOperations = new Set(); // long connections currently being destroyed

    /**
     * A queue/stack of pooledResources awaiting acquisition
     * TODO: replace with LinkedList backed array
     * @type {Deque}
     */
    this._availableObjects = new Deque(); // idle resources (long connections)

    /**
     * Collection of references for any resource that are undergoing validation before being acquired
     * @type {Set}
     */
    this._testOnBorrowResources = new Set(); // resources currently undergoing validity testing

    /**
     * Collection of references for any resource that are undergoing validation before being returned
     * @type {Set}
     */
    this._testOnReturnResources = new Set();

    /**
     * Collection of promises for any validations currently in process
     * @type {Set}
     */
    this._validationOperations = new Set(); // validations currently in progress

    /**
     * All objects associated with this pool in any state (except destroyed)
     * @type {Set}
     */
    this._allObjects = new Set(); // every connection resource in any state; each entry is a PooledResource object

    /**
     * Loans keyed by the borrowed resource
     * @type {Map}
     */
    this._resourceLoans = new Map(); // map of borrowed resources, used when release() is called

    /**
     * Infinitely looping iterator over available object
     * @type {DequeIterator}
     */
    this._evictionIterator = this._availableObjects.iterator(); // an iterator over the available objects

    this._evictor = new Evictor();

    /**
     * handle for setTimeout for next eviction run
     * @type {(number|null)}
     */
    this._scheduledEviction = null;

    // create initial resources (if factory.min > 0)
    if (this._config.autostart === true) { // create the minimum number of connections up front
      this.start();
    }
  }

Here you can see the structures mentioned earlier: the deque of idle resources, the sets of in-flight create/destroy operations, the waiting-request queue, and so on.
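To make those structures concrete, here is a heavily simplified, dependency-free sketch of the same bookkeeping. TinyPool is our own toy illustration, not the real generic-pool API, which layers validation, priorities, eviction and timeouts on top of this skeleton.

```javascript
// Toy pool mirroring generic-pool's core bookkeeping (illustrative only).
class TinyPool {
  constructor(factory, max) {
    this.factory = factory;   // only { create() } is needed for this sketch
    this.max = max;           // ~ config.max
    this.available = [];      // ~ this._availableObjects
    this.waiting = [];        // ~ this._waitingClientsQueue (resolvers of pending acquires)
    this.all = new Set();     // ~ this._allObjects
  }
  acquire() {
    return new Promise((resolve) => {
      this.waiting.push(resolve);
      this._dispense();
    });
  }
  release(resource) {
    this.available.push(resource);
    this._dispense();
  }
  _dispense() {
    // create new resources only when waiters outnumber the idle ones,
    // and never grow past `max`
    const shortfall = this.waiting.length - this.available.length;
    const capacity = this.max - this.all.size;
    for (let i = 0; i < Math.min(shortfall, capacity); i++) {
      const r = this.factory.create();
      this.all.add(r);
      this.available.push(r);
    }
    // hand idle resources to waiting clients
    while (this.waiting.length > 0 && this.available.length > 0) {
      this.waiting.shift()(this.available.shift());
    }
  }
}
```

Acquiring twice creates two resources; after a release, the next acquire reuses the idle one instead of creating a third, which is the same behaviour the log transcript above demonstrates.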

Next, look at the Pool.acquire method:
lib/Pool.js

/**
   * Request a new resource. The callback will be called,
   * when a new resource is available, passing the resource to the callback.
   * TODO: should we add a seperate "acquireWithPriority" function
   *
   * @param {Number} [priority=0]
   *   Optional.  Integer between 0 and (priorityRange - 1).  Specifies the priority
   *   of the caller if there are no available resources.  Lower numbers mean higher
   *   priority.
   *
   * @returns {Promise}
   */
  acquire(priority) { // requests for idle resources can carry a priority
    if (this._started === false && this._config.autostart === false) {
      this.start(); // adds `min` connection objects to this._allObjects
    }
    if (this._draining) { // once the pool is draining, no more resources can be requested
      return this._Promise.reject(
        new Error("pool is draining and cannot accept work")
      );
    }
    // if maxWaitingClients is set and the waiting queue already exceeds it, report that no resource is available
    // TODO: should we defer this check till after this event loop incase "the situation" changes in the meantime
    if (
      this._config.maxWaitingClients !== undefined &&
      this._waitingClientsQueue.length >= this._config.maxWaitingClients
    ) {
      return this._Promise.reject(
        new Error("max waitingClients count exceeded")
      );
    }

    const resourceRequest = new ResourceRequest(
      this._config.acquireTimeoutMillis, // wait-timeout from the config; it starts a timer, and on expiry the reject of resourceRequest.promise fires
      this._Promise
    );
    // console.log(resourceRequest)
    this._waitingClientsQueue.enqueue(resourceRequest, priority); // the request enters the waiting queue
    this._dispense(); // dispense resources; eventually triggers resourceRequest.promise's resolve(client)

    return resourceRequest.promise; // a promise whose resolve is triggered elsewhere
  }
 


  /**
   * Attempt to resolve an outstanding resource request using an available resource from
   * the pool, or creating new ones
   *
   * @private
   */
  _dispense() {
    /**
     * Local variables for ease of reading/writing
     * these don't (shouldn't) change across the execution of this fn
     */
    const numWaitingClients = this._waitingClientsQueue.length; // length of the waiting-request queue across all priorities
    console.log('numWaitingClients', numWaitingClients); // 1

    // If there aren't any waiting requests then there is nothing to do
    // so lets short-circuit
    if (numWaitingClients < 1) {
      return;
    }
    // e.g. max: 10, min: 4
    console.log('_potentiallyAllocableResourceCount', this._potentiallyAllocableResourceCount); // number of connections currently potentially available
    const resourceShortfall =
      numWaitingClients - this._potentiallyAllocableResourceCount; // <= 0 means nothing is missing; > 0 is the number of new long connections needed
    console.log('spareResourceCapacity', this.spareResourceCapacity); // how many more connections may be created before reaching max
    const actualNumberOfResourcesToCreate = Math.min(
      this.spareResourceCapacity, // e.g. -6
      resourceShortfall // e.g. -3
    ); // if resourceShortfall > 0 new connections are needed, but never more than spareResourceCapacity allows
    console.log('actualNumberOfResourcesToCreate', actualNumberOfResourcesToCreate); // > 0 means connections must be created
    for (let i = 0; actualNumberOfResourcesToCreate > i; i++) {
      this._createResource(); // create a new long connection
    }

    // If we are doing test-on-borrow see how many more resources need to be moved into test
    // to help satisfy waitingClients
    if (this._config.testOnBorrow === true) { // if validate-before-borrow is enabled
      // how many available resources do we need to shift into test
      const desiredNumberOfResourcesToMoveIntoTest =
        numWaitingClients - this._testOnBorrowResources.size;// 1
      const actualNumberOfResourcesToMoveIntoTest = Math.min(
        this._availableObjects.length, // 3
        desiredNumberOfResourcesToMoveIntoTest // 1
      );
      for (let i = 0; actualNumberOfResourcesToMoveIntoTest > i; i++) { // run at least enough validation checks to cover the waiting clients
        this._testOnBorrow(); // validate the resource, then dispatch it
      }
    }

    // if we aren't testing-on-borrow then lets try to allocate what we can
    if (this._config.testOnBorrow === false) { // if validation is disabled, dispatch available resources directly
      const actualNumberOfResourcesToDispatch = Math.min(
        this._availableObjects.length,
        numWaitingClients
      );
      for (let i = 0; actualNumberOfResourcesToDispatch > i; i++) { // start dispatching resources
        this._dispatchResource();
      }
    }
  }

  /**
   * Attempt to move an available resource to a waiting client
   * @return {Boolean} [description]
   */
  _dispatchResource() {
    if (this._availableObjects.length < 1) {
      return false;
    }

    const pooledResource = this._availableObjects.shift(); // take one resource out of the available pool
    this._dispatchPooledResourceToNextWaitingClient(pooledResource); // dispatch it
    return false;
  }
  /**
   * Dispatches a pooledResource to the next waiting client (if any) else
   * puts the PooledResource back on the available list
   * @param  {PooledResource} pooledResource [description]
   * @return {Boolean}                [description]
   */
  _dispatchPooledResourceToNextWaitingClient(pooledResource) {
    const clientResourceRequest = this._waitingClientsQueue.dequeue(); // dequeue a waiting client; may be undefined
    console.log('clientResourceRequest.state', clientResourceRequest.state);
    if (clientResourceRequest === undefined ||
      clientResourceRequest.state !== Deferred.PENDING) {
      console.log('no waiting client');
      // While we were away either all the waiting clients timed out
      // or were somehow fulfilled. put our pooledResource back.
      this._addPooledResourceToAvailableObjects(pooledResource); // put it back into the available resources
      // TODO: do need to trigger anything before we leave?
      return false;
    }
    // TODO: should clientResourceRequest.state be re-checked here? If it has already resolved (e.g. timed out), is handing it a resource a problem?
    const loan = new ResourceLoan(pooledResource, this._Promise);
    this._resourceLoans.set(pooledResource.obj, loan); // _resourceLoans is a key => value map; pooledResource.obj is the socket itself
    pooledResource.allocate(); // mark the resource as in use
    clientResourceRequest.resolve(pooledResource.obj); // this is where the promise returned by acquire() is resolved
    return true;
  }

The code above traces the complete path of acquiring a long-connection resource from the pool; the rest of the source is worth reading on your own.
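The sizing arithmetic in _dispense is easy to check in isolation. The helper below (our own name, not generic-pool API) reproduces how resourceShortfall and spareResourceCapacity bound the number of connections actually created; in the real code a negative result simply means the creation loop never runs, so clamping to 0 gives the same effective behaviour.

```javascript
// How many new resources _dispense will create, given the current pool state.
// (Illustrative helper mirroring resourceShortfall / spareResourceCapacity.)
const resourcesToCreate = ({ waiting, potentiallyAllocable, max, total }) => {
  const resourceShortfall = waiting - potentiallyAllocable; // > 0: connections are missing
  const spareResourceCapacity = max - total;                // room left before hitting max
  // the real for-loop `actualNumberOfResourcesToCreate > i` never runs for negatives
  return Math.max(0, Math.min(spareResourceCapacity, resourceShortfall));
};
```

For example, one waiter with three idle connections creates nothing, while eight waiters against an almost-full pool create only as many connections as `max` still allows.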

Source: https://segmentfault.com/a/1190000014044351


Copyright notice
This article was created by [Web front end learning circle]; please keep the original link when reposting:
https://qdmana.com/2021/02/20210223103440695T.html
