Strange things encountered in side missions – concurrent requests

Concurrent requests refer to the situation where multiple requests are in flight at roughly the same time. It does not mean that the requests are sent at exactly the same instant; within a short period of time they may go out one after another with small differences in timing. The requests may be independent of each other or may be interdependent.

A typical scenario for concurrent requests is web page loading. When you visit a web page, the browser sends multiple requests to the server at the same time to obtain the page's HTML document, style sheets, JavaScript scripts, images and other resources. These requests are issued essentially at the same time, but because of differences in network latency and server processing speed, their responses may arrive at different points in time.

When handling concurrent requests, we usually have to consider the following issues:

  • Sequentiality of requests: If the processing result of one request depends on the result of another request, then we need to ensure the execution order of the two requests.

  • Number of concurrent requests: If too many requests are sent at the same time, the server may be overloaded, or we may hit the browser's or server's limit on the number of concurrent connections.

In JavaScript, we can use Promise, async/await and other mechanisms to manage and control concurrent requests. When the number of concurrent requests exceeds a limit, we can also use third-party libraries, such as axios or request-promise, to queue requests or control concurrency.

In JavaScript, there are many different ways to handle concurrent requests:

  1. Promise.all(): This is a common way to handle parallel requests. You pass an array of the Promises for the requests that need to run concurrently to Promise.all(). The requests execute concurrently, and Promise.all() returns a new Promise that becomes resolved only after all of the requests have succeeded and returned their responses; if any request fails, the new Promise is rejected.

For example:

let request1 = fetch('https://api.myjson.com/bins/9hog6');
let request2 = fetch('https://api.myjson.com/bins/nttn4');

Promise.all([request1, request2])
  .then(responses => {
    return Promise.all(responses.map(response => response.json()));
  })
  .then(data => console.log(data))
  .catch(err => console.log(err));
  2. Async/Await: You can also use async functions and await expressions to handle concurrent requests. Inside an async function, you can use await to pause function execution until the Promise is resolved or rejected.

For example:

async function fetchAndDecode(url) {
  let response = await fetch(url);
  let data = await response.json();
  return data;
}

(async function() {
  let data1 = await fetchAndDecode('https://api.myjson.com/bins/wx4io');
  let data2 = await fetchAndDecode('https://api.myjson.com/bins/jv265');
  console.log(data1, data2);
})();
  3. Promise.any()/Promise.race(): Both methods settle as soon as one of the input Promises settles, but there is a difference. Promise.race() settles as soon as any Promise settles, whether it is fulfilled or rejected. Promise.any() fulfills as soon as any Promise is fulfilled, and it rejects only if all of the input Promises are rejected.
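
For example, a minimal sketch of the difference (the URLs here are placeholders):

let requestA = fetch('https://api.example.com/mirror1');
let requestB = fetch('https://api.example.com/mirror2');

// race(): settles as soon as the first request settles, even if it failed
Promise.race([requestA, requestB])
  .then(response => console.log('first settled:', response.status))
  .catch(err => console.log('first settled with an error:', err));

// any(): fulfills with the first successful request; rejects only if all fail
Promise.any([requestA, requestB])
  .then(response => console.log('first fulfilled:', response.status))
  .catch(err => console.log('all requests failed:', err));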

How to ensure the order of concurrent requests?

When processing concurrent requests, if you need to guarantee the order of the requests, it usually means there is a dependency between them: you have to send one request first, wait for its response to be received and processed, and only then send the next request.

In JavaScript, there are many ways to achieve this sequential execution effect.

  1. Use Promise chaining: By returning a new Promise in the callback passed to .then(), you form a Promise chain, so that multiple operations execute in the specified order. For example:
fetch('https://api.example.com/data1')
  .then(response => response.json())
  .then(data1 => {
    console.log(data1);
    return fetch('https://api.example.com/data2');
  })
  .then(response => response.json())
  .then(data2 => console.log(data2));

In the above code, the second fetch request is issued only after the first fetch request has succeeded and its response has been processed.

  2. Use async/await: async/await is syntactic sugar on top of Promises that makes asynchronous operations look more like synchronous code, so request ordering can be implemented easily. For example:
async function fetchData() {
  let response1 = await fetch('https://api.example.com/data1');
  let data1 = await response1.json();
  console.log(data1);
  
  let response2 = await fetch('https://api.example.com/data2');
  let data2 = await response2.json();
  console.log(data2);
}

fetchData();

In the above code, async/await ensures that the second fetch request is issued only after the first fetch request has succeeded and its response has been processed.

Note: The two methods above share a common trait: the later request must wait for the previous request to finish completely before it can start. If there are no dependencies between your requests, this approach may reduce performance because it cannot fully exploit the concurrency of the network. In the absence of dependencies, a better approach is to issue the requests simultaneously and then continue after all of them have completed (you can use Promise.all()).
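
For instance, reusing the fetchAndDecode helper from above, the two independent requests can be started together and awaited as a group (a minimal sketch; the URLs are the same placeholders used earlier):

(async function() {
  // Both requests start immediately; await only waits until both have finished.
  let [data1, data2] = await Promise.all([
    fetchAndDecode('https://api.example.com/data1'),
    fetchAndDecode('https://api.example.com/data2')
  ]);
  console.log(data1, data2);
})();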

How to control the number of concurrent requests?

In JavaScript running in the browser, the number of concurrent requests is capped by the browser itself; the limit is usually fixed, typically 6-8 connections. However, when we build back-end applications, or when we want to avoid overloading a server, we need to control the number of concurrent requests ourselves.

The following is a common method to control the number of concurrent requests:

We can use a queue to control the number of concurrent requests: create a request queue with a fixed maximum concurrency. Each time a request completes, the next request is taken out of the queue, so the number of in-flight requests never exceeds the configured maximum.

let maxConcurrentRequests = 5; // set the maximum number of concurrent requests
let currentRequests = 0;       // current number of in-flight requests
let requestQueue = [];         // queue of URLs waiting to be requested

function request(url) {
  // If the current number of in-flight requests (`currentRequests`) is below the
  // maximum (`maxConcurrentRequests`), send the request with fetch and increment
  // the counter. When the request finishes (whether it succeeded or failed), the
  // counter is decremented and, if the queue (`requestQueue`) is not empty, the
  // first waiting URL is taken out and sent by calling `request` again.
  // If the maximum has already been reached, the new request cannot run
  // immediately, so its URL is pushed onto the queue to wait.
  if (currentRequests < maxConcurrentRequests) {
    currentRequests++;
    fetch(url).finally(() => {
      currentRequests--; // the request has ended, free one concurrency slot
      if (requestQueue.length > 0) {
        // If there are still requests waiting in the queue, take one out and continue.
        request(requestQueue.shift());
      }
    });
  } else {
    requestQueue.push(url);
  }
}

// The following shows how to use the above function to simulate sending concurrent requests
for (let i = 0; i < 15; i++) {
  request('https://someurl.com/api');
}

When the concurrency limit is reached, new requests are placed in the queue. After an in-flight request completes and its response is returned, the next request is taken out of the queue and executed. In this way, the number of requests in flight at any time never exceeds our preset upper limit.
Many third-party packages/libraries provide concurrency control functions, such as async.js, bottleneck, p-limit, etc.
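
As a rough sketch of how such a library is typically used, the following assumes p-limit is installed and reuses the same placeholder URL as above:

import pLimit from 'p-limit';

const limit = pLimit(5); // allow at most 5 requests in flight at any time

(async () => {
  // Each call is wrapped in limit(), so calls beyond the limit wait in
  // p-limit's internal queue until a slot frees up.
  const urls = Array.from({ length: 15 }, () => 'https://someurl.com/api');
  const results = await Promise.all(
    urls.map(url => limit(() => fetch(url).then(res => res.json())))
  );
  console.log(results.length);
})();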
