Part 1: Managing API Requests with Exponential Backoff and a Request Queue in JavaScript
When building web applications, managing multiple API requests efficiently is crucial, especially when dealing with rate limits or unstable network conditions. This is where exponential backoff and request queuing come into play. In this blog post, we will discuss how to implement these two powerful techniques to ensure smooth and reliable API communication in your application.
Introduction to the Code
We'll explore two main components of this implementation:
Exponential Backoff: A retry strategy that waits progressively longer between retries when a request fails, especially due to rate limiting (status code 429).
Request Queue: A system that handles multiple requests in a controlled and sequential manner, ensuring that they don't overwhelm the server or trigger rate limiting.
The key code snippets used in this post include:
A RequestQueueItem interface to manage each request's metadata.
A fetchWithExponentialBackoff function that retries failed requests.
A processQueue function to manage sequential execution of queued requests.
A queueRequest function to add requests to the queue.
1. Understanding the RequestQueueItem Interface
interface RequestQueueItem {
url: string;
options: RequestInit;
resolve: (value: any) => void;
reject: (reason?: any) => void;
}
This interface defines the structure of a request item in the queue. Each request is an object containing:
url: The API endpoint to be called.
options: The request options (method, headers, body, etc.).
resolve: A function that resolves the promise when the request succeeds.
reject: A function that rejects the promise if the request fails.
This setup allows us to manage requests in a queue with promises and handle the response or errors accordingly.
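As a quick sketch of how this interface is used, the promise's resolve and reject callbacks are captured and stored alongside the request. The enqueue helper below is hypothetical, just to illustrate the capture; the post's real queueRequest function appears later:

```typescript
interface RequestQueueItem {
  url: string;
  options: RequestInit;
  resolve: (value: any) => void;
  reject: (reason?: any) => void;
}

const queue: RequestQueueItem[] = [];

// Wrap a request in a Promise and capture its settle callbacks in the queue.
function enqueue(url: string, options: RequestInit): Promise<any> {
  return new Promise((resolve, reject) => {
    queue.push({ url, options, resolve, reject });
  });
}

const pending = enqueue("https://api.example.com/data", { method: "GET" });

// The caller's promise is settled later, by whoever drains the queue:
queue[0].resolve({ status: "ok" });
pending.then((value) => console.log(value)); // logs { status: "ok" }
```

Because the queue item holds the resolve/reject pair, the code that processes the queue can settle the caller's promise long after the request was queued.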
2. Exponential Backoff with fetchWithExponentialBackoff
Exponential backoff is essential when dealing with rate-limited APIs. Instead of retrying immediately when a request fails (such as when you hit a 429 Too Many Requests error), you can delay the retry and progressively increase the waiting time. Here's how it works:
async function fetchWithExponentialBackoff<T>(
url: string,
options: RequestInit,
retries = 7,
backoff = 300,
): Promise<T> {
try {
const response = await fetch(url, options);
if (response.ok) {
return response.json(); // Return the response if successful
} else if (response.status === 429 && retries > 0) {
// Rate-limited, so retry with exponential backoff
await new Promise((resolve) => setTimeout(resolve, backoff));
return fetchWithExponentialBackoff(url, options, retries - 1, backoff * 2);
} else {
throw new Error(`HTTP error! Status: ${response.status}`);
}
} catch (error) {
throw new Error(
error instanceof Error ? error.message : "An unknown error occurred",
);
}
}
Key Points:
Retries: By default, the request is retried up to 7 times, and each retry doubles the backoff delay (backoff * 2), giving the server more time to recover.
Backoff: Initially set to 300ms, the delay increases exponentially between retries.
Error Handling: If the response status is not 429, or retries are exhausted, the function throws an error with the relevant status code.
This ensures that API calls are retried with increasing delays, giving your system the ability to handle rate-limiting and temporary network issues gracefully.
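To make the schedule concrete, here is a small computation (not part of the original code) of the delays produced by the defaults above:

```typescript
// Delays for the default settings: 7 retries, 300 ms base, doubling each time.
const retries = 7;
const baseBackoff = 300;
const delays = Array.from({ length: retries }, (_, i) => baseBackoff * 2 ** i);

console.log(delays);
// [300, 600, 1200, 2400, 4800, 9600, 19200]
console.log(delays.reduce((sum, d) => sum + d, 0));
// 38100 — roughly 38 seconds of total waiting in the worst case
```

Worth keeping in mind when tuning the defaults: the total worst-case wait grows with both the retry count and the base delay, so 7 retries at a 300 ms base already commits a caller to waiting up to ~38 seconds.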
3. Managing the Request Queue with processQueue
Now that we have exponential backoff in place, let's see how to manage a queue of API requests. This ensures that requests are sent one after the other, with a delay between each to prevent overwhelming the API.
const requestQueue: RequestQueueItem[] = [];
let isProcessingQueue = false;
async function processQueue(): Promise<void> {
if (isProcessingQueue || requestQueue.length === 0) return;
isProcessingQueue = true;
while (requestQueue.length > 0) {
const { url, options, resolve, reject } =
requestQueue.shift() as RequestQueueItem;
try {
const result = await fetchWithExponentialBackoff(url, options);
resolve(result); // Resolve the promise with the API result
} catch (error) {
reject(error); // Reject the promise if an error occurs
}
// Add a delay between processing each request
await new Promise((resolve) => setTimeout(resolve, 300));
}
isProcessingQueue = false;
}
How It Works:
Queue Management: requestQueue is an array of RequestQueueItem objects. Each request is stored in the queue and processed in order.
Sequential Processing: The processQueue function processes one request at a time, waiting 300ms between requests to avoid overloading the server.
Promise Handling: Each request has a resolve and reject function (from the RequestQueueItem interface) that handles the success or failure of the API call. This ensures that the result is returned when the request completes, or the error is propagated.
By using a queue, we can ensure that requests are sent sequentially and that the API is not overwhelmed, which helps prevent issues like hitting rate limits or timeouts.
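The queue mechanics can be seen in isolation with a self-contained sketch that swaps the network call for an in-memory task (the names and helpers here are illustrative, not part of the post's code — the pattern is the same flag-guarded drain loop as processQueue):

```typescript
type Job = { name: string; resolve: (v: string) => void };

const jobs: Job[] = [];
const completed: string[] = [];
let draining = false;

// Same pattern as processQueue: one item at a time, guarded by a flag.
async function drain(): Promise<void> {
  if (draining || jobs.length === 0) return;
  draining = true;
  while (jobs.length > 0) {
    const job = jobs.shift() as Job;
    completed.push(job.name); // stand-in for the actual fetch call
    job.resolve(job.name);
    await Promise.resolve();  // yield, as the real inter-request delay would
  }
  draining = false;
}

function submit(name: string): Promise<string> {
  return new Promise((resolve) => {
    jobs.push({ name, resolve });
    drain();
  });
}

Promise.all([submit("a"), submit("b"), submit("c")]).then(() => {
  console.log(completed); // ["a", "b", "c"] — strictly in submission order
});
```

Even though all three submit calls fire back to back, the draining flag ensures only one drain loop runs, so the jobs complete strictly in the order they were queued.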
4. Queuing a Request with queueRequest
Finally, the queueRequest function is the interface for adding a new request to the queue. This function is used whenever you want to send a request.
export function queueRequest(url: string, options: RequestInit): Promise<any> {
return new Promise((resolve, reject) => {
requestQueue.push({ url, options, resolve, reject });
processQueue(); // Start processing the queue
});
}
Key Features:
Promise-Based API: The function returns a promise, making it easy to use in asynchronous code.
Request Queueing: When you call queueRequest, the request is added to the requestQueue. The processQueue function is triggered to start processing the queue if it's not already running.
This function integrates seamlessly with the processQueue and fetchWithExponentialBackoff functions, ensuring that requests are handled in sequence with exponential backoff in case of failures.
5. Usage Example
Here's an example of how to use queueRequest to make API calls:
queueRequest("https://api.example.com/data", { method: "GET" })
.then((data) => {
console.log("API Response:", data);
})
.catch((error) => {
console.error("API Error:", error);
});
This makes a request to the given URL, and the result (or error) is handled when the promise resolves or rejects.
Here's a more detailed example showing how to use the queueRequest function and log the results, including success and error cases. This will give you a clear idea of what the returned results will look like when using the request queue with exponential backoff.
Example Usage with Logs
async function exampleUsage() {
// Define the API URL and request options
const apiUrl = "https://jsonplaceholder.typicode.com/posts/1";
const options: RequestInit = {
method: "GET",
};
console.log("Starting API request...");
// Use queueRequest to send the API call
queueRequest(apiUrl, options)
.then((data) => {
console.log("API Request Success:");
console.log(data); // Log the successful response
})
.catch((error) => {
console.error("API Request Failed:");
console.error(error); // Log any errors
});
console.log("Request has been queued.");
}
exampleUsage();
Expected Output Logs:
Success Case:
Starting API request...
Request has been queued.
API Request Success:
{
"userId": 1,
"id": 1,
"title": "sunt aut facere repellat provident occaecati excepturi optio reprehenderit",
"body": "quia et suscipit\nsuscipit rerum ..."
}
In this case, the API call is successful, and the response data from the jsonplaceholder API is logged.
Error Case:
If the request fails due to a network issue or the server responds with a non-200 status code, you will see the error logs:
Starting API request...
Request has been queued.
API Request Failed:
Error: HTTP error! Status: 500
Or, if a rate-limited request exhausts the allowed retries (the function then throws with the last status code it saw):
Starting API request...
Request has been queued.
API Request Failed:
Error: HTTP error! Status: 429
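Since the status code is embedded in the thrown message, a small helper (hypothetical, not part of the code above) can classify failures when logging, by parsing the "HTTP error! Status: NNN" format that fetchWithExponentialBackoff produces:

```typescript
// Classify errors thrown by fetchWithExponentialBackoff by inspecting
// the "HTTP error! Status: NNN" message format used above.
function describeError(error: unknown): string {
  const message = error instanceof Error ? error.message : String(error);
  const match = message.match(/Status: (\d+)/);
  return match
    ? `Server responded with HTTP ${match[1]}`
    : `Network or unknown failure: ${message}`;
}

console.log(describeError(new Error("HTTP error! Status: 500")));
// "Server responded with HTTP 500"
console.log(describeError(new Error("fetch failed")));
// "Network or unknown failure: fetch failed"
```

A sturdier design would throw a custom error class carrying the numeric status, rather than parsing it back out of a string; the sketch just works with the code as written.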
Example of Multiple Queued Requests
You can also see how multiple requests are handled sequentially by queuing more than one request:
async function exampleMultipleRequests() {
const urls = [
"https://jsonplaceholder.typicode.com/posts/1",
"https://jsonplaceholder.typicode.com/posts/2",
"https://jsonplaceholder.typicode.com/posts/invalid-url", // This will fail
];
for (const url of urls) {
queueRequest(url, { method: "GET" })
.then((data) => {
console.log(`Success for ${url}:`, data);
})
.catch((error) => {
console.error(`Error for ${url}:`, error);
});
}
}
exampleMultipleRequests();
Logs for Multiple Requests:
Success Case:
Success for https://jsonplaceholder.typicode.com/posts/1:
{ "userId": 1, "id": 1, "title": "sunt aut facere...", "body": "quia et suscipit..." }
Success for https://jsonplaceholder.typicode.com/posts/2:
{ "userId": 1, "id": 2, "title": "qui est esse", "body": "est rerum tempore..." }
Error Case:
Error for https://jsonplaceholder.typicode.com/posts/invalid-url:
Error: HTTP error! Status: 404
How This Works:
Sequential Processing: Even though multiple requests are made in quick succession, they are processed one after another with a 300ms delay between each.
Backoff in Action: If the API returns a 429 Too Many Requests status, the request is retried with exponential backoff.
Logs for Success and Failure: Each request logs either the successful result or the error details.
This example gives you a full picture of how to use the request queue and exponential backoff in practice, with clear logs showing how both success and error scenarios are handled.
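One refinement worth knowing about, though not implemented in the code above: adding random jitter to the backoff so that many clients rate-limited at the same moment don't all retry in lockstep. A sketch of the common "equal jitter" variant, with a hypothetical helper name:

```typescript
// Exponential backoff with "equal jitter": half fixed, half random.
function backoffWithJitter(base: number, attempt: number): number {
  const exponential = base * 2 ** attempt;
  return exponential / 2 + Math.random() * (exponential / 2);
}

// With a 300 ms base, attempt 3 yields a delay in the range [1200, 2400) ms.
console.log(backoffWithJitter(300, 3));
```

Dropping this into fetchWithExponentialBackoff would only mean computing the sleep duration with a helper like this instead of passing backoff * 2 straight through.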
Conclusion
Managing API requests efficiently is critical for creating robust web applications. Using exponential backoff ensures that your application can handle rate limits and temporary network issues gracefully. Meanwhile, a request queue helps you control the flow of API calls, ensuring that you don't overwhelm the server with too many requests at once.
This combination of exponential backoff and a request queue will help you build resilient applications that handle network conditions more effectively and improve overall reliability.
Written by Abdulwasiu Abdulmuize