Understanding HTTP Connections and Browser-Imposed TCP Limits

k.Ankit
3 min read

Recently, I was exploring HTTP connections and the TCP connection limits imposed by browsers. Here, I'll summarize what I learned about these topics. Let's begin by understanding an HTTP connection.

HTTP: an overview

HTTP, which stands for "HyperText Transfer Protocol," is an extensible protocol defined by a set of rules for communication between nodes in a network. It relies on resources, URIs, a simple message structure, and a client-server communication flow. HTTP is an application-layer protocol that is typically carried over TCP or TLS-encrypted TCP connections (these are transport-layer protocols; in theory, any reliable transport protocol could be used).
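
To make this concrete, here is a minimal sketch (my own, not from any spec or the sources linked below) that sends a raw HTTP/1.1 request over a plain TCP socket using Python's standard library. example.com and port 80 are placeholders for any server that accepts unencrypted HTTP.

```python
import socket

# Open a plain TCP connection to the server (port 80 = unencrypted HTTP).
with socket.create_connection(("example.com", 80)) as sock:
    # An HTTP request is just structured text sent over that TCP stream.
    request = (
        "GET / HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    sock.sendall(request.encode("ascii"))

    # Read the response until the server closes the connection.
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

# Print just the status line and the response headers.
print(response.decode("utf-8", errors="replace").split("\r\n\r\n")[0])
```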

In a typical HTTP connection, the following components are involved:

  1. Client

  2. Proxies

  3. Server

The client always initiates a request to the server. The request passes through multiple computers, routers, and other intermediaries; those operating at the application level are known as proxies. Proxies act as gateways and serve purposes such as caching, filtering, and load balancing. The server then reads the request and responds accordingly.

Note that HTTP is stateless but not sessionless: there is no inherent link between two requests carried over the same connection. However, HTTP cookies provide a way to maintain sessions; the server sets a cookie with a Set-Cookie header in its response, and the client sends it back in a Cookie header on subsequent requests.
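
As a rough illustration of how cookies add sessions on top of stateless HTTP, the sketch below uses the requests library and the public httpbin.org test service; both are assumptions of mine, not something referenced in this post.

```python
import requests

# A Session persists cookies across requests, layering "state" on top of
# stateless HTTP. httpbin.org is used here purely as a demo endpoint.
with requests.Session() as session:
    # The server answers with a Set-Cookie response header...
    session.get("https://httpbin.org/cookies/set/session_id/abc123")

    # ...and the session automatically sends it back in a Cookie request header.
    reply = session.get("https://httpbin.org/cookies")
    print(reply.json())  # {'cookies': {'session_id': 'abc123'}}
```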

Max parallel TCP connections per domain

HTTP runs over an underlying transport protocol that must be reliable, which is why TCP is commonly used. By default, HTTP/1.0 opens a separate TCP connection for each request-response pair. HTTP/1.1 later introduced persistent connections and pipelining, allowing multiple requests to be sent over a single connection. This is controlled by the Connection header, set to either keep-alive (persistent, the default in HTTP/1.1) or close.
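
Here is a small sketch of a persistent connection using Python's http.client: one TCP connection is reused for several sequential requests. The Connection: keep-alive header is written out explicitly for clarity, although HTTP/1.1 connections are persistent by default; example.com is just a placeholder host.

```python
import http.client

# A single HTTPS connection (TLS-encrypted TCP) reused for several requests,
# which is exactly what HTTP/1.1 persistent connections allow.
conn = http.client.HTTPSConnection("example.com")

for _ in range(3):
    conn.request("GET", "/", headers={"Connection": "keep-alive"})
    response = conn.getresponse()
    response.read()  # the body must be fully read before reusing the connection
    print(response.status, response.getheader("Connection"))

conn.close()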

When you open a tab and visit a website, your browser sends an HTTP request message to the server, and the server replies with an HTTP response message. If you open multiple tabs visiting different websites, each visit is a separate HTTP transaction, and browsers can run many such transactions concurrently. Even within a single website, the browser typically has to issue multiple requests to the same server (for the HTML, scripts, images, and so on).

Since HTTP/1.1 does not allow two requests to be served on the same TCP connection at the same time (responses must come back in order, and pipelining is rarely enabled in practice), each in-flight request occupies a dedicated TCP connection to the server while it waits for its response. If a page triggers a large number of requests to the same domain, this can flood the server with TCP connections. To prevent this, browsers limit the number of parallel connections to each domain; Chrome, for example, hardcodes this limit to 6 per host. The limit is a trade-off between latency and server cost: if it is too low, page latency increases; if it is too high, server cost and hardware requirements rise and other users' experience can suffer.
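
Browsers enforce this limit internally, but the idea can be sketched with a semaphore: at most six requests to a given host are in flight at once, and the rest wait in a queue. The code below is purely illustrative of that queueing behaviour, not how Chrome actually implements it, and example.com is a placeholder.

```python
import concurrent.futures
import threading
import urllib.request

# Illustrative only: cap in-flight requests to one host at 6, mirroring
# Chrome's well-known per-host connection limit.
PER_HOST_LIMIT = 6
host_slots = threading.Semaphore(PER_HOST_LIMIT)

def fetch(url: str) -> int:
    with host_slots:  # requests beyond the limit wait here, like a browser queue
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.status

urls = ["https://example.com/"] * 20
with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
    print(list(pool.map(fetch, urls)))  # at most 6 fetches run at any moment
```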

The number of TCP connections can be reduced to one by enabling HTTP/2. HTTP/2 lets a single TCP connection carry multiple requests and responses simultaneously through multiplexing. However, this can increase backend cost, so the choice of connection strategy depends on the application's requirements.
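
For comparison, here is a quick sketch with the third-party httpx library (my choice, installed with its http2 extra), which can negotiate HTTP/2 so that all requests share one multiplexed connection, provided the server supports it. www.google.com is used only because it is a well-known HTTP/2-capable host.

```python
import httpx  # pip install 'httpx[http2]'

# With http2=True, httpx negotiates HTTP/2 where available, so these five
# requests can be multiplexed over a single TCP connection.
with httpx.Client(http2=True) as client:
    responses = [client.get("https://www.google.com/") for _ in range(5)]

for r in responses:
    print(r.http_version, r.status_code)  # e.g. "HTTP/2 200"
```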

That's the end of the blog. I'll try to write more posts summarizing my learnings.

Stay tuned :)

Other links:

https://developer.mozilla.org/en-US/docs/Web/HTTP/Connection_management_in_HTTP_1.x

https://medium.com/@hnasr/chromes-6-tcp-connections-limit-c199fe550af6

