Boosting Performance: How LinkedIn Slashed Latency by 60%
Page load times are critical for LinkedIn because they directly impact user experience. Rather than relying on a monolith, LinkedIn runs numerous microservices, which it manages with Rest.li, a framework the company developed and has open-sourced.
LinkedIn has more than 5,000 Rest.li-powered endpoints in production. The framework uses JSON as its serialization format for communication between services.
Challenges LinkedIn faced with JSON
JSON is advantageous due to its readability and support across various programming languages. However, it is not an efficient format.
For example:
{
  "id": "123456789",
  "name": "LinkedIn"
}
The format is verbose since it requires specifying the field names to represent any data. This verbosity leads to inefficiencies in several ways:
Intermediate Parsing Inefficiency: Parsing JSON to access data consumes resources, which is problematic for low-latency, large-scale systems.
Space and Bandwidth Consumption: JSON's key-value pair structure is not densely packed, requiring more bytes. This increases network bandwidth usage and machine resource consumption (RAM & CPU).
Compression can mitigate these issues, but it consumes additional resources on both ends for compressing and decompressing. Combined with the time spent on serialization and deserialization itself, this becomes a bottleneck for performance-critical use cases.
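As a rough, self-contained sketch of this trade-off (Python stdlib only; the record shape mirrors the example above, and the batch size of 1,000 is an arbitrary assumption for illustration):

```python
import gzip
import json

# Many records amplify the field-name overhead: the keys "id" and "name"
# are repeated verbatim in every element of the serialized list.
records = [{"id": str(i), "name": "LinkedIn"} for i in range(1000)]
payload = json.dumps(records).encode("utf-8")

# gzip claws back much of that redundancy, but only by spending CPU on
# compression here and decompression again on the receiving side.
compressed = gzip.compress(payload)
print(f"raw JSON: {len(payload)} bytes, gzipped: {len(compressed)} bytes")
```

The compressed payload is far smaller, but neither side can touch the data until it has paid for a gzip pass plus a full JSON parse.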
Finding an Alternative
LinkedIn's criteria for a replacement format:
Compact Payload Size: Conserves bandwidth and reduces latencies.
Efficient Serialization and Deserialization: Enhances throughput.
Cross-Language Support: Ensures compatibility across various programming languages.
What’s the Alternative?
Google Protocol Buffers (Protobuf)
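To see why Protobuf payloads are so much more compact, here is a minimal, hand-rolled sketch of its wire format in Python. This is for illustration only: real services use classes generated by `protoc` from a `.proto` schema, and the field numbers 1 and 2 are assumptions made up for this example.

```python
import json

def encode_len_delimited(field_number: int, value: bytes) -> bytes:
    """Encode one length-delimited Protobuf field (wire type 2).
    Simplified sketch: assumes the length fits in a single varint byte (< 128)."""
    tag = (field_number << 3) | 2  # wire type 2 = length-delimited (strings, bytes)
    return bytes([tag, len(value)]) + value

# The JSON example from above, encoded as Protobuf: single-byte field
# *numbers* (1, 2) replace the repeated field *names* ("id", "name").
proto_payload = (
    encode_len_delimited(1, b"123456789")   # id = "123456789"
    + encode_len_delimited(2, b"LinkedIn")  # name = "LinkedIn"
)

json_payload = json.dumps({"id": "123456789", "name": "LinkedIn"}).encode("utf-8")
print(f"Protobuf: {len(proto_payload)} bytes, JSON: {len(json_payload)} bytes")
```

The binary encoding also parses without scanning for braces, quotes, and commas, which is where the serialization and deserialization savings come from.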
How Did They Roll It Out?
Added Protobuf support in Rest.li.
Updated all services to the latest version of Rest.li to support both JSON and Protobuf.
Clients set the Content-Type header to application/x-protobuf2 and encode requests in Protobuf before sending them to the server.
If the server supports Protobuf, it responds with a Protobuf-encoded response, with the Content-Type header also set to application/x-protobuf2.
This allowed the rollout to happen gradually, with an easy rollback path: any client or server that did not yet support Protobuf simply kept using JSON.
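The fallback scheme described above can be sketched as a tiny content-negotiation function. This is a hypothetical helper, not Rest.li's actual API; only the application/x-protobuf2 media type comes from the source.

```python
PROTOBUF_MIME = "application/x-protobuf2"
JSON_MIME = "application/json"

def choose_response_type(request_headers: dict, server_supports_protobuf: bool) -> str:
    """Pick the response encoding: Protobuf only when both sides opt in,
    otherwise fall back to JSON. The JSON fallback is what makes the
    gradual rollout safe and the rollback trivial."""
    if server_supports_protobuf and request_headers.get("Content-Type") == PROTOBUF_MIME:
        return PROTOBUF_MIME
    return JSON_MIME

# A Protobuf-capable client talking to an upgraded server:
assert choose_response_type({"Content-Type": PROTOBUF_MIME}, True) == PROTOBUF_MIME

# Rolling back is just flipping the server flag (or the client header):
assert choose_response_type({"Content-Type": PROTOBUF_MIME}, False) == JSON_MIME
assert choose_response_type({"Content-Type": JSON_MIME}, True) == JSON_MIME
```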
Improvements and Impact
- Achieved a 60% improvement in latencies for services handling large payloads.
Note: LinkedIn uses Protobuf as the serialization format over Rest.li's REST framework; it has not adopted gRPC yet.
Written by
SubashMohan