System Design

Day 2 - Message Queue, Monolithic and Microservices
Message Queue
Message queues are mainly used for communication between services in a microservice architecture: services exchange information asynchronously by placing messages into a queue data structure.
How a Message Queue Works
A producer creates a message and sends it to the messaging/task queue. From the queue, the message is assigned to a server (say, Server X), and the data is stored in the database. The queue then waits for a response from that server. If the server takes too long to respond or becomes unresponsive, a check is sent to see whether the server is still alive. If there is no response, the task is reassigned to another server via the load balancer, which picks a suitable server based on its load-balancing strategy. That server then processes the data, and the result is delivered to the consumer. That is how a message queue works under the hood; we don't need to build our own load balancer or notifier, because everything is handled inside the message queue system.
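As a toy illustration of the flow above, the sketch below uses Python's standard-library `queue` as a stand-in for the broker. The worker functions and the "ack" convention are hypothetical, not a real broker API: a task handed to an unresponsive worker is requeued and redelivered to a healthy one.

```python
import queue

# In-memory stand-in for the broker's task queue (toy example).
tasks = queue.Queue()
tasks.put({"id": 1, "payload": "process order"})

def deliver(q, worker):
    """Hand the next task to a worker; requeue it if the worker never acks."""
    task = q.get()
    acked = worker(task)
    if not acked:
        q.put(task)   # no ack: the broker reassigns the task to another worker
        return None
    return task       # acked: the task is done and removed for good

def unresponsive_worker(task):
    return False      # simulates a server that stopped responding

def healthy_worker(task):
    return True       # processed successfully, acknowledge

deliver(tasks, unresponsive_worker)    # first delivery fails, task is requeued
done = deliver(tasks, healthy_worker)  # redelivered and processed
```

Real brokers implement the same idea with acknowledgements and visibility or delivery timeouts rather than boolean return values.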
Core Components
Producer: Creates and sends messages to the queue.
Queue: Stores messages until a consumer retrieves them. Messages persist in the queue until explicitly deleted.
Consumer: Retrieves and processes messages from the queue.
Broker: The messaging server that manages queues and routes messages.
Benefits
Decoupling & Resilience: Producers and consumers operate independently, tolerating failures
Scalability: Brokers shard queues/topics across nodes, supporting elastic growth
Asynchronous Processing: Enables nonblocking flows and efficient resource utilization
Challenges
Complexity: Distributed brokers require configuration, monitoring, and maintenance
Latency & Throughput Trade‑Offs: Durability and delivery guarantees can increase latency; tuning is essential
Examples of message queue systems
1. RabbitMQ
2. Apache Kafka
3. Amazon SQS, etc.
Monolithic and Microservices
Monolithic Architecture
In this architecture, all services are encapsulated in a single application deployed on one machine. We can still scale out by running multiple instances of the monolith, similar to horizontal scaling, so that traffic isn't limited to a single server. The main point is that all services, such as the Order service, Payment service, Orders list service, and so on, are deployed inside one server. This method has advantages and disadvantages, which we discuss below.
Advantages
Single Codebase & Process: Because everything lives in one codebase and one process, the latency of calls between services is very low. All modules are compiled and deployed together.
Tight Coupling: Components share memory space and resources
Synchronous Calls: Inter-module communication uses internal method or function calls, which makes communication between modules very fast.
Simplicity: One deployable artifact and unified technology stack reduce initial complexity
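To make the "synchronous in-process calls" point concrete, here is a toy monolith (the service names are illustrative, not from any real codebase): every "service" is just a function in the same process, so one service invokes another with a plain function call.

```python
# Toy monolith: all "services" live in one codebase and one process.

def payment_service(order):
    # In a monolith this is just a function call: no network hop, no
    # serialization, effectively zero latency.
    order["paid"] = True
    return order

def order_service(item):
    order = {"item": item, "paid": False}
    return payment_service(order)  # synchronous in-process call

order = order_service("book")
```

The flip side is that these functions share one deployable artifact: changing `payment_service` means redeploying everything.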
Disadvantages
Scalability Limits: Scaling usually means vertical upgrades (more CPU/RAM); horizontal scaling means duplicating the entire application across more servers, which costs more money.
Single Point of Failure: If any one component or service crashes, the entire system can go down.
Slow Deployments: A change in any module mandates redeploying the entire application, slowing release cycles
Microservices Architecture
In this architecture, individual services are deployed on different instances or servers. They may be connected to the same database, or each may have its own. This architecture allows for easy scaling, both horizontally and vertically: we can add more instances of a particular service, or add more resources to a specific instance so it can handle as many requests as possible. The main caveat is that the system must be well designed, because inter-service calls add latency, and more latency means longer loading times, which is not ideal. For inter-service communication, we can use REST, gRPC (remote procedure calls), or message queues like RabbitMQ and Kafka.
Characteristics
Independent Deployability: Each service can be built, tested, and deployed separately
Decentralized Data Management: Services own private data stores (database‑per‑service pattern) to enforce loose coupling
Communication Patterns
Synchronous APIs: REST or gRPC calls routed through an API Gateway that handles authentication, routing, and protocol translation
Asynchronous Messaging: Event‑driven interactions via message queues or streams for loose coupling and resilience
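A minimal sketch of the synchronous pattern, using only the Python standard library (the "payment" endpoint and its JSON reply are made up for illustration): one toy service exposes a REST-style HTTP endpoint, and a caller in another "service" invokes it synchronously over the network.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical "payment" microservice exposing a REST endpoint.
class PaymentHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "paid"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), PaymentHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "order" service calls the payment service synchronously over HTTP.
url = f"http://127.0.0.1:{server.server_port}/charge"
with urllib.request.urlopen(url) as resp:
    reply = json.loads(resp.read())

server.shutdown()
```

Unlike the in-process calls of a monolith, this request crosses the network, which is where the extra latency and partial-failure scenarios discussed below come from; in production an API Gateway typically sits in front of such endpoints.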
Advantages
Scalability: Scale services independently based on workload, either vertically or horizontally.
Fault Isolation: Failures are contained within individual services rather than taking down the whole system.
Organizational Alignment: Small teams own end‑to‑end service development.
Disadvantages
Operational Complexity: Managing many services, configurations, and deployments increases overhead.
Network Latency & Reliability: Inter‑service calls add latency and introduce partial‑failure scenarios.
Data Consistency: Ensuring consistency across distributed data stores requires eventual consistency and careful design.
Written by Manoj Kumar