Module 2: Compute in the Cloud

Kumar Chaudhary
23 min read

Introduction to Amazon EC2

Compute refers to the processing power needed to run applications, manage data, and perform calculations. In the cloud, this power is available on-demand. You can access it remotely without owning or maintaining physical hardware. Essentially, compute in the cloud means creating virtual machines with a cloud provider to run applications and tasks over the internet. In the following lessons, you will gain a thorough understanding of Amazon Elastic Compute Cloud (Amazon EC2), a powerful compute service from AWS, as you explore its flexibility, cost-effectiveness, and scalability.

In this lesson, you will learn how to do the following:

  • Describe how compute resources are provisioned and managed in the cloud.

  • Compare the benefits and challenges of using virtual servers with managing physical servers on premises.

  • Identify the concept of multi-tenancy in Amazon EC2.

In this lesson, you will get an overview of Amazon EC2. You will learn about provisioning and managing virtual servers to host your applications and business resources.

Amazon EC2

Amazon EC2 is more flexible, cost-effective, and faster than managing on-premises servers. It offers on-demand compute capacity that can be quickly launched, scaled, and terminated, with costs based only on active usage.

The flexibility of Amazon EC2 allows for faster development and deployment of applications. You can launch as many or as few virtual servers as needed and configure security, networking, and storage. You can also scale resources up or down based on usage, such as handling high traffic or compute-heavy tasks.

Key takeaways: Comparing on-premises and cloud resources

When designing infrastructure for your business, selecting the right resources can significantly affect your efficiency, flexibility, and overall costs. Review the key differences between on-premises and cloud resource management.

Challenges of on-premises resources

Imagine that you're responsible for designing your company's infrastructure to support new websites. With traditional on-premises resources, you must purchase hardware upfront, wait for delivery, and handle installation and configuration. This process is time-consuming, costly, and inflexible because you're locked into a specific capacity that might not align with changing demands.

Benefits of using cloud resources

In contrast, with Amazon EC2, you can quickly launch, scale, and stop instances based on your needs without the delays and upfront costs associated with traditional on-premises resources.

How Amazon EC2 works

You’ve learned that AWS manages complex infrastructure, offering on-demand compute capacity that’s available whenever you need it. You can request EC2 instances and have them ready to use within minutes. But how do you actually get started?

Step 1: Launch an instance

When launching an EC2 instance, you start by selecting an Amazon Machine Image (AMI), which defines the operating system and might include additional software. You also choose an instance type, which determines the underlying hardware resources, such as CPU, memory, and network performance.

Step 2: Connect to the instance

You can connect to an EC2 instance in various ways. Applications can interact with services running on the instance over the network.

Users or administrators can connect using SSH for Linux instances or Remote Desktop Protocol (RDP) for Windows instances. Alternatively, AWS services like AWS Systems Manager offer a secure and simplified method for accessing instances.

Step 3: Use the instance

After you are connected to the instance, you can begin using it to run commands, install software, add storage, organize files, and perform other tasks.
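As a rough sketch, the three steps above map to commands like the following. The AMI ID, key pair name, security group ID, and IP address are placeholders for illustration, not real resources:

```shell
# Step 1: Launch an instance from an AMI (all IDs below are placeholders).
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t3.micro \
    --key-name my-key-pair \
    --security-group-ids sg-0123456789abcdef0

# Step 2: Connect to the instance over SSH once it is running.
ssh -i my-key-pair.pem ec2-user@<instance-public-ip>

# Step 3: Use the instance, for example to install a web server.
sudo yum install -y httpd
```

In practice, you would run these from a machine with the AWS CLI installed and credentials configured, or use the AWS Management Console to perform the same steps visually.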

Test your skills

How does Amazon EC2 compare to running servers on premises?

  • It is more expensive but offers more control.

  • It is more flexible, cost-effective, and quicker to get started.

  • It requires more time to set up and maintain.

  • It is only useful for large businesses.

COMPUTE IN THE CLOUD

Amazon EC2 Instance Types

In this lesson, you will learn how to do the following:

  • Explain the different EC2 instance types and their characteristics.

  • Identify appropriate use cases for each EC2 instance type.

Amazon EC2 offers a broad range of instance types, each tailored to meet specific use case requirements. These instances come with varying combinations of CPU, memory, storage, and networking capabilities, so you can choose the right mix of resources to optimize performance for your applications.

Key takeaways: EC2 instance types

Whether you're running a simple web service or complex data processing tasks, Amazon EC2 provides the flexibility to select the ideal instance type for your needs.

EC2 instance types fall into five main categories.

General purpose

General purpose instances provide a balanced mix of compute, memory, and networking resources. They are ideal for diverse workloads, such as web services and code repositories, and for applications with uncertain performance requirements.

Compute optimized

Compute optimized instances are ideal for compute-intensive tasks, such as gaming servers, high performance computing (HPC), machine learning, and scientific modeling.

Memory optimized

Memory optimized instances are used for memory-intensive tasks like processing large datasets, data analytics, and databases. They provide fast performance for memory-heavy workloads.

Accelerated computing

Accelerated computing instances use hardware accelerators, like graphics processing units (GPUs), to efficiently handle tasks, such as floating-point calculations, graphics processing, and machine learning.

Storage optimized

Storage optimized instances are designed for workloads that require high performance for locally stored data, such as large databases, data warehousing, and I/O-intensive applications.

Test your skills

  1. A financial institution is running a real-time analytics application that processes large datasets stored across multiple servers to provide quick query results. The application requires fast processing of data with a focus on handling large volumes of information efficiently.

    Which Amazon EC2 instance type would be the BEST choice for this task?

    • General purpose

    • Compute optimized

    • Storage optimized

    • Memory optimized

  2. A retail company is setting up a solution to analyze historical sales data that is stored locally. The solution requires fast access to large datasets with consistent, high disk throughput for quick data retrieval.

    Which Amazon EC2 instance type would be the MOST suitable for this use case?

    • General purpose

    • Compute optimized

    • Accelerated computing

    • Storage optimized

Demo: Launching an Amazon EC2 Instance

In this lesson, you will learn how to do the following:

  • Identify the key configurations needed when setting up an EC2 instance.

  • Explain how an AMI maintains consistency and efficiency when scaling applications.

Amazon EC2 demonstration

If you're eager to understand how Amazon EC2 works, this is your chance to see it in action! In this demo, you learn about the basic steps of launching an EC2 instance. By the end of this demo, you will have a fully functional EC2 instance.

Amazon Machine Images

In the demo, you got a quick introduction to AMIs. AMIs are pre-built virtual machine images that contain the basic components needed to start an instance. Now, let's explore AMIs in more detail.

  • AMI components

    An AMI includes the operating system, storage setup, architecture type, permissions for launching, and any extra software that is already installed. You can use one AMI to launch several EC2 instances that all have the same setup.

  • Three ways to use AMIs

    AMIs can be used in three ways. First, you can create your own by building a custom AMI with specific configurations and software tailored to your needs. Second, you can use pre-configured AWS AMIs, which are set up for common operating systems and software. Lastly, you can purchase AMIs from the AWS Marketplace, where third-party vendors offer specialized software designed for specific use cases.

  • AMI repeatability

    AMIs provide repeatability through a consistent environment for every new instance. Because configurations are identical and deployments automated, development and testing environments are consistent. This helps when scaling, reduces errors, and streamlines managing large-scale environments.

Test your skills

  1. What are the required configurations when launching an Amazon EC2 instance for a web server? (Select THREE.)

    • Amazon Machine Image (AMI)

    • Load balancing

    • Instance type

    • Permissions

    • Storage

    • Instance termination behavior

  2. What is an Amazon Machine Image (AMI) used for when launching an Amazon EC2 instance?

    • To choose the instance size

    • To configure networking settings

    • To pre-configure the operating system and software

    • To store instance data

Amazon EC2 Pricing

In this lesson, you will learn how to do the following:

  • Explain the available Amazon EC2 pricing options.

  • Describe when to use each pricing option based on specific use cases.

  • Describe Amazon EC2 Capacity Reservations and Reserved Instance (RI) flexibility.

In this lesson, you will learn about the pricing options for Amazon EC2. This information will help you find the most cost-effective solution for your workloads—whether you're just getting started or aiming to maximize savings on long-term usage.

Key takeaways: AWS pricing options

By understanding the different Amazon EC2 pricing options, you can make more informed decisions and optimize your costs based on your specific usage needs. Review each of the following pricing options.

On-Demand Instances:

Pay only for the compute capacity you consume with no upfront payments or long-term commitments required.

Reserved Instances:

Save up to 75 percent by committing to a 1-year or 3-year term for predictable workloads using specific instance families and AWS Regions.

Spot Instances:

Run workloads on spare compute capacity at up to 90 percent off the On-Demand price, with the trade-off that AWS can interrupt the instance when it reclaims the capacity.

Savings Plans:

Save up to 72 percent across a variety of instance types and services by committing to a consistent usage level for 1 or 3 years.

Dedicated Hosts:

Reserve an entire physical server for your exclusive use. This option offers full control and is ideal for workloads with strict security or licensing needs.

Dedicated Instances:

Pay for instances running on hardware dedicated solely to your account. This option provides isolation from other AWS customers.
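To make the trade-offs concrete, here is a small sketch comparing monthly costs under three of the options above. The hourly rate is a made-up example, not a real AWS price, and the discounts use the maximum percentages quoted above:

```python
# Illustrative cost comparison of EC2 pricing options.
# The hourly rate below is hypothetical, not a real AWS price.
ON_DEMAND_RATE = 0.10  # dollars per instance-hour (made-up example)

def monthly_cost(hours: float, discount: float = 0.0) -> float:
    """Cost of one instance for the given hours, after a fractional discount."""
    return hours * ON_DEMAND_RATE * (1 - discount)

hours = 730  # roughly one month of continuous use

on_demand = monthly_cost(hours)                # no commitment, full rate
reserved = monthly_cost(hours, discount=0.75)  # up to 75% off with a 1- or 3-year term
spot = monthly_cost(hours, discount=0.90)      # up to 90% off, but interruptible

print(f"On-Demand: ${on_demand:.2f}")
print(f"Reserved:  ${reserved:.2f}")
print(f"Spot:      ${spot:.2f}")
```

The point of the comparison is that deeper discounts come with trade-offs: Reserved Instances require a term commitment, and Spot Instances can be interrupted.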

Dedicated Hosts and Dedicated Instances

Dedicated Hosts provide exclusive use of physical servers, offering full control over instance placement and resource allocation. This makes them ideal for security- or compliance-driven workloads. But what if you don’t need that level of control?

You could use Dedicated Instances, which offer physical isolation from other AWS accounts while still benefiting from the flexibility and cost savings of shared infrastructure.

The key difference is that Dedicated Instances provide isolation without you choosing which physical server they run on. Dedicated Hosts give you an entire physical server for exclusive use, providing complete control over instance placement and resource allocation.

Ultimately, the right choice depends on your specific workload requirements and the level of control you need over your infrastructure.

Dedicated Hosts offer exclusive use of a server with full control, whereas Dedicated Instances provide isolation without server control.

More about cost optimization

To optimize costs and resource allocation, AWS offers a range of pricing options including Savings Plans, Amazon EC2 Capacity Reservations, and Reserved Instances (RIs). Each of these is tailored to meet different workload and capacity needs.

Savings Plans

Good for: Predictable workloads

Savings Plans offer discounts compared to On-Demand rates in exchange for a commitment to use a specified amount of compute power (measured per hour) over a one-year or three-year period. They provide flexible pricing for Amazon EC2, AWS Fargate, AWS Lambda, and Amazon SageMaker AI usage, regardless of instance type or AWS Region. Payment options include All upfront, Partial upfront, or No upfront.

Capacity Reservations

Good for: Critical workloads with strict capacity requirements

With Amazon EC2 Capacity Reservations, you reserve compute capacity in a specific Availability Zone for critical workloads. Reservations are billed at the On-Demand rate whether or not you run instances in them, so you pay for the reserved capacity even when it sits idle. This is ideal for current or future business-critical workloads with strict capacity requirements.

Reserved Instance flexibility

Good for: Steady-state workloads with predictable usage

RIs offer up to 75 percent savings over On-Demand pricing by applying discounts across instance sizes and multiple Availability Zones within a Region. When you purchase a Reserved Instance (RI), AWS automatically applies the discount to other instance sizes within the same family based on the instance size footprint. It also applies the discount across multiple Availability Zones for enhanced resource distribution and fault tolerance.
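The "instance size footprint" works through normalization factors: each size in a family is worth a fixed number of units (following AWS's published table, where small = 1 unit), and the RI discount covers any combination of sizes that adds up to the reserved footprint. A minimal sketch:

```python
# How an RI discount applies across sizes within the same instance family.
# Normalization factors follow AWS's published table (small = 1 unit).
NORMALIZATION = {
    "nano": 0.25, "micro": 0.5, "small": 1, "medium": 2,
    "large": 4, "xlarge": 8, "2xlarge": 16,
}

def ri_units(size: str, count: int = 1) -> float:
    """Footprint of a reservation, in normalized units."""
    return NORMALIZATION[size] * count

# An RI purchased for one large instance (4 units) could instead cover
# two medium instances (2 units each) in the same family.
covers_two_mediums = ri_units("large") == ri_units("medium", count=2)
```

This is why an RI bought for one size can still discount a different mix of sizes, as long as they stay within the same instance family and footprint.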

Test your skills

  1. A financial services company needs to run sensitive applications that handle confidential customer data and require compliance with industry regulations. They need complete control over the physical server, including instance placement and resource allocation.

    Which pricing option should they choose?

    • Dedicated Hosts

    • Savings Plans

    • On Demand

    • Spot Instances

  2. A startup is running a batch processing workload that can tolerate occasional interruptions, and they want to reduce costs by taking advantage of unused Amazon EC2 capacity.

    Which pricing option would offer them the most savings?

    • Reserved Instances

    • Savings Plans

    • On Demand

    • Spot Instances

  3. A customer is building a new application and is unsure of their usage patterns but expects to grow and stabilize usage over time. They want to start without a long-term commitment.

    Which pricing option should they use?

    • Reserved Instances

    • Savings Plans

    • On Demand

    • Spot Instances

AUTO SCALING AND LOAD BALANCING

Scaling Amazon EC2

In this lesson, you will learn how to do the following:

  • Recognize the concepts of scalability and elasticity as they apply to AWS.

  • Describe how AWS can help businesses adjust compute capacity based on varying demands.

If you've ever tried to access a website that wouldn't load and kept timing out, it might have been overwhelmed by more requests than it could handle. In this lesson, you will explore how scalability helps you manage fluctuating demand by adjusting compute capacity.

Key takeaways

Scalability is about a system’s potential to grow over time, whereas elasticity is about the dynamic, on-demand adjustment of resources.

Scalability

Scalability refers to the ability of a system to handle an increased load by adding resources. You can scale up by adding more power to existing machines, or you can scale out by adding more machines. Scalability focuses on long-term capacity planning to make sure that the system can grow and accommodate more users or workloads as needed.

Elasticity

Elasticity is the ability to automatically scale resources up or down in response to real-time demand. A system can then rapidly adjust its resources, scaling out during periods of high demand and scaling in when the demand decreases. Elasticity provides cost efficiency and optimal resource usage at any given moment.

Amazon EC2 Auto Scaling

Amazon EC2 Auto Scaling automatically adjusts the number of EC2 instances based on changes in application demand, providing better availability. It offers two approaches. Dynamic scaling adjusts in real time to fluctuations in demand. Predictive scaling preemptively schedules the right number of instances based on anticipated demand.

Example: Amazon EC2 Auto Scaling

With EC2 Auto Scaling, you maintain the desired amount of compute capacity for your application by dynamically adjusting the number of EC2 instances based on demand. You can create Auto Scaling groups, which are collections of EC2 instances that can scale in or out to meet your application’s needs.

An Auto Scaling group is configured with the following three key settings.

  1. MINIMUM CAPACITY

    The minimum capacity defines the least number of EC2 instances required to keep the application running. This makes sure that the system never scales below this threshold. It's the number of EC2 instances that launch immediately after you have created the Auto Scaling group.

    In this example, the minimum capacity is four EC2 instances.

  2. DESIRED CAPACITY

    The desired capacity is the ideal number of instances needed to handle the current workload, which Auto Scaling aims to maintain. If you do not specify the desired number of EC2 instances in an Auto Scaling group, the desired capacity defaults to your minimum capacity.

    In this example, the desired capacity is six EC2 instances.

  3. MAXIMUM CAPACITY

    The maximum capacity sets an upper limit on the number of instances that can be launched, preventing over-scaling and controlling costs. For example, you might configure the Auto Scaling group to scale out in response to increased demand.

    In this example, a maximum of 12 EC2 instances can be launched. Amazon EC2 Auto Scaling will scale between the minimum and maximum number of instances.

    Because Amazon EC2 Auto Scaling uses EC2 instances, you pay for only the instances you use, when you use them. This gives you a cost-effective architecture that provides the best customer experience while reducing expenses.
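The three capacity settings above can be modeled as a simple clamp: whatever capacity scaling logic requests, the group never goes below the minimum or above the maximum. This is an illustration of the settings, not actual Auto Scaling code:

```python
# Minimal model of how an Auto Scaling group keeps capacity within bounds.
def clamp_capacity(requested: int, minimum: int, maximum: int) -> int:
    """Scale to the requested number of instances, within group limits."""
    return max(minimum, min(requested, maximum))

MIN_CAP, DESIRED_CAP, MAX_CAP = 4, 6, 12  # values from the example above

normal = clamp_capacity(DESIRED_CAP, MIN_CAP, MAX_CAP)  # normal load: 6
spike = clamp_capacity(20, MIN_CAP, MAX_CAP)            # demand spike: capped at 12
quiet = clamp_capacity(1, MIN_CAP, MAX_CAP)             # low demand: never below 4
```

The real service adjusts the requested number dynamically (or predictively), but the min/max bounds behave exactly like this clamp.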

Test your skills

    1. What is the primary benefit of scalability and elasticity in AWS?

      • The ability to manually adjust resources based on peak usage

      • The ability to grow and shrink resources dynamically based on real-time demand

      • The ability to create fixed resources that never change in size

      • The ability to permanently increase resource capacity for long-term growth

    2. What is the main reason for deploying Amazon EC2 instances across multiple Availability Zones?

      • To increase the power and speed of each individual instance

      • To provide high availability by allowing instances in different Availability Zones to handle traffic if one Availability Zone fails

      • To decrease the cost of instances by distributing them evenly across AWS Regions

      • To automatically scale instances based on resource usage in each Availability Zone

    3. How does AWS make sure that a business can meet fluctuating demand without over-provisioning resources?

      • By providing fixed resources that are always available

      • By allowing businesses to provision resources that automatically scale based on demand

      • By requiring businesses to purchase excess resources in advance to handle peak demand

      • By offering resources that are always running, regardless of demand

Directing Traffic with Elastic Load Balancing

In this lesson, you will learn how to do the following:

  • Describe the challenge of traffic distribution and scalability in AWS environments.

  • Recognize the benefits of Elastic Load Balancing (ELB) in AWS.

  • Explain the relationship between Amazon EC2 Auto Scaling and ELB in managing AWS resources.

Spreading workloads improves the performance of your applications by preventing any single resource from having to handle the full workload on its own. In this lesson, you will learn how ELB simplifies traffic distribution and management for AWS applications.

Elastic Load Balancing

Elastic Load Balancing (ELB) automatically distributes incoming application traffic across multiple resources, such as EC2 instances, to optimize performance and reliability. A load balancer serves as the single point of contact for all incoming web traffic to an Auto Scaling group. As the number of EC2 instances fluctuates in response to traffic demands, incoming requests are first directed to the load balancer. From there, the traffic is distributed evenly across the available instances.

Although ELB and Amazon EC2 Auto Scaling are distinct services, they work in tandem to enhance application performance and ensure high availability. Together, they enable applications running on Amazon EC2 to scale effectively while maintaining consistent performance.

Key takeaways: ELB benefits

Let's review the main benefits of Elastic Load Balancing and how it enhances the performance, scalability, and management of your AWS environment.

  1. Efficient traffic distribution:

    ELB evenly distributes traffic across EC2 instances, preventing overload on any single instance and optimizing resource utilization.

  2. Automatic scaling:

    ELB scales with traffic and automatically adjusts to changes in demand for a seamless operation as backend instances are added or removed.

  3. Simplified management:

    ELB decouples front-end and backend tiers and reduces manual synchronization. It also handles maintenance, updates, and failover to ease operational overhead.

Routing methods

To optimize traffic distribution, ELB uses several routing methods: Round Robin, Least Connections, IP Hash, and Least Response Time. Each method suits different traffic patterns, supporting efficient traffic management and optimal application performance.

Round Robin

Distributes traffic evenly across all available servers in a cyclic manner.

Least Connections

Routes traffic to the server with the fewest active connections, maintaining a balanced load.

IP Hash

Uses the client’s IP address to consistently route traffic to the same server.

Least Response Time

Directs traffic to the server with the fastest response time, minimizing latency.
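Two of these methods are easy to sketch in a few lines. Round Robin is just a repeating cycle over the servers, and Least Connections picks whichever server currently has the fewest active connections (the server names and counts below are made up):

```python
from itertools import cycle

servers = ["server-a", "server-b", "server-c"]

# Round Robin: hand out servers in a repeating cycle.
rr = cycle(servers)
first_four = [next(rr) for _ in range(4)]  # wraps back to server-a

# Least Connections: pick the server with the fewest active connections.
active = {"server-a": 12, "server-b": 3, "server-c": 7}
least_loaded = min(active, key=active.get)
```

Real load balancers also track health checks and connection draining, but the core selection logic of these two methods looks like this.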

Example: Elastic Load Balancing

Let's look at an example to better understand how Elastic Load Balancing works in cloud computing. In the healthcare industry, particularly in hospitals and medical facilities that provide online appointment booking systems or patient portals, website traffic can vary greatly throughout the day.

  1. INITIAL SETUP

    Low-demand period: At the beginning of the day, only a few patients are accessing the system to book appointments or view medical records. The existing web servers are sufficient to handle the low traffic. This matches the demand, with no need for additional resources at this point.

  2. SCALING UP

    High-demand period: As the day progresses, especially during peak hours, such as early mornings or just before the weekend, more patients access the portal to book appointments, view test results, or contact medical professionals. To handle this surge in demand, the healthcare system automatically scales up the number of servers to help ensure that the system remains responsive and available for all users.

  3. LOAD BALANCING

    A load balancer directs the incoming traffic to different web servers based on their current load. For instance, if one server starts receiving too many requests, the load balancer will route new requests to a less busy server. This makes sure that no single server becomes overwhelmed. The traffic is evenly distributed across available EC2 instances.

By using Elastic Load Balancing and Auto Scaling, the healthcare industry can efficiently manage the varying levels of patient traffic to online services. This provides reliable access to medical portals even during high-demand periods.

Test your skills

  1. How does Elastic Load Balancing (ELB) improve scalability in AWS?

    • It manually adjusts the number of Amazon EC2 instances based on traffic.

    • It automatically routes traffic to instances based on various routing methods.

    • It directly increases the size of Amazon EC2 instances.

    • It creates new Amazon EC2 instances for each request.

  2. Which task does Elastic Load Balancing (ELB) perform?

    • Automatically adjusts the number of Amazon EC2 instances to match demand.

    • Distributes a workload across several Amazon EC2 instances.

    • Removes unneeded Amazon EC2 instances when demand is low.

    • Adds a second Amazon EC2 instance during an online store's popular sale.

Messaging and Queuing

In this lesson, you will learn how to do the following:

  • Describe how Amazon Simple Queue Service (Amazon SQS) facilitates message queuing.

  • Explain how Amazon Simple Notification Service (Amazon SNS) uses a publish-subscribe model to distribute messages.

  • Identify the difference between tightly coupled and loosely coupled architectures.

  • Explain how message queues help improve communication between components.

Ever wonder how busy coffee shops keep everything running smoothly, even when the barista is on break or the cashier is overwhelmed? Well, the same principles apply to software architecture. In this lesson, you will look into how messaging and queuing help prevent slowdowns and failures.

Key takeaways: Decoupling services

In modern application development, reliability and resilience are important. One effective way to achieve this is by adopting a service-oriented approach.

  1. Monolithic applications

    Applications consist of multiple components that work together to transmit data, fulfill requests, and keep the application running smoothly. In a traditional approach to application architecture, the components—such as database logic, web application servers, user interfaces, and business logic—are tightly coupled. This means that if one component fails, it can cause the failure of other components, potentially bringing down the entire application.

  2. Microservices architecture

    To improve application availability and resilience, you can adopt a microservices architecture. In this approach, application components are loosely coupled, meaning that if one component fails, the others continue to function normally. The communication between components remains intact, and the failure of a single component does not impact the entire system. This design promotes greater flexibility and reliability in the application.

Supporting scalable and reliable cloud communication

Amazon EventBridge, Amazon SNS, and Amazon SQS are AWS services that help different parts of an application communicate effectively in the cloud. These services support building event-driven and message-based systems. Together, they help create scalable, reliable applications that can handle high traffic and can enhance communication between components.

EventBridge

EventBridge is a serverless service that helps connect different parts of an application using events, helping to build scalable, event-driven systems. With EventBridge, you route events from sources like custom apps, AWS services, and third-party software to other applications. EventBridge simplifies the process of receiving, filtering, transforming, and delivering events, so you can quickly build reliable applications.

Example: EventBridge

Customers use an online food delivery service to order meals from local restaurants through a mobile app. When a customer places an order, several steps need to happen simultaneously.

  1. Payment processing

    The payment service must verify and process the customer's payment.

  2. Restaurant notification

    The restaurant receives a notification to start preparing the meal.

  3. Inventory management

    The inventory system checks if the ingredients for the order are available.

  4. Delivery dispatch

    A delivery driver is notified to pick up and deliver the meal.

How EventBridge helps: EventBridge can route events, like order placed or payment completed, to the relevant services (payment, restaurant, inventory, and delivery). It can handle high volumes of events during peak times, making sure each service works independently. Even if one service fails, EventBridge will store the event and process it as soon as the service is available again. EventBridge helps provide a smooth and reliable operation across the entire system.
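The routing idea can be sketched as rules that match event patterns and fan events out to targets. EventBridge itself is a managed service; this toy router only illustrates the pattern, and the service names are from the example above:

```python
# Toy event router in the spirit of the food-delivery example.
handled = []

def payment_service(event):
    handled.append(("payment", event["order_id"]))

def restaurant_service(event):
    handled.append(("restaurant", event["order_id"]))

# Each rule pairs an event pattern with the targets that should receive it.
rules = [
    ({"type": "order_placed"}, [payment_service, restaurant_service]),
]

def put_event(event, rules):
    """Deliver an event to every target whose pattern matches it."""
    for pattern, targets in rules:
        if all(event.get(key) == value for key, value in pattern.items()):
            for target in targets:
                target(event)

put_event({"type": "order_placed", "order_id": 42}, rules)
# Both the payment and restaurant services received the event.
```

The real service adds durable storage, retries, and filtering at scale, but the core idea is the same: publishers emit events, and rules decide which targets receive them.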

Amazon SQS

Amazon SQS is a message queuing service that facilitates reliable communication between software components. It can send, store, and receive messages at any scale, making sure messages are not lost and that other services don't need to be available for processing. In Amazon SQS, an application places messages into a queue, and a user or service retrieves the message, processes it, and then removes it from the queue.
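The send-store-receive-delete cycle described above can be sketched with Python's standard-library queue standing in for the managed SQS service. The ticket text is made up for illustration:

```python
from queue import Queue

# Producer and consumer are decoupled: neither waits on the other.
tickets = Queue()

# Producer side: add messages to the queue, even if no consumer is ready.
tickets.put("Ticket 1: login failure")
tickets.put("Ticket 2: slow dashboard")

# Consumer side: drain the queue whenever capacity is available.
resolved = []
while not tickets.empty():
    message = tickets.get()   # receive the message
    resolved.append(message)  # process it
    tickets.task_done()       # remove it from the queue
```

Unlike this in-process queue, SQS persists messages durably across machines, but the producer/consumer decoupling works the same way.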

Example: Amazon SQS

As customer support teams grow and the volume of issues increases, traditional workflows can struggle to keep up. Let's consider how a customer support team might tackle this challenge.

  1. SCENARIO

    A customer support team consists of a support agent and a technical specialist. The support agent is responsible for receiving customer issues, and the technical specialist works on resolving them. This process works well as long as both the agent and specialist are available and coordinated.

  2. CHALLENGE

    However, what happens if the support agent creates a ticket but the technical specialist is busy working on another issue or unavailable? The agent would have to wait until the specialist is free to accept the new ticket, causing delays in resolving customer issues and extending wait times for customers. As the volume of customer issues increases, this process becomes inefficient.

  3. SOLUTION

    To improve efficiency, they implement a queue system using Amazon SQS. The support agent adds customer issues to the queue, creating a backlog. Even if the specialist is busy, the agent can continue adding new issues. The specialist checks the queue, resolves issues, and updates the agent. This system provides a smooth workflow and helps handle higher volumes without delays or bottlenecks.

Amazon SNS

Amazon SNS is a publish-subscribe service that publishers use to send messages to subscribers through SNS topics. In Amazon SNS, subscribers can include web servers, email addresses, Lambda functions, and various other endpoints. You will learn about Lambda in more detail later.
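The publish-subscribe model can be sketched in a few lines. Real SNS delivers to endpoints like email addresses and Lambda functions; here, subscribers are plain Python callbacks and the topic name is made up:

```python
from collections import defaultdict

# Map each topic name to the list of subscriber callbacks.
subscriptions = defaultdict(list)

def subscribe(topic, callback):
    subscriptions[topic].append(callback)

def publish(topic, message):
    """Fan the message out to every subscriber of the topic."""
    for callback in subscriptions[topic]:
        callback(message)

inbox = []
subscribe("new-products", inbox.append)   # one customer opts in to one topic
publish("new-products", "New gadget launched!")
publish("special-offers", "20% off")      # no subscribers, so no deliveries
```

The key property is fan-out: the publisher sends one message to a topic, and every subscriber of that topic receives a copy, while unsubscribed topics deliver nothing.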

Example: Amazon SNS

A company that sells a variety of products is currently sending a single email to all customers with updates on various topics, such as new products, special offers, and upcoming events. Although this method worked initially, customers want to receive only the updates they’re interested in. The current email update is causing customer dissatisfaction and lower engagement.

The solution involves the following three steps.

  1. Segment the communication

    The company decides to divide the communication into three separate topics, including one for new products, one for special offers, and one for events. Each topic will focus on a specific area of interest.

  2. Let customers choose topics

    Customers can subscribe to the topics they care about, such as the following:

    • A customer might subscribe only to new product updates.

    • Another customer might opt only for event notifications.

    • A third customer might choose to subscribe to new product updates and special offers.

  3. Send tailored notifications

    With Amazon SNS, the company can send personalized notifications to subscribers based on their specific interests. Amazon SNS makes sure that these notifications are promptly delivered to the right audience, improving the efficiency and relevance of the communication.

Test your skills

    1. What BEST describes the key difference between tightly coupled and loosely coupled architectures?

      • In a tightly coupled architecture, components are tightly connected and dependent on each other, whereas in a loosely coupled architecture, components can operate independently.

      • Tightly coupled systems are more flexible in adding new components, whereas loosely coupled systems require careful configuration to add new components.

      • Tightly coupled architectures are designed for scalability, whereas loosely coupled systems focus on maintaining high availability.

      • Loosely coupled systems require components to share data directly with each other, whereas tightly coupled systems store data in a central repository.

    2. In a banking system, when customers transfer money, the transaction details are sent from the transaction service to a fraud detection service for verification. Sometimes, the fraud detection service is temporarily down.

      What is the MAIN advantage of using Amazon Simple Queue Service (Amazon SQS) in this banking scenario?

      • It guarantees immediate approval of transactions.

      • It stores transaction details until the fraud detection service can process them, even if the service is down.

      • It speeds up transaction processing by avoiding the use of a buffer.

      • It forces the transaction service and fraud detection service to depend on each other directly.
