AWS Certified Cloud Practitioner exam preparation

Hey everyone, I recently received the AWS Cloud Practitioner certification. This blog is a brief summary of the study material I referred to. I would strongly suggest going through the Exam Preparation course on AWS Skill Builder, as well as a few practice question sets, before attempting the exam.

Introduction to AWS

What is cloud computing?

Cloud computing is the on-demand delivery of IT resources over the internet with pay-as-you-go pricing. On-demand delivery means that AWS has the resources you need, when you need them; you don't need any agreements in advance. With pay-as-you-go pricing, you pay only for the time that you actually use the resources.

Deployment models for cloud computing

When selecting a cloud strategy, a company must consider factors such as required cloud application components, preferred resource management tools, and any legacy IT infrastructure requirements. The three cloud computing deployment models are cloud-based, on-premises, and hybrid.

In a cloud-based deployment model, you can migrate existing applications to the cloud, or you can design and build new applications in the cloud. You can build them using higher-level services that reduce the management, architecting, and scaling requirements of the core infrastructure.

On-premises deployment is also known as a private cloud deployment. In this model, resources are deployed on premises by using virtualization and resource management tools.

In a hybrid deployment, cloud-based resources are connected to on-premises infrastructure. For example, you have legacy applications that are better maintained on premises, or government regulations require your business to keep certain records on premises.

Benefits of Cloud Computing

  • Trade upfront expense for variable expense

    Upfront expense refers to data centers, physical servers, and other resources that you would need to invest in before using them. Variable expense means you only pay for computing resources you consume instead of investing heavily in data centers and servers before you know how you’re going to use them.

  • Save costs to run and maintain data centers

    Computing in data centers often requires you to spend more money and time managing infrastructure and servers.

  • Stop guessing capacity

With cloud computing, you don’t have to predict how much infrastructure capacity you will need before deploying an application. You can provision resources with a few clicks whenever you need them, and scale in or out in response to demand.

  • Benefit from massive economies of scale

    By using cloud computing, you can achieve a lower variable cost than you can get on your own. Because usage from hundreds of thousands of customers can aggregate in the cloud, providers, such as AWS, can achieve higher economies of scale.

  • Increase speed and agility

    The flexibility of cloud computing makes it easier for you to develop and deploy applications.

    When computing in data centers, it may take weeks to obtain new resources that you need. By comparison, cloud computing enables you to access new resources within minutes.

  • Go global in minutes

    The global footprint of the AWS Cloud enables you to deploy applications to customers around the world quickly, while providing them with low latency.
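The first benefit above, trading upfront expense for variable expense, can be sketched with a little arithmetic. All prices below are hypothetical, chosen only to show how a variable expense tracks actual usage while an upfront expense does not:

```python
# Sketch: comparing an upfront (fixed) expense with a pay-as-you-go
# (variable) expense. All prices are hypothetical, for illustration only.

UPFRONT_SERVER_COST = 10_000.0      # buy hardware before knowing demand
HOURLY_CLOUD_RATE = 0.10            # pay only for hours actually used

def variable_cost(hours_used: float) -> float:
    """Pay-as-you-go: cost scales with actual usage."""
    return hours_used * HOURLY_CLOUD_RATE

# Light usage: the variable model is far cheaper than buying hardware.
light = variable_cost(500)       # 500 hours
heavy = variable_cost(50_000)    # sustained heavy use narrows the gap

print(light, heavy, UPFRONT_SERVER_COST)
```

With light usage the variable model wins easily; only at very heavy, sustained usage does the comparison tighten, which is exactly the guessing game the cloud removes.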

Amazon Elastic Compute Cloud (Amazon EC2)

Amazon Elastic Compute Cloud (Amazon EC2) provides secure, resizable compute capacity in the cloud as Amazon EC2 instances.

  • You can provision and launch an Amazon EC2 instance within minutes.

  • You can stop using it when you have finished running a workload.

  • You pay only for the compute time you use when an instance is running, not when it is stopped or terminated.

  • You can save costs by paying only for server capacity that you need or want.

  • Amazon EC2 runs on physical host machines managed by AWS, using virtualization. By default, you don’t occupy an entire host; you share the underlying hardware with other instances.

EC2 Instance types

Amazon EC2 instance types are optimized for different tasks. When selecting an instance type, consider the specific needs of your workloads and applications.

  • General purpose instances

    General purpose instances provide a balance of compute, memory, and networking resources. You can use them for a variety of workloads, such as:

    • application servers

    • gaming servers

    • backend servers for enterprise applications

    • small and medium databases

  • Compute optimized instances

    Compute optimized instances are ideal for compute-bound applications that benefit from high-performance processors, such as high-performance web servers, compute-intensive application servers, and dedicated gaming servers. You can also use compute optimized instances for batch processing workloads that require processing many transactions in a single group.

  • Memory optimized instances

    Memory optimized instances are designed to deliver fast performance for workloads that process large datasets in memory. Memory optimized instances enable you to run workloads with high memory needs and receive great performance. For example, when you have a workload that requires large amounts of data to be preloaded before running an application.

  • Accelerated computing instances

    Accelerated computing instances use hardware accelerators, or coprocessors, to perform some functions more efficiently than is possible in software running on CPUs. Examples of these functions include floating-point number calculations, graphics processing, and data pattern matching.

  • Storage optimized instances

    Storage optimized instances are designed for workloads that require high, sequential read and write access to large datasets on local storage. Examples of workloads suitable for storage optimized instances include distributed file systems, data warehousing applications, and high-frequency online transaction processing (OLTP) systems.

EC2 pricing

Amazon EC2 offers a variety of pricing options for different use cases.

  • On-Demand

    On-Demand Instances are ideal for short-term, irregular workloads that cannot be interrupted. No upfront costs or minimum contracts apply. The instances run continuously until you stop them, and you pay for only the compute time you use.

  • Reserved Instances

    Reserved Instances are a billing discount applied to the use of On-Demand Instances in your account. There are two available types of Reserved Instances:

    • Standard Reserved Instances

    • Convertible Reserved Instances

You can purchase Standard Reserved and Convertible Reserved Instances for a 1-year or 3-year term. You realize greater cost savings with the 3-year option.

Standard Reserved Instances: This option is a good fit if you know the EC2 instance type and size you need for your steady-state applications and in which AWS Region you plan to run them. Reserved Instances require you to specify the following:

  • Instance type and size: For example, m5.xlarge

  • Platform description (operating system): For example, Microsoft Windows Server or Red Hat Enterprise Linux

  • Tenancy: Default tenancy or dedicated tenancy

You have the option to specify an Availability Zone for your EC2 Reserved Instances. If you make this specification, you get a capacity reservation, which ensures that your desired number of EC2 instances will be available when you need them.

Convertible Reserved Instances: If you need to run your EC2 instances in different Availability Zones or different instance types, then Convertible Reserved Instances might be right for you.

Note: Convertible Reserved Instances trade a deeper discount for flexibility: they offer a smaller discount than Standard Reserved Instances in exchange for the ability to change instance attributes.

At the end of a Reserved Instance term, you can continue using the Amazon EC2 instance without interruption. However, you are charged On-Demand rates until you do one of the following:

  • Terminate the instance.

  • Purchase a new Reserved Instance that matches the instance attributes (instance family and size, Region, platform, and tenancy).

  • Savings plan

    AWS offers Savings Plans for a few compute services, including Amazon EC2. EC2 Instance Savings Plans reduce your EC2 instance costs when you make an hourly spend commitment to an instance family and Region for a 1-year or 3-year term. Any usage beyond the commitment is charged at regular On-Demand rates.

    You have the benefit of saving costs on running any EC2 instance within an EC2 instance family in a chosen Region. Unlike Reserved Instances, however, you don't need to specify up front what EC2 instance type and size (for example, m5.xlarge), OS, and tenancy to get a discount. Further, you don't need to commit to a certain number of EC2 instances over a 1-year or 3-year term. Additionally, the EC2 Instance Savings Plans don't include an EC2 capacity reservation option.

  • Spot Instances

    Spot Instances are ideal for workloads with flexible start and end times, or that can withstand interruptions. Spot Instances use unused Amazon EC2 computing capacity and offer cost savings of up to 90% off On-Demand prices. Suppose that you have a background processing job that can start and stop as needed (such as the data processing job for a customer survey). If spare capacity is unavailable when you make a request, the launch of your background processing job might be delayed, and AWS can reclaim a running Spot Instance when it needs the capacity back.

  • Dedicated hosts

    Dedicated Hosts are physical servers with Amazon EC2 instance capacity that is fully dedicated to your use.

    You can use your existing per-socket, per-core, or per-VM software licenses to help maintain license compliance. You can purchase On-Demand Dedicated Hosts and Dedicated Hosts Reservations. Of all the Amazon EC2 options that were covered, Dedicated Hosts are the most expensive.
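To make the pricing options above concrete, here is a rough side-by-side cost sketch. The hourly rate and discount percentages are hypothetical (real prices vary by instance type, Region, and term); only the Spot figure of "up to 90% off" comes from the text above:

```python
# Sketch: rough monthly cost under three EC2 purchasing options.
# The rates and discounts below are hypothetical, for illustration only.

ON_DEMAND_RATE = 0.10        # $/hour, made-up figure
RESERVED_DISCOUNT = 0.40     # e.g. a hypothetical 1-year Standard RI discount
SPOT_DISCOUNT = 0.90         # best case: up to 90% off On-Demand

HOURS_PER_MONTH = 730

def monthly_cost(rate: float, hours: int = HOURS_PER_MONTH) -> float:
    """Pay-per-hour cost for one instance running the whole month."""
    return rate * hours

on_demand = monthly_cost(ON_DEMAND_RATE)
reserved = monthly_cost(ON_DEMAND_RATE * (1 - RESERVED_DISCOUNT))
spot = monthly_cost(ON_DEMAND_RATE * (1 - SPOT_DISCOUNT))

print(f"On-Demand: ${on_demand:.2f}  Reserved: ${reserved:.2f}  Spot: ${spot:.2f}")
```

The pattern to remember for the exam is the ordering: Spot is cheapest (but interruptible), Reserved/Savings Plans sit in the middle (commitment for discount), and On-Demand costs the most per hour but requires no commitment.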

Scaling Amazon EC2

Scalability involves beginning with only the resources you need and automatically responding to changing demand by scaling out or in. As a result, you pay for only the resources you use. You don’t have to worry about a lack of computing capacity to meet your customers’ needs. The AWS service that provides automatic scaling for Amazon EC2 instances is Amazon EC2 Auto Scaling.

Within Amazon EC2 Auto Scaling, you can use two approaches: dynamic scaling and predictive scaling.

  • Dynamic scaling responds to changing demand.

  • Predictive scaling automatically schedules the right number of Amazon EC2 instances based on predicted demand.
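The idea behind dynamic scaling can be sketched in a few lines: pick an instance count that brings a load metric back toward a target value, clamped to the group's minimum and maximum size. This loosely mimics how a target-tracking policy behaves; it is not the actual AWS algorithm:

```python
import math

# Sketch of a dynamic-scaling decision: scale out when the metric is
# above target, scale in when it is below. Not the real AWS algorithm.

def desired_capacity(current_instances: int, current_cpu: float,
                     target_cpu: float, min_size: int, max_size: int) -> int:
    """Choose an instance count that moves average CPU toward the target."""
    desired = math.ceil(current_instances * current_cpu / target_cpu)
    return max(min_size, min(max_size, desired))

# 4 instances at 90% average CPU with a 50% target -> scale out to 8.
print(desired_capacity(4, 90.0, 50.0, min_size=2, max_size=10))
```

The min/max clamp reflects the minimum and maximum capacity you configure on an Auto Scaling group, so a demand spike can never scale you past the limits you set.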

Elastic Load Balancing

Elastic Load Balancing is the AWS service that automatically distributes incoming application traffic across multiple resources, such as Amazon EC2 instances. A load balancer acts as a single point of contact for all incoming web traffic to your Auto Scaling group. For example, if you have multiple Amazon EC2 instances, Elastic Load Balancing distributes the workload across the multiple instances so that no single instance has to carry the bulk of it.

Although Elastic Load Balancing and Amazon EC2 Auto Scaling are separate services, they work together to help ensure that applications running in Amazon EC2 can provide high performance and availability.
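A minimal sketch of what a load balancer does, acting as a single point of contact and spreading requests across targets, is simple round robin. Real Elastic Load Balancing supports several algorithms plus health checks; this toy version shows only the core idea (the instance IDs are made up):

```python
from itertools import cycle

# Toy round-robin load balancer: one point of contact that spreads
# incoming requests evenly across registered targets.

class RoundRobinBalancer:
    def __init__(self, targets):
        self._targets = cycle(targets)   # endless rotation over targets

    def route(self, request: str) -> str:
        target = next(self._targets)
        return f"{request} -> {target}"

lb = RoundRobinBalancer(["i-0a", "i-0b", "i-0c"])
routes = [lb.route(f"req{n}") for n in range(4)]
print(routes)   # the fourth request wraps back around to the first target
```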

Messaging and queuing

Suppose that you have an application with tightly coupled components. These components might include databases, servers, the user interface, business logic, and so on. This type of architecture can be considered a monolithic application.

In this approach to application architecture, if a single component fails, other components fail, and possibly the entire application fails.

To maintain application availability when a single component fails, you can design your application through a microservices approach.

In a microservices approach, application components are loosely coupled. In this case, if a single component fails, the other components continue to work because they communicate with each other indirectly. The loose coupling prevents the entire application from failing. Two services facilitate application integration: Amazon Simple Notification Service (Amazon SNS) and Amazon Simple Queue Service (Amazon SQS).

Amazon Simple Notification Service (Amazon SNS) is a publish/subscribe service. Using Amazon SNS topics, a publisher publishes messages to subscribers.

Amazon Simple Queue Service (Amazon SQS) is a message queuing service. Using Amazon SQS, you can send, store, and receive messages between software components, without losing messages or requiring other services to be available.
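The two integration patterns can be modeled in a few lines. In a real application you would use the AWS SDK (for example, boto3's `sns.publish` and `sqs.send_message`); these in-memory stand-ins only illustrate publish/subscribe fan-out versus point-to-point queueing:

```python
from collections import deque

class Topic:
    """Amazon SNS-style pub/sub: every subscriber receives a copy."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, message):
        for callback in self.subscribers:   # fan-out to all subscribers
            callback(message)

class Queue:
    """Amazon SQS-style queue: each message is consumed exactly once."""
    def __init__(self):
        self._messages = deque()

    def send(self, message):
        self._messages.append(message)

    def receive(self):
        return self._messages.popleft() if self._messages else None

received = []
topic = Topic()
topic.subscribe(received.append)
topic.subscribe(received.append)
topic.publish("new order")          # delivered to both subscribers

queue = Queue()
queue.send("resize-image")
print(received, queue.receive())
```

The contrast worth remembering: an SNS message is pushed to every subscriber, while an SQS message waits in the queue until one consumer retrieves and processes it.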

Serverless computing

The term “serverless” means that your code runs on servers, but you do not need to provision or manage these servers. Another benefit of serverless computing is the flexibility to scale serverless applications automatically. An AWS service for serverless computing is AWS Lambda.

While using AWS Lambda, you pay only for the compute time that you consume. Charges apply only when your code is running.
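In Python, a Lambda function is just a handler that AWS invokes with an event and a context object; you upload the code and never touch the server it runs on. The event shape below is made up for illustration:

```python
# Sketch of an AWS Lambda-style handler. Lambda calls the function you
# configure (here `handler`) with an event dict and a context object.
# The event fields below are hypothetical.

def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Invoking locally the way Lambda would (context is unused in this sketch):
response = handler({"name": "cloud"}, None)
print(response)
```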

Amazon Elastic Container Service (Amazon ECS)

Containers provide you with a standard way to package your application's code and dependencies into a single object. Amazon Elastic Container Service (Amazon ECS) is a highly scalable, high-performance container management system that enables you to run and scale containerized applications on AWS.

Amazon Elastic Kubernetes Service (Amazon EKS)

Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed service that you can use to run Kubernetes on AWS.

AWS Fargate

AWS Fargate is a serverless compute engine for containers. It works with both Amazon ECS and Amazon EKS. When using AWS Fargate, you do not need to provision or manage servers. You pay only for the resources that are required to run your containers.

Global Infrastructure

AWS global infrastructure

When determining the right Region for your services, data, and applications, consider the following four business factors.

  • Compliance with data governance and legal requirements

    For example, if your company requires all of its data to reside within the boundaries of the UK, you would choose the London Region.

  • Proximity to your customers

    Selecting a Region that is close to your customers will help you to get content to them faster.

  • Available services within a Region

    Sometimes, the closest Region might not have all the features that you want to offer to customers.

  • Pricing

    Suppose that you are considering running applications in both the United States and Brazil. The way Brazil’s tax structure is set up, it might cost 50% more to run the same workload out of the São Paulo Region compared to the Oregon Region.

Availability zones

An Availability Zone is a single data center or a group of data centers within a Region. Availability Zones are located tens of miles apart from each other. This is close enough to have low latency (the delay between when content is requested and when it is received) between Availability Zones. However, if a disaster occurs in one part of the Region, they are distant enough to reduce the chance that multiple Availability Zones are affected.

Edge locations

An edge location is a site that Amazon CloudFront uses to store cached copies of your content closer to your customers for faster delivery.

Instead of requiring your customers to get their data from Brazil, you can cache a copy locally at an edge location that is close to your customers in China.

When a customer in China requests one of your files, Amazon CloudFront retrieves the file from the cache in the edge location and delivers the file to the customer. The file is delivered to the customer faster because it came from the edge location near China instead of the original source in Brazil.
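The caching behavior described above can be sketched as a simple cache-on-miss lookup. The file name and content are made up; the point is that only the first request travels to the distant origin:

```python
# Sketch of edge caching: serve from a nearby cache when possible,
# fetch from the origin on a miss. File names/content are illustrative.

ORIGIN = {"menu.html": "<html>menu</html>"}   # content stored at the origin

class EdgeLocation:
    def __init__(self):
        self.cache = {}
        self.origin_fetches = 0

    def get(self, key):
        if key not in self.cache:     # cache miss: go back to the origin
            self.origin_fetches += 1
            self.cache[key] = ORIGIN[key]
        return self.cache[key]        # cache hit: served locally

edge = EdgeLocation()
edge.get("menu.html")    # first request travels to the origin
edge.get("menu.html")    # second request is served from the edge cache
print(edge.origin_fetches)
```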

Provisioning AWS resources

  • AWS Management Console

    The AWS Management Console is a web-based interface for accessing and managing AWS services. You can quickly access recently used services and search for other services by name, keyword, or acronym. The console includes wizards and automated workflows that can simplify the process of completing tasks.
    You can also use the AWS Console mobile application to perform tasks such as monitoring resources, viewing alarms, and accessing billing information. Multiple identities can stay logged into the AWS Console mobile app at the same time.

  • AWS Command Line Interface

    To save time when making API requests, you can use the AWS Command Line Interface (AWS CLI). AWS CLI enables you to control multiple AWS services directly from the command line within one tool. By using AWS CLI, you can automate the actions that your services and applications perform through scripts.

  • Software Development Kits

    SDKs make it easier for you to use AWS services through an API designed for your programming language or platform. SDKs enable you to use AWS services with your existing applications or create entirely new applications that will run on AWS.

AWS Elastic Beanstalk

With AWS Elastic Beanstalk, you provide code and configuration settings, and Elastic Beanstalk deploys the resources necessary to perform the following tasks:

  • Adjust capacity

  • Load balancing

  • Automatic scaling

  • Application health monitoring

AWS CloudFormation

With AWS CloudFormation, you can treat your infrastructure as code. This means that you can build an environment by writing lines of code instead of using the AWS Management Console to individually provision resources.

AWS CloudFormation provisions your resources in a safe, repeatable manner, enabling you to frequently build your infrastructure and applications without having to perform manual actions.
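To make "infrastructure as code" concrete, here is a minimal, hypothetical CloudFormation template that declares a single S3 bucket. Real templates usually declare many resources, parameters, and outputs; the bucket name below is made up and would need to be globally unique:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal example - provisions one S3 bucket as code
Resources:
  ExampleBucket:                # logical ID, referenced within the template
    Type: AWS::S3::Bucket
    Properties:
      BucketName: example-blog-bucket-1234   # hypothetical bucket name
```

Deploying this template creates the bucket; deleting the stack removes it. That round trip, an environment built and torn down from a text file, is the core of the infrastructure-as-code idea.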

Networking

Amazon Virtual Private Cloud (Amazon VPC)

A networking service that you can use to establish boundaries around your AWS resources is Amazon Virtual Private Cloud (Amazon VPC). Amazon VPC enables you to provision an isolated section of the AWS Cloud.

Internet gateway

To allow public traffic from the internet to access your VPC, you attach an internet gateway to the VPC. You can think of an internet gateway as being similar to a doorway that customers use to enter the coffee shop. Without an internet gateway, no one can access the resources within your VPC.

Virtual private gateway

To access private resources in a VPC, you can use a virtual private gateway. A virtual private gateway enables you to establish a virtual private network (VPN) connection between your VPC and a private network, such as an on-premises data center or internal corporate network.

AWS Direct Connect

AWS Direct Connect is a service that lets you establish a dedicated private connection between your data center and a VPC. The private connection that AWS Direct Connect provides helps you to reduce network costs and increase the amount of bandwidth that can travel through your network.

Subnets and network access control lists

A subnet is a section of a VPC in which you can group resources based on security or operational needs.

Public subnets contain resources that need to be accessible by the public, such as an online store’s website.

Private subnets contain resources that should be accessible only through your private network, such as a database that contains customers’ personal information and order histories.

A network ACL (access control list) is a virtual firewall that controls inbound and outbound traffic at the subnet level.

By default, your account’s default network ACL allows all inbound and outbound traffic, but you can modify it by adding your own rules. For custom network ACLs, all inbound and outbound traffic is denied until you add rules to specify which traffic to allow. Additionally, all network ACLs have an explicit deny rule. This rule ensures that if a packet doesn’t match any of the other rules on the list, the packet is denied.

Network ACLs perform stateless packet filtering. They remember nothing about previous traffic and check every packet that crosses the subnet border in each direction: inbound and outbound.

Security groups

A security group is a virtual firewall that controls inbound and outbound traffic for an Amazon EC2 instance.

By default, a security group denies all inbound traffic and allows all outbound traffic. You can add custom rules to configure which traffic should be allowed; any other traffic would then be denied.

Security groups perform stateful packet filtering. They remember previous decisions made for incoming packets.
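The stateless/stateful distinction is the part exam questions tend to probe, so here is a toy contrast. A stateless filter (like a network ACL) evaluates every packet against its rules in isolation; a stateful filter (like a security group) remembers traffic the instance initiated and allows the matching responses back in. The port numbers are arbitrary:

```python
# Toy contrast of stateless vs stateful packet filtering.

class StatelessFilter:
    """Network ACL-style: each packet checked against rules, no memory."""
    def __init__(self, allowed_inbound_ports):
        self.allowed = set(allowed_inbound_ports)

    def allow_inbound(self, port):
        return port in self.allowed        # no memory of prior traffic

class StatefulFilter:
    """Security group-style: responses to tracked outbound traffic pass."""
    def __init__(self, allowed_inbound_ports):
        self.allowed = set(allowed_inbound_ports)
        self.tracked = set()               # connections we initiated

    def send_outbound(self, port):
        self.tracked.add(port)

    def allow_inbound(self, port):
        return port in self.allowed or port in self.tracked

nacl = StatelessFilter(allowed_inbound_ports={443})
sg = StatefulFilter(allowed_inbound_ports=set())

sg.send_outbound(50123)   # instance initiates a request from this port
print(nacl.allow_inbound(50123), sg.allow_inbound(50123))
```

This is why a stateless network ACL needs explicit rules for response traffic on ephemeral ports, while a security group does not.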

Domain Name System (DNS) & Route53

DNS resolution is the process of translating a domain name to an IP address. Amazon Route 53 is a DNS web service.

Amazon Route 53 connects user requests to infrastructure running in AWS (such as Amazon EC2 instances and load balancers). It can route users to infrastructure outside of AWS. Another feature of Route 53 is the ability to manage the DNS records for domain names. You can register new domain names directly in Route 53. You can also transfer DNS records for existing domain names managed by other domain registrars.

Suppose that AnyCompany’s application is running on several Amazon EC2 instances. These instances are in an Auto Scaling group that attaches to an Application Load Balancer.

  1. A customer requests data from the application by going to AnyCompany’s website.

  2. Amazon Route 53 uses DNS resolution to identify AnyCompany.com’s corresponding IP address, 192.0.2.0. This information is sent back to the customer.

  3. The customer’s request is sent to the nearest edge location through Amazon CloudFront.

  4. Amazon CloudFront connects to the Application Load Balancer, which sends the incoming packet to an Amazon EC2 instance.
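Step 2 above, DNS resolution, boils down to a name-to-address lookup. The domain and IP below are the documentation examples used in this section, not real infrastructure:

```python
# Toy DNS resolution: map a domain name to an IP address, the way
# Route 53 answers a query in step 2 above. Records are illustrative.

dns_records = {"anycompany.com": "192.0.2.0"}   # Route 53-style record set

def resolve(domain: str) -> str:
    """Return the IP address registered for a domain."""
    return dns_records[domain]

ip = resolve("anycompany.com")
print(ip)   # this address is sent back to the customer
```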

Storage and databases

Instance store and Elastic Block Store (EBS)

An instance store provides temporary block-level storage for an Amazon EC2 instance. An instance store is disk storage that is physically attached to the host computer for an EC2 instance, and therefore has the same lifespan as the instance. When the instance is terminated, you lose any data in the instance store.

Amazon Elastic Block Store (Amazon EBS) is a service that provides block-level storage volumes that you can use with Amazon EC2 instances. If you stop or terminate an Amazon EC2 instance, all the data on the attached EBS volume remains available. To attach an Amazon EC2 instance to an EBS volume, both the Amazon EC2 instance and the EBS volume must reside within the same Availability Zone.

Object storage

In object storage, each object consists of data, metadata, and a key. The data might be an image, video, text document, or any other type of file. When you modify a file in block storage, only the pieces that are changed are updated. When a file in object storage is modified, the entire object is updated.

Amazon Simple Storage Service (Amazon S3)

Amazon Simple Storage Service (Amazon S3) is a service that provides object-level storage. Amazon S3 stores data as objects in buckets. Amazon S3 offers unlimited storage space. The maximum file size for an object in Amazon S3 is 5 TB. When you upload a file to Amazon S3, you can set permissions to control visibility and access to it. You can also use the Amazon S3 versioning feature to track changes to your objects over time.

Amazon S3 storage classes

With Amazon S3, you pay only for what you use. You can choose from a range of storage classes to select a fit for your business and cost needs. When selecting an Amazon S3 storage class, consider these two factors:

  • How often you plan to retrieve your data

  • How available you need your data to be

There are eight storage classes:

  • S3 Standard

    • Designed for frequently accessed data

    • Stores data in a minimum of three Availability Zones

  • S3 Standard-Infrequent Access

    • Ideal for infrequently accessed data

    • Similar to Amazon S3 Standard but has a lower storage price and higher retrieval price

  • S3 One Zone-Infrequent Access

    • Stores data in a single Availability Zone

    • Has a lower storage price than Amazon S3 Standard-IA

  • S3 Intelligent-Tiering

    • Ideal for data with unknown or changing access patterns

    • Requires a small monthly monitoring and automation fee per object

  • S3 Glacier Instant Retrieval

    • Works well for archived data that requires immediate access

    • Can retrieve objects within a few milliseconds

  • S3 Glacier Flexible Retrieval

    • Low-cost storage designed for data archiving

    • Able to retrieve objects within a few minutes to hours

  • S3 Glacier Deep Archive

    • Lowest-cost object storage class ideal for archiving

    • Able to retrieve objects within 12 hours

  • S3 Outposts

    • Creates S3 buckets on Amazon S3 Outposts

    • Makes it easier to retrieve, store, and access data on AWS Outposts

Amazon Elastic File System (Amazon EFS)

In file storage, multiple clients can access data that is stored in shared file folders. In this approach, a storage server uses block storage with a local file system to organize files. Clients access data through file paths. Compared to block storage and object storage, file storage is ideal for use cases in which a large number of services and resources need to access the same data at the same time.

Amazon Elastic File System (Amazon EFS) is a scalable file system used with AWS Cloud services and on-premises resources. As you add and remove files, Amazon EFS grows and shrinks automatically. It can scale on demand to petabytes without disrupting applications. Amazon EFS is a regional service. It stores data in and across multiple Availability Zones.

Amazon Relational Database Service (Amazon RDS)

Relational databases use structured query language (SQL) to store and query data. Amazon Relational Database Service (Amazon RDS) is a service that enables you to run relational databases in the AWS Cloud.

Amazon RDS is a managed service that automates tasks such as hardware provisioning, database setup, patching, and backups. Many Amazon RDS database engines offer encryption at rest (protecting data while it is stored) and encryption in transit (protecting data while it is being sent and received).

Amazon RDS supports six database engines:

  • Amazon Aurora

  • PostgreSQL

  • MySQL

  • MariaDB

  • Oracle Database

  • Microsoft SQL Server
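All of the engines above speak SQL. As a stand-in (none of them fits in a snippet), Python's built-in SQLite shows the same store-and-query pattern a relational database uses; the table and data are made up:

```python
import sqlite3

# Relational store-and-query sketch using SQLite, a stand-in for the
# RDS engines listed above. Table name and rows are hypothetical.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT, qty INTEGER)")
conn.executemany("INSERT INTO orders (item, qty) VALUES (?, ?)",
                 [("coffee", 2), ("muffin", 1), ("coffee", 3)])

# SQL query: total quantity per item, across rows
rows = conn.execute(
    "SELECT item, SUM(qty) FROM orders GROUP BY item ORDER BY item"
).fetchall()
print(rows)   # [('coffee', 5), ('muffin', 1)]
conn.close()
```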

Amazon Aurora

Amazon Aurora is an enterprise-class relational database. It is compatible with MySQL and PostgreSQL relational databases. It is up to five times faster than standard MySQL databases and up to three times faster than standard PostgreSQL databases.

Consider Amazon Aurora if your workloads require high availability. It replicates six copies of your data across three Availability Zones and continuously backs up your data to Amazon S3.

Amazon DynamoDB

Nonrelational databases are sometimes referred to as “NoSQL databases” because they use structures like key-value pairs to organize data.

Amazon DynamoDB is a key-value database service. It delivers single-digit millisecond performance at any scale. DynamoDB is serverless, which means that you do not have to provision, patch, or manage servers. You also do not have to install, maintain, or operate software.

As the size of your database shrinks or grows, DynamoDB automatically scales to adjust for changes in capacity while maintaining consistent performance.
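The key-value model DynamoDB uses can be pictured as items addressed by a key, where each item carries its own flexible set of attributes (no fixed schema). The key format and attribute names below are made up; real code would call boto3's DynamoDB client or resource:

```python
# Toy key-value model: items addressed by key, with per-item attributes.
# Keys and attribute names are hypothetical.

table = {}   # key -> item (a dict of attributes)

def put_item(key, item):
    table[key] = item

def get_item(key):
    return table.get(key)

put_item("user#1", {"name": "Ana", "plan": "free"})
put_item("user#2", {"name": "Raj", "plan": "pro", "beta": True})  # extra attribute is fine

print(get_item("user#1")["name"])
```

Note the second item has an attribute the first lacks: unlike a relational table, a key-value item isn't forced into a fixed set of columns.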

Amazon Redshift

Amazon Redshift is a data warehousing service that you can use for big data analytics. It offers the ability to collect data from many sources and helps you to understand relationships and trends across your data.

AWS Database Migration Service

AWS Database Migration Service (AWS DMS) enables you to migrate relational databases, nonrelational databases, and other types of data stores. The source and target databases can be of the same type or different types. During the migration, your source database remains operational, reducing downtime for any applications that rely on the database.

Amazon DocumentDB

Amazon DocumentDB is a document database service that supports MongoDB workloads. (MongoDB is a document database program.)

Amazon Neptune

Amazon Neptune is a graph database service.

Amazon Quantum Ledger Database (QLDB)

Amazon Quantum Ledger Database (Amazon QLDB) is a ledger database service. You can use Amazon QLDB to review a complete history of all the changes that have been made to your application data.

Amazon Managed Blockchain

Amazon Managed Blockchain is a service that you can use to create and manage blockchain networks with open-source frameworks.

Amazon ElastiCache

Amazon ElastiCache is a service that adds caching layers on top of your databases to help improve the read times of common requests. It supports two types of data stores: Redis and Memcached.
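The pattern a caching layer enables is usually "cache-aside": check the cache first, fall back to the slower database on a miss, then store the result for next time. The "database" here is a plain dict standing in for RDS, and the cache stands in for Redis/Memcached:

```python
# Cache-aside sketch: the caching layer absorbs repeated reads so the
# database is only queried once per key. Data is illustrative.

database = {"user:1": "Ana"}   # stand-in for the backing database
cache = {}                     # stand-in for ElastiCache
db_reads = 0

def read(key):
    global db_reads
    if key in cache:            # fast path: cache hit
        return cache[key]
    db_reads += 1               # slow path: query the database
    value = database[key]
    cache[key] = value          # populate the cache for next time
    return value

read("user:1")
read("user:1")
print(db_reads)   # the second read never touched the database
```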

Amazon DynamoDB Accelerator

Amazon DynamoDB Accelerator (DAX) is an in-memory cache for DynamoDB. It helps improve response times from single-digit milliseconds to microseconds.

Security

AWS Shared Responsibility Model

Under the shared responsibility model, security is divided between AWS and you, the customer: AWS is 100% responsible for some components, and you are 100% responsible for others. The model divides these into customer responsibilities (commonly referred to as “security in the cloud”) and AWS responsibilities (commonly referred to as “security of the cloud”).

When using AWS services, you, the customer, maintain complete control over your content. You are responsible for managing security requirements for your content, including which content you choose to store on AWS, which AWS services you use, and who has access to that content. You also control how access rights are granted, managed, and revoked.

The security steps that you take will depend on factors such as the services that you use, the complexity of your systems, and your company’s specific operational and security needs. Steps include selecting, configuring, and patching the operating systems that will run on Amazon EC2 instances, configuring security groups, and managing user accounts.

AWS operates, manages, and controls the components at all layers of infrastructure. This includes areas such as the host operating system, the virtualization layer, and even the physical security of the data centers from which services operate.

AWS is responsible for protecting the global infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure includes AWS Regions, Availability Zones, and edge locations.

AWS manages the security of the cloud, specifically the physical infrastructure that hosts your resources, which include:

  • Physical security of data centers

  • Hardware and software infrastructure

  • Network infrastructure

  • Virtualization infrastructure

Although you cannot visit AWS data centers to see this protection firsthand, AWS provides several reports from third-party auditors. These auditors have verified its compliance with a variety of computer security standards and regulations.

AWS Identity and Access Management (IAM)

AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely.

When you first create an AWS account, you begin with an identity known as the root user. The root user is accessed by signing in with the email address and password that you used to create your AWS account. It has complete access to all the AWS services and resources in the account.

An IAM user is an identity that you create in AWS. It represents the person or application that interacts with AWS services and resources. It consists of a name and credentials. By default, when you create a new IAM user in AWS, it has no permissions associated with it.

An IAM policy is a document that allows or denies permissions to AWS services and resources. IAM policies enable you to customize users’ levels of access to resources.

Follow the security principle of least privilege when granting permissions. By following this principle, you help to prevent users or roles from having more permissions than needed to perform their tasks.
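As an illustration of least privilege, here is a minimal IAM policy document that allows only read access to objects in one (hypothetical) S3 bucket; any action not explicitly allowed is implicitly denied:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```

A user or role with only this policy attached can download objects from that bucket and do nothing else, which is exactly the least-privilege posture described above.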

An IAM group is a collection of IAM users. When you assign an IAM policy to a group, all users in the group are granted permissions specified by the policy.

An IAM role is an identity that you can assume to gain temporary access to permissions.

In IAM, multi-factor authentication (MFA) provides an extra layer of security for your AWS account.

AWS Organizations

Suppose that your company has multiple AWS accounts. You can use AWS Organizations to consolidate and manage multiple AWS accounts within a central location.

When you create an organization, AWS Organizations automatically creates a root, which is the parent container for all the accounts in your organization.

In AWS Organizations, you can centrally control permissions for the accounts in your organization by using service control policies (SCPs). SCPs enable you to place restrictions on the AWS services, resources, and individual API actions that users and roles in each account can access.
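As a sketch of what an SCP looks like, the policy below (again as a Python dict) denies a single API action across every account it is attached to. The statement shown is illustrative; real SCPs often combine a broad allow with targeted deny statements like this one.

```python
import json

# A deny-list style service control policy (SCP): no user or role in any
# affected account can call this API action, regardless of IAM permissions.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyLeavingOrganization",
            "Effect": "Deny",
            "Action": "organizations:LeaveOrganization",
            "Resource": "*",
        }
    ],
}

print(json.dumps(scp, indent=2))
```

Note that SCPs set the maximum available permissions; a deny here overrides any allow granted by IAM policies inside the member accounts.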

Organizational units

In AWS Organizations, you can group accounts into organizational units (OUs) to make it easier to manage accounts with similar business or security requirements. When you apply a policy to an OU, all the accounts in the OU automatically inherit the permissions specified in the policy.

By organizing separate accounts into OUs, you can more easily isolate workloads or applications that have specific security requirements. For instance, if your company has accounts that can access only the AWS services that meet certain regulatory requirements, you can put these accounts into one OU. Then, you can attach a policy to the OU that blocks access to all other AWS services that do not meet the regulatory requirements.

AWS Artifact

AWS Artifact is a service that provides on-demand access to AWS security and compliance reports and select online agreements. AWS Artifact consists of two main sections: AWS Artifact Agreements and AWS Artifact Reports.

Suppose that your company needs to sign an agreement with AWS regarding your use of certain types of information throughout AWS services. You can do this through AWS Artifact Agreements. In AWS Artifact Agreements, you can review, accept, and manage agreements for an individual account and for all your accounts in AWS Organizations.

AWS Artifact Reports provide compliance reports from third-party auditors. These auditors have tested and verified that AWS is compliant with a variety of global, regional, and industry-specific security standards and regulations. AWS Artifact Reports remains up to date with the latest reports released.

AWS Shield

AWS Shield is a service that protects applications against DDoS attacks. AWS Shield provides two levels of protection: Standard and Advanced.

AWS Shield Standard automatically protects all AWS customers at no cost. It protects your AWS resources from the most common, frequently occurring types of DDoS attacks.

AWS Shield Advanced is a paid service that provides detailed attack diagnostics and the ability to detect and mitigate sophisticated DDoS attacks.

AWS Key Management Service (KMS)

AWS Key Management Service (AWS KMS) enables you to perform encryption operations through the use of cryptographic keys. You can use AWS KMS to create, manage, and use cryptographic keys. You can also control the use of keys across a wide range of services and in your applications. With AWS KMS, you can choose the specific levels of access control that you need for your keys.
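One way KMS access control is expressed is through a key policy attached to each key. The sketch below shows the general shape of such a policy; the account ID and role name are hypothetical placeholders, and a real key policy would be tailored to your principals.

```python
import json

# Sketch of a KMS key policy: the account root can administer the key,
# while one (hypothetical) application role may only use it for
# encryption operations.
key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "EnableRootAccountAdministration",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {
            "Sid": "AllowUseOfTheKey",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/app-role"},
            "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey"],
            "Resource": "*",
        },
    ],
}

print(json.dumps(key_policy, indent=2))
```

Separating administrative permissions (`kms:*`) from usage permissions (`kms:Encrypt`, `kms:Decrypt`) is how you choose "specific levels of access control" per key.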

AWS WAF

AWS WAF is a web application firewall that lets you monitor network requests that come into your web applications.

AWS WAF works together with Amazon CloudFront and an Application Load Balancer. It blocks or allows incoming traffic by using a web access control list (web ACL) to protect your AWS resources.
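The web ACL idea can be illustrated with a toy evaluator (this is not the AWS WAF API): rules are checked in priority order, the first match decides, and a default action applies when nothing matches. The rule and IP addresses are made up.

```python
# Toy illustration of web ACL evaluation: first matching rule wins,
# otherwise the default action is taken.
def evaluate_web_acl(request_ip, rules, default_action="ALLOW"):
    for rule in rules:  # rules are evaluated in priority order
        if request_ip in rule["ip_set"]:
            return rule["action"]
    return default_action

rules = [
    {"name": "BlockKnownBadIPs", "ip_set": {"203.0.113.7"}, "action": "BLOCK"},
]

print(evaluate_web_acl("203.0.113.7", rules))   # BLOCK
print(evaluate_web_acl("198.51.100.1", rules))  # ALLOW (default action)
```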

Amazon Inspector

Amazon Inspector helps to improve the security and compliance of applications by running automated security assessments. It checks applications for security vulnerabilities and deviations from security best practices, such as open access to Amazon EC2 instances and installations of vulnerable software versions. After Amazon Inspector has performed an assessment, it provides you with a list of security findings, prioritized by severity level, including a detailed description of each security issue and a recommendation for how to fix it.

Amazon GuardDuty

Amazon GuardDuty is a service that provides intelligent threat detection for your AWS infrastructure and resources. It identifies threats by continuously monitoring the network activity and account behavior within your AWS environment.

Monitoring and Analytics

Amazon CloudWatch

Amazon CloudWatch is a web service that enables you to monitor and manage various metrics and configure alarm actions based on data from those metrics. AWS services send metrics to CloudWatch. CloudWatch then uses these metrics to create graphs automatically that show how performance has changed over time.

With CloudWatch, you can create alarms that automatically perform actions when the value of your metric goes above or below a predefined threshold.
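A simplified sketch of the alarm concept (not the CloudWatch API): an alarm transitions to the ALARM state when a metric breaches its threshold for a number of consecutive evaluation periods. The metric values and threshold are hypothetical.

```python
# Toy alarm evaluation: ALARM when the last `periods` datapoints all
# exceed the threshold, otherwise OK.
def alarm_state(datapoints, threshold, periods):
    recent = datapoints[-periods:]
    breached = len(recent) == periods and all(v > threshold for v in recent)
    return "ALARM" if breached else "OK"

cpu_utilization = [42.0, 55.3, 81.2, 86.7, 90.1]  # hypothetical metric data

print(alarm_state(cpu_utilization, threshold=80.0, periods=3))  # ALARM
```

Requiring several consecutive breaches, as CloudWatch alarms can, avoids alerting on a single transient spike.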

The CloudWatch dashboard feature enables you to access all the metrics for your resources from a single location.

AWS CloudTrail

AWS CloudTrail records API calls for your account. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, and more. With CloudTrail, you can view a complete history of user activity and API calls for your applications and resources. Events are typically updated in CloudTrail within 15 minutes after an API call.

Within CloudTrail, you can also enable CloudTrail Insights. This optional feature allows CloudTrail to automatically detect unusual API activities in your AWS account.
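To show the kind of information CloudTrail records, here is a hypothetical, abridged event record and a small helper that pulls out the caller identity, the API action, the time, and the source IP address.

```python
import json

# A hypothetical, abridged CloudTrail event record.
event = json.loads("""{
  "eventTime": "2024-05-01T12:34:56Z",
  "eventName": "TerminateInstances",
  "eventSource": "ec2.amazonaws.com",
  "sourceIPAddress": "198.51.100.42",
  "userIdentity": {"type": "IAMUser", "userName": "alice"}
}""")

def summarize(event):
    # Extract the fields CloudTrail records for every API call.
    return (f'{event["eventTime"]}: {event["userIdentity"]["userName"]} '
            f'called {event["eventName"]} from {event["sourceIPAddress"]}')

print(summarize(event))
```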

AWS Trusted Advisor

AWS Trusted Advisor is a web service that inspects your AWS environment and provides real-time recommendations in accordance with AWS best practices. Trusted Advisor compares its findings to AWS best practices in five categories: cost optimization, performance, security, fault tolerance, and service limits.

Pricing and Support

AWS Free Tier

The AWS Free Tier enables you to begin using certain services without incurring costs, up to specified usage limits for a specified period.

Three types of offers are available:

  • Always Free

    These offers do not expire and are available to all AWS customers.

  • 12 Months Free

    These offers are free for 12 months following your initial sign-up date to AWS.

  • Trials

Short-term free trial offers start from the date you activate a particular service. The length of each trial varies by offer and might be measured in days or in the amount of usage of the service.

AWS Pricing Calculator

The AWS Pricing Calculator lets you explore AWS services and create an estimate for the cost of your use cases on AWS. You can organize your AWS estimates by groups that you define. A group can reflect how your company is organized, such as providing estimates by cost center.
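The grouping idea can be sketched with a few line items summed per group. All services, groups, and dollar amounts below are hypothetical placeholders, not real AWS rates.

```python
# Toy sketch of a grouped cost estimate, in the spirit of the AWS Pricing
# Calculator: line items organized by group (e.g. cost center), summed.
estimates = [
    {"group": "web-team",  "service": "EC2", "monthly_usd": 310.00},
    {"group": "web-team",  "service": "S3",  "monthly_usd": 23.50},
    {"group": "data-team", "service": "RDS", "monthly_usd": 412.75},
]

def totals_by_group(estimates):
    totals = {}
    for item in estimates:
        totals[item["group"]] = totals.get(item["group"], 0.0) + item["monthly_usd"]
    return totals

print(totals_by_group(estimates))  # {'web-team': 333.5, 'data-team': 412.75}
```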

Billing dashboard

Use the AWS Billing & Cost Management dashboard to pay your AWS bill, monitor your usage, and analyze and control your costs.

  • Compare your current month-to-date balance with the previous month, and get a forecast of the next month based on current usage.

  • View month-to-date spend by service.

  • View Free Tier usage by service.

  • Access Cost Explorer and create budgets.

  • Purchase and manage Savings Plans.

  • Publish AWS Cost and Usage Reports.

Consolidated Billing

The consolidated billing feature of AWS Organizations enables you to receive a single bill for all AWS accounts in your organization. By consolidating, you can easily track the combined costs of all the linked accounts in your organization. The default maximum number of accounts allowed for an organization is 4, but you can contact AWS Support to increase your quota, if needed. Another benefit of consolidated billing is the ability to share bulk discount pricing, Savings Plans, and Reserved Instances across the accounts in your organization.
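The bulk-discount benefit can be illustrated numerically: with consolidated billing, usage from all linked accounts is combined before tiered pricing is applied. The tier boundary and per-GB rates below are hypothetical, not actual AWS prices.

```python
# Toy tiered pricing: first 50 GB at a higher rate, the rest at a lower
# rate (hypothetical numbers).
def tiered_cost(gb):
    first = min(gb, 50)
    rest = max(gb - 50, 0)
    return first * 0.023 + rest * 0.020

# GB stored per linked account (hypothetical).
accounts = {"acct-a": 40, "acct-b": 30}

separate = sum(tiered_cost(gb) for gb in accounts.values())
consolidated = tiered_cost(sum(accounts.values()))

print(round(separate, 2), round(consolidated, 2))  # 1.61 1.55
```

Billed separately, neither account reaches the cheaper tier; billed together, the combined 70 GB crosses the 50 GB boundary, so the organization pays less.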

AWS Budgets

In AWS Budgets, you can create budgets to plan your service usage, service costs, and instance reservations. The information in AWS Budgets updates three times a day.
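The core budget-alert idea is simple enough to sketch: compare month-to-date spend against a fixed monthly budget and flag when a threshold percentage is crossed. All figures are hypothetical.

```python
# Toy sketch of an AWS Budgets-style alert: fire when actual spend
# reaches a threshold percentage of the monthly budget.
def budget_alert(actual_spend, monthly_budget, threshold_pct=80):
    pct_used = actual_spend / monthly_budget * 100
    return pct_used >= threshold_pct

print(budget_alert(168.0, 200.0))  # True (84% of the budget is used)
```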

AWS Cost Explorer

AWS Cost Explorer is a tool that lets you visualize, understand, and manage your AWS costs and usage over time. You can apply custom filters and groups to analyze your data.

AWS Support

AWS offers five different Support plans to help you troubleshoot issues, lower costs, and efficiently use AWS services.

You can choose from the following Support plans to meet your company’s needs:

  • Basic

  • Developer

  • Business

  • Enterprise On-Ramp

  • Enterprise

Basic Support

Basic Support is free for all AWS customers. It includes access to whitepapers, documentation, and support communities. With Basic Support, you can also contact AWS for billing questions and service limit increases.

With Basic Support, you have access to a limited selection of AWS Trusted Advisor checks. Additionally, you can use the AWS Personal Health Dashboard, a tool that provides alerts and remediation guidance when AWS is experiencing events that may affect you.

Developer, Business, Enterprise On-Ramp, and Enterprise Support

The Developer, Business, Enterprise On-Ramp, and Enterprise Support plans include all the benefits of Basic Support, in addition to the ability to open an unrestricted number of technical support cases. These Support plans have pay-by-the-month pricing and require no long-term contracts.

  • Developer Support

    Customers in the Developer Support plan have access to features such as:

    • Best practice guidance

    • Client-side diagnostic tools

    • Building-block architecture support, which consists of guidance for how to use AWS offerings, features, and services together

  • Business Support

    Customers with a Business Support plan have access to additional features, including:

    • Use-case guidance to identify AWS offerings, features, and services that can best support your specific needs

    • All AWS Trusted Advisor checks

    • Limited support for third-party software, such as common operating systems and application stack components

  • Enterprise Support

    In addition to all features included in the Basic, Developer, Business, and Enterprise On-Ramp support plans, customers with Enterprise Support have access to:

    • A designated Technical Account Manager to provide proactive guidance and coordinate access to programs and AWS experts

    • A Concierge support team for billing and account assistance

    • Operations Reviews and tools to monitor health

    • Training and Game Days to drive innovation

    • Tools to monitor costs and performance through Trusted Advisor and Health API/Dashboard

The Enterprise plan also provides full access to proactive services, which are provided by a designated Technical Account Manager:

  • Consultative review and architecture guidance

  • Infrastructure Event Management support

  • Cost Optimization Workshop and tools

  • Support automation workflows

A response time of 15 minutes or less for business-critical issues

Technical Account Manager

The Technical Account Manager (TAM) is your primary point of contact at AWS. TAMs provide expert engineering guidance, help you design solutions that efficiently integrate AWS services, assist with cost-effective and resilient architectures, and provide direct access to AWS programs and a broad community of experts.

AWS Marketplace

AWS Marketplace is a digital catalog that includes thousands of software listings from independent software vendors. You can use AWS Marketplace to find, test, and buy software that runs on AWS.

Migration and Innovation

AWS Cloud Adoption Framework

At the highest level, the AWS Cloud Adoption Framework (AWS CAF) organizes guidance into six areas of focus, called Perspectives. In general, the Business, People, and Governance Perspectives focus on business capabilities, whereas the Platform, Security, and Operations Perspectives focus on technical capabilities.

  • Business Perspective

    The Business Perspective ensures that IT aligns with business needs and that IT investments link to key business results.

  • People Perspective

    The People Perspective supports development of an organization-wide change management strategy for successful cloud adoption.

  • Governance Perspective

    The Governance Perspective focuses on the skills and processes to align IT strategy with business strategy. This ensures that you maximize the business value and minimize risks.

  • Platform Perspective

    The Platform Perspective includes principles and patterns for implementing new solutions on the cloud, and migrating on-premises workloads to the cloud.

  • Security Perspective

    The Security Perspective ensures that the organization meets security objectives for visibility, auditability, control, and agility.

  • Operations Perspective

    The Operations Perspective helps you to enable, run, use, operate, and recover IT workloads to the level agreed upon with your business stakeholders.

Migration Strategies

When migrating applications to the cloud, six of the most common migration strategies that you can implement are:

  • Rehosting

Rehosting, also known as “lift-and-shift,” involves moving applications without changes.

  • Replatforming

    Replatforming, also known as “lift, tinker, and shift,” involves making a few cloud optimizations to realize a tangible benefit. Optimization is achieved without changing the core architecture of the application.

  • Refactoring/re-architecting

    Refactoring involves reimagining how an application is architected and developed by using cloud-native features.

  • Repurchasing

    Repurchasing involves moving from a traditional license to a software-as-a-service model.

  • Retaining

Retaining consists of keeping applications that are critical for the business in the source environment.

  • Retiring

    Retiring is the process of removing applications that are no longer needed.

Artificial Intelligence

AWS offers a variety of services powered by artificial intelligence (AI).

For example, you can perform the following tasks:

  • Convert speech to text with Amazon Transcribe.

  • Discover patterns in text with Amazon Comprehend.

  • Identify potentially fraudulent online activities with Amazon Fraud Detector.

  • Build voice and text chatbots with Amazon Lex.

Machine learning

Traditional machine learning (ML) development is complex, expensive, time consuming, and error prone. AWS offers Amazon SageMaker to remove the difficult work from the process and empower you to build, train, and deploy ML models quickly.

Amazon Q Developer

Amazon Q Developer is a machine learning-powered code generator that provides you with code recommendations in real time. As you write code in your integrated development environment (IDE), Amazon Q Developer analyzes your code and comments to generate its suggestions.

AWS Well Architected Framework

The AWS Well-Architected Framework helps you understand how to design and operate reliable, secure, efficient, and cost-effective systems in the AWS Cloud. It provides a way for you to consistently measure your architecture against best practices and design principles and identify areas for improvement. The Well-Architected Framework is based on six pillars:

  • Operational excellence

    Operational excellence is the ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures.

    Design principles for operational excellence in the cloud include performing operations as code, annotating documentation, anticipating failure, and frequently making small, reversible changes.

  • Security

    The Security pillar is the ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies.

    When considering the security of your architecture, apply these best practices:

    • Automate security best practices when possible.

    • Apply security at all layers.

    • Protect data in transit and at rest.

  • Reliability

    Reliability is the ability of a system to do the following:

    • Recover from infrastructure or service disruptions

    • Dynamically acquire computing resources to meet demand

    • Mitigate disruptions such as misconfigurations or transient network issues

Reliability includes testing recovery procedures, scaling horizontally to increase aggregate system availability, and automatically recovering from failure.

  • Performance efficiency

    Performance efficiency is the ability to use computing resources efficiently to meet system requirements and to maintain that efficiency as demand changes and technologies evolve.

    Evaluating the performance efficiency of your architecture includes experimenting more often, using serverless architectures, and designing systems to be able to go global in minutes.

  • Cost optimization

    Cost optimization is the ability to run systems to deliver business value at the lowest price point.

    Cost optimization includes adopting a consumption model, analyzing and attributing expenditure, and using managed services to reduce the cost of ownership.

  • Sustainability

Sustainability is the ability to continually improve sustainability impacts by reducing energy consumption and increasing efficiency across all components of a workload: maximizing the benefits from provisioned resources while minimizing the total resources required.


References

  1. https://skillbuilder.aws/learn/94T2BEN85A/aws-cloud-practitioner-essentials

  2. https://youtube.com/playlist?list=PLAkmEquH84DtZuD28-OS7iJHCJ5qvgG1K&si=NP2DM1wbbaAC9irB


Written by

Viraj Vijaykumar Dalave

I am a student learning DevOps and Cloud Computing. My blogs and articles are primarily a platform for me to post whatever I am learning. My passion to explain things in simple words also makes me use this platform as a way to teach fellow learners if possible. I would love to receive feedback from most people coming across my articles.