🚀 Exciting Day 5 of My AWS DevOps Engineer Professional Journey! 🚀

Nirav Raychura
8 min read

Greetings, fellow tech enthusiasts! Today marks another thrilling chapter in my AWS DevOps certification journey, and I'm eager to share the knowledge gained on Day 5 through Stéphane Maarek's Udemy course.

💡 Course Progress - Day 5: Delving into OpsWorks, Lambda, API Gateway, ECS, ECR, and EKS!

As we navigate through diverse AWS services, let's dive into the wealth of insights acquired and the hands-on experiences gained.

๐Ÿ” Key Learnings

🛠 Overview of OpsWorks

AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments.

🔄 OpsWorks Lifecycle Events

AWS OpsWorks Stacks provides a set of lifecycle events that you can use to customize the deployment and configuration of your instances. Each layer has a set of five lifecycle events, each of which has an associated set of recipes that are specific to the layer. When an event occurs on a layer's instance, AWS OpsWorks Stacks automatically runs the appropriate set of recipes. Here are the five lifecycle events:

  1. Setup: This event occurs after a started instance has finished booting. AWS OpsWorks Stacks runs recipes that set the instance up according to its layer. For example, if the instance is a member of the Rails App Server layer, the Setup recipes install Apache, Ruby Enterprise Edition, Passenger, and Ruby on Rails.

  2. Configure: This event occurs on all of the stack's instances when one of the following occurs: an instance enters or leaves the online state; you associate an Elastic IP address with an instance or disassociate one from an instance; or you attach an Elastic Load Balancing load balancer to a layer, or detach one from a layer. AWS OpsWorks Stacks responds to the Configure event by running each layer's Configure recipes, which update the instances' configuration to reflect the current set of online instances.

  3. Deploy: This event occurs when you run a Deploy command, typically to deploy an application to a set of application server instances. The instances run recipes that deploy the application and any related files from its repository to the layer's instances (see the boto3 sketch after this list).

  4. Undeploy: This event occurs when you run an Undeploy command, typically to remove an application from a set of application server instances. The instances run recipes that remove the application and any related files from its repository from the layer's instances.

  5. Shutdown: This event occurs after you direct AWS OpsWorks Stacks to shut down an instance, but before the associated Amazon EC2 instance is actually terminated. AWS OpsWorks Stacks runs the instance's Shutdown recipes, which perform cleanup tasks such as shutting down services; the recipes themselves do not stop the instance.
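
To make the Deploy event concrete, here is a minimal boto3 sketch that triggers it by running a Deploy command against a stack. The stack and app IDs are placeholders, and with OpsWorks now past its end of life this is illustrative only:

```python
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

# Placeholder stack and app IDs; substitute your own.
response = opsworks.create_deployment(
    StackId="11111111-2222-3333-4444-555555555555",
    AppId="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
    Command={"Name": "deploy"},  # fires the Deploy lifecycle event on the layer's instances
)
print(response["DeploymentId"])
```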

Note: According to AWS, OpsWorks reached its end of life on May 26, 2024; see the AWS documentation for details on the OpsWorks end of life.

📊 OpsWorks CloudWatch Logs Integration

AWS OpsWorks Stacks supports Amazon CloudWatch Logs to simplify the process of monitoring logs on multiple instances. You enable CloudWatch Logs at the layer level in AWS OpsWorks Stacks. CloudWatch Logs integration works with Chef 11.10 and Chef 12 Linux-based stacks.
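
As a hedged sketch of what that layer-level configuration could look like with boto3 (the layer ID, log group name, and file path are all placeholders):

```python
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

# CloudWatch Logs integration is configured per layer.
opsworks.update_layer(
    LayerId="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",  # placeholder layer ID
    CloudWatchLogsConfiguration={
        "Enabled": True,
        "LogStreams": [
            {
                "LogGroupName": "/opsworks/rails-app-layer",  # placeholder log group
                "File": "/var/log/nginx/access.log",          # file(s) to ship
            }
        ],
    },
)
```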

🧹 Cleanup of OpsWorks

To prevent incurring additional charges to your AWS account, you can delete the AWS resources that were used for this walkthrough. These AWS resources include the AWS OpsWorks Stacks stack and the stack's components. To delete the stack, you can use the AWS Management Console or the AWS CLI.
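
A minimal boto3 sketch of that cleanup; all IDs are placeholders, and apps, layers, and instances have to be removed before the stack itself can be deleted:

```python
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

# Placeholder IDs throughout; instances must already be stopped.
opsworks.delete_app(AppId="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee")
opsworks.delete_instance(InstanceId="11111111-2222-3333-4444-555555555555")
opsworks.delete_layer(LayerId="99999999-8888-7777-6666-555555555555")
opsworks.delete_stack(StackId="00000000-1111-2222-3333-444444444444")
```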


๐ŸŒ Lambda Function Fundamentals

AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers. Lambda runs your code only when needed and scales automatically, from a few requests per day to thousands per second. You pay only for the compute time that you consume - there is no charge when your code is not running.
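
For illustration, a minimal Python handler of the kind such a function might run; the event shape here is an assumption:

```python
import json

def lambda_handler(event, context):
    """Entry point that Lambda invokes; `event` carries the request payload."""
    name = event.get("name", "world")  # assumed payload field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```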

🔄 Versioning and Aliasing in Lambda Functions

You can create aliases for your Lambda function. A Lambda alias is a pointer to a function version that you can update. The function's users can access the function version using the alias Amazon Resource Name (ARN). When you deploy a new version, you can update the alias to use the new version, or split traffic between two versions.
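
A small boto3 sketch of publishing a version and creating a weighted alias; the function name and the "stable" version number are placeholders:

```python
import boto3

lam = boto3.client("lambda")

# Freeze the current $LATEST code as a new immutable version.
new_version = lam.publish_version(FunctionName="my-function")["Version"]

# Point a "live" alias at a stable version (hypothetical version "2") and
# shift 10% of traffic to the newly published version.
lam.create_alias(
    FunctionName="my-function",
    Name="live",
    FunctionVersion="2",  # placeholder stable version
    RoutingConfig={"AdditionalVersionWeights": {new_version: 0.10}},
)
```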

๐Ÿ” Working with Environment Variables in Lambda Functions

You can use environment variables to pass configuration values to your Lambda function. Environment variables are key-value pairs that your function code reads at runtime. Lambda encrypts them at rest with AWS KMS; for sensitive values such as database passwords or API keys, consider a customer managed KMS key or a dedicated store such as AWS Secrets Manager.
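
A minimal boto3 sketch of setting environment variables on a function (names and values are placeholders); inside the handler they are read from the process environment:

```python
import boto3

lam = boto3.client("lambda")

# Placeholder configuration values; Lambda encrypts these at rest with KMS.
lam.update_function_configuration(
    FunctionName="my-function",
    Environment={"Variables": {"DB_HOST": "db.example.internal", "LOG_LEVEL": "INFO"}},
)

# Inside the function, the values are read from the process environment:
#   import os
#   db_host = os.environ["DB_HOST"]
```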

📈 Lambda Concurrency

AWS Lambda automatically scales your application in response to incoming requests. Concurrency is the number of requests that your function is serving at any given time. AWS Lambda automatically provisions enough capacity to handle the request volume, so that your function can scale without requiring you to manage any infrastructure.
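
One knob you can turn here is reserved concurrency, which caps how many concurrent executions a single function may use. A minimal boto3 sketch, with a placeholder function name:

```python
import boto3

lam = boto3.client("lambda")

# Cap this function at 100 concurrent executions; this also carves that
# capacity out of the account-level concurrency pool.
lam.put_function_concurrency(
    FunctionName="my-function",
    ReservedConcurrentExecutions=100,
)
```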

📂 Mounting a File System in Lambda

You can mount an Amazon Elastic File System (Amazon EFS) file system to your AWS Lambda function. This enables your function to read and write data on the file system. To mount an Amazon EFS file system to your Lambda function, you create an EFS access point and reference it in the function's file system configuration, along with a local mount path under /mnt; the function must also be connected to the VPC in which the file system runs.
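
A hedged boto3 sketch of attaching an EFS access point to a function; the access point ARN is a placeholder, and the function is assumed to already be attached to the file system's VPC:

```python
import boto3

lam = boto3.client("lambda")

# Placeholder access point ARN; the mount path must be under /mnt.
lam.update_function_configuration(
    FunctionName="my-function",
    FileSystemConfigs=[
        {
            "Arn": "arn:aws:elasticfilesystem:us-east-1:123456789012:access-point/fsap-0123456789abcdef0",
            "LocalMountPath": "/mnt/data",
        }
    ],
)
```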

๐ŸŒ Cross-Account File System Mounting

To mount an Amazon EFS file system from another AWS account, you can use AWS Resource Access Manager (RAM). RAM enables you to share your Amazon EFS file systems across AWS accounts. You can use RAM to create a resource share that grants access to your Amazon EFS file system to another AWS account.
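
A minimal boto3 sketch of sharing a file system through RAM; the file system ARN and the consuming account ID are placeholders:

```python
import boto3

ram = boto3.client("ram")

# Share the file system with a second (placeholder) AWS account.
ram.create_resource_share(
    name="efs-cross-account-share",
    resourceArns=[
        "arn:aws:elasticfilesystem:us-east-1:123456789012:file-system/fs-0123456789abcdef0"
    ],
    principals=["210987654321"],  # the consuming AWS account
)
```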


🚀 Introduction to API Gateway

Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. With API Gateway, you can create RESTful APIs, as well as WebSocket APIs that enable real-time two-way communication applications.

📊 Stages and Deployment in API Gateway

API Gateway enables you to create multiple stages for your APIs, such as development, test, and production. Each stage is a snapshot of your API that you can manage independently. You can deploy your API to a stage by creating a deployment.
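
A minimal boto3 sketch of deploying an API to a stage; the REST API ID is a placeholder:

```python
import boto3

apigateway = boto3.client("apigateway")

# Snapshot the current API configuration and deploy it to the "dev" stage.
apigateway.create_deployment(
    restApiId="a1b2c3d4e5",  # placeholder API ID
    stageName="dev",
    description="Day 5 demo deployment",
)
```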

📄 OpenAPI Integration with AWS API Gateway

OpenAPI is a specification for building APIs. You can use OpenAPI to define your API Gateway APIs. To import an OpenAPI definition file into API Gateway, you can use the AWS::ApiGateway::RestApi resource type in your AWS CloudFormation template, and then use the AWS::ApiGateway::Deployment resource type to deploy the API.
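
Alongside the CloudFormation route, the same import can be sketched with boto3; the definition file name is a placeholder:

```python
import boto3

apigateway = boto3.client("apigateway")

# Import a local OpenAPI definition (JSON or YAML) as a new REST API.
with open("openapi.yaml", "rb") as f:
    api = apigateway.import_rest_api(failOnWarnings=True, body=f.read())

print(api["id"])  # use this ID for subsequent deployments
```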

📈 API Gateway Caching

API Gateway caching is a technique that involves storing the responses from API calls in a cache and serving them directly from the cache instead of making repeated requests to the backend services. This caching mechanism can greatly reduce the response time and alleviate the load on your backend systems.
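
A hedged boto3 sketch of enabling a stage cache; the API ID, stage name, cache size, and TTL are placeholder choices:

```python
import boto3

apigateway = boto3.client("apigateway")

# Enable a 0.5 GB cache cluster on the stage, turn on caching for all
# methods, and set a 5-minute TTL.
apigateway.update_stage(
    restApiId="a1b2c3d4e5",  # placeholder API ID
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
        {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},
        {"op": "replace", "path": "/*/*/caching/enabled", "value": "true"},
        {"op": "replace", "path": "/*/*/caching/ttlInSeconds", "value": "300"},
    ],
)
```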

🚦 Canary Deployment in API Gateway

Canary deployment is a software development strategy in which a new version of an API is deployed for testing purposes, and the base version remains deployed as a production release for normal operations on the same stage. In a canary release deployment, total API traffic is separated at random into a production release and a canary release with a pre-configured ratio. The updated API features are only visible to API traffic through the canary.
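
A minimal boto3 sketch of creating such a canary deployment, shifting 10% of traffic to the new configuration; the API ID, stage name, and percentage are placeholders:

```python
import boto3

apigateway = boto3.client("apigateway")

# Deploy the latest API configuration as a canary on the "prod" stage.
apigateway.create_deployment(
    restApiId="a1b2c3d4e5",  # placeholder API ID
    stageName="prod",
    canarySettings={"percentTraffic": 10.0, "useStageCache": False},
)
```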

📊 API Gateway Monitoring, Logging, and Tracing

Monitoring is an important part of maintaining the reliability, availability, and performance of API Gateway and your AWS solutions. You should collect monitoring data from all of the parts of your AWS solution so that you can more easily debug a multi-point failure if one occurs. AWS provides several tools for monitoring your API Gateway resources and responding to potential incidents, such as Amazon CloudWatch Logs, Amazon CloudWatch Alarms, Access Logging to Kinesis Data Firehose, AWS CloudTrail, AWS X-Ray, and AWS Config.
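
As one illustration, a hedged boto3 sketch that switches on X-Ray tracing, execution logging, and detailed CloudWatch metrics for a stage; IDs are placeholders, and execution logging assumes the account-level CloudWatch role is already configured:

```python
import boto3

apigateway = boto3.client("apigateway")

# Enable X-Ray tracing, INFO-level execution logs, and detailed metrics
# for every method on the stage.
apigateway.update_stage(
    restApiId="a1b2c3d4e5",  # placeholder API ID
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/tracingEnabled", "value": "true"},
        {"op": "replace", "path": "/*/*/logging/loglevel", "value": "INFO"},
        {"op": "replace", "path": "/*/*/metrics/enabled", "value": "true"},
    ],
)
```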


๐ŸŒ Introduction to AWS ECS

Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that helps you easily deploy, manage, and scale containerized applications. As a fully managed service, Amazon ECS comes with AWS configuration and operational best practices built-in. You can run and scale your container workloads across AWS Regions in the cloud, and on-premises, without the complexity of managing a control plane.

🔄 ECS Launch Types

Amazon ECS supports two launch types: Amazon ECS on EC2 and AWS Fargate. With the EC2 launch type, you manage your own EC2 instances to run containers, which provides maximum control and flexibility. AWS Fargate, on the other hand, abstracts the underlying infrastructure, making it a serverless option ideal for simplified deployments.
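
A minimal boto3 sketch of running a task on Fargate; the cluster, task definition, subnet, and security group are placeholders:

```python
import boto3

ecs = boto3.client("ecs")

# Run one task on Fargate; Fargate requires awsvpc networking.
ecs.run_task(
    cluster="demo-cluster",          # placeholder cluster
    taskDefinition="web:1",          # placeholder task definition
    launchType="FARGATE",
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)
```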

📈 ECS Auto Scaling

Amazon ECS Auto Scaling is a service that automatically scales your Amazon ECS tasks based on the demand of your applications. You can use Amazon ECS Auto Scaling to automatically adjust the number of tasks in your service based on the metrics and thresholds that you specify.

Note: This is different from EC2 Auto Scaling; ECS Service Auto Scaling adjusts the number of tasks, not the number of EC2 instances.
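
Under the hood this uses Application Auto Scaling against the service's desired task count. A hedged boto3 sketch, with placeholder cluster and service names:

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the service's desired count as a scalable target (2 to 10 tasks).
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/demo-cluster/web-service",  # placeholder names
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=10,
)

# Track 70% average CPU across the service's tasks.
autoscaling.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId="service/demo-cluster/web-service",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)
```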

📊 Logging in ECS

You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon ECS. To enable logging for your Amazon ECS tasks, you can use the awslogs log driver in your task definition. You can then use CloudWatch Logs to view and analyze your log data.
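
A hedged boto3 sketch of a task definition that uses the awslogs driver; the family, image, role ARN, and log group are placeholders:

```python
import boto3

ecs = boto3.client("ecs")

# Container stdout/stderr goes to CloudWatch Logs via the awslogs driver;
# on Fargate, the execution role must be able to write to the log group.
ecs.register_task_definition(
    family="web",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # placeholder
    containerDefinitions=[
        {
            "name": "app",
            "image": "public.ecr.aws/nginx/nginx:latest",
            "essential": True,
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/web",
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "app",
                },
            },
        }
    ],
)
```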

๐ŸŒ Introduction to AWS ECR

Amazon Elastic Container Registry (Amazon ECR) is a fully-managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images. Amazon ECR eliminates the need to operate your own container repositories or worry about scaling the underlying infrastructure.

๐ŸŒ Introduction to AWS EKS

Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service that makes it easy to deploy, manage, and scale containerized applications using Kubernetes on AWS. Amazon EKS runs the Kubernetes management infrastructure for you across multiple AWS Availability Zones to eliminate a single point of failure.

📊 Logging in EKS

You can use Amazon CloudWatch Logs to monitor, store, and access log data from Amazon EKS. For the Kubernetes control plane, you enable control plane logging on the cluster (API server, audit, authenticator, controller manager, and scheduler logs). For application logs from pods, you typically run a log agent such as Fluent Bit or Fluentd as a DaemonSet that ships container logs to CloudWatch Logs.
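
A minimal boto3 sketch of enabling control plane logging on a cluster (the cluster name is a placeholder); pod logs would still need a node-side agent:

```python
import boto3

eks = boto3.client("eks")

# Enable all five control plane log types for the cluster.
eks.update_cluster_config(
    name="demo-cluster",  # placeholder cluster name
    logging={
        "clusterLogging": [
            {
                "types": [
                    "api",
                    "audit",
                    "authenticator",
                    "controllerManager",
                    "scheduler",
                ],
                "enabled": True,
            }
        ]
    },
)
```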


✨ The Journey Continues: As Day 5 wraps up, I'm excited about the depth of understanding gained and the practical skills acquired. Stay tuned for more updates as my AWS DevOps journey continues to unfold!


If you have any doubts, suggestions, or questions, let's connect on LinkedIn or Twitter (X).


Written by

Nirav Raychura

🚀 Tech Enthusiast since 2014 | Cloud Maestro with expertise in AWS, Azure, GCP, and Oracle Cloud ☁️ | Navigating the cloud landscape since 2022 | Holder of 8 Cloud Certificates 🏅 | BCA Graduate 🎓 | Proficient in the programming languages of C, C++, GO, VB, and more 🖥️ | Entrepreneur with a focus on servers, NAS, firewalls, networking, and CCTV 🌐 | Architecting the future of tech, one line of code at a time.