AWS Best Practices - Part 3

Rahul wath
5 min read

In this blog, we will cover key practices for managing EC2, VPC, ELB, RDS, and ElastiCache in AWS. We will discuss the importance of tagging resources for better organization, using termination protection to prevent accidental instance deletion, and the benefits of setting up VPCs for enhanced security and control. You'll learn how to save costs with reserved instances, secure access through locked-down security groups, and release unused Elastic IPs.

We will also dive into ELB practices such as SSL termination and pre-warming for high traffic. Additionally, we'll cover RDS failover event subscriptions and the use of configuration endpoints in ElastiCache to dynamically scale your caching solutions. These best practices ensure a well-architected, secure, and scalable AWS infrastructure.

EC2/VPC - Elastic Compute Cloud / Virtual Private Cloud

  • Tag Everything

    Pretty much everything can be given tags, so use them! They’re great for organisation, and they make it easier to search for and group resources. You can also use them to trigger certain behaviour on your instances; for example, a tag of env=debug could put your application into debug mode when it deploys.
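
    As a sketch, here's one way tags could be attached at launch with boto3 (the tag names and values here are made-up examples, not anything AWS requires):

```python
# Sketch: building the TagSpecifications structure that boto3's
# ec2.run_instances expects, from a plain dict (example tags are illustrative).
def build_tag_specifications(tags):
    """Convert a {key: value} dict into run_instances' TagSpecifications."""
    return [{
        "ResourceType": "instance",
        "Tags": [{"Key": k, "Value": v} for k, v in sorted(tags.items())],
    }]

tag_spec = build_tag_specifications({"Name": "web-1", "env": "debug", "team": "platform"})

# In real use you would pass this to run_instances:
# import boto3
# ec2 = boto3.client("ec2")
# ec2.run_instances(ImageId="ami-...", MinCount=1, MaxCount=1,
#                   InstanceType="t3.micro", TagSpecifications=tag_spec)
```

    Your deploy script can then read the env tag back from instance metadata or describe_tags and flip debug mode accordingly.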

  • Termination Protection : Use termination protection for non-auto-scaling instances.

    If you have any instances which are one-off things that aren’t under auto-scaling, then you should probably enable termination protection to stop anyone from accidentally deleting the instance. I’ve had it happen; it sucks. Learn from my mistake!
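
    Under the hood this is the DisableApiTermination instance attribute; a minimal sketch of toggling it (the instance ID is hypothetical):

```python
# Sketch: enabling termination protection by setting the
# DisableApiTermination attribute (instance ID below is made up).
def termination_protection_params(instance_id, enabled=True):
    """Build the arguments for ec2.modify_instance_attribute."""
    return {
        "InstanceId": instance_id,
        "DisableApiTermination": {"Value": enabled},
    }

params = termination_protection_params("i-0123456789abcdef0")

# import boto3
# boto3.client("ec2").modify_instance_attribute(**params)
```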

  • Use VPC

    Setting up a VPC seems like a pain at first, but once you get stuck in and play with it, it’s surprisingly easy to set up and get going. It provides all sorts of extra features over classic EC2 that are well worth the extra setup time. First, you can control traffic at the network level using ACLs, and you can modify instance size, security groups, etc. without needing to terminate an instance. You can specify egress firewall rules (you cannot control outbound traffic in classic EC2). But the biggest thing is that you get your own private subnet where your instances are completely cut off from everyone else, which adds an extra layer of protection.
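
    To illustrate the network-level control, here's a sketch of a subnet-wide ACL rule (the ACL ID, rule number, and CIDR are all illustrative):

```python
# Sketch: a network ACL entry that denies all inbound traffic from one CIDR,
# in the shape ec2.create_network_acl_entry expects (values are made up).
def deny_inbound_rule(acl_id, rule_number, cidr):
    """DENY rule for all protocols from the given CIDR; lower rule
    numbers are evaluated first, so this runs before broader ALLOW rules."""
    return {
        "NetworkAclId": acl_id,
        "RuleNumber": rule_number,
        "Protocol": "-1",        # -1 means all protocols
        "RuleAction": "deny",
        "Egress": False,         # inbound rule
        "CidrBlock": cidr,
    }

rule = deny_inbound_rule("acl-0abc123", 90, "198.51.100.0/24")

# import boto3
# boto3.client("ec2").create_network_acl_entry(**rule)
```

    Unlike security groups, ACL rules are stateless and apply to the whole subnet, which is why they make a good coarse outer layer.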

  • Reserved Instances : Use reserved instances to save big $$$.

    Reserving an instance just means putting some money upfront in order to get a lower hourly rate, which ends up being a lot cheaper than on-demand pricing. So if you know you’re going to keep an instance around for 1 or 3 years, it’s well worth reserving it. Reserved instances are a purely logical concept in AWS: you don’t assign a specific instance to be reserved, but rather specify the type and size, and any running instances that match those criteria get the lower price.

  • Lock Security Groups : Lock down your security groups.

    Don’t use 0.0.0.0/0 if you can help it; use specific rules to restrict access to your instances. For example, if your instances are behind an ELB, set your security groups to only allow traffic from the ELBs, rather than from 0.0.0.0/0. You can do that by entering “amazon-elb/amazon-elb-sg” as the source (it should auto-complete for you). If you need to allow some of your other instances access to certain ports, don’t use their IPs; specify their security group identifier instead (just start typing “sg-” and it should auto-complete for you).
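
    In API terms, referencing another security group means using a UserIdGroupPairs entry instead of a CIDR. A sketch with hypothetical group IDs:

```python
# Sketch: an ingress rule that allows a port only from another security
# group rather than a CIDR block (group IDs are made up).
def sg_to_sg_ingress(port, source_sg_id):
    """An IpPermissions entry for ec2.authorize_security_group_ingress
    whose source is a security group, not an IP range."""
    return {
        "IpProtocol": "tcp",
        "FromPort": port,
        "ToPort": port,
        "UserIdGroupPairs": [{"GroupId": source_sg_id}],
    }

rule = sg_to_sg_ingress(80, "sg-0elbaaaa111")

# import boto3
# boto3.client("ec2").authorize_security_group_ingress(
#     GroupId="sg-0appbbbb222", IpPermissions=[rule])
```

    The nice part of group-to-group rules is that they keep working as instances come and go, since membership, not IP, is what grants access.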

  • Release EIPs : Don’t keep unassociated Elastic IPs.

    You get charged for any Elastic IPs you have created but not associated with an instance, so make sure you don’t keep them around once you’re done with them.
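
    A quick sketch of spotting the idle ones: an address with no AssociationId in the ec2.describe_addresses response is not attached to anything (sample data below is made up):

```python
# Sketch: filtering unassociated Elastic IPs out of data shaped like the
# response from ec2.describe_addresses() (sample entries are fictional).
def unassociated_eips(addresses):
    """Return allocation IDs of addresses with no association."""
    return [a["AllocationId"] for a in addresses if "AssociationId" not in a]

sample = [
    {"AllocationId": "eipalloc-1", "AssociationId": "eipassoc-1", "PublicIp": "203.0.113.10"},
    {"AllocationId": "eipalloc-2", "PublicIp": "203.0.113.11"},  # idle -> billed
]
idle = unassociated_eips(sample)

# for alloc_id in idle:
#     boto3.client("ec2").release_address(AllocationId=alloc_id)
```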


ELB - Elastic Load Balancer

  • Terminate SSL : Terminate SSL on the load balancer.

    You’ll need to add your SSL certificate to the ELB, but this takes the overhead of SSL termination away from your servers, which can speed things up. Additionally, if you upload your SSL certificate, the load balancer can handle the HTTPS traffic itself and add some extra headers to your request (X-Forwarded-For, X-Forwarded-Proto, etc.), which are useful if you want to know who the end user is. If you just forward TCP, those headers aren’t added and you lose that information.
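
    For example, once the ELB is terminating HTTP/HTTPS, your application can recover the original client address from X-Forwarded-For; a minimal sketch:

```python
# Sketch: reading the client IP back out of the X-Forwarded-For header
# that a load balancer adds (sample addresses are illustrative).
def client_ip(x_forwarded_for):
    """Each proxy hop appends itself to the list, so the left-most
    entry is the original client."""
    if not x_forwarded_for:
        return None
    return x_forwarded_for.split(",")[0].strip()

ip = client_ip("198.51.100.7, 10.0.1.5")
```

    Note that the header is client-suppliable in general, so only trust it when the request provably came through your load balancer.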

  • Pre-Warm ELB : Pre-warm your ELBs if you’re expecting heavy traffic.

    It takes time for your ELB to scale up capacity. If you know you’re going to have a large traffic spike (selling tickets, big event, etc.), you need to “warm up” your ELB in advance. You can inject a load of traffic yourself, which will cause the ELB to scale up so it doesn’t choke when the real traffic arrives; however, AWS suggests you contact them instead to pre-warm your load balancer.


RDS - Relational Database Service

  • Failover Event Subscription : Set up event subscriptions for failover.

    If you’re using a Multi-AZ setup, set up an event subscription so you get notified when a failover happens. It’s one of those things you might not think about, but it ends up being incredibly useful when you do need it.
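
    A sketch of what that subscription could look like via boto3 (the subscription name and SNS topic ARN are hypothetical):

```python
# Sketch: subscribing an SNS topic to RDS failover events, in the shape
# rds.create_event_subscription expects (name and ARN are made up).
def failover_subscription_params(name, sns_topic_arn):
    """Arguments for rds.create_event_subscription, filtered to the
    'failover' event category on DB instances."""
    return {
        "SubscriptionName": name,
        "SnsTopicArn": sns_topic_arn,
        "SourceType": "db-instance",
        "EventCategories": ["failover"],
        "Enabled": True,
    }

params = failover_subscription_params(
    "prod-failover-alerts",
    "arn:aws:sns:us-east-1:123456789012:rds-alerts")

# import boto3
# boto3.client("rds").create_event_subscription(**params)
```

    The SNS topic can then fan out to email, chat, or a pager, so a failover never goes unnoticed.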


ElastiCache

  • Configuration Endpoints : Use the configuration endpoints, instead of individual node endpoints.

    Normally you would have to make your application aware of every Memcached node available. If you want to dynamically scale your capacity, this becomes an issue, as you need some way to make your application aware of the changes. An easier way is to use the configuration endpoint together with an AWS version of the Memcached client library, which abstracts away the auto-discovery of new nodes.
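
    Behind the scenes, auto-discovery works by querying the configuration endpoint (a "config get cluster" command on ElastiCache's Memcached), which returns a config version and a line of "hostname|ip|port" entries. As a rough sketch of parsing that response (the payload below is a made-up sample, not real cluster data):

```python
# Sketch: parsing an ElastiCache auto-discovery payload. As I understand
# the format, the first line is a config version number and the second is
# space-separated "hostname|ip|port" node entries (sample is fictional).
def parse_cluster_config(payload):
    lines = payload.strip().splitlines()
    version = int(lines[0])
    nodes = []
    for entry in lines[1].split():
        host, ip, port = entry.split("|")
        nodes.append((host, ip, int(port)))
    return version, nodes

payload = ("12\n"
           "cache.0001.use1.example.com|10.0.1.1|11211 "
           "cache.0002.use1.example.com|10.0.1.2|11211\n")
version, nodes = parse_cluster_config(payload)
```

    In practice you wouldn't write this yourself; the AWS ElastiCache Cluster Client does it for you and re-polls periodically so new nodes show up automatically.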


Source: https://roadmap.sh/best-practices/aws

Stay Tuned!

Be sure to follow and subscribe for more updates and upcoming blogs.

Follow me on LinkedIn 🔗 and Hashnode ✍️!


Written by

Rahul wath

An experienced DevOps Engineer who understands how to integrate operations and development in order to deliver code to customers quickly. Has experience with cloud and monitoring processes, as well as DevOps development on Windows, Mac, and Linux systems.