RDS Example Using AWS CDK

Mikaeel Khalid
10 min read

In this blog post, I will walk through provisioning an RDS database instance and connecting to it from an EC2 instance.

Since both EC2 and RDS services require a Virtual Private Cloud (VPC), we'll also cover setting up a custom VPC with appropriate subnets. Specifically, the RDS instance will reside in an ISOLATED (private) subnet, while the EC2 instance will be deployed within a PUBLIC subnet.

💡
You can find the complete source code for this blog post on GitHub.

Prerequisites

  1. AWS account with appropriate permissions

  2. AWS CLI installed and configured

  3. Node.js and AWS CDK v2 installed

  4. Basic TypeScript knowledge

Project Setup

First, create a new CDK project:

mkdir deploy-rds-ec2 
cd deploy-rds-ec2 
cdk init app --language typescript

Let's start by setting up our Virtual Private Cloud (VPC) and an EC2 instance. Open the file lib/deploy-rds-ec2-stack.ts and add the following implementation:

import {
  App, 
  CfnOutput,
  Duration, 
  RemovalPolicy,
  Stack, 
  StackProps
} from 'aws-cdk-lib';
import {
  AmazonLinuxGeneration, 
  AmazonLinuxImage,
  Instance, 
  InstanceClass,
  InstanceSize, 
  InstanceType,
  IpAddresses, 
  KeyPair,
  Peer, 
  Port,
  SecurityGroup,
  SubnetType, 
  Vpc
} from 'aws-cdk-lib/aws-ec2';
import {
  Credentials, 
  DatabaseInstance,
  DatabaseInstanceEngine, 
  PostgresEngineVersion
} from 'aws-cdk-lib/aws-rds';

export class DeployRDSEC2Stack extends Stack {
  constructor(scope: App, id: string, props?: StackProps) {
    super(scope, id, props);

    // create a vpc
    const vpc = new Vpc(this, 'main-vpc', {
      ipAddresses: IpAddresses.cidr('10.0.0.0/16'),
      natGateways: 0,
      maxAzs: 2,
      subnetConfiguration: [
        {
          name: 'public-subnet-1',
          subnetType: SubnetType.PUBLIC,
          cidrMask: 24,
        },
        {
          name: 'isolated-subnet-1',
          subnetType: SubnetType.PRIVATE_ISOLATED,
          cidrMask: 28,
        },
      ],
    });

    // create a security group for the EC2 instance
    const ec2InstanceSG = new SecurityGroup(this, 'ec2-instance-sg', {
      vpc,
    });

    ec2InstanceSG.addIngressRule(
      Peer.anyIpv4(),
      Port.tcp(22),
      'allow SSH connections from anywhere',
    );

    // import an existing SSH key pair (must already exist in your region)
    const keyPair = KeyPair.fromKeyPairName(
      this,
      'key-pair',
      'ec2-demo-key-pair',
    );

    // create the EC2 instance
    const ec2Instance = new Instance(this, 'ec2-instance', {
      vpc,
      vpcSubnets: {
        subnetType: SubnetType.PUBLIC,
      },
      securityGroup: ec2InstanceSG,
      instanceType: InstanceType.of(
        InstanceClass.BURSTABLE2,
        InstanceSize.MICRO,
      ),
      machineImage: new AmazonLinuxImage({
        generation: AmazonLinuxGeneration.AMAZON_LINUX_2,
      }),
      keyPair,
    });
  }
}

Let's review the above code snippet:

  1. We've configured our VPC with both PUBLIC and ISOLATED subnet groups.

    Instances within a PUBLIC subnet have internet access and are reachable from the internet through an internet gateway. This setup is suitable for resources like our EC2 instance, which we'll deploy in the PUBLIC subnet.

    In contrast, instances within an ISOLATED subnet do not have internet access and are not reachable from outside the VPC. These subnets are ideal for resources meant strictly for internal communication. Our RDS instance will reside in the ISOLATED subnet since it only needs to be accessed by our EC2 instance within the same VPC.

  2. We set up a Security Group specifically for our EC2 instance. It currently has a single inbound rule, allowing SSH connections (port 22) from any IP address (0.0.0.0/0). For anything beyond a short-lived demo, restrict this to your own IP.

  3. We provisioned a t2.micro EC2 instance using the Amazon Linux 2 AMI and placed it in the PUBLIC subnet.

    Note that we included the keyPair property when creating the EC2 instance. This key pair will enable SSH access, allowing us to interact directly with our RDS database from the EC2 instance. Ensure that a key pair matching the specified name already exists in your default AWS region, otherwise the deployment process will fail.

    💡
    Before proceeding to set up the RDS instance, verify that a key pair named ec2-demo-key-pair already exists in your default AWS region. This step is essential to ensure successful deployment and SSH access to the EC2 instance.

    Let's now create a key pair named ec2-demo-key-pair in your default AWS region. Alternatively, you can replace this key name with an existing one already set up in your AWS account.

    Follow these steps to create the key pair:

    1. Open the EC2 Management Console in AWS.

    2. From the navigation pane, select Key Pairs.

    3. Click on Create key pair.

    4. Provide the key pair name as ec2-demo-key-pair.

    5. Select the file format suitable for your operating system:

      • .pem for Mac or Linux systems.

      • .ppk for Windows systems (commonly used with PuTTY).

Download and securely store the key file, as you'll need it to SSH into your EC2 instance.

After the key pair has been created, change to the directory it was downloaded in and change its permissions:

chmod 400 ec2-demo-key-pair.pem
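Before moving on to the database, a quick check on the cidrMask values we chose for the VPC: the prefix length determines how many addresses each subnet provides, and AWS reserves five addresses in every subnet. The arithmetic as a small helper (our own utility, not a CDK API):

```typescript
// usable IPv4 addresses in a subnet of the given prefix length;
// AWS reserves 5 addresses per subnet (network address, VPC router,
// DNS, future use, and broadcast)
function usableAddresses(cidrMask: number): number {
  return 2 ** (32 - cidrMask) - 5;
}
```

So our /24 public subnets offer 251 usable addresses each, and the /28 isolated subnets 11, which is plenty for a database instance.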

Now, let's add the RDS instance to our stack. Add the following directly below the EC2 instance definition; the imports are repeated here for completeness:

import {
  App, 
  CfnOutput,
  Duration, 
  RemovalPolicy,
  Stack, 
  StackProps
} from 'aws-cdk-lib';
import {
  AmazonLinuxGeneration, 
  AmazonLinuxImage,
  Instance, 
  InstanceClass,
  InstanceSize, 
  InstanceType,
  IpAddresses, 
  KeyPair,
  Peer, 
  Port,
  SecurityGroup,
  SubnetType, 
  Vpc
} from 'aws-cdk-lib/aws-ec2';
import {
  Credentials, 
  DatabaseInstance,
  DatabaseInstanceEngine, 
  PostgresEngineVersion
} from 'aws-cdk-lib/aws-rds';

export class DeployRDSEC2Stack extends Stack {
  constructor(scope: App, id: string, props?: StackProps) {
    super(scope, id, props);

    // ... rest of the code

    // create RDS Instance
    const dbInstance = new DatabaseInstance(this, 'db-instance', {
      vpc,
      vpcSubnets: {
        subnetType: SubnetType.PRIVATE_ISOLATED,
      },
      engine: DatabaseInstanceEngine.postgres({
        version: PostgresEngineVersion.VER_14,
      }),
      instanceType: InstanceType.of(
        InstanceClass.BURSTABLE3,
        InstanceSize.MICRO,
      ),
      credentials: Credentials.fromGeneratedSecret('postgres'),
      multiAz: false,
      allocatedStorage: 100,
      maxAllocatedStorage: 120,
      allowMajorVersionUpgrade: false,
      autoMinorVersionUpgrade: true,
      backupRetention: Duration.days(0),
      deleteAutomatedBackups: true,
      removalPolicy: RemovalPolicy.DESTROY,
      deletionProtection: false,
      databaseName: 'testdb', // RDS Postgres database names may not contain hyphens
      publiclyAccessible: false,
    });

    dbInstance.connections.allowFrom(ec2Instance, Port.tcp(5432));

    new CfnOutput(this, 'db-endpoint', {
      value: dbInstance.instanceEndpoint.hostname,
    });

    new CfnOutput(this, 'secret-name', {
      value: dbInstance.secret!.secretName,
    });

    // output the EC2 instance's public IP so we can SSH into it
    new CfnOutput(this, 'ec2-public-ip', {
      value: ec2Instance.instancePublicIp,
    });
  }
}

Let's review the above code snippet:

  1. We've created an RDS database by instantiating the DatabaseInstance class from the AWS CDK.

  2. The props we provided to the constructor are as follows:

| Name | Description |
| --- | --- |
| vpc | The VPC in which the DB subnet group will be created. |
| vpcSubnets | The type of subnets the DB subnet group should consist of; in our case, ISOLATED subnets. |
| engine | The engine for the database; in our case, Postgres version 14. |
| instanceType | The class and size for the instance; in our case, t3.micro. |
| credentials | The credentials for the admin user of the database. We used the fromGeneratedSecret method and passed it a username of postgres; the password is auto-generated and stored in Secrets Manager. |
| multiAz | Whether the RDS instance is a multi-AZ deployment. We set it to false, which is also the default. For production workloads, you would most likely use a standby instance for high availability. |
| allocatedStorage | The allocated storage size of the database, in gigabytes. We set the value to 100 gigabytes, which is also the default. |
| maxAllocatedStorage | The upper limit for storage auto scaling. In our case, we set it to 120 gigabytes. By default, there is no storage auto scaling. |
| backupRetention | For how many days automatic database snapshots should be kept. We turned automated snapshots off by setting the value to 0 days; the default is 1 day. |
| deleteAutomatedBackups | Whether automated backups should be deleted or retained when the RDS instance is deleted. By default, automated backups are retained on instance deletion. |
| removalPolicy | The policy applied if the resource is deleted from the stack or replaced during an update. By default, the instance is deleted but a snapshot of the data is retained. |
| deletionProtection | Whether the DB instance should have termination protection enabled. Defaults to true if removalPolicy is RETAIN, false otherwise. |
| databaseName | The name of the initial database to create. |
| publiclyAccessible | Whether the RDS instance should be publicly accessible. Defaults to true for instances launched in PUBLIC subnet groups, false otherwise. |
  3. Next, we allowed connections to our RDS instance, on port 5432, from the security group of the EC2 instance.

  4. Finally, we created stack outputs, including:

  • The database hostname that we'll use to connect to our RDS instance

  • The name of the secret that stores the password of the postgres user

Deploying the RDS and EC2 Instance

Let's deploy the stack and test our RDS instance:

npx aws-cdk deploy --outputs-file ./stack-outputs.json

We've directed the outputs into a file named stack-outputs.json located in the root directory.

After several minutes, AWS completes provisioning all the resources; the RDS instance usually takes the longest.

Upon inspecting the RDS instance's security group, you'll notice it permits inbound connections on port 5432 specifically from the security group assigned to our EC2 instance. This configuration ensures our EC2 instance can securely communicate with the RDS database after we establish an SSH session.

Before initiating an SSH connection to the EC2 instance, you'll need the database user's password, which is securely stored as a secret in AWS Secrets Manager.

To retrieve this secret:

  • Using the AWS Management Console:

    1. Open the Secrets Manager service.

    2. Select your secret, and click Retrieve Secret Value to reveal the password.

  • Using AWS CLI:

    1. Replace YOUR_SECRET_NAME with the secret-name output value from the stack-outputs.json file, and run this command to retrieve the secret:

       aws secretsmanager get-secret-value \
         --secret-id YOUR_SECRET_NAME --output yaml

Copy and store the returned password safely, as you'll need it to access your database from the EC2 instance.
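For reference, the SecretString that Secrets Manager returns for an RDS-generated secret is a JSON document with fields such as username, password, host, port, and dbname. A small helper to pull out the connection fields (a sketch; the interface name is ours):

```typescript
// shape of the fields we care about in an RDS-generated secret
interface DbCredentials {
  username: string;
  password: string;
  host?: string;
  port?: number;
}

// parse the SecretString JSON returned by `get-secret-value`
function parseDbSecret(secretString: string): DbCredentials {
  const secret = JSON.parse(secretString);
  return {
    username: secret.username,
    password: secret.password,
    host: secret.host,
    port: secret.port,
  };
}
```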

Connecting to RDS from the EC2 Instance

Let’s now SSH into our EC2 instance and connect to the RDS database.

  1. Open your terminal and navigate to the directory where you saved the ec2-demo-key-pair private key file.

  2. Set the correct permissions on the key file (if needed):

     chmod 400 ec2-demo-key-pair.pem
    
  3. SSH into the EC2 instance using the public IP address from the stack-outputs.json file:

     ssh -i ec2-demo-key-pair.pem ec2-user@<EC2_PUBLIC_IP>
    

    Replace <EC2_PUBLIC_IP> with the actual public IP address of your EC2 instance.

Once connected, you'll be inside the EC2 instance, ready to install a PostgreSQL client and connect to your RDS database.

# install the PostgreSQL client (the server packages are not needed here)
sudo amazon-linux-extras install postgresql14 -y

Now we can connect to the RDS instance. Replace YOUR_DB_ENDPOINT with the db-endpoint output value from the stack-outputs.json file; alternatively, grab the Endpoint value from the RDS management console.

psql -p 5432 -h YOUR_DB_ENDPOINT -U postgres

You will be prompted for the password of the postgres user. Paste the value you grabbed from Secrets Manager and you should be connected to the RDS instance.
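If you later connect from application code instead of psql, the endpoint, port, user, password, and database name combine into a standard PostgreSQL connection URL. A minimal helper (hypothetical, not part of the stack):

```typescript
// assemble a postgresql:// URL; the password is URL-encoded because
// generated secrets can contain characters that are reserved in URLs
function buildConnectionUrl(
  host: string,
  port: number,
  user: string,
  password: string,
  database: string,
): string {
  return `postgresql://${user}:${encodeURIComponent(password)}@${host}:${port}/${database}`;
}
```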

Let's list the databases:

\l

We can see that RDS has created our database with the name testdb.

Let's connect to it.

-- show the current database
SELECT current_database();

-- connect to the testdb database (a psql meta-command)
\c testdb

Let's create a table and insert a few rows in it.

CREATE TABLE IF NOT EXISTS demotable (id SERIAL PRIMARY KEY, text TEXT NOT NULL);

INSERT INTO demotable (text) VALUES ('hello world');

Finally, let's print the records from the demotable table of our RDS instance.

SELECT * FROM demotable;

We were able to successfully connect and interact with our RDS instance from an EC2 instance.

💡
Don't forget to delete the resources you have provisioned, to avoid incurring charges.

Cleanup

cdk destroy

or

npx aws-cdk destroy
💡
Double-check the Secrets Manager console to ensure the database secret is deleted. Also, verify in the RDS console that all manual snapshots have been removed to avoid unnecessary charges.

Cost Optimization Tips

  1. Use db.t3.micro instances for development and testing (pricing varies by region)

  2. Schedule shutdowns for non-production instances

  3. Use Provisioned IOPS (io1) only for high-performance needs

  4. Monitor storage autoscaling thresholds

  5. Delete unused instances with cdk destroy
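Tip 2 above is commonly implemented with a scheduled Lambda that stops and starts the instance; at its core that is just a time-window check. A sketch (the weekday 08:00 to 20:00 window is an arbitrary example):

```typescript
// decide whether a non-production instance should be running:
// weekdays (Mon-Fri) between 08:00 and 20:00 in the team's timezone
function shouldBeRunning(dayOfWeek: number, hour: number): boolean {
  const isWeekday = dayOfWeek >= 1 && dayOfWeek <= 5; // 0 = Sunday, 6 = Saturday
  return isWeekday && hour >= 8 && hour < 20;
}
```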

Troubleshooting Common Issues

Connection Timeouts

  • Verify security group rules

  • Check route tables in private subnets

  • Test with VPC Security Group ID instead of IP ranges

Storage Autoscaling Not Working

  • Ensure maxAllocatedStorage > allocatedStorage

  • Verify CloudWatch metrics are enabled

  • Check RDS storage modification permissions
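The first bullet can be checked mechanically: storage auto scaling only takes effect when maxAllocatedStorage is set and strictly greater than allocatedStorage. A tiny pre-deploy sanity check (our own helper, not a CDK API):

```typescript
// returns true only when the RDS storage autoscaling precondition holds:
// maxAllocatedStorage is set and strictly greater than allocatedStorage
function storageAutoscalingEnabled(
  allocatedStorage: number,
  maxAllocatedStorage?: number,
): boolean {
  return maxAllocatedStorage !== undefined && maxAllocatedStorage > allocatedStorage;
}
```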

High CPU Utilization

  • Enable Performance Insights

  • Check for missing indexes

  • Scale instance size if needed

Final Thoughts

Using AWS CDK to provision RDS instances provides several advantages:

  • Version control for infrastructure changes

  • Repeatable deployments across environments

  • Type safety through TypeScript

  • Simplified maintenance with infrastructure-as-code

By combining RDS with CDK, teams can achieve both developer productivity and operational excellence. For more complex scenarios, consider adding:

  • Database migration workflows

  • Blue/green deployment patterns

  • Custom CloudWatch dashboards

  • Automated failover testing

Remember to always test backups and disaster recovery procedures!

Explore the official AWS CDK documentation for more advanced use cases.
