Goodbye Bastion, Hello Zero-Trust: Our Journey to Simplified RDS Access


Connecting to a private AWS database shouldn’t feel like hacking through a jungle of jump boxes and VPNs. In our team’s early days, though, that was our reality. This post is a candid look at how we improved the developer experience and security of accessing Amazon RDS databases – moving from an old-school Windows bastion (jump box) to AWS’s shiny new Verified Access, and finally landing on a surprisingly simple solution with AWS Systems Manager Session Manager. We’ll cover what worked, what didn’t, and how you can set up a smooth, secure database access workflow no matter your experience level.
Background: The Old Bastion Setup (RDP into RDS)
Not long ago, our developers accessed private RDS databases by RDP-ing into a Windows “bastion” host in AWS. This bastion was an EC2 instance in a public subnet acting as a jump box. Team members would Remote Desktop into it, then use database GUI tools (like SQL Server Management Studio or pgAdmin) installed on that bastion to connect to the actual RDS instances in private subnets. It was the traditional solution to avoid exposing databases directly, but it came with plenty of headaches:
Clunky User Experience: Engineers couldn’t use their own machines or preferred tools directly. They had to operate via a remote Windows desktop, often suffering lag and limited clipboard sharing. It felt like working through a periscope rather than directly on your workstation.
Security Risks: The bastion needed an open RDP port (3389), accessible (albeit restricted by IP). This inherently increases risk – if the security group were misconfigured or an exploit were found in RDP, our private DB network could be exposed. With more remote work, the chances of someone poking a hole in the firewall for convenience only grew.
Maintenance Burden: A Windows server requires constant care – OS patching, user account management, and even handling RDP license limits if multiple people use it. We had to keep the DB client software up-to-date on the bastion too. All this ops overhead for a box that didn’t do any “real” work, except letting us in.
Figure: Traditional approach using a bastion host (an EC2 jump box in a public subnet) to reach a private Amazon RDS database. Developers’ traffic goes from the corporate network (or internet) to the bastion, then onward to the database. This requires opening RDP/SSH access to the bastion, which introduces management overhead and potential security exposure.
It was clear this setup didn’t scale well for our growing team. We wanted a way to connect to RDS directly from our laptops, without that clumsy remote hop – but still keeping the databases locked down from the internet. VPN was one option, but managing a full-blown VPN client and infrastructure felt heavy. In late 2024, AWS announced something that caught our attention as a possible answer.
Trying AWS Verified Access for Direct Database Connectivity
When AWS released Verified Access (AVA), it sounded like a game-changer. AWS Verified Access is a service built on zero-trust principles that lets users connect securely to internal applications without a VPN. Initially it was only for web (HTTP) apps, but as of re:Invent 2024, it expanded to support non-HTTP endpoints – including RDS databases. The promise was VPN-less, policy-controlled access to private resources, with fine-grained checks on each connection (user identity, device security posture, etc.). For our use case, the appeal was huge:
Engineers could run their favorite database GUI directly on their laptop and connect to the RDS endpoint as if they were in the office network. No more RDP hop – better user experience and productivity.
Security would actually improve: Verified Access would evaluate every login attempt against security policies (who you are, whether your device is trusted, etc.), only then broker a connection. It’s based on “never trust, always verify” principles, meaning even if someone somehow got credentials, if they weren’t on an approved device or didn’t meet policy, access would be denied.
We could eliminate the exposed bastion entirely. Verified Access acts as a managed gatekeeper in AWS’s cloud, so no need for an open port in our VPC for RDP or SSH.
Setting up AWS Verified Access for our databases involved a few pieces. First, we needed to integrate it with our SSO identity provider (AWS IAM Identity Center in our case) as a “trust provider”. This let Verified Access confirm our engineers’ identities via SSO login. Next, we created a Verified Access instance and defined an endpoint for our RDS. AWS now allows an RDS instance (or cluster or proxy) to be a target for Verified Access. We then set up an access policy – in our test, we kept it simple: allow members of our engineering SSO group who passed MFA. Verified Access can get very granular (checking device OS, patch level, etc.), but we started basic just to get it working.
One critical component was deploying the AWS Verified Access client (also called the Connectivity Client) on our laptops. This is a small app that runs on the user’s machine to facilitate the connection. It encrypts and funnels traffic from the laptop to AWS Verified Access, including attaching the user’s identity and device info, so that AWS can decide if the traffic is allowed. In essence, it’s like a smart VPN client but application-specific and ephemeral. We installed the client, and it prompted us to log in via our SSO in a browser. Once authenticated, the client established a secure tunnel to AWS.
From a user standpoint, after launching the Verified Access client and logging in, they could open their database tool (say, DBeaver or DataGrip), and connect to the database’s endpoint (we used the regular RDS hostname) on the default port. The Verified Access client transparently routed that connection through AWS to our VPC. It really felt like magic the first time – my pgAdmin on my MacBook connected to a Postgres in a private subnet without any SSH tunnels or VPN, and with AWS handling the security behind the scenes.
Figure: AWS Verified Access brokering connections from developers’ laptops to a private RDS database. The Connectivity Client tunnels traffic to the Verified Access service, which checks identity and device posture against policy before forwarding the connection into the VPC – no bastion and no VPN.
Initial benefits we observed:
Night-and-day UX improvement: Everyone could use their own IDE/GUI, with native performance. Running queries or browsing tables was as snappy as if on a local network.
No more shared jump box: Each engineer authenticated individually via SSO. There was no single chokepoint server to maintain or that could be compromised to gain broader access – Verified Access only let that one user’s session through, and only to the specific database endpoint we configured.
Auditing and control: Verified Access logs every access request. We could enforce multi-factor auth and even device compliance (e.g., only allow up-to-date company laptops). It’s true zero-trust: every new connection is verified against policies rather than implicitly trusted once on a VPN.
The Downsides of Verified Access in Practice
This pilot with AWS Verified Access was promising, but as we dug deeper and scaled it out, we hit some challenges that made us reconsider relying on it long-term:
Client Software Limitations: Since it was a new service, the Verified Access connectivity client had a few rough edges. It was only available for Windows and Mac at first – our one engineer on Linux was out of luck. (AWS hinted Linux support was coming, but it wasn’t there yet.) Additionally, the client lacked a friendly GUI; we had to configure it by dropping a JSON config file onto the machine (no simple one-click setup). This was manageable for our tech-savvy team, but not exactly polished.
Complexity of Policies: Writing policies in AWS Verified Access uses AWS Cedar (a policy language). It’s powerful but introduced a learning curve. Simple policies were fine, but anything custom required understanding a new syntax and debugging in a new console. For a small team, this felt like overkill just to allow database access for devs.
Cost Concerns: Perhaps the biggest factor – cost. AWS Verified Access is a managed service you pay for per application endpoint and per hour. In our case, each private RDS we wanted to enable access to counted as an application endpoint. The pricing in our region came out to about $0.27 per hour per app plus a small per-GB data charge. That means roughly $200 per month for each database. In a dev/test/prod scenario with multiple databases, we were looking at several hundred dollars a month just for this convenience. Compared to a simple EC2 bastion (which might be ~$50 or less per month), it was an order of magnitude more expensive. As a startup, that was hard to justify beyond initial testing.
Operational Maturity: Being a very new service, we encountered a few hiccups – occasional client disconnects and once an identity sync issue that blocked a login until we reset the client. AWS support was helpful, but it reminded us that we were early adopters on the bleeding edge. We had to ask: did we want to be pioneers here, or use something more battle-tested?
Weighing these downsides, we decided to explore alternatives. We loved the idea of ditching the bastion and having direct access, but maybe there was a simpler way to get there without the cost and complexity of Verified Access. It turned out, the solution was something we already had at our fingertips in AWS.
Switching Gears to AWS SSM Session Manager
After our trial with Verified Access, we took a step back and reexamined the problem. We wanted secure, easy access to private RDS from our laptops, and we wanted to minimize infrastructure and maintenance. AWS actually provides a feature for secure remote access that we had used before for shell access: AWS Systems Manager Session Manager (SSM Session Manager). Could we use it for database access? The answer was yes – and it was surprisingly straightforward.
AWS Session Manager lets you open a shell or tunnel to an EC2 instance without any SSH keys or open inbound ports, by using an SSM Agent installed on the instance. What many don’t realize is that Session Manager can also handle port forwarding. In 2022, AWS added the ability to forward traffic not just to the instance itself, but through the instance to another host – essentially an SSH-tunnel-like capability, but over the SSM channel. This is perfect for our use case: we can use a lightweight EC2 instance as a private relay to the database, and Session Manager will securely connect our laptop to that instance and pipe the traffic to the RDS.
Here’s how we built our Session Manager solution, step by step, and how it addressed our needs:
1. Setting Up a Small EC2 “Tunnel” Instance
First, we launched a tiny EC2 instance in the same VPC and private subnet as our RDS. (We jokingly call this our “bastion”, but it’s not accessible like a traditional one – no inbound access at all.) Important details for this instance:
Instance Type & OS: We chose an Amazon Linux 2 t4g.nano (very cheap, ~$4/month). Amazon Linux comes with the SSM Agent pre-installed, which saved setup time.
SSM IAM Role: We attached the AmazonSSMManagedInstanceCore IAM policy via an instance role. This grants the instance permission to communicate with the SSM service. With this, the SSM Agent on the instance can register itself and receive Session Manager connection requests. (No SSH keys needed at all – authentication will be handled by IAM and SSM.)
Security Groups: The instance’s security group was locked down. We did not allow any inbound ports from anywhere (not even SSH from our IP). We only allowed outbound traffic. Specifically, outbound rules allowed HTTPS (port 443) so the agent could reach SSM’s endpoints, and allowed outbound to the RDS’s port. The RDS’s security group in turn allowed inbound from this instance’s security group on the database port. This way, the EC2 can talk to the database internally, but nothing external can talk to the EC2.
Networking: Instead of a NAT gateway, we gave the instance a private path to Systems Manager via VPC interface endpoints (for the ssm, ssmmessages, and ec2messages services). This is an optional step, but it means the SSM Agent’s traffic stays on the AWS network rather than traversing the internet, which is more secure and avoids NAT data-processing charges. (If you skip the VPC endpoints, the agent will reach the Systems Manager API through your NAT gateway, which works fine but costs a bit more.)
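For reference, here is roughly what the security-group wiring above looks like expressed as AWS CLI calls. This is a sketch, not our actual tooling – we keep this in infrastructure-as-code – and every ID (VPC, security groups, ports) is a placeholder to adapt to your environment:

```shell
# Sketch of the locked-down security-group setup (all IDs are placeholders).

# Tunnel instance SG: no inbound rules at all; outbound HTTPS for the
# SSM Agent, plus outbound to the database port.
TUNNEL_SG=$(aws ec2 create-security-group \
  --group-name ssm-tunnel --description "SSM tunnel instance" \
  --vpc-id vpc-0123456789abcdef0 --query GroupId --output text)

# New security groups come with a default allow-all egress rule;
# remove it so we can grant the minimum instead.
aws ec2 revoke-security-group-egress --group-id "$TUNNEL_SG" \
  --ip-permissions 'IpProtocol=-1,IpRanges=[{CidrIp=0.0.0.0/0}]'

# Outbound 443 so the SSM Agent can reach the Systems Manager endpoints.
aws ec2 authorize-security-group-egress --group-id "$TUNNEL_SG" \
  --protocol tcp --port 443 --cidr 0.0.0.0/0

# Outbound to the database port, but only toward the RDS security group.
aws ec2 authorize-security-group-egress --group-id "$TUNNEL_SG" \
  --protocol tcp --port 5432 --source-group sg-0aaaabbbbccccdddd

# RDS SG: inbound Postgres only from the tunnel instance's SG.
aws ec2 authorize-security-group-ingress --group-id sg-0aaaabbbbccccdddd \
  --protocol tcp --port 5432 --source-group "$TUNNEL_SG"
```

The key property is that the tunnel instance’s group has zero inbound rules – there is literally nothing for a port scanner to find.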
At this point, we had an SSM-managed instance in the private subnet. Think of it as a potential one-to-one replacement of the old bastion – except it’s not exposed to the world at all. Now we needed to actually use it to reach the database from our laptops.
2. Starting a Session Manager Port Forward
AWS provides a CLI command to open a Session Manager session. Instead of a normal shell session, we will start a port forwarding session. Here’s an example command we use (in a Bash script on our laptops) to connect to one of our PostgreSQL databases:
# Variables for clarity
INSTANCE_ID="i-0123456789abcdef0" # The EC2 instance acting as our SSM tunnel
RDS_ENDPOINT="mydatabase.cluster-abcdefghijkl.us-east-1.rds.amazonaws.com"
DB_PORT=5432
aws ssm start-session \
  --target "$INSTANCE_ID" \
  --document-name "AWS-StartPortForwardingSessionToRemoteHost" \
  --parameters "host=$RDS_ENDPOINT,portNumber=$DB_PORT,localPortNumber=$DB_PORT"
Let’s break down what this does:
aws ssm start-session: This initiates an SSM Session Manager session from our machine. (Make sure you’ve configured your AWS CLI with credentials/SSO that have permission to use Session Manager on that instance.)
--target: The ID of the EC2 instance we launched. This tells AWS which instance’s SSM Agent should handle the session.
--document-name "AWS-StartPortForwardingSessionToRemoteHost": This is an AWS-provided session document that knows how to set up port forwarding to a specified remote host. It’s essentially a pre-built SSM action for tunneling.
--parameters "host=...,portNumber=...,localPortNumber=...": Here we provide the RDS host and port we want to reach, and which local port to use on our laptop. In our example, we set host to the RDS endpoint DNS name, portNumber to 5432 (the DB’s port), and localPortNumber also to 5432. This means the SSM Agent on the EC2 will open a connection to mydatabase...:5432 (our RDS), and forward that back through the session to localhost:5432 on our laptop.
When we run this command, a few things happen behind the scenes:
The AWS CLI calls the SSM service, which in turn signals the SSM Agent on our instance to start a port forwarding session. Because our instance can reach the RDS internally, it successfully connects to the database’s host and port.
The CLI also starts a local proxy listening on the specified localPortNumber (5432). You’ll see output like “Starting session with SessionId …” and “Port 5432 opened for session … Waiting for connections…”. This means everything is set – the tunnel is up and idle, waiting for you to connect.
We keep that terminal running (the session stays active). Now on our local machine, we can connect to localhost:5432 and it will actually reach the RDS through the tunnel.
At this point, the experience is exactly like using Verified Access (or a VPN). I can fire up my database client on my laptop, but now I point it to 127.0.0.1:5432 (or a localhost alias), with the usual database credentials. Boom – I’m connected to the private RDS. The Session Manager tunnel carries all the traffic. From the database’s perspective, it sees a connection coming from the EC2 instance’s IP (since that instance is acting as the client on its behalf). From my perspective, it feels local.
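Concretely, verifying and using the tunnel looks something like this (assuming a PostgreSQL database and the standard client tools installed locally; the database name and user are placeholders):

```shell
# In a second terminal, while the start-session command is running:
pg_isready -h 127.0.0.1 -p 5432
# should report that 127.0.0.1:5432 is accepting connections

# Connect with the usual database credentials. Note sslmode=require
# rather than verify-full: the RDS TLS certificate names the real
# endpoint, not 127.0.0.1, so full hostname verification would fail
# when going through a local tunnel.
psql "host=127.0.0.1 port=5432 dbname=appdb user=app_user sslmode=require"
```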
One great aspect of Session Manager is that all of this is done using my AWS IAM credentials. If I’m authenticated with AWS (for example via AWS SSO login or access keys), I don’t need to juggle any SSH keys or bastion passwords. Permissions to use Session Manager can be controlled via IAM policies (for instance, only allowing certain IAM roles to start sessions to that instance). And every session is logged in AWS CloudTrail (Session Manager can even be configured to log full console output to S3/CloudWatch if needed). So we gained auditability without much effort – an improvement over the old bastion, where RDP logins were somewhat opaque.
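To make the IAM side concrete, here’s the shape of a policy that scopes engineers down to port-forwarding sessions on the tunnel instance only. This is a hedged sketch – the account ID, region, and instance ID are placeholders, and you should verify it against your own setup (AWS documents this pattern of listing both the instance ARN and the session document ARN as resources):

```shell
# Hypothetical least-privilege policy for starting tunnel sessions.
# Account, region, and instance IDs below are placeholders.
POLICY='{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ssm:StartSession",
      "Resource": [
        "arn:aws:ec2:us-east-1:123456789012:instance/i-0123456789abcdef0",
        "arn:aws:ssm:us-east-1::document/AWS-StartPortForwardingSessionToRemoteHost"
      ]
    },
    {
      "Effect": "Allow",
      "Action": ["ssm:TerminateSession", "ssm:ResumeSession"],
      "Resource": "arn:aws:ssm:*:*:session/${aws:username}-*"
    }
  ]
}'

# Sanity-check the JSON locally before pushing it anywhere:
echo "$POLICY" | python3 -m json.tool > /dev/null && echo "policy JSON ok"

# Then create/attach it, e.g.:
# aws iam create-policy --policy-name ssm-db-tunnel-only --policy-document "$POLICY"
```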
Figure: Using AWS Systems Manager Session Manager to create a secure tunnel from a client to an RDS database via a private EC2 instance. The EC2 “bastion” lives in a private subnet with no inbound ports open. The Session Manager agent on it connects out to AWS, allowing authorized users to start an encrypted session. This lets us forward a local port on our laptop to the remote database securely.
Cost impact: Remember the cost comparison that motivated us? Here’s how it played out:
The Session Manager approach requires a small EC2 instance running 24/7. Our t4g.nano plus storage costs about $5 per month. We could even stop it outside working hours, but at that price it’s not worth the hassle.
Session Manager itself doesn’t cost extra; it’s a feature of AWS Systems Manager. There is no hourly charge for sessions, and data transfer is minimal (just the database traffic which we’d have anyway; it might incur tiny charges if it goes through a NAT or VPC endpoint, but those are pennies).
Versus Verified Access, which would have been around $0.27/hour each for our databases (≈$200/month per DB), the savings are enormous. Even factoring in the old Windows bastion cost (say ~$50/month), Session Manager is an order of magnitude cheaper. Essentially, we got nearly the same functionality for almost no cost in our AWS bill.
3. Smoothing the Workflow (Making it Easy for Engineers)
Running a long CLI command to start the tunnel was fine for us, but we wanted to make this as seamless as possible – especially for new engineers who might not be AWS CLI wizards. We took a couple of steps to streamline usage on our laptops:
Bash Script & Alias: We wrapped the aws ssm start-session command in a simple shell script (connect-db.sh) and put it in our team’s internal toolkit repository. It accepts the environment or database name as an argument, so it knows which instance and host to target. For example: connect-db.sh prod reporting-db would fetch the appropriate instance ID and DB host from a config and run the above command. Developers can alias this in their shell, so bringing up the tunnel is one short command away. Each script execution opens a new terminal window with the session (so we remember to close it when done).
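A stripped-down sketch of what connect-db.sh does is below. The instance IDs, hostnames, and the inline lookup table are all placeholders – the real script reads them from a shared config file – and the DRY_RUN switch is a convenience we added so the wrapper can be tested without opening a session:

```shell
# connect_db — minimal sketch of our tunnel wrapper (IDs are placeholders).
connect_db() {
  local env="$1" db="$2" instance_id rds_endpoint db_port=5432
  # Real version: look these up in a shared team config file.
  case "$env/$db" in
    prod/reporting-db)
      instance_id="i-0123456789abcdef0"
      rds_endpoint="reporting.cluster-abcdefghijkl.us-east-1.rds.amazonaws.com" ;;
    dev/reporting-db)
      instance_id="i-0fedcba9876543210"
      rds_endpoint="reporting-dev.cluster-abcdefghijkl.us-east-1.rds.amazonaws.com" ;;
    *)
      echo "unknown target: $env/$db" >&2; return 1 ;;
  esac

  local cmd=(aws ssm start-session
    --target "$instance_id"
    --document-name AWS-StartPortForwardingSessionToRemoteHost
    --parameters "host=$rds_endpoint,portNumber=$db_port,localPortNumber=$db_port")

  if [ "${DRY_RUN:-0}" = "1" ]; then
    printf '%s\n' "${cmd[*]}"   # print the command instead of running it
  else
    "${cmd[@]}"
  fi
}
```

Engineers alias this, so `connect_db prod reporting-db` is all they type.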
Auto-Connect on macOS (Launch Agent): For those frequently connecting to a dev database, we created a Launch Agent on macOS to automatically start the tunnel at login. This uses a .plist file in ~/Library/LaunchAgents. Here’s a snippet of what that looks like:
<!-- ~/Library/LaunchAgents/com.mycompany.ssm-tunnel.plist -->
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.mycompany.ssm-tunnel</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/aws</string>
        <string>ssm</string>
        <string>start-session</string>
        <string>--target</string>
        <string>i-0123456789abcdef0</string>
        <string>--document-name</string>
        <string>AWS-StartPortForwardingSessionToRemoteHost</string>
        <string>--parameters</string>
        <string>host=mydatabase.cluster-abcdefghijkl.us-east-1.rds.amazonaws.com,portNumber=5432,localPortNumber=5432</string>
    </array>
    <key>RunAtLoad</key><true/>
    <key>KeepAlive</key><true/>
    <key>StandardOutPath</key><string>/tmp/ssm-tunnel.log</string>
    <key>StandardErrorPath</key><string>/tmp/ssm-tunnel.err</string>
</dict>
</plist>
In plain English, this Launch Agent definition says: when I log in, run the AWS CLI Session Manager command with the given parameters. RunAtLoad means start it automatically, and KeepAlive means that if it crashes or the session drops, launchd will restart it. We log output to /tmp for debugging. After loading it (launchctl load -w ~/Library/LaunchAgents/com.mycompany.ssm-tunnel.plist), the developer gets a persistent tunnel in the background. They can now connect to the DB anytime without even thinking about the tunnel – it’s just there. (We set KeepAlive so that if the session times out, it will try to reconnect. One caveat: Session Manager sessions have an idle timeout and a configurable maximum duration, so expect the agent to quietly reconnect a few times a day in the background.)
Using SSH Config (alternate method, which we used eventually): Another neat trick is to use the SSH client as a wrapper for Session Manager. This might sound odd since we said “no SSH”, but here the SSH traffic rides inside the SSM channel rather than through an open port 22 – we’re just leveraging the SSH command as a convenient way to manage tunnels. AWS provides a separate session document, AWS-StartSSHSession, exactly for this: it turns a Session Manager session into a transport for SSH, so the usual SSH tunneling options work. For example, in ~/.ssh/config:
Host rds-tunnel
    HostName i-0123456789abcdef0
    User ec2-user
    ProxyCommand aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters "portNumber=%p"
    LocalForward 5432 mydatabase.cluster-abcdefghijkl.us-east-1.rds.amazonaws.com:5432
With such an entry, running ssh -N rds-tunnel triggers the AWS CLI to start the session (%h is replaced with the instance ID from HostName, %p with the SSH port), and the LocalForward line brings up the tunnel to the database. The -N flag tells SSH not to execute a remote command – we only want the tunnel. One caveat: unlike the pure port-forwarding document, this method does require an SSH key pair authorized on the instance, though port 22 still never has to be open to the world. It still requires the AWS CLI, but some GUI tools can invoke SSH tunnels this way as well.
4. Results: A Happy Team with Secure Access
Once we rolled out the Session Manager solution, feedback from the team was very positive. It achieved what we wanted:
Greatly improved UX: Just like with Verified Access, engineers use their local tools and don’t have to maintain a remote VM workspace. Whether it’s a newbie using a point-and-click SQL client or a veteran automating a psql script, they run it from their machine as if the database were local. Onboarding a new engineer to access the DB is as simple as: “Install AWS CLI (or our helper script), run this command, and you’re in.”
Tight Security (no more open holes): We completely shut down the old bastion. No RDP, no SSH – nothing is exposed. The EC2 instance is invisible from the internet. Session Manager uses an encrypted TLS connection initiated from the inside, and requires the user to auth with AWS. This removed a major attack surface. As AWS’s own best practices note, Session Manager eliminates the need for bastion hosts or open inbound ports. We also benefit from audit logs; we can see which user opened a session at what time in AWS CloudTrail, and even log the I/O if we wanted to inspect what commands are run (for shell sessions).
Low Maintenance: The EC2 tunnel instance is about as low-touch as it gets. We apply security patches with a periodic package update (Systems Manager Patch Manager can automate this), and we can bake an updated AMI if we ever need to replace the instance. The SSM Agent updates itself automatically via AWS Systems Manager. There are no user accounts or keys on this instance to manage – in fact, the instance runs with no human login at all. If we want to administer it, we use Session Manager to get a shell. This dramatically reduces the admin overhead compared to the old Windows bastion, which needed active user management and patching. And unlike Verified Access, there’s no separate client software for us to deploy to everyone – just the ubiquitous AWS CLI.
Cost Savings: We already calculated the stark difference – on the order of $10/month versus $200–$600/month for our scenario. Over a year, that’s thousands saved, which matters for our budget. We’re effectively paying only for a tiny instance; Session Manager itself carries no extra charge. For larger orgs the cost argument might be different, but for us this was a huge win.
Room for Expansion: With this setup, if we add more databases or even other internal services (e.g., an ElastiCache Redis, or an internal HTTP service), we have options. We can either use the same EC2 as a multi-purpose tunnel (starting separate sessions for different targets as needed), or create more instances if we want isolation per environment. Since it’s so cheap, spinning up one per environment or per service is not an issue. Session Manager even allows tunneling RDP or SSH if we ever needed GUI or console access to an instance – it’s versatile.
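As an example of that flexibility, reaching a hypothetical ElastiCache Redis through the same tunnel instance is just a different set of parameters (the Redis endpoint below is a placeholder):

```shell
# Same tunnel instance, different target: forward local 6379 to a
# private Redis endpoint (placeholder hostname).
aws ssm start-session \
  --target "i-0123456789abcdef0" \
  --document-name "AWS-StartPortForwardingSessionToRemoteHost" \
  --parameters "host=myredis.abcdef.0001.use1.cache.amazonaws.com,portNumber=6379,localPortNumber=6379"

# Then, in another terminal:
# redis-cli -h 127.0.0.1 -p 6379
```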
Conclusion: Lessons Learned and Tips
Our journey from a clunky bastion to a modern access solution taught us a few valuable lessons:
“New and shiny” isn’t always “better for us.” AWS Verified Access is a powerful service and no doubt the future for many zero-trust network scenarios. If we had strict device compliance requirements or a larger enterprise setup, its policy-based access and deep integration with corporate identity might be worth the cost. But in our case, the simpler Session Manager approach covered 90% of our needs at a fraction of the complexity and cost. It was a reminder that tried-and-true tools can sometimes beat bleeding-edge solutions, depending on the context.
User experience matters, but balance it with security and cost. We were determined to improve UX for our engineers, and we did – moving away from the old jump box improved quality of life significantly. However, we had to also consider security (ensuring the new solution wasn’t trading one risk for another) and cost. We found a sweet spot where UX, security, and cost were all satisfied. Whenever you introduce a new access method, evaluate it holistically: how will users feel about it, is it actually secure, and does it justify the expense?
AWS Session Manager is underrated. Many engineers know Session Manager as “that thing you can use instead of SSH to get a console”. But its port forwarding capability is a game-changer for scenarios like database access. It enabled us to implement a lightweight bastion-as-a-service without maintaining complex infrastructure. If you’re still using old bastion hosts or SSH tunnels, give Session Manager a serious look – it can simplify your life. As AWS’s security blog notes, Session Manager can eliminate the need for bastions and open ports while still giving you necessary access.
Automation makes perfect. Once you set up a solution like this, invest a bit of time to script it and integrate it into your team’s workflows. Our use of Launch Agents and simple CLI wrappers means nobody is fumbling with long commands or forgetting to start their tunnel. New hires get a smooth experience from day one (“Just run this script and you’re connected”). Little quality-of-life improvements go a long way in adoption of a new tool.
In the end, our team now connects to our databases securely, quickly, and with minimal fuss. We retired the fragile old Windows jump box and significantly cut down our attack surface. We also saved money, which is always a nice bonus. And when AWS improves Verified Access (maybe a fully managed client, Linux support, lower costs?), we’ll be ready to re-evaluate it. But for now, Session Manager has become our go-to solution for remote access to cloud resources.
If you’re in a similar boat – juggling bastions, VPNs, or pondering AWS Verified Access – I hope our story helps you find the approach that works best for you. Sometimes the solution is hiding in plain sight (in our case, in the AWS CLI we were already using). Happy connecting, and may all your database queries be speedy and secure!
Written by Pawan Sawalani