Ansible EC2 Automation with Passwordless SSH – Control Node + Workstation + Git & NGINX Installation

Hello to everyone visiting my blog! In this project, I wanted to move beyond just launching cloud servers and documenting them on Hashnode. I wanted to automate how they talk, connect, and get configured, just like in a real production DevOps setup.

So I built a mini remote management lab using two EC2 instances on AWS:

One as a Control-Host Node (where I installed Ansible)

The other as a Work-Station Node (the machine being managed)

But here’s the key: I did not want to rely on manual SSH logins. In other words, I did not want to open two terminals and SSH into each EC2 instance by hand, as we usually would, because that approach does not scale.

So I configured passwordless SSH between both servers, allowing Ansible to connect and manage the workstation Node automatically.

Then I took it further:

I used Terraform to provision the infrastructure and generate the SSH key.

I opened port 80 on the workstation via security group so NGINX could serve traffic.

I wrote Ansible playbooks to install Git and NGINX on the managed node (the Workstation Node), with no manual configuration needed!

This project was designed to simulate how real DevOps teams manage remote servers at scale, securely, repeatably, and automatically.

Theory Corner

This project is focused on showing how DevOps engineers use automation to manage remote servers in the cloud. Rather than manually SSHing into a server and running commands, we use Ansible to automate those tasks in a repeatable and scalable way.

Ansible is a powerful automation tool that connects to remote servers using SSH and executes predefined tasks written in YAML playbooks. To make this work efficiently, the control machine must be able to connect to the managed machine without needing a password each time. This is where passwordless SSH becomes essential. It allows seamless, secure communication between machines using SSH key pairs.

In real-world infrastructure, Ansible usually sits on a bastion host or control server and manages multiple remote nodes or virtual machines. These nodes could be application servers, web servers, or even database servers.

To fully simulate this kind of environment, we first provisioned our infrastructure using Terraform, an Infrastructure as Code tool. Terraform created two EC2 instances and generated a private SSH key for access. We also configured the security group to allow HTTP traffic on port 80 (this step was included in the Terraform code as well), which is important for testing web services like NGINX.

Once the EC2 instances were live, we set up a secure passwordless SSH connection from the control node to the managed workstation. This setup enables Ansible to run commands without human input. We then wrote two playbooks. One to install Git, and another to install and start NGINX, allowing us to verify automation from both a package installation and a web server perspective.

This project brings together core DevOps skills including cloud provisioning, key management, remote access, infrastructure automation, and configuration management, all using open-source tools.

Prerequisites

Before starting this project, make sure the following requirements are in place. These steps form the foundation for everything that comes later.

i. EC2 Infrastructure Provisioned with Terraform or via Amazon Web Console

You must have created two EC2 instances, either with a working Terraform script or through the Amazon Web Console. One EC2 instance will act as the Control Host, where Ansible will be installed and executed. The second EC2 instance will serve as the Workstation, which is the remote node that Ansible will manage.

The Terraform script will also handle other configuration such as creating a security group and assigning public IP addresses to the instances. These steps can also be done through the Amazon GUI.

ii. Private SSH Key Output from Terraform

Your Terraform script should include a provision to output the private key used to connect to both EC2 instances. This can be done by adding an output block in your Terraform code to display the private key on the terminal when the infrastructure is created. The command to retrieve it is: terraform output private_key

Save the output of this command as a .pem file. This file allows you to SSH into both the control host and the workstation securely.
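For reference, here is a minimal sketch of what that output block might look like, assuming the key was generated with the tls_private_key resource (the resource name lab_key is purely illustrative, not taken from this project):

resource "tls_private_key" "lab_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

# Marking the output as sensitive is required because the key material itself is sensitive
output "private_key" {
  value     = tls_private_key.lab_key.private_key_pem
  sensitive = true
}

Because the output is marked sensitive, you may need terraform output -raw private_key to print it without quoting before saving it as a .pem file.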

You can also create and download your key pair manually if you are using the Amazon Console GUI.

iii. Port 80 Opened for NGINX Access

When configuring the Terraform security group, you must allow incoming HTTP traffic on port 80. This is important because later in the project, NGINX will be installed on the workstation and needs port 80 open to be accessible from a web browser.

Without this configuration, the NGINX welcome page will not be visible when you try to load the Workstation's IP in a browser. This step can also be done through the Amazon GUI by provisioning the security group to allow incoming HTTP traffic on port 80.
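As an illustration, the relevant Terraform rule could look roughly like this (the resource name and description are placeholders; your actual security group will also need an SSH rule on port 22 and a VPC reference as appropriate):

resource "aws_security_group" "web_sg" {
  name        = "ansible-lab-sg"
  description = "Allow HTTP traffic for NGINX"

  # Allow anyone to reach the NGINX welcome page on port 80
  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow all outbound traffic
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}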

iv. Ubuntu 20.04 or Later AMIs

Both EC2 instances should be based on Ubuntu 20.04 LTS or a later version. This ensures compatibility with APT package management, Ansible installation steps, and service commands used in the project.

v. Basic Familiarity with SSH and Terminal Navigation

You should be comfortable with basic terminal operations such as SSH-ing into a server, switching users, navigating file systems, and running shell commands. This will make the process of configuring each EC2 instance smoother and more understandable.

Step 1. Open Two Terminal Windows and SSH into Both EC2 Instances

To begin this project, you need to connect to both EC2 instances at the same time. This will make it easier to switch between the Control Host and the Workstation as you configure them.

You can use either PowerShell or Git Bash on your local machine. Both options work well. The important thing is to have two terminal windows open side by side.

Terminal Setup:

  1. The first terminal will be used to manage the Control Host.

  2. The second terminal will be used to manage the Workstation.

SSH into the Control Host:

In the first terminal, run the following command to connect to the Control Host: " ssh -i path/to/your-key.pem ubuntu@<Control_Host_Public_IP> "

Replace: path/to/your-key.pem with the full path to your .pem private key file

<Control_Host_Public_IP> with the actual public IP of the EC2 instance

SSH into the Workstation

In the second terminal, run: " ssh -i path/to/your-key.pem ubuntu@<Workstation_Public_IP> "

Replace the placeholders with the correct values, just as you did for the control host.

Expected Result

After running both commands, you should be logged into each EC2 instance successfully and see the Ubuntu terminal prompt.

Step 2: Rename the Control Host and Workstation Host

Renaming your servers helps you easily identify them while working in the terminal so that you do not make mistakes when running Ansible tasks. This is especially helpful when managing multiple servers, e.g. 2, 3, 4, 5 or more.

What We’ll Do:

Rename the Control Host to Control-Host

Rename the Workstation Host to workstation

Head to the Control Host terminal and run this command: " sudo hostnamectl set-hostname Control-Host "

This command updates the hostname of the server, so the prompt changes from ubuntu@<Control_Host_Public_IP> to ubuntu@Control-Host. Once you are done, log out of the EC2 instance and log back in to apply the change.

On the Workstation Terminal

Run this command: " sudo hostnamectl set-hostname workstation "

Again, the hostname will take effect once you logout and log back in.
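If you want to confirm the change before logging out, you can check it directly (an optional verification, not required for the rest of the steps):

hostnamectl
# or simply
hostname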

Step 3: Create the ansible User on Both EC2 Instances

To securely manage remote servers with Ansible, it's best practice to create a dedicated user called "ansible". This user will have administrative privileges and will be used for all SSH connections and automation tasks.

You’ll need to perform the same set of commands on both the Control-Host and the Work-Station.

What We’ll Do:

  1. Create a new user named ansible

  2. Add the user to the sudo group to allow administrative tasks

  3. Set a password for the new user

Commands to Run on Both EC2 Instances:

sudo useradd -m -s /bin/bash ansible

sudo usermod -aG sudo ansible

sudo passwd ansible

What do these commands do?

useradd -m -s /bin/bash ansible: This creates a new user named "ansible", gives them a home directory, and sets their shell to bash.

usermod -aG sudo ansible: This adds the ansible user to the sudo group, allowing them to run commands with administrative privileges.

passwd ansible: This sets a login password for the ansible user. You will be prompted to enter and confirm the new password.

Please Note: Very important: make sure you repeat these commands on both the Control-Host and the Work-Station.
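To double-check that the user was created and added to the sudo group on each instance, you can run this optional verification:

id ansible
# the output should list the user's groups, including the sudo group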

Step 4: Switch to the ansible User on Both EC2 Instances

Now that the ansible user has been created on both the Control-Host and Work-Station, the next step is to switch into that user account so you can begin working directly as ansible.

This is important because all future steps, including SSH key generation, Ansible installation, and running playbooks, will be performed under this new "ansible" user.

On Both EC2 Instances

Run this command " sudo su - ansible "

You’ll know it worked if your terminal prompt changes to something like:

ansible@Control-Host:~$

ansible@workstation:~$

Step 5: Generate an SSH Key Pair on the Control Host.

To allow Ansible to connect to the Work-Station automatically, you need to generate an SSH key pair on the Control-Host. This key will be used to authenticate without typing a password each time.

On the Control Host (as the ansible user)

Run this command " ssh-keygen -t rsa -b 4096"

Just press Enter through all prompts.

This will create two files:

~/.ssh/id_rsa → your private key (keep safe!)

~/.ssh/id_rsa.pub → your public key (this is what you’ll copy to the Work-Station)

NOTE: Do not customize the key pair names or add a passphrase to the key pair; using the defaults ensures compatibility with the rest of the automation steps.
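You can confirm both files were created with a quick listing (optional):

ls -l ~/.ssh/
# you should see id_rsa (private key) and id_rsa.pub (public key)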

Step 6: Manually Create the .ssh Folder on the Work-Station

Now that the SSH key pair has been generated on the Control-Host, we need to prepare the Work-Station to receive the public key. This means manually creating the .ssh directory and setting the correct permissions.

This must be done while logged in as the ansible user on the Workstation.

On the Workstation Terminal (as ansible user)

Run these commands:

mkdir -p ~/.ssh

chmod 700 ~/.ssh

touch ~/.ssh/authorized_keys

chmod 600 ~/.ssh/authorized_keys

chown -R ansible:ansible ~/.ssh

What this does:

mkdir -p ~/.ssh : Creates the .ssh directory inside the home folder if it does not already exist

chmod 700 ~/.ssh : Sets the folder permissions so that only the ansible user can access it

touch ~/.ssh/authorized_keys : Creates an empty file that will later store the public key from the Control Host

chmod 600 ~/.ssh/authorized_keys : Restricts the file so that only the owner can read or write to it

chown -R ansible:ansible ~/.ssh : Ensures the entire .ssh folder and its contents are owned by the ansible user

Please Note: If any of these permissions are incorrect, SSH will reject the key, and passwordless login will fail. These settings are required and strict for a reason.

Step 7: Copy and Paste the Control Host’s Public Key into the Workstation

In this step, you will copy the public key that was generated earlier on the Control-Host and paste it into the authorized_keys file on the Workstation.

This is what will allow the Control-Host to SSH into the Work-Station without asking for a password.

On the Control-Host Terminal (as ansible)

Run this command to display your public key: cat ~/.ssh/id_rsa.pub

Copy the entire output (it starts with ssh-rsa) to your clipboard.

On the Workstation Terminal (as ansible)

Now to paste the copied key into the authorized_keys file run this command : nano ~/.ssh/authorized_keys

This opens the nano text editor. Paste the public key into the editor, then press Ctrl+O and Enter to save, and Ctrl+X to exit.

If you prefer using vi, open the file with vi ~/.ssh/authorized_keys, press i to insert, paste the key, then press Esc and type :wq followed by Enter to save and exit.

Please Note: This key is what proves the identity of the Control-Host. When you SSH from the control machine later, the Work-Station will recognize the key and allow access without prompting for a password.
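If you would rather avoid pasting inside an editor, you can append the key with a single command on the Work-Station instead; the quoted string below is just a placeholder for the full line you copied from the Control-Host:

echo "ssh-rsa AAAA...your-copied-public-key... ansible@Control-Host" >> ~/.ssh/authorized_keys

Either way, authorized_keys must end up containing the public key on a single line, exactly as it appears in id_rsa.pub.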

Step 8: Final Permissions Check to Ensure Everything Works

SSH is very strict about file and folder permissions. Even if everything else is set up correctly, wrong permissions will break passwordless login.

So now that you’ve pasted the public key into the authorized_keys file, let’s make sure the entire .ssh directory and its contents have the right permissions.

On Work-station (as ansible), run again to be sure:

chmod 700 ~/.ssh

chmod 600 ~/.ssh/authorized_keys

chown -R ansible:ansible ~/.ssh

This makes sure permissions are exactly what SSH expects, and these permissions are strict and required for SSH to trust the key and allow access without a password.

Once this is done, your Work-Station is fully ready to accept passwordless connections from the Control-Host.

Step 9: Test Passwordless SSH from the Control Host to the Workstation

Now that everything has been configured, user setup, key generation, public key transfer, and permissions, it’s time to confirm that the Control Host can SSH into the Workstation without asking for a password.

On the Control Host Terminal (as ansible)

Run this command to connect to the Workstation: ssh ansible@<Workstation_Public_IP>

Replace <Workstation_Public_IP> with the actual public IP address of your workstation EC2 instance.

What Should Happen

You should be logged in immediately as the ansible user on the Workstation

You should not be asked for a password

Your prompt should now show something like: " ansible@workstation:~$ "

If it still prompts you for a password:

Double-check that you are running the command from the Control Host, logged in as ansible

Confirm that the public key was correctly pasted into ~/.ssh/authorized_keys on the Workstation

Verify the permissions again:

chmod 700 ~/.ssh

chmod 600 ~/.ssh/authorized_keys

chown -R ansible:ansible ~/.ssh

Check that the SSH key being used is the default one created at ~/.ssh/id_rsa and id_rsa.pub

If everything is correct, SSH will recognize the key and grant access without a password prompt.

Once you’re in, this shows your passwordless SSH setup is officially working.
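If the connection still misbehaves after checking the list above, OpenSSH's verbose mode is a useful optional debugging step:

ssh -v ansible@<Workstation_Public_IP>
# the debug output shows which keys are being offered and which authentication methods the server will accept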

Step 10: Install Ansible on the Control Host

Now that the Control-Host can connect to the Work-Station without a password, it's time to install Ansible, the tool that will handle all your remote automation.

Please Note: You only need to install Ansible on the Control Host, not on the Workstation.

On the Control-Host (as the ansible user)

Run the following commands step by step:

sudo apt update && sudo apt upgrade -y : This updates your package list and upgrades installed packages to their latest versions.

sudo apt install -y software-properties-common : This installs a utility needed to manage repositories.

sudo add-apt-repository --yes --update ppa:ansible/ansible : This adds the official Ansible PPA (Personal Package Archive) to your system.

sudo apt install -y ansible: This installs Ansible itself.

Verify the Installation

After installation, run: ansible --version

Now that Ansible is installed, you're ready to create your inventory file and start running playbooks.

Step 11: Create the Ansible Inventory File

The inventory file tells Ansible which machines to manage, how to reach them, and which SSH user to use. This is a required configuration for running any playbook or command.

By default, Ansible looks for this file at: /etc/ansible/hosts

Check if the directory exists

Run this command to check if Ansible’s config folder exists: " ls /etc/ansible "

If you see an error or the folder is missing, create it manually: sudo mkdir -p /etc/ansible

Create and Edit the Inventory File:

Now open the hosts file for editing with this command " sudo nano /etc/ansible/hosts "

Once you open the file in nano, you will see some existing example inventory content. Scroll down to the bottom of the file and paste the following:

[web]

<Workstation_Public_IP> ansible_user=ansible

Replace <Workstation_Public_IP> with the actual public IP address of your Workstation EC2 instance

Save and Exit the Nano File.
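For context, the same inventory pattern scales to more nodes later on. The extra host and the group-variable section below are purely illustrative and not part of this two-instance lab:

[web]
<Workstation_Public_IP> ansible_user=ansible
<Another_Node_Public_IP> ansible_user=ansible

[web:vars]
ansible_python_interpreter=/usr/bin/python3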

Step 12: Test the Connection Between Control Host and Workstation Using Ansible

After installing Ansible and creating your inventory file in: /etc/ansible/hosts

It's time to make sure Ansible can successfully communicate with the Workstation over SSH.

So on the Control-Host (as ansible), run this command: " ansible all -m ping "

You should see a SUCCESS response containing "ping": "pong".

This means Ansible:

Found the IP in your inventory

Connected via SSH using the ansible user

Executed the ping module successfully.
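Once ping succeeds, you can also run ad-hoc commands against the inventory as a further sanity check (optional):

ansible all -m command -a "uptime"
# runs uptime on every host in the inventory and prints the result per host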

Step 13: Create an Ansible Playbook to Install Git on the Managed Host

Ansible playbooks are written in YAML format and define the tasks you want to automate. In this case, the goal is to install Git on the Workstation node using a playbook executed from the Control-Host.

Prerequisite: Passwordless Sudo Access on Workstation

Before running any playbook that uses sudo, make sure the ansible user on the Workstation can execute commands without being prompted for a password.

On the Workstation (as ansible)

Run this command: " sudo visudo "

At the bottom of the file, add this line:

ansible ALL=(ALL) NOPASSWD:ALL

Then save and exit.

This allows Ansible to run elevated tasks without asking for a sudo password.
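As an alternative to editing the main sudoers file, the same rule can live in a drop-in file under /etc/sudoers.d/, which is a common convention (a sketch, equivalent in effect to the visudo edit above):

echo "ansible ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ansible
sudo chmod 440 /etc/sudoers.d/ansible
sudo visudo -cf /etc/sudoers.d/ansible   # optional: validate the file's syntax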

On Control Host: Create the Playbook File

Still on the Control Host as ansible

Create a file with command : nano install_git.yml

Paste the following content:

---
- name: Install Git on Work-Station
  hosts: web
  become: yes
  tasks:
    - name: Ensure Git is installed
      apt:
        name: git
        state: present
        update_cache: yes

Explanation:

  • hosts: web → Targets the [web] group defined in your inventory file

  • become: yes → Elevates privilege to run with sudo

  • tasks: → A list of tasks Ansible should perform

  • apt: → The Ansible module used to manage packages on Ubuntu/Debian systems

Save and Exit the File

Run the Playbook

Now the next thing to do is to execute the playbook: run this command " ansible-playbook install_git.yml "

You’ll see detailed output showing the playbook connecting, elevating privilege, and installing Git.
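If you want to preview what the playbook would change before actually applying it, Ansible's check mode gives a dry run (optional):

ansible-playbook install_git.yml --check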

Confirm Git Was Installed

SSH into your Work-Station server and run this command: git --version

Step 14: Create an Ansible Playbook to Install NGINX on the Managed Host

Just like you did for Git, you’ll now write a new playbook to:

Install NGINX on the Work-Station

Ensure the NGINX service is running

Enable it to start on boot

Let you access the NGINX welcome page via browser (http://<Workstation_IP>)

On Control-Host (as ansible)

We will create the playbook file. Run this command: " nano install_nginx.yml " to open a nano text editor.

Then Paste the following content:

---
- name: Install and Start NGINX on managed host
  hosts: web
  become: yes
  tasks:
    - name: Install NGINX
      apt:
        name: nginx
        state: present
        update_cache: yes

    - name: Ensure NGINX is running
      service:
        name: nginx
        state: started
        enabled: yes

Explanation

apt: module is used to install NGINX using Ubuntu’s package manager

update_cache: yes ensures the apt cache is refreshed

service: ensures NGINX is not just installed, but also started and enabled to auto-start on reboot

Save and Exit

Run the Playbook

On the Control-Host run the command :" ansible-playbook install_nginx.yml "

If all goes well, you should see output like:

changed: [<Workstation_IP>]

Step 15. Test NGINX from Your Browser

In your browser, go to: http://<Workstation_Public_IP>

Replace <Workstation_Public_IP> with the Public IP of your Work-Station

You should see the default NGINX welcome page.

Reminder: Port 80 must be open in your security group. You already set this via Terraform during provisioning, or manually while provisioning the EC2 instances on the Amazon Web Console.
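You can also verify from the Control-Host terminal without a browser (optional):

curl -I http://<Workstation_Public_IP>
# an HTTP/1.1 200 OK response confirms NGINX is answering on port 80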

Conclusion

In this project, I built a real-world Ansible automation lab using two EC2 instances on AWS. One served as the Control Host where Ansible was installed, and the other as the Workstation being managed remotely.

The highlight of this setup was establishing passwordless SSH access between both nodes using SSH key-based authentication. This made it possible for Ansible to connect and automate tasks without manual password input, a core practice in professional DevOps environments.

Using Terraform, I provisioned the infrastructure, generated the SSH private key, and exposed port 80 to enable web traffic. Then, using Ansible playbooks, I was able to:

  1. Install Git on the managed host to enable version control operations

  2. Install and configure NGINX, and confirm its functionality by viewing the welcome page in a web browser

This project brought together the key concepts of infrastructure provisioning, secure connectivity, configuration management, and hands-on automation, all using open-source tools.

Disclaimer: This project was designed and implemented by me using real DevOps workflows. I used AI as an assistant for infrastructure generation and syntax accuracy, but I customized, deployed, and documented everything manually as part of my learning and real-world simulation.

Aut viam inveniam aut faciam.

I shall either find a way or make one. (Hannibal)

See you Guys on the next one.


Written by Stillfreddie Techman