Infrastructure Automation with Ansible for Configuration Management

Table of contents
- Why Ansible for Configuration Management?
- Setting Up Ansible
- Understanding Ansible Inventory
- Writing Your First Ansible Playbook
- Using Variables for Flexibility
- Variable Files and Best Practices
- Organizing with Roles
- Automated Application Deployment Example
- Managing Multiple Environments
- Security Best Practices
- Integrating with CI/CD Pipelines
- Final Thoughts
In modern DevOps and cloud-native environments, speed and reliability are non-negotiable. Infrastructure teams must provision servers, configure applications, and manage deployments in a way that is consistent, repeatable, and auditable. Manual server configuration, with its tendency to introduce errors and inconsistencies, has become a relic of the past. This is where infrastructure automation comes in — and one of the most popular tools for this is Ansible.
Ansible is widely used for configuration management, application deployment, and orchestration because it is agentless, easy to learn, and powerful enough to handle complex workflows. In this article, we will explore how Ansible can be used to automate server configurations and deploy applications, focusing on:
- Writing Ansible Playbooks
- Setting up static and dynamic inventory
- Using variable files to make automation reusable
- Organizing projects with roles for clean and scalable deployments
We will walk through examples so you can start building your own automated configuration pipelines.
Why Ansible for Configuration Management?
Before diving into the technical details, it is worth understanding why Ansible is a go-to tool for DevOps engineers and infrastructure teams.
- Agentless architecture: Ansible does not require a dedicated agent running on target machines. It uses SSH for Linux and WinRM for Windows, which reduces complexity and security concerns.
- Human-readable YAML syntax: Playbooks are written in YAML, which makes them easy to read and write. Even non-developers can understand them.
- Idempotent execution: Ansible ensures that running the same playbook multiple times produces the same result, without duplicating changes.
- Cross-platform and extensible: Ansible works with on-premises servers, cloud instances, containers, and network devices. You can write your own modules or use thousands of community-contributed ones.
- Integration with CI/CD pipelines: Ansible can be integrated into tools like Jenkins, GitHub Actions, or GitLab CI for fully automated deployments.
Setting Up Ansible
If you are starting from scratch, installing Ansible is straightforward.
On Linux or macOS:
pip install ansible
On Ubuntu/Debian:
sudo apt update
sudo apt install ansible -y
To confirm:
ansible --version
Ansible works from a control node (your laptop, a CI/CD runner, or a dedicated management server) and connects to managed nodes (your infrastructure servers).
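Connection defaults can live in an ansible.cfg at the root of your project so you do not have to repeat them on every command. A minimal sketch, assuming your inventory file is hosts.ini and your SSH key is at ~/.ssh/id_rsa:
[defaults]
# default inventory, so -i can be omitted
inventory = ./hosts.ini
# SSH settings for the managed nodes
remote_user = ubuntu
private_key_file = ~/.ssh/id_rsa
# convenient for labs; keep host key checking enabled in production
host_key_checking = False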
Understanding Ansible Inventory
The inventory tells Ansible which machines to manage and how to connect to them. There are two main types:
1. Static Inventory
A simple hosts.ini file might look like this:
[webservers]
web1 ansible_host=192.168.1.10 ansible_user=ubuntu
web2 ansible_host=192.168.1.11 ansible_user=ubuntu
[dbservers]
db1 ansible_host=192.168.1.12 ansible_user=ubuntu
Here:
- Groups like [webservers] allow targeting multiple servers at once.
- ansible_host is the IP or hostname.
- ansible_user is the SSH username.
You can test connectivity with:
ansible all -i hosts.ini -m ping
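Beyond ping, the same ad-hoc syntax can run any module, which is handy for quick checks before you write a playbook:
# check uptime on all web servers
ansible webservers -i hosts.ini -m command -a "uptime"
# gather facts from a single host
ansible web1 -i hosts.ini -m setup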
2. Dynamic Inventory
Static inventories work for small setups, but in cloud environments, server IPs change frequently. Dynamic inventory scripts fetch real-time server lists from providers like AWS, GCP, or Azure.
For AWS, install the Python dependencies used by the amazon.aws.aws_ec2 inventory plugin (the collection ships with the ansible community package, or can be added with ansible-galaxy collection install amazon.aws):
pip install boto3 botocore
Then configure an inventory file named aws_ec2.yaml (the plugin requires the filename to end in aws_ec2.yml or aws_ec2.yaml):
plugin: aws_ec2
regions:
  - us-east-1
filters:
  tag:Environment: dev
keyed_groups:
  - key: tags.Name
Run:
ansible-inventory -i aws_ec2.yaml --list
This approach ensures Ansible always knows your current infrastructure without manually updating IP addresses.
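To see the resulting group hierarchy instead of the full JSON dump, the same command also accepts --graph:
ansible-inventory -i aws_ec2.yaml --graph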
Writing Your First Ansible Playbook
An Ansible Playbook defines tasks to be executed on target machines. For example, to install Nginx on web servers:
---
- name: Install and configure Nginx
  hosts: webservers
  become: yes
  tasks:
    - name: Install Nginx
      apt:
        name: nginx
        state: present
        update_cache: yes
    - name: Ensure Nginx is running
      service:
        name: nginx
        state: started
        enabled: yes
Breakdown:
- hosts: which inventory group to target.
- become: yes: run tasks with privilege escalation (sudo).
- tasks: steps executed in order.
Run it with:
ansible-playbook -i hosts.ini install_nginx.yaml
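Before touching real servers, it is worth doing a dry run. The --check flag reports what would change without changing anything, --diff shows file modifications, and --limit restricts the run to a subset of hosts:
# dry run with diff output
ansible-playbook -i hosts.ini install_nginx.yaml --check --diff
# run against a single host only
ansible-playbook -i hosts.ini install_nginx.yaml --limit web1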
Using Variables for Flexibility
Hardcoding values in playbooks is not scalable. Instead, use variables to make them reusable.
In playbooks:
vars:
  package_name: nginx
In group_vars/webservers.yaml:
package_name: nginx
In host_vars/web1.yaml:
package_name: apache2
This way, you can change configurations per group or host without touching the playbook logic.
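You can also override a variable for a single run with -e (extra vars), which take precedence over both host_vars and group_vars; host_vars in turn override group_vars:
ansible-playbook -i hosts.ini install_nginx.yaml -e "package_name=apache2"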
Variable Files and Best Practices
A clean project structure for variables:
inventory/
  hosts.ini
group_vars/
  webservers.yaml
  dbservers.yaml
host_vars/
  web1.yaml
  db1.yaml
Example group_vars/webservers.yaml:
web_port: 80
web_server_package: nginx
Tasks can now reference these:
- name: Install web server
  apt:
    name: "{{ web_server_package }}"
    state: present
Organizing with Roles
As projects grow, a single playbook can become messy. Roles help you organize tasks, handlers, variables, and templates into reusable modules.
Role structure:
roles/
  webserver/
    tasks/main.yaml
    handlers/main.yaml
    vars/main.yaml
    templates/
    files/
Example roles/webserver/tasks/main.yaml:
---
- name: Install web server
  apt:
    name: "{{ web_server_package }}"
    state: present
- name: Deploy index.html
  copy:
    src: index.html
    dest: /var/www/html/index.html
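The role skeleton above also lists handlers/main.yaml, which is where restart logic usually lives. A minimal sketch, assuming the role manages Nginx:
---
# roles/webserver/handlers/main.yaml
- name: Restart nginx
  service:
    name: nginx
    state: restarted
A task triggers the handler with notify; for example, adding notify: Restart nginx to the Deploy index.html task above restarts Nginx only when the file actually changes.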
Example playbook using the role:
---
- name: Configure Web Servers
  hosts: webservers
  become: yes
  roles:
    - webserver
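You do not have to build this layout by hand; ansible-galaxy can scaffold a role skeleton for you (here assuming your roles live under roles/):
# creates roles/webserver with tasks/, handlers/, templates/, files/ and more
ansible-galaxy init webserver --init-path roles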
Automated Application Deployment Example
Let’s build a more complete example to deploy a Python Flask app.
Project Structure
inventory/
  hosts.ini
group_vars/
  webservers.yaml
roles/
  flask_app/
    tasks/main.yaml
    templates/systemd.service.j2
    files/
      app.py
      requirements.txt
deploy_flask.yaml
inventory/hosts.ini
[webservers]
web1 ansible_host=192.168.1.10 ansible_user=ubuntu
group_vars/webservers.yaml
app_dir: /opt/flask_app
python_version: python3
roles/flask_app/tasks/main.yaml
---
- name: Install dependencies
  apt:
    name: "{{ item }}"
    state: present
    update_cache: yes
  loop:
    - "{{ python_version }}"
    - "{{ python_version }}-pip"
- name: Create application directory
  file:
    path: "{{ app_dir }}"
    state: directory
- name: Copy application files
  copy:
    src: "{{ item }}"
    dest: "{{ app_dir }}/"
  loop:
    - app.py
    - requirements.txt
- name: Install Python dependencies
  pip:
    requirements: "{{ app_dir }}/requirements.txt"
    executable: pip3
- name: Configure systemd service
  template:
    src: systemd.service.j2
    dest: /etc/systemd/system/flask_app.service
- name: Start Flask service
  systemd:
    name: flask_app
    state: started
    enabled: yes
    daemon_reload: yes  # reload systemd so it picks up the newly created unit file
roles/flask_app/templates/systemd.service.j2
[Unit]
Description=Flask App
After=network.target
[Service]
User=ubuntu
WorkingDirectory={{ app_dir }}
ExecStart=/usr/bin/{{ python_version }} {{ app_dir }}/app.py
Restart=always
[Install]
WantedBy=multi-user.target
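The two files the role copies, app.py and requirements.txt, are not shown above. A minimal sketch of what they might contain; the route and port are illustrative, not part of the original project:
# roles/flask_app/files/app.py
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # simple landing endpoint to prove the deployment worked
    return "Hello from an Ansible-deployed Flask app!"

if __name__ == "__main__":
    # bind to all interfaces so the service is reachable from outside the host
    app.run(host="0.0.0.0", port=5000)
requirements.txt, in turn, only needs to list Flask (pin a version if you want reproducible installs):
flask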
deploy_flask.yaml
---
- name: Deploy Flask Application
  hosts: webservers
  become: yes
  roles:
    - flask_app
Run:
ansible-playbook -i inventory/hosts.ini deploy_flask.yaml
This playbook:
- Installs Python and pip
- Creates the application directory
- Copies the app code
- Installs dependencies
- Configures a systemd service
- Starts the application
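Once the play finishes, you can check the service with the uri module; it runs on the managed nodes, so localhost refers to each web server itself. This assumes the app listens on port 5000, as in the app.py sketch above:
# expects an HTTP 200 from each web server
ansible webservers -i inventory/hosts.ini -m uri -a "url=http://localhost:5000 return_content=yes"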
Managing Multiple Environments
In real-world scenarios, you may have dev, staging, and production environments. Ansible supports this with separate inventories and variable sets.
Example:
inventories/
  dev/
    hosts.ini
    group_vars/
  staging/
    hosts.ini
    group_vars/
Run for staging:
ansible-playbook -i inventories/staging/hosts.ini deploy_flask.yaml
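Each environment keeps its own values under its group_vars directory, so the same playbook behaves differently per environment. A hypothetical example of diverging values:
# inventories/dev/group_vars/webservers.yaml
app_dir: /opt/flask_app
python_version: python3
# inventories/staging/group_vars/webservers.yaml
app_dir: /srv/flask_app
python_version: python3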
Security Best Practices
- Use Ansible Vault to encrypt sensitive variables like passwords or API keys.
ansible-vault create group_vars/webservers/secrets.yaml
Limit privilege escalation with
become
only where needed.Use SSH keys instead of passwords for secure connections.
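A short Vault workflow sketch; the password-file path is only an example:
# encrypt an existing variables file in place
ansible-vault encrypt group_vars/webservers/secrets.yaml
# run a playbook that needs the vaulted variables
ansible-playbook -i inventory/hosts.ini deploy_flask.yaml --ask-vault-pass
# or supply the password non-interactively, e.g. from CI
ansible-playbook -i inventory/hosts.ini deploy_flask.yaml --vault-password-file ~/.vault_pass.txt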
Integrating with CI/CD Pipelines
Ansible can be triggered automatically in a CI/CD pipeline:
Example with GitHub Actions:
name: Deploy with Ansible
on: [push]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Install Ansible
        run: pip install ansible
      - name: Run Playbook
        run: ansible-playbook -i inventory/hosts.ini deploy_flask.yaml
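In practice the runner also needs SSH credentials for the managed nodes. A hedged sketch of the extra steps, assuming a repository secret named SSH_PRIVATE_KEY (the Run Playbook step is repeated here with host key checking relaxed for the fresh runner):
      # add this step before "Run Playbook"
      - name: Configure SSH key
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.SSH_PRIVATE_KEY }}" > ~/.ssh/id_rsa
          chmod 600 ~/.ssh/id_rsa
      - name: Run Playbook
        run: ansible-playbook -i inventory/hosts.ini deploy_flask.yaml
        env:
          ANSIBLE_HOST_KEY_CHECKING: "False"  # avoid interactive host key prompts on a fresh runner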
This allows every code push to automatically configure servers and deploy applications.
Final Thoughts
Ansible provides a simple yet powerful approach to infrastructure automation. With playbooks, variable files, inventory management, and roles, you can:
- Consistently configure servers across environments
- Automate deployments without manual intervention
- Scale configurations as your infrastructure grows
By integrating Ansible into your workflows, you save time, reduce human errors, and enable rapid, repeatable deployments.
If you are just getting started, begin with small playbooks, then move to structured roles and dynamic inventory. Before long, you will be managing hundreds of servers with the same ease as managing one.