Automating AWS Usage Tracking with Ansible and Cron — My First DevOps Project

Rikshith kumar
4 min read

Introduction

As someone diving head-first into the world of DevOps, I wanted my first hands-on project to be more than a “hello world.” I set out to build something useful: a lightweight, automated AWS usage tracker, built with nothing but Ansible, cron, and a bit of shell scripting.

In this post, I’ll walk you through:

  • Why I built it

  • How I used Ansible and shell scripting

  • What I learned

  • How you can try it yourself

💡 Why I Built This

I’m not using a personal local machine for learning DevOps right now, so my option was to set up a virtual machine in the cloud and learn on that. Any cloud provider that offers VM services would do, but I stuck with AWS as my provider; its EC2 service lets me spin up a complete virtual machine and use it as my lab.

When learning AWS and exploring free-tier resources, I found myself asking:

“How much am I actually using, and could I hit the limits?”

I was checking manually whether I had hit the free-tier limits. For example, to stay within the free tier on EC2, the disk space allocated to my instances can total at most 30 GiB, and since an instance launched with the default settings gets an 8 GiB root volume, that works out to a maximum of about 3 instances before hitting the cap. To keep track of these numbers, I had to log in to the AWS console and dig around for the information.
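
The same spot check can be done from the AWS CLI instead of the console. A minimal sketch, assuming the CLI is already configured with credentials (which the playbooks below take care of):

# Count EC2 instances (all states) across reservations
aws ec2 describe-instances \
  --query "length(Reservations[].Instances[])" \
  --output text

# Total provisioned EBS storage in GiB (free tier allows up to 30 GiB)
aws ec2 describe-volumes \
  --query "sum(Volumes[].Size)" \
  --output text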

Rather than manually checking usage in the console, I decided to:

  • Automate AWS CLI calls to fetch usage metrics

  • Store them in a CSV log for tracking trends (a quick sketch of this idea follows the list below)

  • Schedule the entire thing with cron

  • Manage it all through an idempotent Ansible playbook
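
The CSV part is the simplest piece. Here is a minimal sketch of that idea, reusing the same AWS CLI queries shown above (note that the track-usage.sh script later in this post writes human-readable tables instead; the path here is just an example):

#!/bin/bash
# Append one row per run: date, instance count, total EBS GiB
CSV=/home/ubuntu/aws-usage-tracker/usage.csv

# Write a header row the first time the file is created
[ -f "$CSV" ] || echo "date,instances,ebs_gib" > "$CSV"

INSTANCES=$(aws ec2 describe-instances \
  --query "length(Reservations[].Instances[])" --output text)
EBS_GIB=$(aws ec2 describe-volumes \
  --query "sum(Volumes[].Size)" --output text)

echo "$(date +%F),$INSTANCES,$EBS_GIB" >> "$CSV"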

Project Breakdown

  1. AWS CLI Setup via Ansible

I wrote playbooks that:

  • Install AWS CLI

  • Configure credentials and the default region

  • Download my usage tracking script

install-aws-cli.yml

---
- name: Install AWS CLI
  hosts: all
  become: true
  tasks:
    - name: Install dependencies
      apt:
        name:
          - curl
          - unzip
        state: present
        update_cache: yes

    - name: Download AWS CLI
      get_url:
        url: https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip
        dest: /tmp/awscliv2.zip

    - name: Unzip installer
      unarchive:
        src: /tmp/awscliv2.zip
        dest: /tmp
        remote_src: yes

    - name: Run installer
      command: /tmp/aws/install
      args:
        creates: /usr/local/bin/aws   # skip if AWS CLI is already installed
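
To apply the playbook and confirm the install, I run something like this from the control node (the inventory file name here is just an example):

ansible-playbook -i inventory install-aws-cli.yml

# Then, on the target host:
aws --version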

configure-aws.yml

---
- name: Set up AWS CLI credentials
  hosts: all
  become: true

  vars:
    aws_user: ubuntu
    aws_dir: "/home/{{ aws_user }}/.aws"

  tasks:
    - name: Create AWS config directory
      file:
        path: "{{ aws_dir }}"
        state: directory
        owner: "{{ aws_user }}"
        mode: '0700'

    - name: Write AWS credentials file
      copy:
        dest: "{{ aws_dir }}/credentials"
        content: |
          [default]
          aws_access_key_id = {{ lookup('env', 'AWS_ACCESS_KEY_ID') }}
          aws_secret_access_key = {{ lookup('env', 'AWS_SECRET_ACCESS_KEY') }}
        owner: "{{ aws_user }}"
        mode: '0600'

    - name: Write AWS config file
      copy:
        dest: "{{ aws_dir }}/config"
        content: |
          [default]
          region = {{ lookup('env', 'AWS_DEFAULT_REGION') }}
          output = json
        owner: "{{ aws_user }}"
        mode: '0600'
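
One detail worth knowing: the lookup('env', ...) calls are evaluated on the control node, so the credentials and region have to be exported in the shell that runs ansible-playbook. For example (placeholder values, obviously):

export AWS_ACCESS_KEY_ID=<your-access-key-id>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
export AWS_DEFAULT_REGION=us-east-1

ansible-playbook -i inventory configure-aws.yml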

schedule-cron.yml

The snippet below schedules the AWS usage tracker to run every day at 6 PM. Each daily run generates a log file at the destination path, which I can open later to review my usage.


---
- name: Schedule AWS usage tracker with cron
  hosts: all
  become: true

  vars:
    script_path: /home/ubuntu/aws-usage-tracker/track-usage.sh
    log_path: /var/log/aws_usage.log
    user: ubuntu

  tasks:
    - name: Ensure tracker script is executable
      file:
        path: "{{ script_path }}"
        mode: '0755'
        owner: "{{ user }}"
        group: "{{ user }}"
        state: file

    - name: Create cron job to run tracker script
      cron:
        name: "Run AWS usage tracker"
        user: "{{ user }}"
        minute: "0"
        hour: "18"        # 6:00 PM daily
        job: "{{ script_path }} >> {{ log_path }} 2>&1"
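
After applying this playbook, the schedule can be verified on the target host:

# Apply the playbook (inventory name is an example)
ansible-playbook -i inventory schedule-cron.yml

# Confirm the cron entry was created for the ubuntu user
sudo crontab -l -u ubuntu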

track-usage.sh

#!/bin/bash

# Each run writes a standalone report file in the current working
# directory (typically the user's home directory when launched from cron).
DATE=$(date +%F_%H-%M)
LOGFILE="aws-usage-$DATE.log"

echo "AWS Usage Report - $DATE" > "$LOGFILE"
echo "==========================" >> "$LOGFILE"

# EC2 Instances
echo -e "\n🖥️ EC2 Instances:" >> "$LOGFILE"
aws ec2 describe-instances \
  --query "Reservations[*].Instances[*].{ID:InstanceId, Type:InstanceType, State:State.Name}" \
  --output table >> "$LOGFILE"

# EBS Volumes
echo -e "\n💽 EBS Volumes:" >> "$LOGFILE"
aws ec2 describe-volumes \
  --query "Volumes[*].{ID:VolumeId, Size:Size, State:State}" \
  --output table >> "$LOGFILE"

# S3 Buckets
echo -e "\n📦 S3 Buckets:" >> "$LOGFILE"
aws s3api list-buckets \
  --query "Buckets[*].{Name:Name, Created:CreationDate}" \
  --output table >> "$LOGFILE"
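
The script can be tested by hand before relying on cron (paths assume the layout used in the playbooks above):

chmod +x /home/ubuntu/aws-usage-tracker/track-usage.sh
/home/ubuntu/aws-usage-tracker/track-usage.sh

# Inspect the report generated for this run
cat aws-usage-*.log

Note that the >> {{ log_path }} redirection in the cron job only captures whatever the script prints to stdout and stderr; the per-run report itself is written to the script's working directory.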

The code snippets above cover the “How I used Ansible and shell scripting” part of this post.

What I Learned

  • How to create idempotent Ansible playbooks (see the quick check after this list)

  • How to use cron with Ansible to schedule automation tasks

  • How to interact with AWS services via CLI

  • Why clean logging and tagging make debugging easier
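
A simple way to see idempotency in practice is to run a playbook twice; on the second pass every task should report ok rather than changed:

ansible-playbook -i inventory install-aws-cli.yml
ansible-playbook -i inventory install-aws-cli.yml
# The second run's PLAY RECAP should show something like: ok=5  changed=0  failed=0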

Try It Yourself

I’ve uploaded the full project on GitHub with:

  • Playbooks

  • The tracker script

  • Instructions to run it from an EC2 control node

You can fork it and build your own cloud dashboards from it.
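
The general flow looks like this (the repository URL is a placeholder; substitute your own fork):

git clone https://github.com/<your-username>/aws-usage-tracker.git
cd aws-usage-tracker

# Run the playbooks in order from the control node
ansible-playbook -i inventory install-aws-cli.yml
ansible-playbook -i inventory configure-aws.yml
ansible-playbook -i inventory schedule-cron.yml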
