Day 57 of 90 Days of DevOps Challenge: Building Scalable Infrastructure with Terraform Modules

Vaishnavi D

On Day 56 of my DevOps journey, I focused on improving the quality and maintainability of my Terraform scripts. I learned how to eliminate hardcoded values by using input variables, retrieve critical resource information through output variables, and organize code into logical components for better modularity.

I also explored dynamic provisioning by creating multiple EC2 instances using count, locals, and list indexing. This approach helped me understand how infrastructure can scale effectively using just a few lines of well-structured code.
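As a quick recap, that pattern looked roughly like the following sketch (the AMI ID and instance names here are placeholders, not values from my actual project):

```
# Illustrative recap: provisioning several EC2 instances with
# count, locals, and list indexing.
locals {
  instance_names = ["web-1", "web-2", "web-3"]
}

resource "aws_instance" "web" {
  count         = length(local.instance_names)
  ami           = "ami-0abcdef1234567890" # placeholder AMI
  instance_type = "t2.micro"

  tags = {
    # Pick each instance's name from the list by index
    Name = local.instance_names[count.index]
  }
}
```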

Today’s objective was to understand and implement Terraform Modules, which promote code reusability, clean architecture, and environment-specific customization. These are crucial aspects when managing infrastructure at scale across development, testing, and production environments.

What Are Terraform Modules?

A Terraform module is a collection of .tf files organized in a directory that encapsulates a specific set of infrastructure resources. At a minimum, a module can consist of a single main.tf, but best practice is to divide the module into three components:

  • main.tf: Core resource definitions

  • inputs.tf: Input variable declarations

  • outputs.tf: Output variable declarations

Modules allow you to reuse logic across multiple parts of a project or even across different projects altogether. They also enable separation of concerns, which makes your infrastructure codebase cleaner, easier to manage, and more scalable.
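As a sketch of that three-file layout, a minimal EC2 child module could look like this (the AMI ID and the variable's default are illustrative assumptions):

```
# modules/ec2/inputs.tf — input variable declarations
variable "instance_type" {
  type    = string
  default = "t2.micro"
}

# modules/ec2/main.tf — core resource definitions
resource "aws_instance" "this" {
  ami           = "ami-0abcdef1234567890" # placeholder AMI
  instance_type = var.instance_type
}

# modules/ec2/outputs.tf — output variable declarations
output "instance_id" {
  value = aws_instance.this.id
}
```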

Project Structure with Modules

To practice modules, I built a sample Terraform project structured as follows:

App/
│
├── provider.tf               # AWS provider and region config
├── main.tf                   # Root module that invokes child modules
├── outputs.tf                # Aggregated outputs from child modules
│
└── modules/
    ├── ec2/
    │   ├── main.tf
    │   ├── inputs.tf
    │   └── outputs.tf
    │
    └── s3/
        ├── main.tf
        ├── inputs.tf
        └── outputs.tf

In the root main.tf, I used the module block to invoke each module and link the logic together:

module "my_ec2" {
  source = "./modules/ec2"
}

module "my_s3" {
  source = "./modules/s3"
}

Each child module was responsible for provisioning its respective resources, while the root module served as the execution point, maintaining separation of responsibility between components.
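To make that linkage concrete, here is a sketch of how the root module can pass a value down into a child module and re-export one of its outputs (the `instance_type` variable and `instance_id` output are assumed names a module might expose, following the sketch above):

```
# Root main.tf: pass an input down to the child module
module "my_ec2" {
  source        = "./modules/ec2"
  instance_type = "t2.micro"
}

# Root outputs.tf: surface a child module's output at the top level
output "ec2_instance_id" {
  value = module.my_ec2.instance_id
}
```

The root module can only read values that the child module explicitly declares as outputs, which is exactly what keeps the boundary between components clean.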

Reusability and Modularity in Action

By encapsulating logic inside reusable modules, I can now:

  • Deploy the same infrastructure repeatedly across different environments

  • Abstract complexity and avoid code duplication

  • Share modules across teams and repositories

  • Make changes in a single module and have it reflected wherever it's used

This approach mirrors how software is developed using packages or libraries. Modules in Terraform function similarly; you build once and use many times.

Multi-Environment Support

A key challenge in real-world DevOps is managing multiple environments, each with different infrastructure requirements. For example:

  • DEV / SIT / QA environments might require smaller EC2 instances (t2.micro, t2.medium)

  • UAT / PROD environments typically need more powerful configurations (t2.large, t2.xlarge)

To handle these variations, I created environment-specific variable definition files. The variable itself is declared only once, in the module's inputs.tf; each environment then gets its own .tfvars file that supplies a value for it:

  • inputs-dev.tfvars

  • inputs-qa.tfvars

  • inputs-uat.tfvars

  • inputs-prod.tfvars

(Note the .tfvars extension: Terraform automatically loads every .tf file in a directory and rejects duplicate declarations of the same variable, so per-environment files must be .tfvars files containing value assignments, not variable blocks.)

Example: inputs-dev.tfvars

instance_type = "t2.medium"

Example: inputs-prod.tfvars

instance_type = "t2.xlarge"

The corresponding declaration lives once in inputs.tf:

variable "instance_type" {
  type = string
}

At execution time, I passed the respective variable file using the -var-file flag:

# Apply for DEV environment
terraform apply -var-file=inputs-dev.tfvars

# Apply for PROD environment
terraform apply -var-file=inputs-prod.tfvars

This method ensures loose coupling between logic and data, making the Terraform codebase reusable and environment-agnostic. It also simplifies configuration management, especially in CI/CD pipelines, where you can swap files programmatically based on the target environment.
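In a pipeline, that file swap can be a one-liner. A minimal sketch, assuming the CI system exposes the target environment in a variable such as DEPLOY_ENV (the variable name and file naming are assumptions matching the convention above):

```shell
#!/bin/sh
# Select the Terraform var-file for the target environment.
# DEPLOY_ENV is an assumed CI variable; it defaults to dev if unset.
ENV="${DEPLOY_ENV:-dev}"
VAR_FILE="inputs-${ENV}.tfvars"
echo "Applying with ${VAR_FILE}"
# terraform apply -var-file="${VAR_FILE}" -auto-approve
```

The actual apply is left commented out; the point is that the same script serves every environment by parameterizing only the file name.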

Key Benefits Realized

Here’s what I achieved by implementing Terraform modules and environment-based variable management:

Scalability: I can now deploy EC2, S3, or RDS modules across environments without duplicating logic
Reusability: The same module can be used by different teams or projects with minimal changes
Separation of Concerns: Each module is self-contained, making debugging and updates easier
Flexibility: Environment-specific configurations are managed externally, ensuring code portability
Efficiency: Infrastructure provisioning becomes faster and more organized, enabling continuous delivery

Final Thoughts

Day 57 marked a turning point in how I approach infrastructure with Terraform. By introducing modular architecture and environment-specific configuration, I’ve laid the foundation for building infrastructure that is not just functional, but also extensible, collaborative, and production-ready.

Modules are not just a Terraform feature; they are a mindset shift. They promote the same best practices found in modern software engineering: modularity, reusability, and separation of concerns.

As I move forward, I’ll be diving deeper into Terraform backends, state management, and remote collaboration strategies, which are essential for team-based and enterprise-scale deployments.
