Tired of Your LLM Using Outdated Terraform Docs? There's a Fix for That!


Hello!
You ask your favorite AI assistant to whip up some Terraform code for a new module, and it confidently spits out syntax that was deprecated six months ago. 🤦‍♂️ LLMs are fantastic for boosting productivity, but they rely on their training data, which is a frozen snapshot in time. That means they miss the latest features and best practices.
I've been testing a new tool from HashiCorp that directly tackles this problem: the `terraform-mcp-server`. It essentially gives your LLM a direct, live line to the official Terraform documentation, ensuring the code it generates is accurate and up-to-date.
For my setup, I use Podman on my local machine, but you can use Docker or any container tool you prefer. And my go-to co-programmer for everything these days is Cline, which integrates with tools like this seamlessly.
Why is the Terraform MCP Server a Game-Changer?
The core issue is that LLMs, by default, answer from the data they were trained on. To make them more effective, we can give them "tools" they can use to fetch live information. The `terraform-mcp-server` acts as this tool: an independent server process that exposes a specific set of capabilities to the model.
Think of it as giving your AI assistant a toolkit specifically for Terraform. Instead of guessing, it can now ask questions and get real-time answers. The tools it gets access to include:
- `get_latest_module_version`
- `get_latest_provider_version`
- `get_module_details`
- `get_policy_details`
- `get_provider_details`
- `search_modules`
- `search_policies`
With these capabilities, the LLM can perform a thorough analysis before generating a single line of code. When I tested it on a GCP VPC module, it correctly used all the newest features without any hallucinated or outdated arguments. It was a beautiful thing to see!
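To make that concrete: under the hood, MCP is just JSON-RPC, so when the model wants the newest version of, say, the Google provider, the client fires off a request shaped roughly like the sketch below. The endpoint path and the argument names are my assumptions (check the tool schemas in the repo), and a real client performs an initialize handshake first; this only shows the shape of the exchange once the server from the next section is running in HTTP mode.

```bash
# Rough sketch of a single MCP tool call over HTTP (assumes the server
# from the setup section below is running in streamable-http mode on :8080).
# Argument names are illustrative; real clients also initialize a session
# first and pass along the returned Mcp-Session-Id header.
curl -s http://localhost:8080/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
          "name": "get_latest_provider_version",
          "arguments": {"namespace": "hashicorp", "name": "google"}
        }
      }'
```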
The Setup is Super Simple
Getting this up and running is incredibly straightforward.
First, clone the repository from GitHub:
```bash
git clone https://github.com/hashicorp/terraform-mcp-server.git
```
Next, build the container image. I'm using `podman`, but `docker build` works just the same.

```bash
cd terraform-mcp-server
podman build -t terraform-mcp-server:dev .
```
Finally, run the server!
```bash
podman run -p 8080:8080 --rm terraform-mcp-server:dev
```
Note: my original notes also included `-e TRANSPORT_MODE=streamable-http -e TRANSPORT_HOST=0.0.0.0`. As far as I can tell from the repo's docs, the server defaults to stdio transport, so these flags are what actually switch it to HTTP mode and bind it to all interfaces; you'll want them if you plan to talk to the server over the mapped port 8080. If your AI tool launches the server over stdio instead (as in the Cline config below), you can skip them.
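For the HTTP setup, the full command with those flags looks like this:

```bash
# Run the server in streamable-http mode, listening on all interfaces
# so the mapped port 8080 is actually reachable from the host:
podman run -p 8080:8080 --rm \
  -e TRANSPORT_MODE=streamable-http \
  -e TRANSPORT_HOST=0.0.0.0 \
  terraform-mcp-server:dev
```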
That's it! The server is now running and ready to accept requests from your AI tool.
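As a quick smoke test (assuming you started it in HTTP mode with the flags above), you can ask the server what tools it advertises. Again, the `/mcp` path and headers follow the MCP streamable-HTTP convention as I understand it; if the server insists on a session, perform the initialize handshake first.

```bash
# Hypothetical smoke test: list the tools the HTTP-mode server offers.
curl -s http://localhost:8080/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'
```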
Integrating with an AI Assistant like Cline
To get my assistant, Cline, to use this new tool, I just needed to add a simple configuration. This tells Cline how to run the `terraform-mcp-server` whenever it needs to access Terraform information.
Here is the JSON configuration I used:
```json
{
  "mcpServers": {
    "terraform": {
      "command": "podman",
      "args": [
        "run",
        "-i",
        "--rm",
        "terraform-mcp-server:dev"
      ],
      "autoApprove": [
        "list_tools"
      ]
    }
  }
}
```
This configuration defines an MCP server named `terraform`, specifies the `podman` command used to launch it, and automatically approves the initial tool listing.
Here is the code it generated, for reference: https://github.com/jothimanikrish/terraform-ai-gcp-vpc/tree/main/gcp-vpc-module
This `terraform-mcp-server` is a fantastic solution for anyone using LLMs in their Infrastructure as Code workflow. It bridges the gap between the static knowledge of the model and the dynamic, ever-evolving world of Terraform. No more copy-pasting docs or second-guessing AI-generated code.
Happy Terraforming!