Building a Lightweight IaC Misconfiguration Detector in Go


Introduction
Imagine pushing a Terraform file with a wildcard IAM role to production without realizing it. Oops. Now imagine getting a Slack notification seconds after the CI pipeline spots that misconfiguration. Much better, right?
This guide walks through building and deploying a lightweight Infrastructure-as-Code (IaC) misconfiguration scanner written in Go. The scanner integrates seamlessly into GitHub Actions and sends scan reports both as GitHub artifacts and to AWS S3, with Slack notifications to keep the team alert. The guide is divided into four sections:
Scanner Architecture and Logic
GitHub Actions Pipeline Breakdown
Creating AWS IAM User and S3 Bucket for CI Pipeline
Slack Integration Setup
By the end of this guide, you'll have:
A working IaC security scanner
CI/CD automation with GitHub Actions
Cloud storage of scan results in S3
Real-time Slack alerts with links to the scan logs
Link to the project: https://github.com/RichardBenjamin/IAC-Scanner
Project Overview and Scanner Architecture
Project Structure
IAC-Scanner/
├── main.go # Entry point
├── scanner/
│ ├── scanner.go # Main scanning logic
│ ├── docker.go # Handles Dockerfile scans
│ ├── k8s.go # Handles Kubernetes YAML scans
│ └── terraform.go # Handles Terraform file scans
├── rules/
│ ├── rule.go # Common rule structure
│ ├── docker_rules.go # Rule definitions for Docker
│ ├── k8s_rules.go # Rule definitions for Kubernetes
│ └── tf_rules.go # Rule definitions for Terraform
└── test-files/ # Sample misconfigured files for testing
Scanner Architecture and Logic
The scanner project is written in Go and is structured in a modular way for clarity and ease of maintenance. Execution begins in main.go, the program's entry point. This file accepts a command-line argument that specifies the folder or file path to scan, then passes this path to the RunScanner function in scanner.go.
main.go
package main

import (
	"fmt"
	"os"
	"github.com/RichardBenjamin/IAC-Scanner/scanner" // module path assumed from the repo URL
)

func main() {
	if len(os.Args) < 2 {
		fmt.Println("Usage: iac-scan <path-to-scan>")
		os.Exit(1)
	}
	path := os.Args[1]
	scanner.RunScanner(path)
}
Inside scanner.go, the RunScanner function creates a file named scan-results.log, which will be used to store all scan results. The defer file.Close() line ensures that once scanning is done, the file is properly closed, even if an error occurs later. This setup provides a clean way to collect and save output from the scanner.
As it encounters each file, the scanner checks the file name and extension to determine its type. For example:
Files ending in .tf are identified as Terraform files
Files ending in .yaml or .yml are assumed to be Kubernetes YAML files
Any file named Dockerfile or starting with "docker" is treated as a Dockerfile
Once the type is known, the corresponding handler function is called: CheckTerraform, CheckKubernetesYAML, or CheckDockerfile. These functions are defined in their respective files: terraform.go, k8s.go, and docker.go.
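To make the flow concrete, here is a minimal sketch of what RunScanner and the dispatch logic could look like, assuming the directory walk uses filepath.WalkDir; the handler names match the ones above, but the repo's actual matching logic may differ:

package scanner

import (
	"io/fs"
	"log"
	"os"
	"path/filepath"
	"strings"
)

// RunScanner creates scan-results.log, walks every file under root,
// and routes each file to the matching handler (illustrative sketch).
func RunScanner(root string) {
	file, err := os.Create("scan-results.log")
	if err != nil {
		log.Fatal(err)
	}
	defer file.Close() // close the log even if scanning errors out

	err = filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		name := strings.ToLower(d.Name())
		switch {
		case strings.HasSuffix(name, ".tf"):
			CheckTerraform(path) // Terraform files
		case strings.HasSuffix(name, ".yaml"), strings.HasSuffix(name, ".yml"):
			CheckKubernetesYAML(path) // Kubernetes manifests
		case name == "dockerfile", strings.HasPrefix(name, "docker"):
			CheckDockerfile(path) // Dockerfiles
		}
		return nil
	})
	if err != nil {
		log.Println(err)
	}
}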
Each handler reads the content of the file and compares it with a list of predefined rules. These rules are written using regular expressions (regex), which are patterns used to search text. The rules are defined in separate files under the /rules/ folder. Each rule includes:
A Name to identify it
A Category to group it (like permissions, secrets, etc.)
A Severity level (LOW, MEDIUM, or HIGH)
A Pattern, which is a regex matched against the file's contents
A Message to explain what was found
An Enabled flag to turn the rule on or off
Rule Struct (in rule.go)
type Rule struct {
	Name     string // rule identifier
	Category string // grouping, e.g. permissions, secrets
	Severity string // LOW, MEDIUM, or HIGH
	Pattern  string // regex matched against file contents
	Message  string // explanation shown when the rule fires
	Enabled  bool   // turns the rule on or off
}
Rules are defined as structs and stored in slices, categorized by file type. A struct (short for structure) is a composite data type that groups together zero or more fields (variables) under a single name.
Sample of rules set in k8s_rules.go:
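The exact rules live in the repo; the following is an illustrative sketch of what such a slice could look like (the rule names and regex patterns below are examples, not copied from the project):

package rules

// KubernetesRules is an illustrative rule set; the real project's
// rules and patterns may differ.
var KubernetesRules = []Rule{
	{
		Name:     "PrivilegedContainer",
		Category: "permissions",
		Severity: "HIGH",
		Pattern:  `privileged:\s*true`,
		Message:  "Container runs in privileged mode",
		Enabled:  true,
	},
	{
		Name:     "LatestImageTag",
		Category: "images",
		Severity: "MEDIUM",
		Pattern:  `image:\s*\S+:latest`,
		Message:  "Image uses the mutable 'latest' tag",
		Enabled:  true,
	},
}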
If a match is found between a rule's pattern and the file content, a function called ReportIssue is triggered. This function prints the issue to the terminal and also writes it into scan-results.log. This makes it easy to view the output later or use it in CI/CD pipelines.
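As a hedged sketch (the function signatures and log format are assumptions, though the [SEVERITY] tag matches what the pipeline greps for later), a handler and ReportIssue could fit together like this:

package scanner

import (
	"fmt"
	"os"
	"regexp"

	"github.com/RichardBenjamin/IAC-Scanner/rules" // module path assumed from the repo URL
)

// CheckKubernetesYAML runs every enabled Kubernetes rule against one file.
func CheckKubernetesYAML(path string) {
	content, err := os.ReadFile(path)
	if err != nil {
		return // unreadable files are skipped in this sketch
	}
	for _, r := range rules.KubernetesRules {
		if !r.Enabled {
			continue // honor the Enabled flag
		}
		if regexp.MustCompile(r.Pattern).MatchString(string(content)) {
			ReportIssue(path, r)
		}
	}
}

// ReportIssue prints a finding and appends it to scan-results.log.
func ReportIssue(path string, r rules.Rule) {
	line := fmt.Sprintf("[%s] %s: %s (%s)\n", r.Severity, path, r.Message, r.Name)
	fmt.Print(line) // shown in the terminal / CI output
	if f, err := os.OpenFile("scan-results.log", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644); err == nil {
		defer f.Close()
		f.WriteString(line)
	}
}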
This approach makes the scanner easy to extend and organize, allowing it to work well with automation tools like GitHub Actions. It also keeps the scanner fast and independent of external tools.
GitHub Actions Pipeline Breakdown
The GitHub Actions pipeline automates the entire scanning process every time someone pushes code or opens a pull request to the main branch. The breakdown of the pipeline is below.
Checkout Code
- name: Checkout Code
  uses: actions/checkout@v3
This step tells GitHub Actions to "check out" the repository code, meaning it pulls (clones) the contents of the repo into the runner. The runner is a temporary virtual machine that runs the workflow. Without this step, the runner would be empty and wouldn't know anything about the code!
Set up Go
- name: Set up Go
  uses: actions/setup-go@v4
  with:
    go-version: 1.22
This step installs the Go programming language in the workflow runner. The scanner is written in Go, so the runner needs Go installed to compile it. The version is kept at 1.22 to make sure the behavior is consistent with what the scanner was built and tested with. That avoids weird bugs that can happen when using a different version.
Build Scanner
- name: Build Scanner
  run: go build -o iac-scan
This command uses go build to compile the source code in the repo and outputs an executable file named iac-scan. The -o flag stands for "output"; it tells the go build command what to name the executable file. Without -o, Go would just name the file after the directory by default.
Run Scanner
- name: Run Scanner
  run: ./iac-scan ./test-files > scan-results.log
The command ./iac-scan runs the executable compiled in the previous step. It takes ./test-files as the input directory, instructing the scanner to analyze the files contained within it. The > scan-results.log part redirects the scanner's output (stdout) into a file named scan-results.log, saving the scan results instead of displaying them in the console.
Upload Artifact
- name: Upload Artifact
  uses: actions/upload-artifact@v2
  with:
    name: iac-scan-report
    path: scan-results.log
This step tells GitHub Actions to use a prebuilt action called upload-artifact, which is maintained by the official actions team at GitHub. The @v2 part pins version 2 of that action. The actions/upload-artifact@v2 action saves the scan-results.log file as a downloadable artifact named iac-scan-report.
Artifacts are files that a GitHub Actions workflow uploads and saves after the run is complete. After the workflow finishes, you can find and download this artifact by going to the Actions tab of the GitHub repository and selecting the relevant workflow run.
Fail if HIGH Severity Issues Are Found
- name: Fail if HIGH severity issues are found
  id: scan_check
  run: |
    if grep -q "\[HIGH\]" scan-results.log; then
      message="High severity issues detected in files!"
      echo "message=$message" >> $GITHUB_OUTPUT
      exit 1
    else
      message="No HIGH severity issues found."
      echo "$message"
      echo "message=$message" >> $GITHUB_OUTPUT
    fi
The "Fail if HIGH severity issues are found" step in a GitHub Actions workflow checks the scan-results.log
file for any lines marked with [HIGH]
to identify critical security issues. Using a grep
command, it determines whether such issues exist and sets a corresponding message
output. If high- severity issues are detected, it records a failure message and exits with code 1
, intentionally failing the job to enforce security policies. If no such issues are found, it logs a success message and allows the workflow to continue.
Configure AWS Credentials
- name: Configure AWS credentials
  if: always()
  uses: aws-actions/configure-aws-credentials@v2
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: ${{ secrets.AWS_REGION }}
This step sets up the GitHub Actions environment with the AWS credentials needed to interact with AWS services like S3. The aws-actions/configure-aws-credentials action configures the AWS CLI with the secret keys stored in GitHub Secrets. The if: always() condition ensures this step runs regardless of whether earlier steps fail or succeed, making it useful when uploading logs. The AWS credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) are securely stored as GitHub Secrets and injected into the workflow environment, so sensitive information is never exposed in the code.
Set log filename
- name: Set log filename
  if: always()
  id: logfile
  run: echo "filename=logs/log-${{ github.run_id }}.txt" >> $GITHUB_OUTPUT
This step creates a dynamic filename for the scan log stored in the S3 bucket. Because the name includes the unique github.run_id, each workflow run generates a distinct log file. The value is saved to an output variable called filename, which can be referenced later as steps.logfile.outputs.filename.
Upload to S3
- name: Upload to S3
  if: always()
  run: aws s3 cp scan-results.log s3://$AWS_S3_BUCKET/${{ steps.logfile.outputs.filename }} --acl public-read
The Upload to S3 step uses the AWS CLI command aws s3 cp to copy the scan-results.log file into the S3 bucket. The destination combines $AWS_S3_BUCKET, an environment variable holding the bucket name, with the dynamic filename set in the previous step, so the pre-signed URL generated later points at the object that was actually uploaded. The --acl public-read flag makes the uploaded file publicly accessible via a direct URL, which is useful for sharing the scan results externally without requiring authentication.
Generate Pre-signed S3 URL
- name: Get S3 Pre-signed URL
  if: always()
  id: signed_url
  run: |
    url=$(aws s3 presign s3://$AWS_S3_BUCKET/${{ steps.logfile.outputs.filename }} --expires-in 3600)
    echo "url=$url" >> $GITHUB_OUTPUT
This step creates a temporary, pre-signed URL that can be shared with others to access the scan log in S3 without requiring AWS credentials. The link is valid for one hour (3600 seconds). The generated URL is stored in the output variable url for use in Slack notifications or other steps.
Notify Slack
- name: Notify Slack
  if: always()
  run: |
    curl -X POST -H 'Content-type: application/json' \
      --data "{
        \"text\": \"*Scan Complete*\\n${{ steps.scan_check.outputs.message }}\\n*S3 Logs:* <${{ steps.signed_url.outputs.url }}>\\n*GitHub Artifact:* <https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}>\"
      }" \
      ${{ secrets.SLACK_WEBHOOK_URL }}
The Notify Slack step uses curl to send a POST request to a Slack Incoming Webhook URL, which is stored securely as a GitHub Secret (SLACK_WEBHOOK_URL). An Incoming Webhook URL is a special Slack-generated URL that allows external systems, like GitHub Actions, to send messages directly into a Slack channel.
The message is a simple JSON payload that posts a notification like "Scan Complete! S3 Logs: [link]", with the link pointing to the scan report on S3, plus a direct link to the GitHub Actions run that produced the scan. This gives the team immediate alerts in a Slack channel whenever a scan finishes, making it easy to respond quickly to issues without digging into logs manually.
Joining all these steps together produces a pipeline that scans every commit or pull request for infrastructure misconfigurations and immediately alerts the team with a downloadable report.
Creating AWS IAM User and S3 Bucket for CI Pipeline
To connect the GitHub Actions workflow to AWS, you need an IAM user with access to a specific S3 bucket where the scan reports will be stored.
Step 1: Create IAM User for CI
Go to the AWS IAM Dashboard and click Users > Add user. Name the user something like github-ci-user. Under Access Type, check:
Programmatic access (for the access key and secret)
AWS Management Console access (to manually log in later)
Image showing the creation of IAM User
Step 2: Set User Permissions
On the next page, choose Attach policies directly and pick AmazonS3FullAccess. This grants full access to all Amazon S3 resources within an AWS account: when this policy is attached to an IAM user, group, or role, that entity can perform any action (s3:*) on any bucket or object (*) in Amazon S3.
Image showing policy attached to user
Step 3: Generate Access Keys
After the user is created, go to the Security credentials tab > Create access key. Choose "Application running outside AWS"; this tells AWS that the access key is intended for use in an environment outside AWS, such as GitHub Actions. Copy the Access Key ID and Secret Access Key and store them securely: the secret can't be viewed again, and leaking it can allow full programmatic access to your AWS resources, depending on the permissions tied to the user.
Image showing the generation of Access Keys
Step 4: Add Access to GitHub Secrets
Go to the GitHub repository > Settings > Secrets and variables > Actions. Add:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_S3_BUCKET
Repository secrets in GitHub Actions workflow.
Step 5: Create S3 Bucket
Now go to the S3 service in AWS and click Create bucket. Enter a unique name (e.g., iac-ci-logs) and select a region. Leave other settings default and click Create.
Image showing the creation of S3 bucket
Step 6: Make Bucket Object Public
To let the Slack notification link directly to the report, allow public read access to the file. To do this, add an ACL or edit the bucket policy.
Image showing the creation of S3 Bucket policy
Slack Integration Setup
To ensure that the team gets immediate feedback when IaC misconfigurations are found, Slack is integrated into the CI process. This involves creating a Slack App that can send messages to the desired Slack channel using incoming webhooks and OAuth tokens.
Step 1: Create a Slack App
Go to Slack API and click Create New App. Choose From scratch. Then name the app (e.g., IAC-Scanner-Alert) and select the appropriate Slack workspace.
Image showing the creation of slack app
Image showing the selection of App name and workspace
Step 2: Add Scopes and Permissions
Inside the app settings, go to OAuth & Permissions. Under Bot Token Scopes, add the following:
incoming-webhook – To allow posting to channels
chat:write – To enable the app to send messages
Image showing OAuth Scopes section showing selected scopes
Step 3: Install App to Workspace
Scroll to the top and click Install App to Workspace. You'll be redirected to grant permissions. Choose the channel the app can post to (e.g., #all-devsecops) and authorize.
Image showing permission request for a selected channel
Step 4: Copy Credentials
After installation, you'll be provided with:
A Bot User OAuth Token (starts with xoxb-...)
A Webhook URL for the selected channel
Image showing the Bot User OAuth Token to be copied
Copy these and add them to the GitHub repository’s secrets as:
SLACK_WEBHOOK_URL
SLACK_BOT_TOKEN
Step 5: Test the Application
After Slack has been set up and the app has been invited to the channel, test the scanner alert by running the scanner against the sample misconfigured IaC files in the test-files folder. The resulting alert will look like this:
Image showing the links to the log sent to the Slack channel through the app.
Errors Encountered and How I Handled Them
Error Encountered | How it was Handled |
Scan results not uploading to S3 | Verified IAM permissions and bucket policy; ensured filename path was correct |
GitHub Secrets not picked up in CI | Checked secrets configuration under repository settings and ensured correct naming |
Pre-signed URL not working | Adjusted the expiration time and ensured correct file path was used with aws s3 presign |
Slack alert not delivering | Fixed webhook URL and updated bot token permissions in the Slack app dashboard |
AWS S3 upload failed | Double-checked bucket name, region, and IAM user permissions; regenerated access keys if needed |
Scan results not appearing in GitHub artifacts | Fixed file path and ensured log file was created before artifact upload step ran |
False positive in rule match | Refined the regex pattern and tested against both vulnerable and safe sample files |
Conclusion
This project builds a Go-based IaC scanner that detects misconfigurations in Terraform, Kubernetes, and Docker files using regex rules. It automates its execution in a GitHub Actions pipeline, where results are saved as artifacts, pushed to AWS S3, and shared via Slack alerts. It's a lightweight approach to adding security to CI/CD workflows that helps the team catch issues early. If you enjoyed this, feel free to share, comment, and connect!
Written by Kenechukwu Okeke
I am a Cybersecurity and Cloud Security enthusiast passionate about automation, DevSecOps, and securing cloud infrastructures. I focus on building resilient and secure systems through security best practices and automation.