Create and initialise CodeCommit repositories with Terraform
In today’s fast-paced digital landscape, efficient collaboration and secure version control are indispensable for any software development project. With AWS CodeCommit, developers and cloud engineers get a robust and scalable solution for hosting secure, private Git repositories, seamlessly integrated with their AWS environments.
Creating a repository in AWS CodeCommit is straightforward and can be done either in the console or in code. Using code lets you standardize the creation of the CodeCommit repository and perform additional tasks around it, such as an initial sync of files to newly created repositories. This enables several use cases: seeding repositories with license files, .gitignore files tailored to the requirements of the company, or a standard folder structure where that is appropriate. It is also handy when multiple files need to be synced initially to a newly created AWS CodeCommit repository from a landing-zone or account-vending solution.
So how can we achieve this? We will showcase a solution below using sample Terraform code.
Solution
First we need to create a new AWS CodeCommit repository.
resource "aws_codecommit_repository" "repository" {
  repository_name = var.repository_name
  description     = "In bulk created repositories"
}
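For completeness, here is a minimal sketch of the input variable referenced above, plus a bulk variant using for_each, in line with the "in bulk" description. The variable repository_names and the bulk resource are assumptions for illustration, not part of the original setup.

```hcl
variable "repository_name" {
  description = "Name of the CodeCommit repository to create"
  type        = string
}

# Hypothetical bulk variant: one repository per name in the set
variable "repository_names" {
  description = "Names of the CodeCommit repositories to create in bulk"
  type        = set(string)
  default     = []
}

resource "aws_codecommit_repository" "bulk" {
  for_each        = var.repository_names
  repository_name = each.value
  description     = "In bulk created repositories"
}
```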
Then we need to create a user for the pipeline that deploys the Terraform code. The files that are typically available within the pipeline will be synced to the newly created repository using Git. Below you can see a user, a policy, and the attachment of the policy to the user. If the credentials used to create the repository can already perform Git pull and push actions, this step can be skipped and the existing credentials can be used in the last step instead.
resource "aws_iam_user" "pipeline_codecommit_user" {
  name = "codecommit_user"
}

resource "aws_iam_policy" "codecommit_pullpush_policy" {
  name        = "codecommit_pullpush"
  path        = "/"
  description = "CodeCommit policy for the pipeline_codecommit_user used by a CI/CD pipeline"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "codecommit:GitPull",
          "codecommit:GitPush"
        ]
        Effect   = "Allow"
        Resource = aws_codecommit_repository.repository.arn
      },
    ]
  })
}

resource "aws_iam_user_policy_attachment" "codecommit_policy_attachment" {
  user       = aws_iam_user.pipeline_codecommit_user.name
  policy_arn = aws_iam_policy.codecommit_pullpush_policy.arn
}
We then need to create an access key for the aforementioned user. The key will be used in the next steps to sync the files.
resource "aws_iam_access_key" "pipeline_codecommit_user" {
  user = aws_iam_user.pipeline_codecommit_user.name
}
Please note that with the above snippet, the generated access key ID and secret access key will be stored in plain text in the state file, whether that is local or remote. They can be encrypted using a PGP key, but that goes beyond the scope of this post.
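If the key still needs to be handed to the pipeline, one option is to expose it through outputs marked sensitive, so it is at least redacted from CLI output (it remains readable in state; the output names below are illustrative assumptions):

```hcl
output "codecommit_user_access_key_id" {
  value     = aws_iam_access_key.pipeline_codecommit_user.id
  sensitive = true
}

output "codecommit_user_secret_access_key" {
  value     = aws_iam_access_key.pipeline_codecommit_user.secret
  sensitive = true
}
```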
Finally, we can use the local-exec provisioner to sync files to the remote Git repository. Assuming the files are available within a local folder called repo-init, the code that syncs the files can be found below.
resource "null_resource" "git_clone" {
  # Wait until the user is allowed to pull and push before syncing
  depends_on = [aws_iam_user_policy_attachment.codecommit_policy_attachment]

  provisioner "local-exec" {
    command = <<EOT
export AWS_ACCESS_KEY_ID=${aws_iam_access_key.pipeline_codecommit_user.id}
export AWS_SECRET_ACCESS_KEY=${aws_iam_access_key.pipeline_codecommit_user.secret}
git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true
git clone ${aws_codecommit_repository.repository.clone_url_http} ../temp-location
# Copy "repo-init/." rather than a glob, so dotfiles such as .gitignore are included
cp -r repo-init/. ../temp-location
cd ../temp-location
# Push HEAD explicitly to main, since the freshly created repository has no branches yet
git add . && git commit -m "Initial commit" && git push origin HEAD:main
cd .. && rm -rf temp-location
EOT
  }
}
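One subtlety in the copy step above: shell globs such as repo-init/**/* skip top-level dotfiles like .gitignore, which are exactly the kind of files we want to seed. Copying repo-init/. brings them along. A quick, self-contained sketch (the paths are illustrative):

```shell
#!/bin/sh
set -e
# Build a throwaway seed folder containing a dotfile and a regular file
mkdir -p /tmp/repo-init-demo/repo-init /tmp/repo-init-demo/temp-location
printf 'node_modules/\n' > /tmp/repo-init-demo/repo-init/.gitignore
printf '# Seed README\n' > /tmp/repo-init-demo/repo-init/README.md
cd /tmp/repo-init-demo
# Copying "repo-init/." includes dotfiles as well as regular files
cp -r repo-init/. temp-location
ls -A temp-location
```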
Conclusion
In summary, AWS CodeCommit offers a robust solution for version control. With the solution above, it can be complemented with automation capabilities, such as syncing files to a repository upon creation.
*This article was originally posted here.
Written by
Konstantinos Bessas
I am a Cloud Architect, mostly focusing on Amazon Web Services. I have programming experience and a particular interest in delivering powerful and flexible solutions on public cloud platforms that are backed 100% by Infrastructure as Code. I have been working extensively with the AWS Cloud Development Kit and the Serverless Framework. I have worked on several migration projects to AWS while supporting customers in containerizing their applications, leveraging Kubernetes or other native public cloud container management platforms.