Auto-Unsealing HashiCorp Vault with GCP KMS and Deploying to Cloud Run

Table of contents
- Introduction
- Prerequisites
- Setting Up Environment Variables
- GCP Resources with Terraform
- Configuring Vault for Auto-Unsealing
- Creating the Vault Docker Container
- Building and Pushing the Docker Image Manually
- Automating Deployment with GitHub Actions
- Deploying to Cloud Run Manually
- Setting Up and Using the Vault CLI
- Migrating from Shamir Keys to GCP KMS Auto-Unsealing
- Setting Up Authentication Methods (Optional)
- Conclusion
- Next Steps
- Resources

Introduction
HashiCorp Vault is a powerful secrets management tool that helps organizations secure, store, and control access to tokens, passwords, certificates, and encryption keys. One challenge with managing Vault is the need to unseal it after each restart, which can be cumbersome in automated environments. This article demonstrates how to automate the Vault unsealing process using Google Cloud KMS and deploy the solution to Google Cloud Run for a serverless, scalable, and cost-effective setup.
I'll walk through the entire process, including:
- Setting up GCP resources with Terraform
- Configuring Vault for auto-unsealing with GCP KMS
- Creating a Docker container for Vault
- Deploying to Cloud Run
- Automating deployment with GitHub Actions
- Migrating from Shamir key shares to GCP KMS auto-unsealing
Prerequisites
- Google Cloud Platform account with a project
- GCP service account with appropriate permissions
- Basic knowledge of Terraform, Docker, and Vault
- HashiCorp Vault CLI installed locally
- GitHub repository for CI/CD (optional)
Setting Up Environment Variables
Start by setting up environment variables for your deployment:
export PROJECT_ID=gcp_project_id
export GCP_LOCATION=europe-west1
export GCP_ARTIFACT_REGISTRY_NAME=docker-repository
export DOCKER_IMAGE=vault-server
export CLOUD_RUN_SERVICE_NAME=vault-server
GCP Resources with Terraform
Required IAM Roles for Vault Service Account
The Vault service account needs the following roles to interact with GCP services:
- roles/cloudkms.viewer
- roles/cloudkms.cryptoKeyEncrypterDecrypter or roles/cloudkms.signerVerifier
- roles/secretmanager.secretAccessor
- roles/storage.objectAdmin
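These grants can also be managed in Terraform rather than by hand. A minimal sketch, assuming a service account named vault-server-sa (an illustrative name; `var.project` matches the variable defined in the configuration below) and the encrypt/decrypt role rather than signerVerifier:

```hcl
# Hypothetical sketch: grant the required roles to the Vault service
# account at the project level. Adjust the account name and role list
# to your setup.
locals {
  vault_sa = "serviceAccount:vault-server-sa@${var.project}.iam.gserviceaccount.com"
  vault_sa_roles = [
    "roles/cloudkms.viewer",
    "roles/cloudkms.cryptoKeyEncrypterDecrypter",
    "roles/secretmanager.secretAccessor",
    "roles/storage.objectAdmin",
  ]
}

resource "google_project_iam_member" "vault_sa" {
  for_each = toset(local.vault_sa_roles)

  project = var.project
  role    = each.value
  member  = local.vault_sa
}
```

Using `google_project_iam_member` (rather than `google_project_iam_binding`) adds these grants without removing any other members already holding the roles.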
Terraform Configuration
Create a Terraform configuration file to set up the necessary GCP KMS resources:
resource "google_kms_key_ring" "keyring" {
  name     = "${var.name}-keyring"
  location = var.location
  project  = var.project
}

resource "google_kms_crypto_key" "key" {
  name            = "${var.name}-key"
  key_ring        = google_kms_key_ring.keyring.id
  rotation_period = var.rotation_period
  purpose         = var.purpose
}

resource "google_kms_crypto_key_iam_binding" "iam" {
  crypto_key_id = google_kms_crypto_key.key.id
  role          = "roles/cloudkms.cryptoKeyEncrypterDecrypter"
  members = [
    "serviceAccount:${var.vault_service_account}@${var.project}.iam.gserviceaccount.com"
  ]
}

variable "name" {
  default = "vault-unseal"
}

variable "location" {
  default = "global"
}

variable "project" {}

variable "rotation_period" {
  default = "7776000s" # 90 days
}

variable "purpose" {
  default = "ENCRYPT_DECRYPT"
}

variable "vault_service_account" {}
You'll also need to create a Google Cloud Storage bucket for Vault's storage backend. Keep this snippet in a separate module, or rename its location variable, so it doesn't collide with the location variable declared above:

resource "google_storage_bucket" "storage-bucket" {
  name                        = var.bucket_name
  location                    = var.location
  force_destroy               = var.force_destroy
  uniform_bucket_level_access = var.uniform_bucket_level_access
  public_access_prevention    = var.public_access_prevention
  storage_class               = var.storage_class

  versioning {
    enabled = true
  }
}

variable "bucket_name" {
  default = "vault-server-bucket"
}

variable "location" {
  default = "EU"
}

variable "uniform_bucket_level_access" {
  type    = bool
  default = true
}

variable "storage_class" {
  default = "STANDARD"
}

variable "force_destroy" {
  type    = bool
  default = false
}

variable "public_access_prevention" {
  default = "enforced"
}
Configuring Vault for Auto-Unsealing
Create a vault-config.hcl file for your Vault configuration:
seal "gcpckms" {
  project    = "gcp_project_id"
  region     = "global"
  key_ring   = "vault-unseal-keyring"
  crypto_key = "vault-unseal-key"
}

storage "gcs" {
  bucket = "vault-server-bucket"
}

listener "tcp" {
  address     = "0.0.0.0:8080"
  tls_disable = 1 # Cloud Run terminates TLS at the edge
}

Note: Make sure to replace gcp_project_id with your actual GCP project ID. With tls_disable = 1, Vault serves plain HTTP inside the container and relies on Cloud Run to terminate TLS at the platform edge. If Vault should terminate TLS itself, set tls_disable = 0 and provide tls_cert_file and tls_key_file in the listener block.
Creating the Vault Docker Container
Dockerfile
Create a Dockerfile for the Vault container:
FROM hashicorp/vault:1.19.0
# Create a non-root user and group
RUN addgroup -S vaultgroup && adduser -S vaultuser -G vaultgroup
RUN mkdir -p /vault/config
COPY config/vault-config.hcl /vault/config/vault-config.hcl
# Set proper ownership for Vault directories and files
RUN chown -R vaultuser:vaultgroup /vault
# Use the non-root user
USER vaultuser
ENTRYPOINT ["vault", "server", "-config=/vault/config/vault-config.hcl"]
Make sure your project structure looks like this:
project/
├── config/
│ └── vault-config.hcl
├── Dockerfile
└── terraform/
└── main.tf
Building and Pushing the Docker Image Manually
If you're not using CI/CD, you can build and push the Docker image manually:
# Build Container
docker build -t "${GCP_LOCATION}-docker.pkg.dev/${PROJECT_ID}/${GCP_ARTIFACT_REGISTRY_NAME}/${DOCKER_IMAGE}:latest" .
# Authenticate with Artifact Registry
gcloud auth configure-docker ${GCP_LOCATION}-docker.pkg.dev
# Push Container
docker push "${GCP_LOCATION}-docker.pkg.dev/${PROJECT_ID}/${GCP_ARTIFACT_REGISTRY_NAME}/${DOCKER_IMAGE}:latest"
Automating Deployment with GitHub Actions
For a more robust deployment process, you can use GitHub Actions to automate the build and deployment of your Vault server. Create a file named .github/workflows/deploy-vault.yml
with the following content:
name: Vault Server Deployment

on:
  push:
    branches: [ vault-server ]

env:
  GCP_WIF_PROJECT_ID: "org-wif-project"
  GCP_WIF_PROJECT_NUMBER: ${{ secrets.GCP_WIF_PROJECT_NUMBER }}
  GCP_WIF_POOL: "auth-server-gh-pool"
  GCP_WIF_PROVIDER: "auth-server-gh-prov"
  GCP_WIF_SA: "github-org-auth-sa"
  RELEASE_TOKEN: ${{ secrets.RELEASE_TOKEN }}
  PROJECT_LOCATION: europe-west1
  PROJECT_ID: org-env-project
  GCP_ARTIFACT_REGISTRY_NAME: docker-repository
  DOCKER_IMAGE: vault-server
  GCP_IMPERSONATED_SA_LIFETIME_TOKEN: 300 # 5 minutes

jobs:
  deploy:
    name: "Deploy Vault"
    runs-on: ubuntu-latest
    environment: test
    permissions:
      contents: 'read'
      id-token: 'write'
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Set up Cloud SDK
        uses: google-github-actions/setup-gcloud@v1

      - id: 'auth'
        name: 'Authenticate to Google Cloud'
        uses: 'google-github-actions/auth@v2'
        with:
          create_credentials_file: true
          workload_identity_provider: 'projects/${{ env.GCP_WIF_PROJECT_NUMBER }}/locations/global/workloadIdentityPools/${{ env.GCP_WIF_POOL }}/providers/${{ env.GCP_WIF_PROVIDER }}'
          service_account: '${{ env.GCP_WIF_SA }}@${{ env.GCP_WIF_PROJECT_ID }}.iam.gserviceaccount.com'

      - name: Retrieve information on existing releases
        id: get_release_info
        run: |
          RELEASE_TAG=$(curl -L -H "Accept: application/vnd.github+json" -H "Authorization: Bearer ${{ secrets.GITHUB_TOKEN }}" -H "X-GitHub-Api-Version: 2022-11-28" https://api.github.com/repos/${{ github.repository }}/releases/latest | jq -r '.tag_name')
          echo "Latest release tag: $RELEASE_TAG"
          MAJOR=$(echo $RELEASE_TAG | awk -F. '{print $1}')
          MINOR=$(echo $RELEASE_TAG | awk -F. '{print $2}')
          PATCH=$(echo $RELEASE_TAG | awk -F. '{print $3}')
          PATCH=$((PATCH + 1))
          NEXT_VERSION="${MAJOR}.${MINOR}.${PATCH}"
          NEXT_VERSION=$(echo $NEXT_VERSION | sed 's/^v//') # Remove the leading "v" from the version
          echo "NEXT_VERSION=${NEXT_VERSION}" >> $GITHUB_ENV
          echo "Next version: $NEXT_VERSION"

      - name: Build Container
        working-directory: ./vault-server
        run: |-
          docker build -t "${PROJECT_LOCATION}-docker.pkg.dev/${PROJECT_ID}/${GCP_ARTIFACT_REGISTRY_NAME}/${DOCKER_IMAGE}:${{ env.NEXT_VERSION }}" .

      - name: Authenticate Artifact Registry
        run: |-
          gcloud -q auth configure-docker ${{ env.PROJECT_LOCATION }}-docker.pkg.dev

      - name: Push Container
        run: |-
          docker push "${PROJECT_LOCATION}-docker.pkg.dev/${PROJECT_ID}/${GCP_ARTIFACT_REGISTRY_NAME}/${DOCKER_IMAGE}:${{ env.NEXT_VERSION }}"

      - name: Deploy to Cloud Run
        working-directory: ./vault-server
        run: |-
          gcloud run deploy vault-server \
            --image "${PROJECT_LOCATION}-docker.pkg.dev/${PROJECT_ID}/${GCP_ARTIFACT_REGISTRY_NAME}/${DOCKER_IMAGE}:${{ env.NEXT_VERSION }}" \
            --platform managed \
            --project ${PROJECT_ID} \
            --region ${PROJECT_LOCATION} \
            --no-allow-unauthenticated \
            --service-account=vault-server-sa@${PROJECT_ID}.iam.gserviceaccount.com \
            --update-secrets=/vault/credentials/gcp-vault-agent-sa.json=VAULT_AGENT_SA:latest \
            --memory=1024Mi \
            --cpu 1 \
            --min-instances=0 \
            --max-instances=3 \
            --timeout=3600s \
            --cpu-boost \
            --port=8080 \
            --ingress=all \
            --execution-environment=gen2

      - name: Create a release
        uses: actions/create-release@v1
        env:
          GITHUB_TOKEN: ${{ secrets.RELEASE_TOKEN }}
        with:
          tag_name: ${{ env.NEXT_VERSION }}
          release_name: Version ${{ env.NEXT_VERSION }}
          body: Release notes for vault version ${{ env.NEXT_VERSION }}
This workflow does the following:
- Authenticates to Google Cloud using Workload Identity Federation
- Retrieves the latest release version and increments it
- Builds the Vault Docker image
- Pushes the image to Google Artifact Registry
- Deploys the image to Cloud Run
- Creates a new GitHub release
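The tag-increment step above can be sketched (and tested) in isolation as pure shell, independent of the GitHub API call. This is a condensed form of the logic in the get_release_info step, not a drop-in replacement for it:

```shell
# Bump the patch component of a semver-ish release tag and strip any
# leading "v", mirroring the awk/sed pipeline in the workflow.
bump_patch() {
  tag="${1#v}"          # drop a leading "v" if present
  major="${tag%%.*}"    # first component
  rest="${tag#*.}"      # everything after the first dot
  minor="${rest%%.*}"   # second component
  patch="${rest#*.}"    # third component
  patch=$((patch + 1))
  echo "${major}.${minor}.${patch}"
}
```

For example, `bump_patch v1.2.3` prints `1.2.4`, which is what the workflow writes into NEXT_VERSION.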
Note: To use this workflow, you'll need to set up Workload Identity Federation and add the required secrets to your GitHub repository.
Deploying to Cloud Run Manually
If you're not using CI/CD, you can deploy to Cloud Run manually:
gcloud run deploy ${CLOUD_RUN_SERVICE_NAME} \
  --image "${GCP_LOCATION}-docker.pkg.dev/${PROJECT_ID}/${GCP_ARTIFACT_REGISTRY_NAME}/${DOCKER_IMAGE}:latest" \
  --platform managed \
  --project ${PROJECT_ID} \
  --region ${GCP_LOCATION} \
  --no-allow-unauthenticated \
  --service-account=vault-server-sa@${PROJECT_ID}.iam.gserviceaccount.com \
  --update-secrets=/vault/credentials/gcp-vault-agent-sa.json=VAULT_AGENT_SA:latest \
  --memory=1024Mi \
  --cpu 1 \
  --min-instances=0 \
  --max-instances=3 \
  --timeout=3600s \
  --cpu-boost \
  --port=8080 \
  --ingress=all \
  --execution-environment=gen2
Security Note: In a production environment, keep --no-allow-unauthenticated so that requests to your Vault server must be authenticated, rather than deploying with --allow-unauthenticated. Consider setting up Identity-Aware Proxy (IAP) or another authentication mechanism in front of the service.
Setting Up and Using the Vault CLI
Install the Vault CLI locally to interact with your deployed Vault server:
# For macOS
brew tap hashicorp/tap
brew install hashicorp/tap/vault
# Verify installation
vault version
# Configure CLI to talk to your Vault server
export VAULT_ADDR="https://vault.yourdomain.com"
# Check Vault status
vault status
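Once the CLI can reach the server, the seal state can be scripted, for example as a post-deploy health check. A small sketch that parses the plain-text output of `vault status` (field names as printed by Vault 1.x):

```shell
# Print "true" or "false" depending on whether the `vault status`
# output on stdin reports the server as sealed.
is_sealed() {
  awk '$1 == "Sealed" { print $2 }'
}

# Typical use against a live server (VAULT_ADDR must already be set):
#   vault status | is_sealed
```

With GCP KMS auto-unsealing working, this should print `false` shortly after every restart without any manual unseal step.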
Migrating from Shamir Keys to GCP KMS Auto-Unsealing
If you're migrating an existing Vault installation from Shamir key shares to GCP KMS auto-unsealing, follow these steps:
1. Update your Vault configuration to include the gcpckms seal stanza
2. Restart Vault
3. Unseal Vault with the -migrate flag:
# Provide three of your five Shamir unseal keys with the -migrate flag
vault operator unseal -migrate <UNSEAL_KEY_1>
vault operator unseal -migrate <UNSEAL_KEY_2>
vault operator unseal -migrate <UNSEAL_KEY_3>
# Verify the migration was successful
vault status
After successful migration, you should see output similar to:
Key                      Value
---                      -----
Seal Type                gcpckms
Recovery Seal Type       shamir
Initialized              true
Sealed                   false
Total Recovery Shares    5
Threshold                3
Version                  1.19.0
Build Date               2025-03-04T12:36:40Z
Storage Type             gcs
Cluster Name             vault-cluster-xxxxxx
Cluster ID               sssss-5a16sc405-89c1-s333333ffffff
HA Enabled               true
Note that the Shamir keys are now recovery keys for use in emergency situations.
Setting Up Authentication Methods (Optional)
After your Vault is up and running with auto-unsealing, you might want to configure authentication methods:
OIDC Authentication
# Enable OIDC auth method
vault auth enable oidc
# Configure OIDC
vault write auth/oidc/config \
oidc_discovery_url="https://oidc.yourdomain.com/realms/vault" \
oidc_client_id="vault-client" \
oidc_client_secret="your-client-secret" \
default_role="reader"
AppRole Authentication for Applications
# Enable AppRole auth method
vault auth enable approle
# Create a policy for the Vault agent
vault policy write vault-agent-policy - <<EOF
path "secret/data/*" {
capabilities = ["read"]
}
EOF
# Create an AppRole with the policy attached
vault write auth/approle/role/vault-agent \
token_policies="vault-agent-policy" \
token_ttl=1h \
token_max_ttl=2h \
secret_id_ttl=1h \
bind_secret_id=true
# Get Role ID
vault read auth/approle/role/vault-agent/role-id
# Generate a Secret ID
vault write -f auth/approle/role/vault-agent/secret-id
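An application then logs in by exchanging the role ID and secret ID for a token. A minimal sketch for the vault-agent role above; the extract_field helper is a hypothetical, dependency-free stand-in for jq when parsing the CLI's -format=json output:

```shell
# Pull a single string field (e.g. role_id, secret_id) out of
# single-line JSON on stdin. Prefer jq in real scripts if available.
extract_field() {
  sed -n "s/.*\"$1\"[[:space:]]*:[[:space:]]*\"\([^\"]*\)\".*/\1/p"
}

# Against a live server this would look like:
#   ROLE_ID=$(vault read -format=json auth/approle/role/vault-agent/role-id | extract_field role_id)
#   SECRET_ID=$(vault write -f -format=json auth/approle/role/vault-agent/secret-id | extract_field secret_id)
#   vault write auth/approle/login role_id="$ROLE_ID" secret_id="$SECRET_ID"
```

The login call returns a token carrying the vault-agent-policy, which the application uses for subsequent reads under secret/data/*.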
Conclusion
By configuring HashiCorp Vault with Google Cloud KMS for auto-unsealing and deploying it to Cloud Run, we've created a serverless, fully managed secrets management solution that automatically unseals itself after restarts. The CI/CD pipeline with GitHub Actions further enhances this setup by automating the deployment process, making version management easy and repeatable, and our use of GCP Secret Manager ensures that sensitive initialization data is securely stored.
This approach eliminates the operational burden of manual unsealing while maintaining the security benefits of Vault's seal mechanism. The combination of GCP KMS for auto-unsealing, Secret Manager for secure storage, Cloud Run for deployment, and GitHub Actions for CI/CD provides a scalable, resilient, and cost-effective secrets management solution that can grow with your organization's needs.
Next Steps
Set up proper TLS certificates for your Vault instance
Configure additional authentication methods as needed
Implement audit logging
Enhance your CI/CD pipeline with testing
Implement backup and disaster recovery procedures
Consider setting up Vault HA for higher availability
Resources
This article represents a practical implementation based on real-world experience deploying HashiCorp Vault on Google Cloud Platform. While this setup works well for many use cases, always assess your specific security requirements before implementing any secrets management solution in production.
Written by Merlin Saha
Specialising in Cloud Architecture and Application Modernisation, Merlin Saha is a Cloud Solutions Architect and DevSecOps Specialist who helps organizations build scalable, secure, and sustainable infrastructure. With six years of specialized experience in highly regulated industries—split equally between insurance and finance—he brings deep understanding of compliance requirements and industry-specific challenges to his technical implementations. His expertise spans various deployment models including Container-as-a-Service (CaaS), Infrastructure-as-a-Service (IaaS), and serverless platforms that drive business outcomes through technical excellence. He strategically implements open source technologies, particularly when SaaS solutions fall short or when greater control and autonomy are essential to meeting business requirements. Saha integrates DevSecOps practices, Green IT principles to minimize environmental impact, and Generative AI to accelerate innovation. With a solid foundation in Software Engineering and nine years of diverse industry experience, he designs cloud-native solutions that align with both industry standards and emerging technological trends.