Install Velero on an EKS cluster using ArgoCD
Hi All,
Today I am going to write a small article about installing Velero on an EKS cluster using ArgoCD.
Prerequisites:
- ArgoCD should be installed on the EKS cluster
- Terraform should be set up to create resources on AWS
What is Velero?
It is a tool used to back up and restore your Kubernetes cluster resources and persistent volumes. You can also create backups on a schedule and garbage-collect old backups.
Installation Part:
- First, we are going to create an S3 bucket and an IRSA role for Velero's access using Terraform:
Go to the repository you use for creating AWS resources with Terraform and create a new file: velero_s3.tf
resource "aws_s3_bucket" "velero" {
bucket = "${var.environment}-test-velero"
tags = {
Name = "${var.environment}-test-velero"
Environment = var.environment
}
}
resource "aws_s3_bucket_public_access_block" "velero" {
bucket = aws_s3_bucket.velero.id
block_public_acls = true
block_public_policy = true
ignore_public_acls = true
restrict_public_buckets = true
}
# We are using the IRSA module to attach the required permissions to the role; later this role
# will be assumed by the "velero" service account in the k8s cluster.
# https://github.com/terraform-aws-modules/terraform-aws-iam/blob/master/examples/iam-role-for-service-accounts-eks/main.tf
module "velero_irsa_role" {
source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
role_name = "${var.environment}-velero"
attach_velero_policy = true
velero_s3_bucket_arns = ["arn:aws:s3:::${var.environment}-test-velero"]
oidc_providers = {
main = {
provider_arn = local.kube_info["oidc_provider_arn"]
namespace_service_accounts = ["velero:velero"]
}
}
tags = {
name = "velero"
environment = "${var.environment}"
}
}
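With velero_s3.tf in place, you can sanity-check the result with the usual Terraform workflow and a couple of AWS CLI calls. This is just a quick verification sketch; it assumes your environment resolves to "development", so adjust the names accordingly:
# Create the bucket and the IRSA role
terraform init
terraform plan
terraform apply
# Confirm the role and the bucket exist
aws iam get-role --role-name development-velero
aws s3api head-bucket --bucket development-test-velero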
Now that we have the S3 bucket and IAM role in place, we will move forward and install the Velero Helm chart on our cluster.
Here, I am installing the Helm chart using ArgoCD in the k8s cluster; you can use your own way to install the Helm chart in your cluster.
First, I will create an ApplicationSet in the GitHub repo that ArgoCD watches:
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: velero-stack
spec:
  syncPolicy:
    preserveResourcesOnDeletion: true
  generators:
    - clusters:
        selector:
          matchExpressions:
            - key: environment
              operator: In
              values:
                - "development"
  template:
    metadata:
      name: '{{"{{"}}name{{"}}"}}-velero'
      namespace: argocd
    spec:
      destination:
        name: '{{"{{"}}name{{"}}"}}'
        namespace: velero
      project: default
      source:
        # used for defining the velero helm chart configuration
        path: apps/velero
        # Change repoURL to your ArgoCD repo
        repoURL: https://github.com/your-org/your-repo.git
        targetRevision: HEAD
        helm:
          # Pick the values file based on the cluster name:
          valueFiles:
            - 'values-{{"{{"}}name{{"}}"}}.yaml'
          # Annotate the velero service account with the IAM role
          values: |
            velero:
              serviceAccount:
                server:
                  annotations:
                    eks.amazonaws.com/role-arn: arn:aws:iam::{{"{{"}}metadata.labels.aws-account-id{{"}}"}}:role/{{"{{"}}name{{"}}"}}-velero
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
To understand the above file, you should have prior knowledge of ArgoCD; here we are using an ApplicationSet to deploy the ArgoCD Application into different clusters.
Refer to this link to learn more about ArgoCD ApplicationSets: Appset
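Once the ApplicationSet is committed to the repo ArgoCD watches, you can check that it was expanded into per-cluster Applications. A quick check (the Application will not become healthy until the chart files from the next step exist):
kubectl get applicationsets -n argocd
kubectl get applications -n argocd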
Now go to the apps/velero folder in your GitHub repo and create the files below:
Chart.yaml
apiVersion: v2
name: velero-umbrella
version: 0.1.0
dependencies:
  - name: velero
    version: 4.1.3
    repository: https://vmware-tanzu.github.io/helm-charts/
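ArgoCD resolves this chart dependency on its own, but if you want to render the umbrella chart locally first, a rough sketch (assuming the apps/velero path used in this article) is:
helm dependency update apps/velero
helm template velero apps/velero --namespace velero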
Then, we will create a values.yaml file to define the Helm values.
values.yaml
velero:
  configuration:
    backupStorageLocation:
      - name: dev-k8s-velero
        provider: aws
        bucket: development-test-velero
        prefix: development
        default: true
        config:
          region: us-east-1
    volumeSnapshotLocation:
      - name: dev-k8s-velero-snapshot
        provider: aws
        config:
          region: us-east-1
  # same service account which we created in the IRSA step above
  serviceAccount:
    server:
      name: velero
  credentials:
    # set this to false because we are using IRSA-based access
    useSecret: false
  # Required so the Velero pod has the correct filesystem permissions
  # to read the projected web identity token when using IRSA
  podSecurityContext:
    fsGroup: 65534
  initContainers:
    - name: velero-plugin-for-aws
      image: velero/velero-plugin-for-aws:v1.7.0
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - mountPath: /target
          name: plugins
In the above values.yaml file we have defined the backup storage location and volume snapshot location, which are used to store your manifests and volume snapshots; we also use the Velero AWS plugin to support the AWS cloud.
This should be enough to install it using ArgoCD.
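After ArgoCD syncs the application, you can verify that Velero is running and that it can reach the backup storage location (this assumes you have the velero CLI installed and your kubeconfig pointed at the same cluster):
kubectl get pods -n velero
velero version
velero backup-location get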
Installation without using ArgoCD:
Go to your terminal and run the command below; you can use the same values.yaml file that we defined above.
helm install my-velero vmware-tanzu/velero --version 4.1.3 -f values.yaml
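If the vmware-tanzu Helm repository is not already added to your local Helm client, add it first; the URL is the same one referenced in Chart.yaml above:
helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts
helm repo update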
This is a basic installation; you can check the Velero Helm chart for other configuration options, such as using restic, etc.
===============================================================
Basic Details about Velero Usage:
## Backup Storage Location (BSL)
It is the location where Velero stores your backups. One BSL needs to be set as the default per cluster.
To list all BSLs:
velero backup-location get
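Recent Velero releases (1.8 and later, which the chart version above ships) also let you mark a BSL as the default from the CLI; using the location name from our values.yaml, that would look roughly like:
velero backup-location set dev-k8s-velero --default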
## Volume Snapshot Location (VSL)
It is the location where Velero creates snapshots of your PVs; it is defined entirely by provider-specific fields (AWS region, Azure resource group, etc.).
To list all VSLs:
velero snapshot-location get
## How to take backups of your service
- Back up a namespace and its objects
velero backup create my-backup --include-namespaces my-ns
- Include only resources matching a label selector
velero backup create my-backup --selector mytestlabels=true
- Detailed info about your backup
velero backup describe my-backup --details
- Exclude resources from backup
# Use this label on your resources to exclude them from the backup even if they match the selector
velero.io/exclude-from-backup=true
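For example, to exclude a specific object from backups (hypothetical Secret my-secret in namespace my-ns):
kubectl label secret my-secret -n my-ns velero.io/exclude-from-backup=true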
## How to restore a backup of your service
- Restore Backup
velero restore create my-restore --from-backup my-backup
- Describe Restore
velero restore describe my-restore --details
- Restoring into a different namespace
velero restore create my-restore --from-backup my-backup --namespace-mapping old-ns-1:new-ns-1
## Create a schedule for taking backups periodically
Run the command below and you will get the complete YAML for your needs; you can then keep this file alongside your service's Helm chart or in your IaC repo.
velero schedule create my-service-schedule --schedule="0 3 * * *" --selector mytest=true --include-namespaces my-ns --ttl 200h -o yaml
apiVersion: velero.io/v1
kind: Schedule
metadata:
  creationTimestamp: null
  name: my-service-schedule
  namespace: velero
spec:
  schedule: 0 3 * * *
  template:
    csiSnapshotTimeout: 0s
    hooks: {}
    includedNamespaces:
      - my-ns
    itemOperationTimeout: 0s
    labelSelector:
      matchLabels:
        mytest: "true"
    metadata: {}
    ttl: 200h0m0s
    useOwnerReferencesInBackup: false
status: {}
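If you save this output to a file and manage it via your Helm chart or IaC repo, applying it and watching the backups it produces looks like this (the file name is just whatever you saved the YAML as):
kubectl apply -f my-service-schedule.yaml
velero schedule get
velero backup get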
## Change storage class on Velero restore
- You can also change the storage class of a PV when restoring a Velero backup of the snapshot; you just need to define the ConfigMap below in your velero namespace:
apiVersion: v1
kind: ConfigMap
metadata:
  # any name can be used; Velero uses the labels (below)
  # to identify it rather than the name
  name: change-storage-class-config
  # must be in the velero namespace
  namespace: velero
  # the below labels should be used verbatim in your
  # ConfigMap.
  labels:
    # this value-less label identifies the ConfigMap as
    # config for a plugin (i.e. the built-in change storage
    # class restore item action plugin)
    velero.io/plugin-config: ""
    # this label identifies the name and kind of plugin
    # that this ConfigMap is for.
    velero.io/change-storage-class: RestoreItemAction
data:
  # add 1+ key-value pairs here, where the key is the old
  # storage class name and the value is the new storage
  # class name.
  gp2: gp3
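Apply the ConfigMap into the velero namespace and Velero's built-in change-storage-class plugin will pick it up automatically on the next restore; no extra flags are needed (the file name below is whatever you saved it as):
kubectl apply -f change-storage-class-config.yaml
velero restore create my-restore --from-backup my-backup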
That's all. Cheers!!