The Helm Gotcha that broke my pods

Muskan Agrawal

The Problem Statement

A few days back I hit a really head-scratching issue while deploying an app with Helm. At first glance, everything looked fine. The chart deployed without errors and two pod replicas spun up as expected. But here’s where it got weird:

  • The first pod always failed right away

  • The second pod, which was just a replica, went into Running every time without a problem

  • If I manually deleted the failing pod, the replacement would run cleanly

That pretty much confirmed my configuration and secrets were all correct. So why was only one pod cursed?

Here’s the error from the failing pod:

MountVolume.SetUp failed for volume "mysecrets" : rpc error: code = Unknown desc = failed to get secretproviderclass namespace/secret-provider-class, error: SecretProviderClass.secrets-store.csi.x-k8s.io "secret-provider-class" not found

The cluster was telling me that my SecretProviderClass didn’t exist. But I knew it was there, because when I deployed the resources manually with kubectl apply, everything worked perfectly.
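A quick way to confirm that for yourself is to ask the cluster directly and compare against the failing pod's events. The namespace and pod name below are placeholders, not values from my setup:

```shell
# Does the SecretProviderClass actually exist in the pod's namespace?
kubectl get secretproviderclass secret-provider-class -n <namespace>

# What is the failing pod reporting? Look for the MountVolume.SetUp event.
kubectl describe pod <failing-pod-name> -n <namespace>
```

If the first command finds the resource while the pod's events still show the "not found" error, you're looking at a timing problem, not a missing resource.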

That narrowed it down for me: the problem wasn’t the app or the secret setup. It had to be Helm itself.


The Dry Run That Solved It

Whenever I’m suspicious of Helm’s ordering logic, I reach for the --dry-run flag. Sure enough, running:

helm install myapp ./chart --dry-run --debug

exposed the real problem.

Helm doesn’t apply your resources in the order you might expect. It sorts every manifest in the release by kind, following a hardcoded install order (Namespaces first, then ServiceAccounts, Secrets, ConfigMaps, and so on, with workloads like Deployments near the end), and kinds it doesn’t recognize, including custom resources like SecretProviderClass, tend to land at the very end of that list. Helm also doesn’t wait for one resource to settle before applying the next. So in my case, the very first pod scheduled would try to mount the secret before the SecretProviderClass even existed, and boom, it failed.

The second pod never hit this race condition because by the time it was scheduled, the SecretProviderClass had already been applied.
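You can inspect the order Helm intends without touching the cluster at all. A rough check (assuming each rendered manifest's kind: line starts at column 0, which is the norm for charts, and that the rendered order reflects the kind-sorted install order):

```shell
# Render the chart as Helm would install it and list resource kinds in order.
# Custom kinds like SecretProviderClass tend to appear at the end.
helm template myapp ./chart | grep '^kind:'
```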


The Fix

The solution was to tell Helm that my SecretProviderClass needed to be created before any pods came up. Helm supports this through hooks.

Adding the following annotation to the SecretProviderClass manifest tells Helm to treat it as a pre-install hook:

annotations:
  "helm.sh/hook": "pre-install"

This ensures the custom resource is applied before the rest of the release (including pods) gets deployed. After making this change, both replicas moved into Running every time without any errors.
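For reference, here is roughly what the annotated manifest looks like. The spec fields are placeholders for whatever your real SecretProviderClass contains, and the hook-weight annotation is an optional addition I'd consider, not something the original fix required:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: secret-provider-class
  annotations:
    # Create this resource before the rest of the release is applied.
    "helm.sh/hook": "pre-install"
    # Optional: lower weights run first if you add more hooks later.
    "helm.sh/hook-weight": "-5"
spec:
  provider: azure        # placeholder: your actual secrets-store provider
  parameters: {}         # placeholder: your provider's parameters
```

Two caveats worth knowing: a pre-install hook only fires on helm install, so if the resource can change between releases, "helm.sh/hook": "pre-install,pre-upgrade" keeps it in sync on upgrades too. And hook resources are not tracked as part of the release, so helm uninstall won't delete them on its own.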


Key Takeaway

The lesson here is that Helm doesn’t guarantee resource ordering in the way you might assume. If your pods depend on a custom resource like SecretProviderClass, you need to be explicit about when it gets installed.

In my case, the mystery of the “cursed first pod” had nothing to do with Kubernetes misbehaving; it was Helm quietly installing things out of order.


Written by

Muskan Agrawal

Cloud and DevOps professional with a passion for automation, containers, and cloud-native practices, committed to sharing lessons from the trenches while always seeking new challenges. Combining hands-on expertise with an open mind, I write to demystify the complexities of DevOps and grow alongside the tech community.