🧠 Solving Kubernetes OOM Crashes in Go Apps Without Changing Code or Limits

Kaushal Kishore · 3 min read

Running containerized applications in Kubernetes is great — until they start crashing with OOMKilled errors. I recently encountered one such challenge with a Go-based application, and here's how I solved it — without changing the code or bumping memory limits.


🔍 The Challenge

We had a Go HTTP server deployed on Kubernetes using a single pod (Deployment + ClusterIP service). It served incoming requests under constant load (4 concurrent clients).

Here's the catch:

  • Each request consumed ~100MB of memory

  • Memory limit set by the ops team: 512MB

  • The app started crashing after a few requests (~5–6)

Despite the math saying it should work (4 × 100MB = 400MB < 512MB), the pod kept crashing with OOMKilled errors.
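The lab's source isn't shown here, but a handler along these lines reproduces the symptom (a minimal sketch; the exact allocation pattern is my assumption based on the behavior described): each request allocates a ~100MB buffer that becomes garbage as soon as the handler returns.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Allocate ~100MB per request and touch every byte so the
		// pages are actually committed, not just reserved.
		buf := make([]byte, 100<<20)
		for i := range buf {
			buf[i] = 1
		}
		fmt.Fprintf(w, "allocated %d bytes\n", len(buf))
		// buf is garbage once the handler returns, but the runtime
		// may keep the pages instead of returning them to the OS.
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```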


🧠 What Was Going Wrong?

It turns out Go's garbage collector doesn't release freed memory back to the OS immediately. The runtime keeps reclaimed pages around for reuse and returns them to the kernel only gradually. So even after a request finishes, the container's resident memory keeps climbing under load until it crosses the cgroup limit and the kernel OOM-kills the pod.

Go's default GC behavior isn't aware of your container's memory constraints — unless you tell it.
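You can observe this retention effect from inside any Go process. Here's a small self-contained sketch (not from the lab): allocate and drop a big buffer, force a GC cycle, then compare HeapIdle with HeapReleased from runtime.MemStats. The gap between them is memory Go is holding on to that the kernel still charges against your container.

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Allocate ~100MB, touch it so it's committed, then drop it.
	buf := make([]byte, 100<<20)
	for i := range buf {
		buf[i] = 1
	}
	buf = nil
	runtime.GC() // the buffer is now collected

	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	// HeapIdle minus HeapReleased is memory the runtime keeps for
	// reuse even though no live objects occupy it. From the kernel's
	// point of view, the container is still using that memory.
	fmt.Printf("HeapInuse=%d MiB HeapIdle=%d MiB HeapReleased=%d MiB\n",
		m.HeapInuse>>20, m.HeapIdle>>20, m.HeapReleased>>20)
}
```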


✅ The Solution: Set GOMEMLIMIT

Introduced in Go 1.19, the GOMEMLIMIT environment variable lets you set a soft limit on the total memory the Go runtime may use (heap plus runtime structures). As usage approaches that limit, the garbage collector runs more aggressively, keeping memory usage well within the pod's limit.

We added it like this:

```bash
kubectl set env deployment/memhog GOMEMLIMIT=350000000
```

This tells the Go runtime to keep its total memory usage around 350MB, triggering GC more aggressively as usage approaches that value and leaving headroom for goroutine stacks, runtime overhead, buffers, and OS-level usage, all within the 512MB container limit. (GOMEMLIMIT also accepts unit suffixes, so GOMEMLIMIT=350MiB works too.)
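For completeness: if changing code had been an option, the same knob is exposed programmatically via runtime/debug.SetMemoryLimit. A sketch of that route below, though the env var is the point of this post since it needs no rebuild:

```go
package main

import "runtime/debug"

func main() {
	// In-code equivalent of GOMEMLIMIT=350000000 (Go 1.19+): a soft
	// limit, in bytes, on the Go runtime's total memory use.
	debug.SetMemoryLimit(350_000_000)

	// ... start the HTTP server as usual ...
}
```

SetMemoryLimit returns the previously set limit, so the value can also be inspected or adjusted at runtime.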


📈 Results

After applying this fix:

  • 🚫 No more OOMKilled crashes

  • 🟢 Successfully handled 100+ requests under concurrent load

  • 🔁 No app code changes or limit adjustments

  • 🧹 Go GC kept memory usage under control

Tested using:

```bash
kubectl port-forward svc/memhog-service 8080:80
hey -n 100 -c 4 http://localhost:8080
```

✨ Takeaways

✅ Go applications need explicit GC tuning in Kubernetes
✅ Use GOMEMLIMIT to align GC behavior with container memory limits
✅ Resource-limited environments require more than just code — they require understanding of the runtime
✅ Kubernetes + Go is a powerful combo, but needs observability and fine-tuning


🙏 Thanks to iximiuz Labs & Ivan Velichko

Huge shoutout to iximiuz Labs and Ivan Velichko (@iximiuz) for designing such a brilliant Kubernetes challenge lab. It really pushed me to think like a production-ready engineer!


💬 Got a Different Solution?

Have you tackled a similar issue with Go or another language in Kubernetes? Did you use sidecars, NGINX reverse proxies, or memory requests tuning?

I’d love to hear your take! Drop it in the comments or tag me on LinkedIn.


🔖 Tags & Hashtags

#GoLang #Kubernetes #DevOps #Containers #GOMEMLIMIT #MemoryManagement #CloudNative #K8sTips #IximiuzLabs #IvanVelichko #TechBlog #Hashnode #ProductionDebugging #OOMKilled #GarbageCollector #Linux #EngineeringChallenges

