Debugging Latency Inside a Pod with curl

Muskan Agrawal

A while ago, I was helping troubleshoot a web service running in Kubernetes. Everything looked fine at the pod level: CPU and memory were stable and the logs weren’t screaming, yet customers kept complaining about slow responses. At first glance, it felt like “something inside the cluster,” but not the pod itself. That’s when I suspected the usual suspects: downstream systems.

Just because your pod is running doesn’t mean it’s the one guilty of slowness. Often, the pod is simply a messenger stuck waiting for something else to respond. One of the simplest ways to catch this is with curl.

Here’s how I use curl from inside a pod to figure out whether external dependencies, like a database or another service, are causing the latency.


Step 1: Exec into the pod

First, get into the pod that is making the downstream call:

kubectl exec -it <pod-name> -- /bin/sh
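
If the pod’s image is so minimal that it has no shell (or no curl), an ephemeral debug container can stand in for it. This is a rough sketch, assuming your cluster supports kubectl debug and that pulling the curlimages/curl image is acceptable in your environment; the container name is a placeholder:

kubectl debug -it <pod-name> --image=curlimages/curl --target=<container-name> -- sh

The debug container runs inside the same pod, so it shares the pod’s network and sees exactly what your application sees.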

Step 2: Create a format file

Inside the pod, I like to prepare a small file where curl will log how long each stage of the connection takes. Create a file named test.txt and paste this:

     time_namelookup:  %{time_namelookup}s\n
        time_connect:  %{time_connect}s\n
     time_appconnect:  %{time_appconnect}s\n
    time_pretransfer:  %{time_pretransfer}s\n
       time_redirect:  %{time_redirect}s\n
  time_starttransfer:  %{time_starttransfer}s\n
                     ----------\n
          time_total:  %{time_total}s\n
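
One convenient way to create that file inside the pod, assuming the image ships a POSIX shell, is a quoted heredoc; quoting EOF keeps the %{...} placeholders and the \n sequences literal, which is exactly what curl expects:

cat > test.txt <<'EOF'
     time_namelookup:  %{time_namelookup}s\n
        time_connect:  %{time_connect}s\n
     time_appconnect:  %{time_appconnect}s\n
    time_pretransfer:  %{time_pretransfer}s\n
       time_redirect:  %{time_redirect}s\n
  time_starttransfer:  %{time_starttransfer}s\n
                     ----------\n
          time_total:  %{time_total}s\n
EOF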

What these mean in simple terms:

  • time_namelookup: DNS resolution time

  • time_connect: TCP connection time

  • time_appconnect: TLS handshake

  • time_starttransfer: Time until the first byte of response

  • time_total: End-to-end time including waiting for data


Step 3: Run the curl test

Now, just run curl against the dependent service:

curl -w "@test.txt" -o /dev/null -s https://<endpoint>:<port>

Here’s what happens:

  • -w "@test.txt" tells curl to print timings based on the format file

  • -o /dev/null drops the response body (we only care about timings)

  • -s keeps output quiet except for what we format
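
A single sample can be misleading (a cold DNS cache, a one-off pause downstream, and so on), so I sometimes run the same test a few times in a row to see whether the latency is consistent or spiky. A small sketch, with the endpoint left as a placeholder:

for i in 1 2 3 4 5; do
  curl -w "@test.txt" -o /dev/null -s https://<endpoint>:<port>
  echo "----"
done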


Step 4: Interpret the results

When you run this, you’ll get output like:

     time_namelookup:  0.001256s
        time_connect:  0.015244s
     time_appconnect:  0.045678s
    time_pretransfer:  0.045901s
  time_starttransfer:  0.350111s
                     ----------
          time_total:  0.351879s

If time_connect or time_appconnect looks heavy, the network or the TLS handshake is hurting you. If time_starttransfer is large while the others stay small, it usually means the downstream system (like the DB or a third-party API) is slow to process the request.
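
If you only care about one or two of these numbers, you can also skip the format file and pass the variables inline. For example, this one-liner (my own variation, not part of the original trick) prints just the connect time, time to first byte, and total time:

curl -o /dev/null -s \
  -w 'connect: %{time_connect}s  ttfb: %{time_starttransfer}s  total: %{time_total}s\n' \
  https://<endpoint>:<port>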


Why I love this trick

It’s quick, requires no extra tools, and works from within the pod’s own network context. That last part is key: testing from your laptop might not reflect what the pod actually experiences inside the cluster. By curling directly from the pod, you measure real dependencies under real conditions.

The next time an app feels slow, and you’re not sure if the pod itself is at fault, curl -w might just help you find the culprit in minutes.

💡 Pro tip: I often save a pre-written format file as a ConfigMap and mount it into debug pods, so I don’t have to keep typing this out.
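
As a rough sketch of that setup (the ConfigMap name and mount path below are my own placeholders, not anything standard):

# Create the ConfigMap once from the format file
kubectl create configmap curl-format --from-file=curl-format.txt=./test.txt

# Mount it into the debug pod (e.g. at /etc/curl-format) via the pod spec,
# then reference the mounted file directly:
curl -w "@/etc/curl-format/curl-format.txt" -o /dev/null -s https://<endpoint>:<port>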

Credit: I first came across this curl -w trick for detailed timing breakdowns in Joseph Scott’s blog post, Timing Details With cURL. It’s a gem worth bookmarking.
