OpenHands: The Flawless Open-Source AI Coding Companion


Boy oh boy, where do I start? This week has been overwhelming for me. I celebrated my 29th birthday; every new morning is a blessing. Grok 3 got released and it keeps knocking my socks off. The best model to date: it has topped the LMArena leaderboard, and rightly so. I never imagined that x.ai would hit it out of the park. I have also been tinkering with OpenHands all week and solved an interesting use case. Let's dive deeper into it.
A happy accident
I started watching a YouTube video from sentdex about his experience using OpenHands, and I was intrigued. Having followed the Devin saga and its ridiculous $500/month price, I was excited to try OpenHands, since it solves the same use case: using a coding agent to accelerate your dev work. I tried it, and it is delightfully good at doing what you ask of it. If you can articulate clearly what needs to be done (as is the case with every LLM), it can do it for you.
OpenHands in action
As you can see from the video, it can get a lot of things done. We can choose whichever LLM we want to solve the tasks. I am using gemini-2.0-flash-exp because it is free and I will not get rate limited; the ideal candidate would be Claude 3.5 Sonnet. The agent is only as good as the LLM behind it. You can create a GitHub bot, provide its token here, and ask OpenHands to raise PRs so that you can review them later. OpenHands would be an excellent sidekick for every dev out there. As a DevOps engineer, I would do anything to alleviate their pain points (poor devs) by equipping them with the best possible tooling at the least additional cost.
Reading tea leaves
Okay, my hunch is that in the future, open-source models will be as good as closed-source ones, like R1 versus o1. And every cloud provider out there will charge more for inference on hosted models (both open and closed source) than self-hosting would cost. Right now every company is experimenting with these models as a base layer and building applications on top of them, while cloud providers offer foundational models for dirt cheap. Once the applications reach scale and cloud providers raise their prices, that's when companies will realize that self-hosting models would have been the better option, and they will end up doing significant code changes and rewiring. If companies explore self-hosting options from the get-go, they can save a ton in the coming years.
| Model | Hosting | Implication | Example |
| --- | --- | --- | --- |
| Closed-source model | Only hosted on their hardware | No other choice but to use them | Think Grok 3 |
| Closed-source model | Neutral cloud providers can host | You have limited choices | Think Claude Sonnet |
| Open-source model | Neutral cloud providers offering it as a service | Here you should think about self-hosting it on a Kubernetes cluster in the cloud | Think Llama 3.3 on Groq |
| Open-source model | Self-hosting on cloud | More control and freedom of choice | Think LLMariner or KubeRay |
| Open-source model | Own hardware | C'mon, let's be practical | |
We are in the subscription era, both as consumers and as businesses. Imagine you run a company where you chose Datadog and Splunk over OTel/Prometheus, the ELK stack, and Grafana. During the upward trajectory everything looks fine. What if your business plateaus and you are looking to cut costs? What if you are on a downward trajectory and have to cut costs? You cannot hire devs at that point to rewrite apps just to save money. You should not grab leaves after your hands are burnt (a poorly translated idiom from Telugu).
I know, I digress a lot. Let's get back to OpenHands.
My use-case
Currently you can run OpenHands locally using a docker run command:
sudo docker run -it --rm --pull=always \
-e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.24-nikolaik \
-e LOG_ALL_EVENTS=true \
-v /var/run/docker.sock:/var/run/docker.sock \
-v ~/.openhands-state:/.openhands-state \
-p 3000:3000 \
--add-host host.docker.internal:host-gateway \
--name openhands-app \
docker.all-hands.dev/all-hands-ai/openhands:0.24
But what if I wanted to run it as a deployment in my Kubernetes cluster and serve developers from there? If I run models in the same cluster using LLMariner (an excellent open-source solution for self-hosting, fine-tuning, and training GenAI models; please do check it out), it would cut my latency by a lot. So I filed an issue on the OpenHands GitHub page requesting a pod definition file, and here is the response I got from the maintainers:
Hi @HighonAces , we don't have an open source version of deploying OpenHands on a Kubernetes cluster, but at All Hands we have a (paid) solution for deploying OpenHands to larger teams. If you'd be interested in having us help you deploy to a team please jump on the OpenHands slack and ping me and Rob*** and we could discuss more.
Honestly, I have no qualms about it. They developed a product, the developers from All Hands have likely contributed a ton, and they have every right to steer the OpenHands project as they see fit.
The kubehustle
So I took things into my own hands. The challenging thing about deploying OpenHands on K8s is that every time you initiate an agent session in the browser, it creates a new Docker container to serve as the agent. This differs from typical use cases. It also directly mounts /var/run/docker.sock into the container, which is a strict no-go from a security perspective. It took me multiple attempts to get it right, and here is the pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: openhands-app-v2 # Changed to avoid conflict
spec:
  volumes:
    - name: docker-socket
      hostPath:
        path: /var/run/docker.sock
        type: Socket
    - name: openhands-state
      persistentVolumeClaim:
        claimName: openhands-state-pvc
  securityContext:
    fsGroup: 42420
  containers:
    - name: openhands-app
      image: docker.all-hands.dev/all-hands-ai/openhands:0.24
      imagePullPolicy: Always
      securityContext:
        privileged: true
      ports:
        - containerPort: 3000
      env:
        - name: SANDBOX_RUNTIME_CONTAINER_IMAGE
          value: "docker.all-hands.dev/all-hands-ai/runtime:0.24-nikolaik"
        - name: LOG_ALL_EVENTS
          value: "true"
        - name: SANDBOX_HOST
          value: "172.17.0.1" # Replace with your host's Docker bridge IP
        - name: SANDBOX_PORT
          value: "32315" # Explicitly set the port
      volumeMounts:
        - name: docker-socket
          mountPath: /var/run/docker.sock
        - name: openhands-state
          mountPath: /openhands-state # Adjusted to a typical path
  # Optional: use hostNetwork to simplify access
  hostNetwork: true
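The pod references a PersistentVolumeClaim named openhands-state-pvc that is not shown above; you have to create it first. A minimal sketch of the claim (the 1Gi size and the cluster's default storage class are my assumptions; adjust for your environment):

```yaml
# Hypothetical PVC backing the OpenHands state directory.
# Size and storage class are assumptions, not from the OpenHands docs.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: openhands-state-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Apply the claim before the pod, e.g. kubectl apply -f openhands-state-pvc.yaml followed by kubectl apply -f openhands-pod.yaml.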
This definition runs a single container and does not address the security concerns. If I were to run this in my company, I would run it on isolated nodes to decrease the threat surface. I was also working on another solution where we run DinD (Docker in Docker) as a sidecar container to mitigate the security risks, but it is not working yet.
apiVersion: v1
kind: Pod
metadata:
  name: openhands-app-dind
spec:
  volumes:
    - name: docker-run
      emptyDir: {} # Mounted at /var/run for the socket
    - name: openhands-state
      persistentVolumeClaim:
        claimName: openhands-state-pvc-v2 # Updated PVC name for v2
  securityContext:
    fsGroup: 42420
  hostAliases:
    - ip: "127.0.0.1"
      hostnames:
        - "host.docker.internal" # Map host.docker.internal to localhost
  containers:
    # Main application container
    - name: openhands-app
      image: docker.all-hands.dev/all-hands-ai/openhands:0.24
      imagePullPolicy: Always
      ports:
        - containerPort: 3000
      env:
        - name: SANDBOX_RUNTIME_CONTAINER_IMAGE
          value: "docker.all-hands.dev/all-hands-ai/runtime:0.24-nikolaik"
        - name: LOG_ALL_EVENTS
          value: "true"
        - name: DOCKER_HOST
          value: "unix:///var/run/docker.sock" # Use DinD's socket
        - name: SANDBOX_PORT
          value: "32315" # Match the error's port (verify if correct)
      volumeMounts:
        - name: docker-run
          mountPath: /var/run
        - name: openhands-state
          mountPath: /openhands-state
    # DinD sidecar container
    - name: dind
      image: docker:dind
      securityContext:
        privileged: true
      args:
        - "--host=unix:///var/run/docker.sock" # Disable TCP
      volumeMounts:
        - name: docker-run
          mountPath: /var/run
      env:
        - name: DOCKER_TLS_CERTDIR
          value: ""
      resources:
        requests:
          memory: "512Mi"
          cpu: "500m"
        limits:
          memory: "1Gi"
          cpu: "1"
This is not working as expected yet. Even if it works, you would be trading latency for security, since we would be running nested runtimes. Another concern is that Kubernetes has moved from Docker to containerd as the de facto runtime. I still have to test the solution with containerd and update this post.
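On the isolated-nodes idea mentioned earlier: one common way to do it is to taint a dedicated node pool and pin the privileged pod to it with a nodeSelector plus a matching toleration. The label and taint names below are hypothetical placeholders; pick whatever fits your cluster. A sketch of the fragment you would merge into the pod's spec:

```yaml
# Hypothetical: dedicate tainted nodes to the privileged OpenHands pods.
# First taint and label the nodes (names are placeholders):
#   kubectl taint nodes <node> dedicated=openhands:NoSchedule
#   kubectl label nodes <node> workload=openhands
spec:
  nodeSelector:
    workload: openhands # Only schedule onto labeled nodes
  tolerations:
    - key: "dedicated" # Tolerate the taint that keeps other pods off
      operator: "Equal"
      value: "openhands"
      effect: "NoSchedule"
```

The taint keeps ordinary workloads off those nodes, and the nodeSelector keeps the privileged pod from landing anywhere else, so a sandbox escape is contained to hosts that run nothing else of value.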
Outro
The future looks bright for humanity as we usher in the AI era. Increased productivity always results in increased quality of life for everyone. I hope OpenHands becomes the de facto agent in the developer world.
Written by

Srujan Reddy
I am a Kubernetes Engineer passionate about leveraging Cloud Native ecosystem to run secure, efficient and effective workloads.