How I Got My Drone to Spot Golf Balls


I’m not a golfer…at all.
But a few weekends ago, I found myself standing on the tee, watching golf balls vanish into the horizon like tiny white UFOs.
That’s when the developer in me had a thought:
“Could I use my DJI Mini 4K drone, a live video stream, and machine learning to track these things in real time?”
Fast forward a few days, and I had built a working prototype:
- Streaming live video from my drone to my laptop via RTMP
- Running a Roboflow object detection workflow to spot golf balls
- Logging detections with timestamps, confidence scores, and (mock) GPS coordinates
- Visualizing everything in an interactive Streamlit dashboard, complete with a map of "detections"
This post walks you through exactly how I built it, the lessons I learned, and where I’m taking it next.
The Big Idea
This started as a weekend “what if” experiment and turned into a working prototype that:
- Streams my drone's video feed in real time
- Runs each frame through a golf ball detection model
- Logs and maps detections with GPS (mocked for now)
Here’s the high-level flow:
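DJI Mini 4K → DJI Fly app (RTMP) → Nginx RTMP server → OpenCV frame capture → Roboflow detection workflow → detection log (timestamp, confidence, mock GPS) → Streamlit + Folium dashboard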
1. Real-time Drone Video Feed (via RTMP)
DJI doesn't offer SDK support for my drone model, but the DJI Fly app does let you live-stream video using RTMP.
I set up an Nginx RTMP server on my laptop so the drone’s video feed could flow directly into my Python script.
Key config snippet from nginx.conf:

```nginx
rtmp {
    server {
        listen 1935;
        chunk_size 4096;

        application live {
            live on;
            record off;
        }
    }
}
```
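On the drone side there's nothing custom: in the DJI Fly app's live-streaming settings, you point the custom RTMP option at your laptop, e.g. rtmp://<your-laptop-ip>:1935/live/stream, where live matches the application block above.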
Then in Python:
```python
import cv2

rtmp_url = "rtmp://<your-ip>:1935/live/stream"
cap = cv2.VideoCapture(rtmp_url)
```
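Once the capture is open, reading frames is the standard OpenCV loop. Here's a minimal sanity-check sketch continuing from the snippet above (the imshow preview and "q"-to-quit handling are my additions, not part of the original script):

```python
while True:
    ok, frame = cap.read()
    if not ok:
        break  # stream dropped; bail out instead of spinning on a dead feed
    cv2.imshow("drone feed", frame)  # quick visual check before wiring in detection
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```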
2. Golf Ball Detection with Roboflow
I trained a small object detection model on golf balls using Roboflow.
To get it running quickly, I deployed it using Roboflow's Small Object Detection (SAHI) workflow, which is well suited to small, high-contrast objects like golf balls.
Minimal Python snippet:
```python
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="https://serverless.roboflow.com",
    api_key="YOUR_API_KEY"
)

# "frame" is a BGR image from the cv2.VideoCapture loop above
result = client.run_workflow(
    workspace_name="projects-lznpz",
    workflow_id="small-object-detection-sahi",
    images={"image": frame}
)
```
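What comes back depends on how the workflow's outputs are named, so treat the result[0]["predictions"] access below as an assumption to verify against your own response; the per-box fields follow Roboflow's usual center-point format. Here's a rough sketch of the logging-and-annotation step (the CSV layout and function name are mine, not from the original script):

```python
import csv
import time
import cv2

def log_and_annotate(frame, result, lat, lon, csv_path="detections.csv"):
    # ASSUMPTION: "predictions" matches this workflow's output name;
    # check your own run_workflow response before trusting this.
    predictions = result[0]["predictions"]
    with open(csv_path, "a", newline="") as f:
        writer = csv.writer(f)
        for p in predictions:
            # Log timestamp, confidence, and (mock) GPS for the dashboard
            writer.writerow([time.time(), p["confidence"], lat, lon])
            # Roboflow boxes are center-based; convert to corner coordinates
            x1, y1 = int(p["x"] - p["width"] / 2), int(p["y"] - p["height"] / 2)
            x2, y2 = int(p["x"] + p["width"] / 2), int(p["y"] + p["height"] / 2)
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    return frame
```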
I flew my test flights in my backyard, and here's what an annotated capture looks like:
3. Visualizing Detections on a Map
Since I can’t yet get GPS from my DJI Mini 4K, I mocked GPS coordinates to simulate where the drone might be when it detects a ball.
I used Folium + Streamlit to create a live dashboard with:
- Latest annotated detections
- Confidence chart over time
- Map markers for detections (mocked locations for now)
Minimal Folium example:
```python
import folium

# Mocked coordinates for now; no GPS from the Mini 4K yet
m = folium.Map(location=[41.5081, -71.1279], zoom_start=17)
folium.Marker(
    [41.5081, -71.1279],
    popup="Golf Ball Detected!",
    icon=folium.Icon(color="green", icon="info-sign")
).add_to(m)
m.save("map.html")
```
Obviously, you'll just be looking at a gray map with some random points right now; in the future, gathering real GPS data from a drone through an SDK will be another battle.
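For the dashboard itself, here's a minimal sketch of the Streamlit side, assuming detections were logged to a CSV like the one in section 2 (st_folium comes from the streamlit-folium package; run it with streamlit run dashboard.py):

```python
import folium
import pandas as pd
import streamlit as st
from streamlit_folium import st_folium  # pip install streamlit-folium

st.title("Golf Ball Detections")

# Assumes the CSV written by the logging step: timestamp, confidence, lat, lon
df = pd.read_csv("detections.csv", names=["timestamp", "confidence", "lat", "lon"])

st.line_chart(df["confidence"])  # confidence over time

# Center the map on the mean of the (mocked) detection coordinates
m = folium.Map(location=[df["lat"].mean(), df["lon"].mean()], zoom_start=17)
for _, row in df.iterrows():
    folium.Marker(
        [row["lat"], row["lon"]],
        popup=f"conf={row['confidence']:.2f}",
        icon=folium.Icon(color="green", icon="info-sign"),
    ).add_to(m)

st_folium(m, width=700)
```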
Lessons Learned
This “just-for-fun” project ended up being a surprisingly solid prototype — combining video streaming, real-time inference, and mapping into a single pipeline.
It also reinforced something I love about modern dev tooling: with the right off-the-shelf components, you can go from idea to working proof-of-concept in a weekend.
Key takeaways from this build:
- RTMP streaming from the DJI Fly app is smooth. Once the Nginx RTMP server was set up, my drone's video feed streamed to my laptop with low latency and minimal hiccups.
- Roboflow handles tiny targets like a champ. Even with compression artifacts, a small dataset was enough for Roboflow's model to detect golf balls mid-air or on grass.
- Mapping detections (even with fake GPS) feels futuristic. Seeing a detection marker pop up in real time makes this feel like more than just a hobby project; it's closer to a field tool.
The real “aha” moment for me:
Building something like this no longer requires custom hardware or months of dev time. You can stand on the shoulders of Nginx, OpenCV, Roboflow, and Streamlit to get 80% of the way fast.
The Plan Moving Forward
While this prototype already works, there’s a clear roadmap to take it from fun weekend build to something with serious real-world utility.
- Upgrade to a drone with full SDK support. With SDK access, I could get native GPS, camera control, and telemetry directly from the drone, no workarounds required.
- Integrate real GPS coordinates into detections. Moving from mocked GPS to real coordinates would enable accurate geotagging of each detection for retrieval or analysis.
- Run inference on edge hardware. Instead of streaming every frame to the cloud, a small edge device (Jetson Nano, Coral TPU) could run detections locally: faster, cheaper, and offline-capable.
Want to Build Your Own?
If you’re curious about trying this yourself — whether for golf balls, lost pets, or entirely different use cases — I’m happy to share my code snippets and setup steps.
Check out the demo video from my LinkedIn post.
Drop me a message here or connect with me on LinkedIn!