🚀 Build Your Own AI Agent with Local LLMs & Crew AI

📌 Introduction

Every organization—whether a new startup or a giant enterprise—relies on documentation.

From onboarding guides and internal tools to architecture blueprints and operational procedures, internal documentation is the backbone of smooth collaboration. But here's the catch: this knowledge is often scattered across countless files, wiki pages, and folders. Hunting down the right information feels like finding a needle in a haystack. Sound familiar?

Now, imagine if every team member could talk to an AI agent that understands all your internal docs and gives accurate answers—securely, instantly, and without exposing anything to the public internet. 🤯

Thanks to Crew AI and local LLMs, this isn’t just a fantasy anymore. In this blog, I’ll walk you through how you can build your own AI documentation agent, powered by open-source tools and inspired by an awesome video tutorial from Abhishek Veeramalla.

🧠 The Problem: Internal Docs Are Hard to Navigate

Imagine a new hire joining your team.

They’re handed a link to your internal documentation portal… 1,000+ pages long. It's all there—somewhere—but it takes ages to sift through.

Or maybe you’re trying to recall how your internal observability platform, let’s say “Upadhyay Tool,” differs from Prometheus. You type keywords, you scroll, you skim… only to find bits and pieces.

That’s the real pain. The information exists, but accessing it efficiently? Not so much.

💡 The Solution: An Intelligent Agent That Knows Your Docs

Instead of manually searching or relying on static documentation, why not have an AI agent trained on your internal documents?

Here’s what this agent can do:

  • Accept natural language queries

  • Understand the context of your documentation

  • Retrieve and summarize relevant answers

  • Do all of this locally—without sending sensitive data to the cloud

That’s where Crew AI + local LLMs like Llama 3 come in. 👇

🧰 What Is Crew AI?

Crew AI is a free, open-source framework that simplifies building autonomous, role-based AI agents. It’s much easier to get started with than other agent frameworks like AutoGen or LangGraph.

Think of Crew AI as a way to define one or more agents, each with a role, knowledge, and a task. In our case, we'll use one agent with a specific role: “Documentation Expert.”

Crew AI also supports local LLMs via tools like Ollama, which means your data stays 100% private.
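
To give you a feel for the programming model, here’s a minimal sketch (my own, not taken from the repo we’ll use below) of a role-based agent wired to a local Ollama model. The role, goal, and question are just placeholders:

from crewai import Agent, Crew, LLM, Task

# Point CrewAI at a model served locally by Ollama – nothing leaves your machine
local_llm = LLM(model="ollama/llama3.1", base_url="http://localhost:11434")

docs_expert = Agent(
    role="Documentation Expert",
    goal="Answer questions about our internal documentation",
    backstory="You have read every page of the internal docs.",
    llm=local_llm,
)

answer_question = Task(
    description="Answer the user's question: {question}",
    expected_output="A short, accurate answer.",
    agent=docs_expert,
)

crew = Crew(agents=[docs_expert], tasks=[answer_question])
result = crew.kickoff(inputs={"question": "What does our observability tool do?"})
print(result)

On its own, this agent only knows what the base model knows. Attaching your documentation as a knowledge source is what the rest of this walkthrough is about.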

🛠️ Step-by-Step: Build Your Own Documentation AI Agent

1. 📝 Generate Sample Documentation

To simulate internal docs, Abhishek cleverly uses prompt engineering to generate mock documentation for a fictional observability tool (he calls his “Vala”; in this post, mine is the “Upadhyay Tool”).

This avoids using real data and keeps things reproducible.

Prompt highlights:

  • Generate docs about a fictional tool called “Upadhyay Tool”

  • Compare it with Prometheus

  • Format: PDF

  • ~1000 lines of content
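
Putting those highlights together, a prompt along these lines does the trick (my paraphrase, not Abhishek’s exact wording):

“Generate around 1,000 lines of product documentation for a fictional observability tool called ‘Upadhyay Tool’. Cover installation, architecture, and day-to-day usage, and include a section comparing it with Prometheus.”

Export the generated text as a PDF and you have a realistic stand-in for internal docs.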

2. ⚙️ Set Up Your Environment

Make sure you have:

  • Python 3.10–3.13

  • Virtual environment

python3 -m venv crew-docs
source crew-docs/bin/activate
pip install crewai

3. 📚 Use the Crew AI Examples Repo

Rather than building from scratch, use the Crew AI Examples GitHub repo.

Navigate to:

git clone <crewai-examples-repo-url>
cd meta_quest_knowledge

Replace the default MetaQuest.pdf with your generated upadhyay_docs.pdf.

4. ✍️ Customize Agent Configuration

Update these files:

  • config/agents.yaml – define the agent’s role, goal, and backstory (ours is a “Documentation Expert”)

  • config/tasks.yaml – describe the task and its expected output

  • main.py – replace the hardcoded question with your own, e.g., “What is Upadhyay Tool?”

  • crew.py – point the knowledge source to your generated PDF (sketched below):

      knowledge_source="upadhyay_docs.pdf"
    
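Here’s roughly what that wiring boils down to. This is a simplified sketch, not the exact code from the examples repo (which builds its crew from the YAML files via CrewAI’s project scaffold); the file name and question are the ones used in this post:

# Simplified stand-in for crew.py – the real project assembles this from agents.yaml/tasks.yaml
from crewai import Agent, Crew, Task
from crewai.knowledge.source.pdf_knowledge_source import PDFKnowledgeSource

# The PDF lives in the project's knowledge/ directory
pdf_source = PDFKnowledgeSource(file_paths=["upadhyay_docs.pdf"])

docs_expert = Agent(
    role="Documentation Expert",
    goal="Answer questions about the Upadhyay Tool using the internal docs",
    backstory="You know the Upadhyay Tool documentation inside out.",
    llm="ollama/llama3.1",  # local model via Ollama – set up in the next step
)

answer_question = Task(
    description="Answer the user's question: {question}",
    expected_output="A concise answer grounded in the documentation.",
    agent=docs_expert,
)

crew = Crew(
    agents=[docs_expert],
    tasks=[answer_question],
    knowledge_sources=[pdf_source],
    # Depending on your crewai version, you may also need to configure a local
    # embedder here so the knowledge store doesn't fall back to OpenAI embeddings.
)

# Simplified stand-in for main.py – this is where the “hardcoded question” lives
result = crew.kickoff(inputs={"question": "What is Upadhyay Tool?"})
print(result)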

5. 🧠 Set Up a Local LLM with Ollama

Ollama is a beautiful CLI tool for running LLMs locally.

ollama pull llama3.1

Then create a .env file:

MODEL=llama3.1
MODEL_API_BASE=http://localhost:11434

Then make sure the Ollama server is running (for example, run ollama serve in a separate terminal).
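
If you ever wire the model up in Python yourself instead of relying on the project’s .env handling, those two values map onto CrewAI’s LLM wrapper roughly like this (the ollama/ prefix tells CrewAI which provider to route to):

import os

from crewai import LLM

# Rough equivalent of the .env configuration (assumes the variables are exported)
local_llm = LLM(
    model=f"ollama/{os.getenv('MODEL', 'llama3.1')}",
    base_url=os.getenv("MODEL_API_BASE", "http://localhost:11434"),
)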

6. 🚀 Run the Agent

From the meta_quest_knowledge directory:

pip install uv
crewai install
crewai run

🛠️ If you hit a naming error from pyproject.toml, change the project name inside pyproject.toml from “Meta Quest Knowledge” to meta_quest_knowledge as a workaround.

7. 💬 Interact with the Agent

Now ask away:

Q: What is Upadhyay Tool?

🎯 The agent will parse your PDF, extract relevant info, and give a clean, contextual answer. Just like that.

🔐 Why This Approach Rocks

Here’s why building a local LLM-powered agent is a game-changer:

Secure – No data leaves your infrastructure
Cost-effective – No API keys or rate limits
Fast & Flexible – Modify and extend however you want
Scalable – Easily plug into other tools (Slack bot, Web UI, etc.)

🔚 Final Thoughts: Unlock the Power of Your Docs

Building an internal knowledge agent isn't just a cool side project—it's a productivity powerhouse.

Thanks to Crew AI and Ollama, what used to require weeks of setup and proprietary APIs can now be done in a few hours, completely for free, and fully secure.

🎥 Massive thanks again to Abhishek Veeramalla for the inspiration. His video guide is a must-watch if you're diving into Crew AI or local LLMs.

💡 Now It’s Your Turn!

Try this out. Build a docbot for your company, your open-source project, or even your personal notes. Trust me—it’s addictive.

If you have questions, ideas, or want to show off your version, drop a comment below! 👇
