Unlocking the Power of LangGraph in Multi-Agent Systems

Sidharthan
8 min read

Forget about agents and workflows for a moment; we all know the Telephone game. One person thinks of a sentence and passes it to the next, and it continues until it reaches the last person, who says it out loud, often hilariously distorted from the original. What happens when AI agents communicate like this?

Consider this scenario:

A user asks to book a flight from Mumbai to Delhi, but the agent books a bus from Delhi to Goa. Why does this happen?

  • Communication between agents was not structured

  • Agents made assumptions

  • There was no central controller

  • No shared memory between agents

Now you can see the problem. Let's move on to the core idea.

What LangGraph does:

LangGraph is a tool for building smart workflows where different AI agents talk to each other step by step, kind of like drawing a flowchart, but with AI doing the thinking.

Coming back to the Telephone game problem, here is how the flow can be structured (a code sketch follows the list):

IntentAgent: Receives raw input and identifies structured intent

  • No assumptions — it extracts facts

ValidationAgent: Confirms that all required fields are present

  • Adds decision logic

  • Allows retries if data is missing or invalid

BookingAgent: Books exactly what was requested

  • Uses validated, structured input

  • No hallucinated vacations or buses to Goa
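
Here is a minimal sketch of how these three agents could be wired up in LangGraph. The state fields, node names, and the stubbed intent extractor are my own assumptions for illustration; in a real build the IntentAgent would call an LLM to fill the fields.

```python
from typing import Optional, TypedDict

from langgraph.graph import END, START, StateGraph

# Shared state: every agent reads and writes the same structured fields.
class BookingState(TypedDict):
    raw_request: str
    origin: Optional[str]
    destination: Optional[str]
    mode: Optional[str]          # "flight", "bus", ...
    is_valid: bool
    confirmation: Optional[str]

def intent_agent(state: BookingState) -> dict:
    # Stub: a real IntentAgent would extract these facts with an LLM.
    return {"origin": "Mumbai", "destination": "Delhi", "mode": "flight"}

def validation_agent(state: BookingState) -> dict:
    # Confirm every required field is present before booking.
    ok = all([state.get("origin"), state.get("destination"), state.get("mode")])
    return {"is_valid": ok}

def booking_agent(state: BookingState) -> dict:
    # Books exactly what was validated, nothing invented.
    return {"confirmation": f"Booked a {state['mode']} from {state['origin']} to {state['destination']}"}

def route_after_validation(state: BookingState) -> str:
    return "book" if state["is_valid"] else "retry"

graph = StateGraph(BookingState)
graph.add_node("intent", intent_agent)
graph.add_node("validate", validation_agent)
graph.add_node("book", booking_agent)
graph.add_edge(START, "intent")
graph.add_edge("intent", "validate")
graph.add_conditional_edges("validate", route_after_validation, {"book": "book", "retry": "intent"})
graph.add_edge("book", END)

app = graph.compile()
print(app.invoke({"raw_request": "Book a flight from Mumbai to Delhi", "is_valid": False}))
```

Because validation loops back to the intent node whenever a field is missing, the graph retries instead of guessing.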

By the end:

LangGraph brings order to agent chaos.
It ensures that all agents follow a defined structure, access the same memory, and do exactly what the user asked - no more bus rides to the wrong state.

That should give you a solid sense of what LangGraph brings to the table.
With that context, here’s how my learning journey unfolded.

Day 1: LangChain vs LangGraph | Two Chatbots

LangChain:

LangChain is a way of building LLM-powered applications by executing a sequence of functions in a chain.

An example (a minimal chain sketch follows these steps):

User: “What’s the summary of Section 4 of this paper?”

  • Retrieve: Loads and splits the paper, fetches Section 4.

  • Summarize: Uses an LLM to summarize Section 4.

  • Answer: Returns the result, remembers the interaction.
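
A rough sketch of that chain, assuming Gemini as the LLM and a naive keyword filter to isolate Section 4 (the file name and the filter are placeholders):

```python
from langchain_community.document_loaders import PyPDFLoader
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_text_splitters import RecursiveCharacterTextSplitter

llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash")

# Retrieve: load and split the paper, keep the chunks that mention Section 4.
docs = PyPDFLoader("paper.pdf").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)
section_4 = "\n".join(c.page_content for c in chunks if "Section 4" in c.page_content)

# Summarize + Answer: prompt -> LLM -> plain string, wired as one chain.
prompt = ChatPromptTemplate.from_template("Summarize the following text:\n\n{text}")
chain = prompt | llm | StrOutputParser()
print(chain.invoke({"text": section_4}))
```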

LangGraph:

LangGraph is a specialized library within the LangChain ecosystem, specifically designed for building stateful multi-agent systems that can handle complex, nonlinear workflows.

An example:

1. Process Input (Central Node)

  • This is where all user interactions start.

  • It uses an LLM to interpret the user's message and decide what action to take next.

  • Based on the intent, it routes the input to one of the action nodes: ADD TASK, COMPLETE TASK, SUMMARIZE TASK

2. Action Nodes (Add / Complete / Summarize)

Each of these is a node in the LangGraph system:

  • Add Tasks: Updates the state by appending new tasks.

  • Complete Tasks: Marks existing tasks as done.

  • Summarize: Uses an LLM to generate a summary of the current task list.

Each of these nodes performs its logic and then returns control to the Process Input node.

3. State (Shared Memory)

  • This component holds the current tasks as a list.

  • It's accessible by all nodes — enabling updates, lookups, and summaries.

  • Ensures the assistant remembers the full context across multiple steps and interactions.

4. Graph Structure

  • Each box in the diagram is a node.

  • The lines/arrows represent edges (transitions between nodes).

  • After every action, control returns to the PROCESS INPUT node, allowing for continuous interactions.

The LangGraph architecture lets us create flexible, stateful agents that maintain context over extended interactions. The sketch below shows how these pieces fit together.
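
A condensed sketch of that task assistant. The keyword classifier stands in for the LLM call, and each turn ends at END with the outer chat loop re-entering the graph, a simplification of the diagram's loop back to Process Input:

```python
from typing import List, TypedDict

from langgraph.graph import END, START, StateGraph

# Shared memory: the task list every node can read and update.
class TaskState(TypedDict):
    user_input: str
    intent: str
    tasks: List[str]
    summary: str

def process_input(state: TaskState) -> dict:
    # Stub classifier; the real assistant uses an LLM to pick the intent.
    text = state["user_input"].lower()
    if text.startswith("add"):
        return {"intent": "add"}
    if "done" in text or "complete" in text:
        return {"intent": "complete"}
    return {"intent": "summarize"}

def add_task(state: TaskState) -> dict:
    return {"tasks": state["tasks"] + [state["user_input"]]}

def complete_task(state: TaskState) -> dict:
    return {"tasks": state["tasks"][1:]}   # stub: mark the oldest task done

def summarize(state: TaskState) -> dict:
    return {"summary": f"{len(state['tasks'])} task(s) pending"}

graph = StateGraph(TaskState)
graph.add_node("process_input", process_input)
graph.add_node("add_task", add_task)
graph.add_node("complete_task", complete_task)
graph.add_node("summarize", summarize)
graph.add_edge(START, "process_input")
graph.add_conditional_edges(
    "process_input",
    lambda s: s["intent"],
    {"add": "add_task", "complete": "complete_task", "summarize": "summarize"},
)
for node in ("add_task", "complete_task", "summarize"):
    graph.add_edge(node, END)

app = graph.compile()
print(app.invoke({"user_input": "add buy milk", "intent": "", "tasks": [], "summary": ""}))
```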

2. Stateless Chatbot — No Memory

This is the simplest form of a chatbot (a minimal sketch follows the list):

  • It takes user input, invokes the LLM (Gemini), and responds.

  • But it forgets everything after each turn — no memory of past interactions.

Key Concepts:

  • Each agent.invoke() call sees only the current message.

  • Useful for tasks that don’t require context (e.g., one-shot Q&A bots).
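
A minimal sketch of the stateless version, assuming Gemini via langchain_google_genai and a GOOGLE_API_KEY in the environment; only the current message ever reaches the model:

```python
from typing import TypedDict

from langchain_core.messages import HumanMessage
from langchain_google_genai import ChatGoogleGenerativeAI
from langgraph.graph import END, START, StateGraph

llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash")

class ChatState(TypedDict):
    message: str
    response: str

def process(state: ChatState) -> dict:
    # Only the current message is sent; no history exists anywhere.
    reply = llm.invoke([HumanMessage(content=state["message"])])
    return {"response": reply.content}

graph = StateGraph(ChatState)
graph.add_node("process", process)
graph.add_edge(START, "process")
graph.add_edge("process", END)
agent = graph.compile()

while True:
    user_input = input("You: ")
    if user_input.lower() == "exit":
        break
    result = agent.invoke({"message": user_input, "response": ""})
    print("AI:", result["response"])
```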

3. Stateful Chatbot — With Memory

This version is a proper conversational agent:

  • It maintains a list of messages (user + AI) as its state.

  • The process() node appends the AI response to state after every message.

  • Supports multi-turn interactions and contextual follow-ups.

Highlights:

  • Used TypedDict to define the AgentState including both HumanMessage and AIMessage.

  • Saves full conversation history to a file (logging.txt).

  • Much closer to how modern chat assistants work! A condensed sketch follows.
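
The state field and the logging file mirror the description above; the exact node wiring is an assumption:

```python
from typing import List, TypedDict, Union

from langchain_core.messages import AIMessage, HumanMessage
from langchain_google_genai import ChatGoogleGenerativeAI
from langgraph.graph import END, START, StateGraph

llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash")

class AgentState(TypedDict):
    messages: List[Union[HumanMessage, AIMessage]]

def process(state: AgentState) -> AgentState:
    # The full history is sent to the LLM, and its reply is appended to state.
    reply = llm.invoke(state["messages"])
    state["messages"].append(AIMessage(content=reply.content))
    return state

graph = StateGraph(AgentState)
graph.add_node("process", process)
graph.add_edge(START, "process")
graph.add_edge("process", END)
agent = graph.compile()

history: List[Union[HumanMessage, AIMessage]] = []
while True:
    user_input = input("You: ")
    if user_input.lower() == "exit":
        break
    history.append(HumanMessage(content=user_input))
    history = agent.invoke({"messages": history})["messages"]
    print("AI:", history[-1].content)

# Persist the full conversation, as in the original build's logging.txt.
with open("logging.txt", "w") as f:
    for msg in history:
        f.write(f"{type(msg).__name__}: {msg.content}\n")
```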

DAY 02: ReAct Agent, State Reducers & Smart Drafting

1. State Management with Reducers

I explored how to manage evolving memory in agents using the reducer pattern; a minimal sketch follows the points below.

  • Instead of mutating state directly, I now use a pure reducer function to handle updates.

  • This makes state transitions cleaner, easier to debug, and highly composable.
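
This sketch uses operator.add as the reducer, so nodes return only their new items and LangGraph merges them into the existing list (the field and node names are illustrative):

```python
import operator
from typing import Annotated, List, TypedDict

from langgraph.graph import END, START, StateGraph

class State(TypedDict):
    # The reducer (operator.add) concatenates old state with whatever a node returns.
    events: Annotated[List[str], operator.add]

def step_one(state: State) -> dict:
    return {"events": ["step one ran"]}

def step_two(state: State) -> dict:
    return {"events": ["step two ran"]}

graph = StateGraph(State)
graph.add_node("one", step_one)
graph.add_node("two", step_two)
graph.add_edge(START, "one")
graph.add_edge("one", "two")
graph.add_edge("two", END)

print(graph.compile().invoke({"events": ["start"]}))
# {'events': ['start', 'step one ran', 'step two ran']}
```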

2. ReAct Agent for Reasoning + Tool Use

Goal: Build an agent that can reason using language and act using tools like add, multiply, and divide.

What I built (a condensed sketch follows the list):

  • Used the @tool decorator to define simple math tools.

  • Connected Gemini 2.0 Flash to these tools using bind_tools().

  • Used LangGraph's StateGraph to define:

    • agent node for reasoning

    • ToolNode to run tools

    • Conditional logic (should_continue) to loop if tools are needed or end if not.
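
The sketch below condenses that ReAct loop. The tool bodies are trivial by design; the exact prompts and model settings from my build are omitted:

```python
from typing import Annotated, Sequence, TypedDict

from langchain_core.messages import BaseMessage, HumanMessage
from langchain_core.tools import tool
from langchain_google_genai import ChatGoogleGenerativeAI
from langgraph.graph import END, START, StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode

@tool
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

tools = [add, multiply]
llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash").bind_tools(tools)

class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], add_messages]

def agent(state: AgentState) -> dict:
    # Reasoning step: the LLM either answers or emits tool calls.
    return {"messages": [llm.invoke(state["messages"])]}

def should_continue(state: AgentState) -> str:
    last = state["messages"][-1]
    return "tools" if getattr(last, "tool_calls", None) else "end"

graph = StateGraph(AgentState)
graph.add_node("agent", agent)
graph.add_node("tools", ToolNode(tools))
graph.add_edge(START, "agent")
graph.add_conditional_edges("agent", should_continue, {"tools": "tools", "end": END})
graph.add_edge("tools", "agent")
app = graph.compile()

result = app.invoke({"messages": [HumanMessage(content="What is (3 + 4) * 2?")]})
print(result["messages"][-1].content)
```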

3. Drafter Agent — Update & Save with Tools

Goal: Create a conversational agent that can draft, edit, and save documents based on user input.

Tools Used:

  • update_tool: Replaces the entire document content.

  • save: Writes the document to a .txt file and ends the conversation.

Agent Logic (a trimmed sketch of the tools follows):

  • The agent node gathers instructions and invokes tools if needed.

  • The system prompt dynamically includes current document content, so the LLM always has context.

  • Conversation ends only when the save tool is used (detected via ToolMessage).
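
A trimmed sketch of the Drafter pieces described above: the two tools, the dynamic system prompt, and the end check. The global variable and helper names are assumptions; the full graph wiring follows the same ReAct pattern as before.

```python
from langchain_core.messages import SystemMessage, ToolMessage
from langchain_core.tools import tool

document_content = ""

@tool
def update_tool(content: str) -> str:
    """Replace the entire document with new content."""
    global document_content
    document_content = content
    return f"Document updated:\n{document_content}"

@tool
def save(filename: str) -> str:
    """Save the current document to a text file and finish the session."""
    with open(filename, "w") as f:
        f.write(document_content)
    return f"Saved document to {filename}"

def build_system_prompt() -> SystemMessage:
    # The current draft is injected every turn, so the LLM always has context.
    return SystemMessage(content=f"You are Drafter, a writing assistant. Current document:\n{document_content}")

def should_end(messages) -> bool:
    # The conversation ends only after the save tool has reported success.
    return any(isinstance(m, ToolMessage) and "Saved document" in m.content for m in messages)
```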

Day 03: Building a RAG Agent with LangGraph + Gemini

A RAG agent that does the following (a condensed setup sketch follows the list):

  • Loads a PDF (NeuroDrive_Project_Proposal.pdf)

  • Splits the text into chunks

  • Stores the chunks in ChromaDB

  • Uses Google Generative AI Embeddings for vectorization

  • Sets up a LangGraph agent with tool access to the retriever

  • Streams a conversational experience where the LLM:

    • Answers directly if possible

    • Invokes a retriever tool if it needs context from the PDF
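
A condensed sketch of that setup, assuming the langchain_chroma and langchain_google_genai packages; the chunk sizes and tool description are illustrative:

```python
from langchain.tools.retriever import create_retriever_tool
from langchain_chroma import Chroma
from langchain_community.document_loaders import PyPDFLoader
from langchain_google_genai import ChatGoogleGenerativeAI, GoogleGenerativeAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

# 1. Load and chunk the PDF.
docs = PyPDFLoader("NeuroDrive_Project_Proposal.pdf").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=150).split_documents(docs)

# 2. Embed the chunks with Google Generative AI embeddings and store them in ChromaDB.
embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")
vectorstore = Chroma.from_documents(chunks, embeddings, persist_directory="./chroma_db")

# 3. Expose the retriever as a tool the LangGraph agent can choose to call.
retriever_tool = create_retriever_tool(
    vectorstore.as_retriever(search_kwargs={"k": 4}),
    name="retrieve_proposal",
    description="Look up passages from the NeuroDrive project proposal PDF.",
)

# 4. Bind the tool to Gemini; the agent node then decides when retrieval is needed.
llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash").bind_tools([retriever_tool])
```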

DAY 04: Building the Gmail Foundation for MailRAG

In-Depth Pipeline:

After defining the high-level pipeline for MailRAG, Day 04 was all about laying the groundwork: email access, threading, parsing, and reply management — all powered by the Gmail API.

  • Authenticate via Gmail API

  • Fetch unanswered emails (past 8 hrs)

  • Auto Reply or Draft with Thread Context

  • Extract clean text from HTML or plain content

  • Send or draft threaded replies via Gmail

The Gmail tools integrated as expected; a minimal sketch of the groundwork follows.
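
This sketch uses only standard Gmail API calls. The token file, scope, and the 8-hour window via an after: query are my assumptions; the real pipeline adds threading and HTML parsing on top:

```python
import time

from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/gmail.modify"]

# 1. Authenticate with credentials produced by an earlier OAuth consent flow.
creds = Credentials.from_authorized_user_file("token.json", SCOPES)
service = build("gmail", "v1", credentials=creds)

# 2. Fetch unread emails from the past 8 hours.
eight_hours_ago = int(time.time()) - 8 * 3600
query = f"is:unread after:{eight_hours_ago}"
resp = service.users().messages().list(userId="me", q=query).execute()

# 3. Pull each message with its thread id so replies can stay in the same thread.
for ref in resp.get("messages", []):
    msg = service.users().messages().get(userId="me", id=ref["id"], format="full").execute()
    print(msg["threadId"], msg["snippet"])
```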

DAY 05: Debugging the MailRAG Agent

Today was all about diving deeper into my MailRAG Email Agent. The agent is designed to process Gmail messages, classify them, and generate intelligent responses using RAG (Retrieval-Augmented Generation).

But as with any real-world build... things broke.

Hit a blocker:

While setting up RAG with Google Generative AI embeddings, I hit this cryptic error:

ProtoType object has no attribute 'DESCRIPTOR'

At first, I thought it was something I did wrong in the pipeline wiring. But no, this rabbit hole seemed deeper.

So I Asked Around...

  • ChatGPT said:
    "A common cause is an incompatible version of the protobuf package relative to langchain_google_genai or the google.generativeai libraries."

  • Perplexity pointed out:
    "The current versions of Google Generative AI and LangChain are not fully compatible with Protobuf version 6.31.1. This is a known issue."

  • Claude explained it like this:
    "This error is due to a compatibility issue between the proto library and Google Generative AI embeddings in your LangChain setup—likely a version mismatch in the protobuf ecosystem."

I stepped back, rechecked the code, and decided to go back to basics: I walked through the docs and tutorial again.

Turns out it was just this:

StrOutputParser

instead of

StrOutputParser()

Yes. I forgot the parentheses. No protobuf issue. No version hell.
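
For anyone hitting the same wall, the difference was literally one pair of parentheses (the prompt and model here are placeholders):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_google_genai import ChatGoogleGenerativeAI

prompt = ChatPromptTemplate.from_template("Classify this email: {email}")
llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash")

# Broken: pipes the class itself into the chain.
# chain = prompt | llm | StrOutputParser

# Fixed: instantiate the parser before piping it.
chain = prompt | llm | StrOutputParser()
```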

“Relying only on AI tools can’t always give you the real solution. Sometimes they hallucinate with confidence”

DAY 06: MailRAG – Email Agent, Finished and Pushed

After days of building, debugging, and refining, I finally wrapped up the MailRAG – Email Agent project. This agent now auto-monitors Gmail, classifies emails, and triggers intelligent actions—all with minimal human input.

What MailRAG does (a routing sketch follows the list):

  • Product Inquiry → Triggers a RAG-based response using contextual knowledge

  • Complaints/Feedback → Routes to custom handlers for human-like care

  • Unrelated → Silently ignored to avoid noise
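
A small sketch of how that fan-out could look in LangGraph. The category names mirror the list above; the classifier stub and handler bodies are placeholders for the real LLM and RAG calls:

```python
from typing import TypedDict

from langgraph.graph import END, START, StateGraph

class MailState(TypedDict):
    email_text: str
    category: str
    reply: str

def classify(state: MailState) -> dict:
    # Stub: MailRAG uses an LLM classifier here.
    text = state["email_text"].lower()
    if "product" in text or "price" in text:
        return {"category": "product_inquiry"}
    if "complaint" in text or "refund" in text:
        return {"category": "complaint"}
    return {"category": "unrelated"}

def rag_reply(state: MailState) -> dict:
    return {"reply": "RAG-grounded answer goes here"}

def complaint_reply(state: MailState) -> dict:
    return {"reply": "Empathetic handler response goes here"}

graph = StateGraph(MailState)
graph.add_node("classify", classify)
graph.add_node("rag_reply", rag_reply)
graph.add_node("complaint_reply", complaint_reply)
graph.add_edge(START, "classify")
graph.add_conditional_edges(
    "classify",
    lambda s: s["category"],
    {"product_inquiry": "rag_reply", "complaint": "complaint_reply", "unrelated": END},
)
graph.add_edge("rag_reply", END)
graph.add_edge("complaint_reply", END)
app = graph.compile()
```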

The only thing the user needs to do?
Click "Send"
That’s it. Everything else is handled by the agents.

Final Words:

Thanks for taking your valuable time to read!

Self-learning brings its own unique style of building and growing.
If you are on a similar path, let's connect: LinkedIn, X

Follow my journey via #100DaysofAIEngineer on X to see my daily work and progress.

Keep Building! Make your Hands Dirty!

Resources:

Tutorials:

https://github.com/Sidharth1743/Langgraph

#100DaysOfAIEngineer #LearnInPublic #AIEngineer #LangGraph #SelfLearning #DirtyHands

