How LangGraph Transformed My Coding Journey from Hello World to AI Assistants

“Do I really need another abstraction layer on top of LangChain?”
That was my first thought when I stumbled across LangGraph. It sounded like yet another library with a fancy name. But curiosity got the better of me. I installed it, opened up a Jupyter notebook, and decided to test it out. What I found wasn't just another wrapper: it was a shift in how we build LLM-powered applications.
LangGraph forces you to think in terms of graphs, states, and transitions, not just chains of prompts. That change was subtle at first but powerful once it clicked.
This post walks through my hands-on journey with LangGraph: starting from scratch, experimenting in notebooks, and gradually building up to a tool-using agent that can collaboratively draft and save documents. I’ll show you not just what I built but how I thought about it, and where I’m headed next.
Why LangGraph? Why Now? 🤔
LangChain is great for building composable LLM workflows. But once your application needs control flow, state persistence, or tool usage over time, chains start feeling... linear.
LangGraph borrows from finite state machines and dataflow programming to give you something better suited to dynamic, stateful agents. You define:
A typed state (like a dict with constraints)
A graph of nodes, each a function that mutates state
Edges (both unconditional and conditional) that define flow
It’s like going from a basic to-do list app to Notion: same primitives, wildly different capabilities.
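To make those three ideas concrete, here is the whole mental model hand-rolled in plain Python. This is not LangGraph code, just a sketch of what it builds on; all the names here are mine, for illustration:

```python
from typing import Callable, TypedDict

# A typed state: a dict with constraints.
class State(TypedDict):
    text: str

# Nodes: functions that mutate the state.
def shout(state: State) -> State:
    state["text"] = state["text"].upper()
    return state

def punctuate(state: State) -> State:
    state["text"] += "!"
    return state

nodes: dict[str, Callable[[State], State]] = {"shout": shout, "punctuate": punctuate}
# Edges: where to go after each node; None marks the finish point.
edges = {"shout": "punctuate", "punctuate": None}

def run(entry: str, state: State) -> State:
    node = entry
    while node is not None:
        state = nodes[node](state)
        node = edges[node]
    return state

print(run("shout", {"text": "hello"}))  # {'text': 'HELLO!'}
```

LangGraph layers a real API, conditional edges, and persistence on top of this loop, but the core picture is the same.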
Getting Started: Playing in the Notebook 📝
🟢 Part 1 — Hello World with State
I started with the simplest possible example: take a name, return a greeting.
from typing import TypedDict
from langgraph.graph import StateGraph

class AgentState(TypedDict):
    message: str

def greeting_node(state: AgentState) -> AgentState:
    state["message"] = f"Hey {state['message']}, how is your day going?"
    return state
Then you wire up the graph:
graph = StateGraph(AgentState)
graph.add_node("greeter", greeting_node)
graph.set_entry_point("greeter")
graph.set_finish_point("greeter")
We compile and invoke the graph:
app = graph.compile()
app.invoke({"message": "Rishee"})["message"]
# → "Hey Rishee, how is your day going?"
LangGraph also lets you visualize the structure and flow of your graph; here is the flow for the block of code above:
As you can see, it's a simple graph that goes start → greeter (our function) → end.
🟢 Part 2 - Sequential Agent
After getting familiar with single-node logic, I wanted LangGraph to handle multi-step reasoning. Think of it like a cooking recipe: first you prep the ingredients, then you cook.
So I split the logic into two nodes: one that greets the user, and another that tells the user their age.
class SequentialAgent(TypedDict):
    name: str
    age: str
    final: str

def first_node(state: SequentialAgent) -> SequentialAgent:
    """This is the first node."""
    state['final'] = f"Hi {state['name']}, "
    return state

def second_node(state: SequentialAgent) -> SequentialAgent:
    """This is the second node."""
    state['final'] = state['final'] + f"Your age is {state['age']}!"
    return state
Then, again, we wire up the graph:
graph = StateGraph(SequentialAgent)
graph.add_node("first_node", first_node)
graph.add_node("second_node", second_node)
graph.set_entry_point("first_node")
graph.add_edge("first_node", "second_node")
graph.set_finish_point("second_node")
app = graph.compile()
Then we invoke it:
info = app.invoke({"name": "Rishee", "age": "20"})
print(info['final'])
# Hi Rishee, Your age is 20!
This is what the sequential agent above looks like:
🟢 Part 3 - Conditional Graphs
The next level: making the agent branch based on some logic. Suppose I want the agent to either add or subtract, based on an operator.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class ConditionalAgent(TypedDict):
    number1: int
    operation: str
    number2: int
    finalNumber: int

def adder(state: ConditionalAgent) -> ConditionalAgent:
    """This node adds the two numbers."""
    state['finalNumber'] = state['number1'] + state['number2']
    return state

def subtractor(state: ConditionalAgent) -> ConditionalAgent:
    """This node subtracts the two numbers."""
    state['finalNumber'] = state['number1'] - state['number2']
    return state

def decide_next_node(state: ConditionalAgent) -> str:
    """This edge function decides which operation to perform."""
    if state['operation'] == '+':
        return "addition_op"
    elif state['operation'] == '-':
        return "subtraction_op"
Then we wire it up and compile:
graph = StateGraph(ConditionalAgent)
graph.add_node("add_node", adder)
graph.add_node("sub_node", subtractor)
graph.add_node("decider", lambda state: state)
graph.add_edge(START, "decider")
graph.add_conditional_edges(
    "decider",
    decide_next_node,
    {
        "addition_op": "add_node",
        "subtraction_op": "sub_node"
    }
)
graph.add_edge("add_node", END)
graph.add_edge("sub_node", END)
app = graph.compile()
You can run the code yourself to see what the output looks like; all the code is available on my GitHub repository for reference.
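Under the hood, a conditional edge is just a router: a function that inspects the state and returns a label, plus a mapping from labels to destination nodes. Here is the same dispatch in plain Python, a sketch rather than LangGraph code:

```python
def decide_next_node(state: dict) -> str:
    # Same routing logic as the LangGraph version above.
    return "addition_op" if state["operation"] == "+" else "subtraction_op"

def adder(state: dict) -> dict:
    state["finalNumber"] = state["number1"] + state["number2"]
    return state

def subtractor(state: dict) -> dict:
    state["finalNumber"] = state["number1"] - state["number2"]
    return state

# The conditional-edge mapping, as a plain dict of label -> node function.
routes = {"addition_op": adder, "subtraction_op": subtractor}

state = {"number1": 7, "operation": "-", "number2": 3}
state = routes[decide_next_node(state)](state)
print(state["finalNumber"])  # 4
```

`add_conditional_edges` is doing exactly this lookup for you, with the added benefit that the routing shows up in the rendered graph.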
Let's see what this code looks like as a graph:
Graduating to Real Agents 🎓
Enough practice in notebooks; let's move on to actual agents. First, I started with a simple LLM agent. No tools, no memory, just:
from typing import TypedDict, List
from langchain_core.messages import HumanMessage
from langchain_ollama import ChatOllama
from langgraph.graph import StateGraph, START, END

class AgentState(TypedDict):
    messages: List[HumanMessage]

llm = ChatOllama(model="llama2:7b", temperature=0)
I’m using ChatOllama to load the LLaMA 2 model locally. Setting the temperature to 0 ensures deterministic outputs, which is useful during testing or when I want consistent results.
def process(state: AgentState) -> AgentState:
    response = llm.invoke(state['messages'])
    print(f"\nAI: {response.content}")
    return state
Here’s the core function of the agent. It receives the current state, passes the user message to the LLM, prints the model’s reply, and returns the state as is. This function is stateless; as I said, I’m not storing history yet, but it can easily be extended to do so, as shown in the next section.
graph = StateGraph(AgentState)
graph.add_node("process", process)
graph.add_edge(START, "process")
graph.add_edge("process", END)
agent = graph.compile()
That’s the basic wiring of the graph.
user_input = input("Enter: ")
while user_input != "exit":
    agent.invoke({"messages": [HumanMessage(content=user_input)]})
    user_input = input("Enter: ")
Finally, I wrap the agent in a basic loop. It takes user input, sends it to the model, and prints the response until the user types exit.
Memory Agent
Next, I added memory: the agent keeps a list of all messages and passes the full conversation on each turn.
from typing import TypedDict, List, Union
from langchain_core.messages import HumanMessage, AIMessage
from langchain_ollama import ChatOllama
from langgraph.graph import StateGraph, START, END

class AgentState(TypedDict):
    messages: List[Union[HumanMessage, AIMessage]]

llm = ChatOllama(model="llama2:7b", temperature=0)
The agent state is now a list that includes both human and AI messages. This allows the LLM to consider the entire conversation history when generating a response, which is key for any multi-turn interaction.
I'm still using llama2:7b via Ollama, with temperature=0 to keep the outputs deterministic and predictable.
def process(state: AgentState) -> AgentState:
    response = llm.invoke(state['messages'])
    state['messages'].append(AIMessage(content=response.content))
    print(f"AI: {response.content}")
    return state
The process node is now state-aware. It takes the full message history, sends it to the model, appends the AI’s reply to the state, and returns the updated state. This is the building block for maintaining coherent multi-turn dialogue.
graph = StateGraph(AgentState)
graph.add_node("Process", process)
graph.add_edge(START, "Process")
graph.add_edge("Process", END)
agent = graph.compile()
The same old wiring.
conversation_history = []
user_input = input("Enter: ")
while user_input != "exit":
    conversation_history.append(HumanMessage(content=user_input))
    result = agent.invoke({"messages": conversation_history})
    conversation_history = result['messages']
    user_input = input("Enter: ")
The conversation loop collects user input, appends it to the running history, invokes the agent, and updates the history with the new AI message. The dialogue builds up turn by turn, and context grows with each iteration.
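To see why this loop matters, here is the same pattern with a stub standing in for the model (FakeLLM is my stand-in, not a LangChain class). The fake reply reports how many messages the "model" can see on each call, which makes the growing context visible:

```python
class FakeLLM:
    def invoke(self, messages):
        # Reply with how many messages this call can "see".
        return f"I can see {len(messages)} message(s)"

llm = FakeLLM()
history = []
for user_input in ["hi", "what did I just say?"]:
    history.append(("human", user_input))  # add the user turn
    reply = llm.invoke(history)            # the model sees the whole history
    history.append(("ai", reply))          # add the AI turn

print(len(history))  # 4 entries: two human turns, two AI turns
```

On the second turn the stub already sees three messages, which is exactly why the real agent can answer "what did I just say?" coherently.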
with open("logging.txt", "w") as f:
    for message in conversation_history:
        if isinstance(message, HumanMessage):
            f.write(f"You: {message.content}\n")
        elif isinstance(message, AIMessage):
            f.write(f"AI: {message.content}\n")
    f.write("End of convo")
Finally, I log the full conversation to a text file, useful for debugging, reviewing interactions, or building datasets for fine-tuning.
Tools Agent
In this iteration, I built a tool-using agent that can update and save a document using natural language. The system is designed with modularity in mind.
State Design
from typing import Annotated, Sequence, TypedDict
from langchain_core.messages import BaseMessage
from langgraph.graph.message import add_messages

class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], add_messages]
The AgentState holds the message history, both user input and model responses, enabling full conversation context. I annotate it with add_messages so LangGraph knows how to manage and evolve this sequence.
Document Tools
I define two tools:
@tool
def update(content: str) -> str:
    """Updates the document with the provided content."""
    global document_content
    document_content = content
    return f"Document has been successfully updated. Current content: {document_content}"
@tool
def save(filename: str) -> str:
    """Saves the current document to a text file.

    Args:
        filename: Name for the text file.
    """
    global document_content
    if not filename.endswith('.txt'):
        filename = f"{filename}.txt"
    try:
        with open(filename, 'w') as f:
            f.write(document_content)
        print(f"Document has been saved to: {filename}")
        return f"Document has been saved to: {filename}"
    except Exception as e:
        return f"Error saving document: {str(e)}"
update() replaces the current document content, and save() writes the content to a .txt file in the working directory.
Both use a shared global variable, document_content, to simulate an in-memory document. Not production-grade, but good enough for prototyping.
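Stripped of the @tool decorator, the same two functions can be exercised directly, which is a quick way to sanity-check the shared-global pattern before wiring them into the graph (the filenames here are my own):

```python
document_content = ""

def update(content: str) -> str:
    # Replace the shared in-memory document.
    global document_content
    document_content = content
    return f"Document has been successfully updated. Current content: {document_content}"

def save(filename: str) -> str:
    # Write the shared document to a .txt file.
    if not filename.endswith(".txt"):
        filename = f"{filename}.txt"
    try:
        with open(filename, "w") as f:
            f.write(document_content)
        return f"Document has been saved to: {filename}"
    except Exception as e:
        return f"Error saving document: {e}"

print(update("Draft v1"))
```

Calling save("draft") afterwards would write draft.txt next to the script; the agent simply lets the LLM decide when to make these same calls.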
Binding the Model to Tools
model = ChatOllama(model="llama3.1:latest", temperature=0).bind_tools(tools)
Here, I bind my tools to the model so it can autonomously decide when to call update() or save() based on the user’s intent.
Binding the Model with Tools
def our_agent(state: AgentState) -> AgentState:
    system_prompt = SystemMessage(content=f"""
    You are a drafter, a helpful writing assistant. You are going to help the user update and modify documents.
    - If the user wants to update or modify the content, use the 'update' tool with the complete updated content.
    - If the user wants to save and finish, use the 'save' tool.
    - Make sure to always show the current document state after modifications.
    The current document content is: {document_content}
    """)

    if not state['messages']:
        user_input = "I'm ready to help you update the document, what would you like to create?"
        user_message = HumanMessage(content=user_input)
    else:
        user_input = input("\nWhat would you like to do with the document? ")
        print(f"\nUSER: {user_input}")
        user_message = HumanMessage(content=user_input)

    all_messages = [system_prompt] + list(state['messages']) + [user_message]
    response = model.invoke(all_messages)

    print(f"\nAI: {response.content}")
    if hasattr(response, "tool_calls") and response.tool_calls:
        print(f"\nUsing tools: {[tc['name'] for tc in response.tool_calls]}")

    return {"messages": list(state['messages']) + [user_message, response]}
This function runs the LLM, feeding it both the current message history and a system prompt that guides behavior. I prompt it to act as a “drafter”: a helpful assistant for modifying and saving documents.
Importantly, I embed the current document_content in the prompt so the model always has visibility into the latest state. Then I ask the user what they want to do next and append the response to the conversation.
Conversational Flow Control
def should_continue(state: AgentState) -> str:
    ...
This conditional edge checks if the most recent tool response includes a confirmation that the document was saved. If so, we end the graph; otherwise, we loop back to the agent node for more interaction.
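The body is elided above; here is one plausible implementation matching that description. The message-type check and the "saved" substring test are my assumptions, not necessarily the author's exact code:

```python
def should_continue(state: dict) -> str:
    """Route back to the agent until a tool reports the document was saved."""
    for message in reversed(state["messages"]):
        # Find the most recent tool result in the history.
        if getattr(message, "type", None) == "tool":
            if "saved" in message.content.lower():
                return "end"
            break
    return "continue"
```

The returned label ("continue" or "end") is exactly what the conditional-edge mapping in the next section dispatches on.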
LangGraph Structure
graph = StateGraph(AgentState)
graph.add_node("agent", our_agent)
graph.add_node("tools", ToolNode(tools))
graph.set_entry_point("agent")
graph.add_edge("agent", "tools")
graph.add_conditional_edges(
    "tools",
    should_continue,
    {
        "continue": "agent",
        "end": END
    },
)
app = graph.compile()
I define a simple looped graph:
Start at the agent
Flow to the tools
Conditionally loop or end depending on tool output
I used ToolNode(tools) to automatically handle tool execution logic, including feeding return values back into the message stream.
Finally, we run the agent:
def run_doc_agent():
    print("\n === DRAFTER ===")
    state = {"messages": []}
    for step in app.stream(state, stream_mode="values"):
        if "messages" in step:
            print_messages(step['messages'])
    print("\n === DRAFTER END ===")

if __name__ == "__main__":
    run_doc_agent()
TL;DR 🥱
🟢 Stateless Agent (Single-Turn)
Uses ChatOllama with a minimal LangGraph flow.
Accepts user input and returns one-shot responses.
No conversation history is maintained.
🟡 Stateful Conversational Agent
Tracks full chat history using HumanMessage and AIMessage.
Enables context-aware replies across multiple turns.
Persists the conversation in memory and logs it to a file.
🔵 Tool-Using Agent for Document Editing
Introduces tools (update, save) and binds them to the LLM.
Implements a multi-turn loop with tool invocation and flow control.
Acts as a writing assistant that can modify and save documents via natural language.
Final Thoughts 🧠
LangGraph isn’t just a new abstraction on top of LangChain; it’s a shift in how you architect reasoning. When you start modeling your LLM interactions as graphs of stateful logic, things click differently. You get finer control, better structure, and room for complex flows that vanilla chains just can’t manage.
Whether you're building simple conversational agents or complex tool-using assistants, LangGraph gives you a powerful mental model, one where thinking like a graph actually helps you think like an agent.
Thanks for Reading 💌
If you’ve made it this far, thank you! I hope this gave you not just a working intro to LangGraph, but a better way to think through agent design.
Got feedback? Questions? Ideas? Drop them in the comments or hit me up; I’d love to see what others build with this.
Written by

Rishee Panchal
I’m a computer science student exploring AI and machine learning through hands-on projects and critical thinking. My work spans from experimenting with LLMs and transformers to building practical AI tools. I believe in learning by doing, questioning assumptions, and sharing insights along the way.