In-Depth Comparison: Workflow Control with LangGraph and CrewAI


Choosing between CrewAI and LangGraph for multi-agent orchestration isn’t trivial, especially for developers balancing rapid prototyping with future scalability.
Personally, I gravitate toward CrewAI for its straightforward setup and developer-friendly abstractions. It lets you build production-ready agent flows in minutes, staying focused on business logic rather than complex orchestration architecture. That ease of use and quick iteration cadence make CrewAI a great fit for many projects.
That said, LangGraph has quickly become the go-to framework for large, stateful projects in production, thanks to its expressive graph-based DSL and wide adoption among major tech players. It offers granular control over state, dependencies, and memory, which is often necessary for complex or long-running workflows.
Absolute Control Over Agent Steps: LangGraph vs. CrewAI
One of LangGraph’s undeniable strengths is the absolute control it offers over every single step executed by agents in the workflow. Thanks to its explicit graph-based architecture, you precisely define nodes, edges, and state transitions. This granular orchestration lets you monitor, intervene, and debug the entire flow of execution with full visibility.
In contrast, CrewAI’s abstraction via agents and tasks simplifies setup and speeds development, but sometimes this comes at the cost of less direct visibility and control over internal step execution. The orchestration logic is more implicit and "black-boxed" inside the Crew abstractions, which can make fine-grained intervention more challenging.
That said, with the introduction of Flows in CrewAI—explicitly defined workflows coordinating agents and tasks—you can reclaim a similar level of control and transparency over execution. The Flows feature lets you model sequences, conditions, and branching explicitly, reducing the "black box" effect while still keeping CrewAI’s ease of use.
The Java Verifier examples below provide a clear demonstration of this: while LangGraph exposes every node and edge explicitly, CrewAI Flows allow similarly controlled stepwise execution and monitoring inside a more developer-friendly environment.
Example: The Java Verifier
To illustrate the difference in abstraction and workflow style between CrewAI and LangGraph, here is a minimal “Verifier” agent implementation in both frameworks. The goal of this verifier is to validate, correct, and optimize small Java code snippets. The workflow is designed with a loop: it first checks the code for syntax errors. If an error is found, it attempts to correct the code and then re-enters the verification step. This "verify-correct" loop continues until the code is deemed syntactically correct. Only then does the workflow proceed to the final step of optimizing the code for readability and performance. You can find all the code shown in this post in this GitHub repo (Java-Verifier-LangGraph-CrewAI).
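Stripped of any framework, the control flow is just a loop. The sketch below uses trivial stand-in functions in place of the LLM-backed agents implemented later, purely to show the shape of the verify-correct-optimize cycle.
# Framework-agnostic sketch of the verify-correct-optimize loop used in this post.
# The three helpers are trivial stand-ins for the LLM-backed agents shown below.
def is_syntactically_correct(code: str) -> bool:
    return code.count("(") == code.count(")")  # stand-in check, not a real parser

def correct_syntax(code: str) -> str:
    return code + ")"  # stand-in fix

def optimize(code: str) -> str:
    return code.strip()  # stand-in optimization

def verify_correct_optimize(code: str) -> str:
    while not is_syntactically_correct(code):  # analyzer step
        code = correct_syntax(code)            # corrector step
    return optimize(code)                      # optimizer step

print(verify_correct_optimize('System.out.println("Hi"'))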
1. CrewAI: Java Verifier with Flow
(This image has been generated using the functionality at the end of the provided Python code)
CrewAI Flows allow you to define a multi-agent workflow in a class-based structure that feels familiar to developers. You declare agents and then use decorators (@start, @router, @listen) to map out the flow. This structure keeps the logic contained and easy to read. For this particular example, we chose to orchestrate the individual agents directly within the Flow rather than using a complete Crew. This approach gives us more direct control over each agent's step, which was ideal for this straightforward, cyclical process. It's important to note that in more complex scenarios, CrewAI's flexibility allows you to orchestrate entire Crews of agents, leveraging their collective intelligence for a different level of abstraction, as the short sketch below suggests.
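To make that alternative concrete, here is a minimal, hypothetical sketch of what a Crew-based version of the verifier could look like: agents paired with Tasks and executed sequentially by a Crew. Note that a plain sequential Crew does not express the verify-correct loop as naturally, which is exactly why this post orchestrates the agents directly inside a Flow.
from crewai import Agent, Crew, Process, Task

# Hypothetical Crew-based variant (not the implementation used in this post).
analyzer = Agent(
    role="Java code syntax analyzer",
    goal="Reply exactly with 'correct' or 'incorrect' based on syntax.",
    backstory="You are a meticulous and precise code quality tool.",
)
corrector = Agent(
    role="Java code corrector",
    goal="Return syntactically valid Java code only, with no explanation and no markdown.",
    backstory="You are an expert Java programmer who fixes code errors.",
)

analyze_task = Task(
    description="Check the syntax of this Java code: {code}",
    expected_output="Exactly 'correct' or 'incorrect'",
    agent=analyzer,
)
correct_task = Task(
    description="If the Java code is incorrect, return a fixed version: {code}",
    expected_output="Valid Java code only",
    agent=corrector,
)

crew = Crew(
    agents=[analyzer, corrector],
    tasks=[analyze_task, correct_task],
    process=Process.sequential,
)

# result = crew.kickoff(inputs={"code": 'System.out.println("Hi"'})
The Flow-based implementation used in the rest of this post follows.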
from dotenv import load_dotenv
from pydantic import BaseModel

from crewai.agent import Agent
from crewai.flow.flow import Flow, listen, start, router

load_dotenv()


class CodeValidationState(BaseModel):
    code: str = ""
    syntax_status: str = ""


class JavaCodeValidationFlow(Flow[CodeValidationState]):
    analyzer_agent = Agent(
        role="Java code syntax analyzer",
        goal="Receive Java code and reply exactly with 'correct' or 'incorrect' based on syntax.",
        backstory="You are a meticulous and precise code quality tool."
    )
    corrector_agent = Agent(
        role="Java code corrector",
        goal="Receive syntactically incorrect Java code and return corrected code with valid syntax only. Return only Java code with no explanation and no markdown",
        backstory="You are an expert Java programmer who fixes code errors.",
        verbose=True
    )
    optimizer_agent = Agent(
        role="Java code optimizer",
        goal="Receive syntactically correct Java code and optimize it for readability and performance without changing logic. Return only Java compilable code with no explanation and no markdown",
        backstory="You are a senior developer specializing in code optimization."
    )

    @start("try_again")
    def start_analysis(self):
        """Starts the workflow, sends the code to the analyzer."""
        print(f"Starting code analysis...\n {self.state.code}")
        result = self.analyzer_agent.kickoff(self.state.code)
        syntax_status = result.raw
        self.state.syntax_status = syntax_status
        print(f"Analyzer result: {syntax_status}")

    @router(start_analysis)
    def conditional_next_step(self):
        """Decides whether to proceed with correction or optimization."""
        syntax_status = self.state.syntax_status
        print(f"conditional_next_step {syntax_status}")
        if syntax_status == "correct":
            return "correct"
        else:
            return "incorrect"

    @router("incorrect")
    def correct_code_step(self):
        """Performs code correction."""
        result = self.corrector_agent.kickoff(self.state.code)
        print(f"Corrector: code corrected.\n {result.raw}")
        self.state.code = result.raw
        return "try_again"

    @listen("correct")
    def optimize_code_step(self):
        """Performs code optimization."""
        result = self.optimizer_agent.kickoff(self.state.code)
        self.state.code = result.raw
        print("Optimizer: optimization finished.")
        print("\nWorkflow complete. Final optimized code:")
        print("---" * 20)
        print(self.state.code)
        print("---" * 20)


# Example usage of the workflow
def run_code_validation_flow():
    # Initial Java code with a syntax error (missing closing parenthesis and semicolon)
    initial_java_code = """
    public class Test {
        public static void main(String[] args) {
            System.out.println("Hello, World!"
        }
    }
    """
    flow = JavaCodeValidationFlow()
    print("Starting CrewAI Java code validation flow...\n")
    # Start the workflow and get the final result
    flow.kickoff(inputs={"code": initial_java_code})
    return flow


if __name__ == "__main__":
    flow = run_code_validation_flow()
    flow.plot('my_flow_plot')
Mental Model: In CrewAI, you first define the roles and goals of your agents. Then you use a Flow to orchestrate them through a series of steps. The flow itself feels like a regular class with methods, where decorators handle the routing logic. This keeps the logic contained and aligned with a developer's class-based thinking. Note that we are using CrewAI's default LLM here, which at the time of writing is “gpt-4o-mini“.
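If you prefer not to depend on the default, you can pin the model explicitly on each agent. A minimal sketch, assuming OPENAI_API_KEY is already set in the environment:
# Minimal sketch: pinning an explicit model on an agent instead of relying
# on CrewAI's default LLM (assumes OPENAI_API_KEY is set in the environment).
from crewai import LLM, Agent

analyzer_agent = Agent(
    role="Java code syntax analyzer",
    goal="Reply exactly with 'correct' or 'incorrect' based on syntax.",
    backstory="You are a meticulous and precise code quality tool.",
    llm=LLM(model="gpt-4o-mini", temperature=0),
)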
2. LangGraph: Java Verifier with Graph
(This image has been generated using the functionality at the end of the provided Python code)
LangGraph's approach is more explicit and foundational. You define a graph with nodes and edges, where each node is a function. The state of the workflow is passed explicitly between nodes, and conditional edges are used to manage the flow.
from dotenv import load_dotenv
from typing import TypedDict
from langgraph.graph import StateGraph, END
from langgraph.types import Command
from langchain_openai import ChatOpenAI

# Ensure your OpenAI API Key is set in environment variable OPENAI_API_KEY
load_dotenv()


# Define your state schema
class CodeState(TypedDict):
    code: str
    is_correct: bool


class JavaVerifierGraph:
    def __init__(self):
        # Initialize LangChain chat model wrapper (reads OPENAI_API_KEY from env)
        self.llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
        workflow = StateGraph(CodeState)
        workflow.add_node("start_analysis_node", self.start_analysis_node)
        workflow.add_node("optimize_code_node", self.optimize_code_node)
        workflow.add_node("correct_code_node", self.correct_code_node)
        # Define entry and transitions
        workflow.set_entry_point("start_analysis_node")
        # Conditional edges from analyzer node
        workflow.add_conditional_edges(
            "start_analysis_node",
            self.conditional_next_node,
            {"optimize_code_node": "optimize_code_node",
             "correct_code_node": "correct_code_node"},
        )
        # Cycle back from corrector to analyzer
        workflow.add_edge("correct_code_node", "start_analysis_node")
        self.graph = workflow.compile()

    def llm_check_code_syntax(self, code: str) -> bool:
        prompt = (
            "Check if the following Java code is syntactically correct. "
            "Reply ONLY with 'correct' or 'incorrect'.\n\n"
            f"Java code:\n{code}\n\nAnswer:"
        )
        result = self.llm.invoke(prompt).content
        return result.strip().lower() == "correct"

    def llm_correct_code(self, code: str) -> str:
        prompt = (
            "Correct the following Java code to be syntactically valid. "
            "Provide only the corrected code without explanations.\n\n"
            f"Java code:\n{code}\n\nCorrected Java code:"
        )
        return self.llm.invoke(prompt).content.strip()

    def llm_optimize_code(self, code: str) -> str:
        prompt = (
            "Optimize the following syntactically correct Java code "
            "for readability and performance. Return only Java code with no explanation and no markdown.\n\n"
            f"Java code:\n{code}\n\nOptimized Java code:"
        )
        return self.llm.invoke(prompt).content.strip()

    # The analyzer node sets 'is_correct' in state to guide conditional branching,
    # no explicit goto inside the node function.
    def start_analysis_node(self, state: CodeState):
        code = state["code"]
        print("Analyzer: Checking syntax via LLM...")
        is_correct = self.llm_check_code_syntax(code)
        print(f"Analyzer: Code is {'correct' if is_correct else 'incorrect'}")
        return Command(update={"is_correct": is_correct})

    # Optimizer node - performs optimization and ends the workflow.
    def optimize_code_node(self, state: CodeState):
        code = state["code"]
        print("Optimizer: Optimizing code via LLM...")
        optimized_code = self.llm_optimize_code(code)
        print("Optimizer: Optimization complete. Ending workflow.")
        return Command(update={"code": optimized_code}, goto=END)

    # Corrector node - fixes syntax errors; no goto, flow cycles via explicit edge.
    def correct_code_node(self, state: CodeState):
        code = state["code"]
        print("Corrector: Correcting code via LLM...")
        corrected_code = self.llm_correct_code(code)
        print("Corrector: Correction complete. Returning to analyzer.")
        return Command(update={"code": corrected_code})

    # Branching function to decide next node after analyzer based on 'is_correct'.
    def conditional_next_node(self, state: CodeState):
        return "optimize_code_node" if state.get("is_correct") else "correct_code_node"

    def run_workflow(self, code):
        initial_state = {
            "code": code,
            "is_correct": True  # 'analyzer' will set this value at the beginning
        }
        print("Starting the execution of the workflow with invoke()...\n")
        # Run the workflow and get the final state
        final_state = self.graph.invoke(initial_state)
        print("\nWorkflow completed!")
        print("-" * 50)
        print("Final Java code (corrected and optimized):\n")
        print(final_state["code"])
        print("-" * 50)
        print(f"Final 'is_correct' state: {final_state['is_correct']}")
        return final_state

    def get_graph(self):
        return self.graph.get_graph()


def run_code_validation_flow():
    initial_code = """
    public class Test {
        public static void main(String[] args) {
            System.out.println("Hello World!")
        }
    }
    """
    graph = JavaVerifierGraph()
    graph.run_workflow(initial_code)
    return graph


if __name__ == "__main__":
    graph = run_code_validation_flow()
    # This command generates the PNG data as bytes, which works outside of Jupyter
    mermaid_png_bytes = graph.get_graph().draw_mermaid_png()
    # Specify the filename for the output image
    output_filename = "langgraph_workflow.png"
    try:
        # Open the file in binary write mode ('wb')
        with open(output_filename, "wb") as f:
            # Write the image bytes to the file
            f.write(mermaid_png_bytes)
        print(f"Diagram saved to {output_filename}")
    except Exception as e:
        print(f"Error during diagram generation: {e}")
Mental Model: With LangGraph, you build the workflow by explicitly defining a state schema, followed by individual nodes (functions). You then connect these nodes with edges. The orchestration logic is literally a graph you construct, giving you a powerful, albeit more complex, low-level tool for state management.
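Because the graph is explicit, you are not limited to invoke(); you can also watch the workflow advance node by node. A minimal sketch, reusing the JavaVerifierGraph class defined above (the initial snippet is just an arbitrary example):
# Minimal sketch: stream node-by-node updates instead of waiting for the final state.
verifier = JavaVerifierGraph()
initial_state = {"code": "public class T { }", "is_correct": True}

# With the default stream mode, each event maps a node name to the state
# update that node produced, which is useful for monitoring and debugging.
for event in verifier.graph.stream(initial_state):
    for node_name, update in event.items():
        print(f"step: {node_name} -> {update}")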
Conclusion
I appreciate CrewAI's lean, quick-to-launch workflows and developer experience, which speed up prototyping and smaller-scope projects. The Flow feature adds a layer of explicit control that bridges the gap with more low-level frameworks. Meanwhile, LangGraph stands out for heavyweight, scalable AI pipelines in enterprise-grade systems that need full control over graph orchestration.
To gain a more complete understanding of each framework's workflow implementation capabilities, other critical aspects must be considered. These include state management, the ability to handle state persistence (saving and resuming a workflow), and the ease of implementing a Human-in-the-Loop (HITL) scenario. These factors are crucial for designing robust, long-running processes that require durability or external intervention.
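As a hint of what this looks like on the LangGraph side, here is a minimal sketch of checkpointing plus a human-in-the-loop pause. It assumes workflow is the StateGraph built in JavaVerifierGraph.__init__ above and compiles it with a checkpointer instead of the plain compile() call:
from langgraph.checkpoint.memory import MemorySaver

# In-memory checkpointer; swap in a database-backed one for real durability.
checkpointer = MemorySaver()
graph = workflow.compile(
    checkpointer=checkpointer,
    interrupt_before=["optimize_code_node"],  # pause for human review before optimizing
)

# The thread_id identifies this run so it can be resumed later.
config = {"configurable": {"thread_id": "java-verifier-1"}}
graph.invoke({"code": "public class T { }", "is_correct": True}, config)

# ... a human inspects (and possibly edits) the checkpointed state here ...

graph.invoke(None, config)  # invoking with None resumes from the saved checkpoint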
For the time being, CrewAI fits my development style best, but I watch LangGraph’s evolution with interest as it appears to be solidifying its place as the go-to framework for complex AI agent orchestration at scale.