2 Simple Tips for Making Your LangGraph Agent Production-Ready


LangGraph is a great framework that lets you develop agents while keeping you in control of prompts and routing. If you have finished building your agent in Python notebooks and want to put it in production, there are some things you need to take care of. Prototyping and running your first agent is straightforward, but minor oversights during production deployment can lead to major consequences.
So here are two simple tips to help you make your LangGraph agent production ready.
1. Make your Agent Async
By default, agents are synchronous (sync), which means they cannot process multiple requests simultaneously. LangGraph lets you make your agent asynchronous (async) using the 'async' keyword and 'ainvoke'. The 'async' keyword marks a node, which is just a Python function, as asynchronous; 'ainvoke' is simply 'invoke' with an 'a' prefix, and is used to invoke your agent in async mode.
So if you are going to put your agent in production, making it async is the first thing you should do. If you don't want your agent to stall, and you want your system to handle multiple requests, make your graph async.
Making your graph async is really simple: mark every node in your graph as async with the 'async' keyword.
async def get_price(state: AgentState) -> AgentState:
    pass
Then, when invoking the graph, use ainvoke (the asynchronous invoke) instead of invoke, along with await. Don't forget the await!
Please look at the example below:
await graph.ainvoke(input={
    "messages": [HumanMessage(content="Find me the price of a Mustang GT")]
})
Just by making this simple change to your graph, you will have a better agent.
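To see the payoff, here is a minimal sketch of why async nodes keep the system responsive. It uses plain asyncio with a stubbed node standing in for a real LangGraph node (the node name, latency, and return values are illustrative), so it runs without any dependencies:

```python
import asyncio
import time

# Hypothetical stub node: simulates an I/O-bound step (e.g. an LLM or API call).
async def get_price(state: dict) -> dict:
    await asyncio.sleep(0.2)  # pretend network latency
    return {**state, "price": 55000}

async def handle_request(query: str) -> dict:
    return await get_price({"query": query})

async def main() -> float:
    start = time.perf_counter()
    # Three requests overlap on the event loop instead of running back to back.
    await asyncio.gather(
        handle_request("Mustang GT"),
        handle_request("Civic"),
        handle_request("Model 3"),
    )
    return time.perf_counter() - start

elapsed = asyncio.run(main())
print(f"3 requests in {elapsed:.2f}s")
```

With sync nodes the three requests would take roughly the sum of their latencies; with async nodes they complete in roughly the time of the slowest one, because the event loop interleaves the waits.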
2. Use Runtime Configurations
Runtime configuration is another powerful feature of LangGraph that gives you a lot of flexibility over the graph.
If you want to adjust certain settings of the graph while it is running, consider using the RunnableConfig feature. It lets you modify parameters of your LangGraph agent on the fly, adapting its behavior to different operational conditions without stopping or restarting the system. This is particularly useful in dynamic environments where the demands on your agent change frequently. For example, you might want to specify which LLM or system prompt to use at runtime, without polluting the graph state with these parameters. Look at the example below to see how it works.
# 1. Define a config and pass the values that you want to send at runtime
config = {
    "configurable": {
        "model": model,
        "tone": "happy"
    }
}
await graph.ainvoke({}, config)
By making use of runtime configurations, you can modify the agent's settings on the fly, making it more dynamic.
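Inside the graph, a node can read these values from the config it receives. The sketch below shows the pattern with a plain function so it runs standalone; the node name, keys, and model names are illustrative, not LangGraph APIs:

```python
# Sketch of a node that reads runtime configuration.
# In a real graph the config would be passed in by the framework;
# here we call the function directly so the snippet is self-contained.
def generate_reply(state: dict, config: dict) -> dict:
    configurable = config.get("configurable", {})
    model = configurable.get("model", "default-model")  # fallback if not set
    tone = configurable.get("tone", "neutral")
    # A real node would call the chosen model here; we just record the choice.
    return {**state, "reply": f"[{model}/{tone}] answering: {state['question']}"}

config = {"configurable": {"model": "gpt-4o", "tone": "happy"}}
result = generate_reply({"question": "Price of a Mustang GT?"}, config)
print(result["reply"])
```

Because the model and tone live in the config rather than in the graph state, two concurrent requests can use different settings without interfering with each other.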
These two straightforward tips can significantly enhance your agent's performance, and there is room for more: scenario-specific adjustments can address particular challenges and further refine the agent's capabilities, tailoring it to the unique demands of different situations.
Thanks for reading!
Written by

GuruGen
Hi, I'm Vikrant, a passionate software developer with a strong belief in the power of teamwork, empathy, and getting things done. With a background in building scalable and efficient backend systems, I've had the privilege of working with a range of technologies that excite me - from Express.js, Flask, and Django to React, PostGres, and MongoDB Atlas. My experience with Azure has given me a solid understanding of cloud infrastructure, and I've had a blast building and deploying applications that make a real impact. But what really gets me going is exploring the frontiers of AI and machine learning. I've had the opportunity to work on some amazing projects, including building advanced RAG applications, fine-tuning models like Phi2 on custom data, and even dabbling in web3 and Ethereum. For me, it's not just about writing code - it's about understanding the people and problems I'm trying to solve. I believe that empathy is the unsung hero of software development, and I strive to bring a human touch to everything I do. Whether it's collaborating with colleagues, communicating with clients, or simply trying to make sense of complex technical concepts, I'm always looking for ways to make technology more accessible and more meaningful. If you're looking for a team player who is passionate about building innovative solutions, let's connect! I'm always up for a chat about the latest tech trends, or just about life in general.