Discover End-to-End Tracing on Google ADK with LangDB


Before diving into the code, watch this 2-minute video to see a complete demonstration of what we'll be building. You'll learn how to integrate LangDB tracing into the Google ADK Travel Concierge sample with no code changes.
In this quick demo you’ll see:
- How to install and initialize the pylangdb[adk] package (a quick-start sketch follows this list).
- The single line of code that enables full observability for every ADK agent and tool.
- Running a sample prompt like “Find me flights from JFK to London”.
- Inspecting your workflow in the LangDB AI Gateway dashboard, including:
  - Threads view for step-by-step conversation logs.
  - Traces view for Gantt charts, cost & token breakdowns, and dependency graphs.
- Drilling into any agent or tool (like the planning_agent on Claude 3 Sonnet) for full observability.
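If you would rather skim than watch, the whole setup fits in a couple of lines. Here is a minimal quick-start sketch based on the steps shown in the demo (the package extra name is taken from the video):
# Quick-start sketch. Install the package first: pip install "pylangdb[adk]"
from pylangdb.adk import init

# Call init() before importing any google.adk modules so tracing is wired in.
init()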
In this tutorial, we'll walk through the architecture of a sophisticated Travel Concierge agent built with Google's Agent Development Kit (ADK). We'll explore how to leverage the LangDB AI Gateway to use any LLM—from OpenAI, Google, Anthropic, and more—and harness powerful features like Virtual Models and Virtual MCPs (Model Context Protocol) to create a dynamic, observable, and easily maintainable agent system.
Our travel_concierge agent is not just a single agent; it's a hierarchy of specialized sub-agents that handle everything from vacation inspiration to booking and in-trip assistance. Here's a look at the overall architecture:
This project is based on the official Google ADK Travel Concierge sample and has been modified to showcase the integration with the LangDB AI Gateway.
You can find the complete source code for this agent on GitHub: LangDB Samples
The Magic Behind the Curtain: pylangdb.adk.init()
First, let's talk about the most important line of code in this integration:
# travel_concierge/agent.py
from pylangdb.adk import init
# Initialize LangDB *before* importing any ADK modules.
init()
This single function call is the key to unlocking the LangDB AI Gateway's observability features. By placing it at the very top of our script, before any google.adk modules are imported, we enable automatic instrumentation for the entire agent framework.
Here’s what init() does automatically:
- Discovers Agents: It recursively finds all agent and sub-agent definitions within your project.
- Patches Runtimes: It automatically patches the necessary ADK components to emit traces.
- Links Sessions: It intelligently links all the interactions—from the root agent's initial processing to the deepest sub-agent and tool calls—into a single, cohesive trace in LangDB.
This "zero-instrumentation" approach means you get complete, end-to-end visibility into your agent's complex workflows just by adding that one line of code.
The Architecture: Root Agent and Sub-Agents
Our travel_concierge is a hierarchical agent. At the top is the root_agent, which acts as a smart router or orchestrator. Its job is not to answer queries directly, but to delegate them to a specialized sub-agent.
Here's its actual definition:
# travel_concierge/agent.py
root_agent = Agent(
    model="openai/gpt-4.1",
    name="root_agent",
    description="A Travel Concierge using the services of multiple sub-agents",
    instruction=prompt.ROOT_AGENT_INSTR,
    sub_agents=[
        inspiration_agent,
        planning_agent,
        # ... and other sub-agents
    ],
    # ...
)
As you can see, it uses a standard model ("openai/gpt-4.1") and has a list of sub_agents. It doesn't have any tools of its own. The real power comes from the sub-agents.
Dynamic Tooling with Virtual Models and Virtual MCPs
A LangDB Virtual Model is a powerful abstraction that decouples your agent's code from its runtime configuration. It acts as a pointer to a configuration that you can manage entirely from the LangDB UI.
This is where the Model Context Protocol (MCP) comes in. MCP is a standard that allows language models to interact with external tools and services in a uniform way. However, managing connections to multiple MCP-enabled tools can be complex.
The LangDB AI Gateway simplifies this with Virtual MCP Servers. A Virtual MCP is a single, managed endpoint that you configure in the UI. It can bundle multiple tools (like Google Maps, Tavily Search, or your own custom APIs), handle their authentication securely, and lock them to specific versions.
You then connect this Virtual MCP to your agent's Virtual Model. This is how you can dynamically grant new capabilities to your agents without changing a single line of code.
Here are all the virtual models for our project, as seen in the LangDB AI Gateway dashboard. You can see the inspiration_agent, google_search_agent, and planning_agent all configured here, ready to be assigned to our agents.
Example: The inspiration_agent and Google Maps
Let's look at our inspiration_agent. It needs access to location data to give travel ideas. Instead of hardcoding a Google Maps MCP, we use a Virtual Model.
Here's the agent's definition:
# travel_concierge/sub_agents/inspiration/agent.py
inspiration_agent = Agent(
    model="langdb/inspiration_agent_z73m3wmd",
    name="inspiration_agent",
    description="A travel inspiration agent...",
    # ...
)
Notice its model is langdb/inspiration_agent_z73m3wmd. In the LangDB AI Gateway UI, we've configured this virtual model to use a Virtual MCP server that has the Google Maps API attached as a tool. Now, when the inspiration_agent is active, it can seamlessly query Google Maps, even though the tool isn't explicitly listed in its code.
Example: Grounding with Google Search
We also have a specialized agent tool for web searches, google_search_grounding.
# travel_concierge/tools/search.py
_search_agent = Agent(
    model="langdb/google_search_agent_hsz7lf9q",
    name="google_search_grounding",
    description="An agent providing Google-search grounding capability",
    # ... instruction ...
)
google_search_grounding = AgentTool(agent=_search_agent)
Just like our inspiration_agent, the _search_agent uses a virtual model, langdb/google_search_agent_hsz7lf9q. We've attached a Virtual MCP server that provides the Tavily Search tool to this model in LangDB.
Example: The planning_agent for Flights and Hotels
Finally, let's look at the planning_agent, which handles the core booking tasks.
# travel_concierge/sub_agents/planning/agent.py
planning_agent = Agent(
    model="langdb/planning_agent_w1l8sygt",
    name="planning_agent",
    description="Helps users with travel planning...",
    # ...
)
This agent's virtual model, langdb/planning_agent_w1l8sygt, is connected to a Virtual MCP that provides an Airbnb search tool. This allows the agent to handle complex booking-related queries by leveraging this external service, all without having the tool logic hardcoded in the agent's definition.
The Flow: From Query to Answer
1. A user asks the travel_concierge: “What are some good museums to visit in Paris?”
2. The root_agent receives the query and, based on its instructions, delegates the task to the inspiration_agent.
3. The inspiration_agent is activated. Its virtual model configuration is loaded from the LangDB AI Gateway.
4. The agent now knows it has access to the Google Maps tool (via its Virtual MCP).
5. It uses the tool to find museums in Paris and provides a list to the user.
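If you want to drive this flow programmatically rather than through the ADK CLI or web UI, a rough sketch looks like the following. The Runner and session APIs are taken from the standard Google ADK Python package, but exact signatures vary between ADK versions (in newer releases create_session is async), so treat this as an outline rather than copy-paste code:
# Sketch: running the museum query against the instrumented root agent.
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.genai import types

from travel_concierge.agent import root_agent  # init() already ran inside this module

session_service = InMemorySessionService()
session_service.create_session(
    app_name="travel_concierge", user_id="demo-user", session_id="demo-session"
)

runner = Runner(
    agent=root_agent, app_name="travel_concierge", session_service=session_service
)

query = types.Content(
    role="user",
    parts=[types.Part(text="What are some good museums to visit in Paris?")],
)

# Each yielded event (delegation, model call, tool call) is also exported to
# LangDB as part of one linked trace.
for event in runner.run(user_id="demo-user", session_id="demo-session", new_message=query):
    if event.is_final_response() and event.content and event.content.parts:
        print(event.content.parts[0].text)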
All of these steps—the delegation, the model calls, the tool usage—are automatically captured as traces in the LangDB AI Gateway, giving us complete observability into our agent's behavior.
You can explore a complete, shareable trace of a conversation with this agent here: https://app.langdb.ai/sharing/threads/8425e068-77de-4f41-8aa9-d1111fc7d2b7
When you open the trace, you'll see a detailed breakdown of the entire workflow. This includes:
- A Gantt chart visualizing the sequence and duration of each agent and tool invocation.
- Cost and token counts for every LLM call, helping you monitor usage and optimize performance.
- Detailed input/output payloads for each step, allowing you to inspect the exact data being passed between components.
- A dependency graph showing how agents and tools are interconnected, making it easy to debug complex interactions.
Conclusion
By combining Google ADK with the LangDB AI Gateway's virtual models and MCPs, we've built a travel_concierge agent that is:
- Modular: Each sub-agent has a specific responsibility.
- Dynamic: We can change models and grant new tools on the fly from the LangDB UI without redeploying our agent.
- Observable: We get detailed traces of every interaction, making debugging and performance analysis easy.
This architecture allows for rapid development and iteration, enabling us to build truly powerful and intelligent agentic systems.
Ready to build your own? Check out the LangDB AI Gateway documentation to get started.