The Observe Downlink: June 2025


As AI applications and LLM-powered workflows proliferate, observability grows more complex, particularly around monitoring and debugging API interactions with external AI services. AI apps also have the potential to produce even more telemetry, and teams need to retain the raw signal required to understand and troubleshoot emerging AI behaviors, especially when root causes are non-deterministic or not yet well understood.
To help our users navigate the AI application landscape, we’ve recently launched LLM Observability in our product. Observe is well suited to the challenge presented by this new wave of apps because of our ability to unify and correlate disparate telemetry, using our knowledge graph to provide context that is useful for troubleshooting AI workflows spanning multiple services. Observe’s data lake-based architecture lets teams store all the data from their AI apps cost-effectively, without compromise.
I invite you to learn more about how Observe can be your observability partner as you build for AI. Check out LLM Observability below! 👇
-Jeremy Burton
What’s New: A RedMonk Conversation: Jacob Leverich on Observe & the Future of Observability
Join Observe CPO Jacob Leverich in this MonkCast episode, where he discusses how Observe reimagined observability by betting early on cloud object storage and data lake-style architectures. Learn how emerging standards like OpenTelemetry and AI-assisted query tools are reshaping how teams manage scale, cost, and incident response.
What’s New: LLM Observability (Public Beta)
Observe's LLM Explorer tracks AI application performance, token costs, and reasoning chains. Teams can trace through multi-step agent workflows, analyze prompt inputs/outputs, monitor spending across LLM providers, and debug infrastructure issues affecting AI services. Reach out to your account team to get started with LLM Observability.
Learn More about LLM Observability
What’s New: Troubleshooting Database Performance in Observe APM
Observe APM now includes a performance breakdown chart showing time spent in services and databases, plus a waterfall trace view. Teams can identify latency spikes, trace them through microservice chains to specific endpoints, and examine database queries to pinpoint inefficient patterns like N+1 queries, all from OpenTelemetry data.
Learn More about Troubleshooting Database Performance
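The N+1 pattern called out above is worth illustrating. The sketch below is a generic Python example using an in-memory SQLite database (the schema, names, and data are hypothetical, and this is not Observe's API): the loop issues one query per user, which shows up in a trace as N near-identical database spans, while a single JOIN collapses them into one round trip.

```python
import sqlite3

# Hypothetical schema for illustration; Observe surfaces these query
# patterns from OpenTelemetry span data, it does not run the queries.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users VALUES (1, 'ada'), (2, 'grace');
    INSERT INTO orders VALUES (1, 1, 9.5), (2, 1, 12.0), (3, 2, 7.25);
""")

# N+1 pattern: 1 query for the users, then one more query per user.
users = conn.execute("SELECT id, name FROM users").fetchall()
for user_id, name in users:
    orders = conn.execute(
        "SELECT total FROM orders WHERE user_id = ?", (user_id,)
    ).fetchall()  # executed N times -- N near-identical spans in the trace

# Fix: a single JOIN replaces the N per-user queries with one query.
rows = conn.execute("""
    SELECT u.name, COUNT(o.id), SUM(o.total)
    FROM users u LEFT JOIN orders o ON o.user_id = u.id
    GROUP BY u.id
""").fetchall()
print(rows)  # [('ada', 2, 21.5), ('grace', 1, 7.25)]
```

In a waterfall trace view, the loop version appears as a fan of repeated short database spans under one endpoint, which is exactly the signature that makes N+1s easy to spot.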
What’s New: Scheduled Monitoring (Private Preview)
Scheduled monitoring provides precise control over when and how frequently your monitors are evaluated. It's ideal for monitoring data that isn't generated continuously but arrives at predictable intervals, such as from backup jobs, data exports, or batch processes. Configure monitor schedules using either cron syntax or our intuitive visual UI. Contact your account team to join the private preview.
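For reference, schedules of this kind are typically written in standard five-field cron syntax: minute, hour, day of month, month, day of week. The toy matcher below is purely illustrative (it handles only plain numbers and `*` wildcards, and the expression and backup job are assumptions, not a specific Observe monitor):

```python
from datetime import datetime

# "15 2 * * *" fires daily at 02:15, e.g. shortly after a nightly backup.
SCHEDULE = "15 2 * * *"

def matches(expr: str, ts: datetime) -> bool:
    """Minimal cron matcher: supports '*' and plain numbers only
    (no ranges, lists, or step values)."""
    fields = expr.split()
    # Cron field order: minute, hour, day-of-month, month, day-of-week
    # (0 = Sunday, hence isoweekday() % 7).
    values = [ts.minute, ts.hour, ts.day, ts.month, ts.isoweekday() % 7]
    return all(f == "*" or int(f) == v for f, v in zip(fields, values))

print(matches(SCHEDULE, datetime(2025, 6, 1, 2, 15)))   # True
print(matches(SCHEDULE, datetime(2025, 6, 1, 14, 15)))  # False
```

Evaluating a monitor only at moments like these avoids false "no data" alerts in the long gaps between batch runs.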