Monitoring Snowflake — Usage, Access & Credit Optimization

Sriram Krishnan

In our last blog, we learned how to define governance with tags and policies. But a strategy is only effective if you can verify and enforce it. This is where monitoring comes in. In a platform like Snowflake, where usage and cost scale with demand, observability is essential for control.

Effective monitoring follows a diagnostic workflow: Monitor high-level trends, Identify specific anomalies, Diagnose the root cause, and Optimize. This guide explores how to use Snowflake's native tools for each step, so you can run a high-performance, governed, and cost-efficient platform.

Usage Visibility: Monitor What’s Running and Why

Snowflake offers two key sources for monitoring:

  • INFORMATION_SCHEMA views: Real-time metadata scoped to a database.

  • ACCOUNT_USAGE views: Historical metadata scoped to the entire account (with some latency).
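A quick way to feel the difference: the real-time INFORMATION_SCHEMA variant of query history is a table function scoped to the current database, while the ACCOUNT_USAGE version is a plain view with latency (typically up to 45 minutes for query history). A minimal real-time check looks like this:

SELECT query_id, query_text, total_elapsed_time
FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY(RESULT_LIMIT => 100))
ORDER BY start_time DESC;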

Use them together to get a high-level view of your environment. A great starting point is tracking warehouse credit usage to understand where your budget is going.

Example: Track Warehouse Credit Usage

SELECT 
  warehouse_name,
  SUM(credits_used) AS total_credits
FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
GROUP BY warehouse_name
ORDER BY total_credits DESC;

This helps you monitor trends and identify which warehouses are the most expensive, beginning the "Identify" phase of our workflow.

Query Monitoring: Identify and Diagnose Bottlenecks

The QUERY_HISTORY views are your primary tool for investigating performance issues.

Example: Identify Top Long-Running Queries

SELECT 
  query_text,
  user_name,
  warehouse_name,
  execution_status,
  total_elapsed_time/1000 AS duration_sec
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE start_time >= DATEADD('day', -1, CURRENT_TIMESTAMP())
ORDER BY duration_sec DESC
LIMIT 10;

Diagnosing with the Query Profile
Once you identify a problematic query, the next step is to diagnose its execution plan. The Query Profile, available in Snowsight for every query, is the most powerful tool for this. It provides a visual breakdown of every operator in the query, allowing you to spot common bottlenecks like:

  • Spilling: When data overflows from memory to local or remote storage, dramatically slowing down the query.

  • "Exploding" Joins: Incorrect join logic that creates a massive number of intermediate rows.

  • Full Table Scans: Queries that fail to prune data effectively.
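You don't have to open the Query Profile one query at a time to find these patterns. The same ACCOUNT_USAGE.QUERY_HISTORY view exposes columns such as BYTES_SPILLED_TO_LOCAL_STORAGE, BYTES_SPILLED_TO_REMOTE_STORAGE, PARTITIONS_SCANNED, and PARTITIONS_TOTAL, so you can hunt for spilling and poor pruning in bulk. A sketch (the thresholds are illustrative — tune them to your workload):

SELECT
  query_id,
  user_name,
  warehouse_name,
  bytes_spilled_to_local_storage,
  bytes_spilled_to_remote_storage,
  partitions_scanned,
  partitions_total
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE start_time >= DATEADD('day', -1, CURRENT_TIMESTAMP())
  AND (bytes_spilled_to_remote_storage > 0       -- spilling
       OR (partitions_total > 1000
           AND partitions_scanned = partitions_total)) -- near-full scans
ORDER BY bytes_spilled_to_remote_storage DESC
LIMIT 10;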

Access Monitoring: Verifying Your Governance Strategy

Tracking object-level access is how you verify that the tags and policies from our previous chapter are working. The ACCESS_HISTORY view shows who accessed what, when, and which columns were involved.

Example: Verify Who Accessed Tagged PII Columns

SELECT
  user_name,
  direct_objects_accessed[0]:"objectName"::STRING AS table_name,
  direct_objects_accessed[0]:"columns" AS columns_accessed,
  query_start_time
FROM SNOWFLAKE.ACCOUNT_USAGE.ACCESS_HISTORY
WHERE direct_objects_accessed[0]:"columns"::STRING ILIKE '%EMAIL%' -- or use tags to find sensitive columns
  AND query_start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP());

This is how you close the loop on governance—by actively monitoring access to sensitive data and ensuring your policies are being enforced as expected.
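To tie this back to the previous chapter, you can resolve which columns actually carry a sensitivity tag via the TAG_REFERENCES view, instead of matching on column names. (The tag name 'PII' here is an assumption — substitute whatever tag you defined.)

SELECT
  object_database,
  object_schema,
  object_name,
  column_name,
  tag_value
FROM SNOWFLAKE.ACCOUNT_USAGE.TAG_REFERENCES
WHERE tag_name = 'PII'
  AND column_name IS NOT NULL;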

Cost Optimization: The "Optimize" Phase in Practice

The WAREHOUSE_METERING_HISTORY view is critical for optimization. A key technique is to distinguish between compute and cloud services credits.

Diagnosing Credit Burn: Compute vs. Cloud Services

SELECT 
  warehouse_name,
  SUM(credits_used_compute) AS compute_credits,
  SUM(credits_used_cloud_services) AS cloud_services_credits
FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY
WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY warehouse_name
ORDER BY SUM(credits_used) DESC;

  • High Compute Credits: Often solved by resizing warehouses, tuning queries, or using materialized views.

  • High Cloud Services Credits: Indicates inefficient operations that don't use warehouse compute. Common culprits include queries that are too complex to compile, frequent file listings in stages, or excessive use of SHOW commands. This requires fixing the underlying query or process.
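To narrow down which workloads drive cloud services spend, ACCOUNT_USAGE.QUERY_HISTORY also records CREDITS_USED_CLOUD_SERVICES per query, so you can aggregate by query type and spot, say, a flood of SHOW or COPY commands:

SELECT
  query_type,
  COUNT(*) AS query_count,
  SUM(credits_used_cloud_services) AS cloud_services_credits
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
GROUP BY query_type
ORDER BY cloud_services_credits DESC;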

Proactive Controls: Resource Monitors
Don't just track costs—prevent them. Resource Monitors are your primary tool for avoiding budget overruns. You can set credit quotas for warehouses or your entire account and configure them to automatically suspend a warehouse or send alerts when a threshold is reached. This is a non-negotiable best practice for any production environment.
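As a sketch (the monitor and warehouse names are placeholders, and the quota should match your actual budget), a monthly monitor that notifies at 80% and suspends at 100% looks like this:

CREATE RESOURCE MONITOR monthly_budget
  WITH CREDIT_QUOTA = 100
  FREQUENCY = MONTHLY
  START_TIMESTAMP = IMMEDIATELY
  TRIGGERS
    ON 80 PERCENT DO NOTIFY
    ON 100 PERCENT DO SUSPEND;

ALTER WAREHOUSE analytics_wh SET RESOURCE_MONITOR = monthly_budget;

Note that SUSPEND waits for running queries to finish; SUSPEND_IMMEDIATE cancels them.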

Monitoring Best Practices: The Optimization Loop

Think of monitoring not as a one-time task, but as a continuous Optimization Loop: Monitor dashboards -> Identify anomalies -> Diagnose with query/access history -> Optimize queries and controls -> Repeat.

  • Build Snowsight Dashboards: Visualize warehouse activity, query performance, and user access.

  • Tag PII and Cost Centers: Use tags to track and report on sensitive or high-cost assets.

  • Alert on Anomalies: Use Resource Monitors or third-party tools to flag runaway queries or suspicious access.

  • Review Regularly: Schedule monthly reviews of usage, credit burn, and access activity.

Final Thoughts

Monitoring in Snowflake is powerful—but only if you use it. Every query, warehouse, and access event is already being logged. The views are there. The insights are yours to extract. The best Snowflake environments aren’t just fast and governed—they’re visible, predictable, and cost-efficient.
