Dynatrace Dashboards: A Complete Guide to Classic & Modern Dashboards


Introduction
Dynatrace Dashboards are powerful tools that provide real-time monitoring, data visualization, and analytics for IT environments. Whether you're tracking system health, identifying performance bottlenecks, or optimizing cloud resources, dashboards offer an intuitive way to organize and analyze critical data.
Dynatrace offers two types of dashboards:
Classic Dashboards – Easy to use, tile-based monitoring with predefined templates.
Modern Dashboards – Advanced analytics using Dynatrace Query Language (DQL) for custom visualization and data exploration.
In this guide, we will explore both Classic and Modern Dashboards, their differences, use cases, and how to set them up for effective monitoring.
Classic Dashboards in Dynatrace
What is a Classic Dashboard?
Classic Dashboards provide a simple tile-based interface to visualize monitoring data. They are great for basic system monitoring, alerts, and quick insights without requiring complex configurations.
Key Features of Classic Dashboards
✅ Predefined Tiles – Drag-and-drop widgets displaying logs, charts, and metrics.
✅ Simple Setup – Easily configurable without coding.
✅ Timeframe Selection – Adjust the dashboard’s timeframe for detailed insights.
✅ Customizable Views – Users can arrange tiles to focus on specific metrics.
How to create a classic dashboard
Log in to Dynatrace and navigate to Dashboards Classic.
Click "Create Dashboard" to start a new dashboard.
Enter a Name for the dashboard (e.g., "System Health Overview").
Add Tiles to the dashboard:
Select predefined monitoring metrics such as CPU Usage, Memory Utilization, Network Latency.
Arrange tiles for better visualization.
Set Filters & Timeframe based on monitoring needs.
Click "Done" to save the dashboard.
Best Practices for Classic Dashboards
✅ Keep dashboards clean and focused: only include essential metrics.
✅ Use alert tiles (red/yellow indicators) to monitor critical failures.
✅ Ensure the time range selection matches your monitoring requirements.
Demo:
Setting Up Classic Dashboards for Azure Virtual Machine Monitoring (Using OneAgent)
If you have an Azure Virtual Machine (VM) running and have connected Dynatrace OneAgent, your next step is to set up a Classic Dashboard for easy monitoring.
Here’s a quick step-by-step guide you can follow to visualize VM performance metrics inside Dynatrace Classic Dashboards.
Step 1: Verify OneAgent Installation
Before creating a dashboard, ensure that Dynatrace OneAgent is successfully installed on your Azure VM:
Log in to Dynatrace and go to Hosts.
Check if your Azure VM appears in the list.
Confirm that monitoring data is being captured (CPU, memory, network usage).
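If you’d rather confirm this with a query than by clicking through the UI, a quick DQL check (run in a Notebook or a dashboard tile; DQL is covered in detail later in this guide) lists the hosts Dynatrace currently monitors. This is a minimal sketch using standard Grail entity attributes; if your Azure VM’s hostname appears in the results, OneAgent is reporting data:
fetch dt.entity.host
| fields entity.name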
Step 2: Create a Classic Dashboard
Navigate to Dashboards Classic.
Click "Create Dashboard" and enter a name (e.g., "Azure VM Monitoring").
Add Monitoring Tiles for essential metrics:
Host Overview Tile → Displays VM health status.
CPU Usage Tile → Helps track processor load.
Memory Consumption Tile → Monitors RAM usage.
Network Activity Tile → Shows inbound/outbound traffic.
Note: You just have to drag and drop each tile into place.
Adjust the timeframe (e.g., last 6 hours, last 24 hours).
Click "Save".
Here’s a brief explanation of the main tile types available in Classic Dashboards:
Host Health → Monitors CPU, memory, disk, and process activity on your VM.
Network Metrics → Tracks latency, packet loss, and data flow between services.
Network Status → Shows connection availability and performance issues.
VMware → Displays CPU/memory usage & virtual machine activity for VMware environments.
AWS → Monitors AWS cloud resources like EC2 instances & Lambda functions.
Application Health → Monitors the performance, availability, and errors of applications, ensuring they run smoothly.
Third-Party Monitoring → Tracks external services (APIs, cloud providers, databases) used within your application to detect outages or slow responses.
Problems → Dynatrace automatically detects issues like crashes, high latency, or resource failures and provides root cause analysis.
Top Web Applications → Displays the most accessed applications in your environment, helping prioritize monitoring efforts.
Service Health → Shows the current status and stability of backend services, such as APIs and databases.
HTTP Monitor → Tracks web requests and responses, helping identify slow or failing endpoints in your web applications.
User Action → Captures user interactions (clicks, page views, transactions) to analyze real user experience and behavior.
Custom Application → Allows monitoring of non-standard or specialized apps, enabling full visibility even for unique software.
Smartscape → A visual map of your entire IT environment, showing connections between applications, services, and infrastructure in real-time.
Modern Dashboards in Dynatrace
Modern Dashboards in Dynatrace offer advanced monitoring, deeper analytics, and custom data visualization using Dynatrace Query Language (DQL). These dashboards provide more flexibility than Classic Dashboards, allowing users to fine-tune queries and build interactive monitoring views.
Key Features of Modern Dashboards
✅ DQL Query Support – Enables custom metric filtering and analytics.
✅ Rich Visualizations – Supports graphs, tables, charts, and maps.
✅ Custom Filters & Variables – Allows dynamic data adjustments.
✅ Advanced Monitoring – Ideal for deep-dive performance analysis.
Understanding Dynatrace Query Language (DQL)
Dynatrace Query Language (DQL) is a powerful tool for querying and analyzing data within Dynatrace. It uses a pipeline-based data-flow model, allowing users to chain commands for filtering, summarizing, and visualizing data. Below, we explore basic and graph queries to demonstrate the versatility of DQL.
Basic DQL Queries
1. Fetch Logs
Retrieve all logs from the last hour:
fetch logs, from:now()-1h
2. Filter Logs
Filter logs where the source ends with "ssh.service":
fetch logs | filter endsWith(log.source, "ssh.service")
3. Parse Logs
Extract specific fields from log content using Dynatrace Pattern Language matchers (LD matches arbitrary line data, IPADDR an IP address, LONG and INT numeric values, SPACE whitespace):
fetch logs | parse content, "LD IPADDR:ip ':' LONG:payload SPACE LD 'HTTP_STATUS' SPACE INT:http_status"
4. Summarize Data
Count the total number of logs:
fetch logs | summarize count()
5. Group and Aggregate
Group logs by host and calculate the total payload:
fetch logs | summarize total_payload = sum(payload), by:{dt.entity.host}
6. Sort Results
Sort logs by the number of failed requests in descending order:
fetch logs | summarize failedRequests = countIf(http_status >= 400), by:{dt.entity.host} | sort failedRequests desc
7. Add Fields
Add a new field to convert payload to megabytes:
fetch logs | summarize total_payload = sum(payload) | fieldsAdd total_payload_MB = total_payload / 1000000
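Because DQL is pipeline-based, these stages compose naturally: each command’s output feeds the next. As an illustrative sketch (the parse pattern and field names follow the earlier examples and are assumptions about your log format), here is a single query that filters, parses, aggregates, converts units, and sorts:
fetch logs, from:now()-1h
| filter endsWith(log.source, "ssh.service")
| parse content, "LD IPADDR:ip ':' LONG:payload"
| summarize total_payload = sum(payload), requests = count(), by:{dt.entity.host}
| fieldsAdd total_payload_MB = total_payload / 1000000
| sort total_payload_MB desc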
Graph Queries for Modern Dashboards
Graph queries let you visualize data trends and patterns, making it easier to identify anomalies and insights. The examples below are tailored to the log data discussed above; in practice, you would usually prepend a filter stage (as in the earlier ssh.service example) so that only the relevant failure events are counted.
1. Line Graph: Authentication Failures Over Time
Visualize the number of authentication failures over time:
fetch logs | summarize authFailures = count(), by:{bin(timestamp, 1h)}
2. Bar Chart: Failures by Host
Show the number of authentication failures grouped by host:
fetch logs | summarize authFailures = count(), by:{dt.entity.host}
3. Pie Chart: Failures by Azure Region
Display the distribution of authentication failures across Azure regions:
fetch logs | summarize count(), by:{dt.entity.azure_region}
4. Stacked Area Chart: Failures by Process Group
Compare authentication failures across different process groups over time:
fetch logs | summarize authFailures = count(), by:{dt.entity.process_group, bin(timestamp, 1h)}
5. Heatmap: Failures by Host and Time
Create a heatmap to show authentication failures by host and time:
fetch logs | summarize count(), by:{dt.entity.host, bin(timestamp, 1h)}
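For time-based charts like the line graph and stacked area chart above, DQL also provides the makeTimeseries command, which produces evenly spaced series that chart tiles can render directly. A minimal sketch (the filter phrase is an assumption about how failures appear in your log content):
fetch logs
| filter matchesPhrase(content, "authentication failure")
| makeTimeseries failures = count(), by:{dt.entity.host}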
Building a Modern Dashboard
To leverage these queries, follow these steps to create a Modern Dashboard:
Set Up a New Dashboard: Navigate to Dynatrace and create a new Modern Dashboard.
Integrate DQL Queries: Add tiles to the dashboard and input the DQL queries for data filtering and visualization.
Add Visualizations: Choose graph types (line, bar, pie, etc.) to represent the data effectively.
Customize Filters and Variables: Apply dynamic filters to adjust data in real-time (see the sketch after these steps).
Save and Monitor: Save the dashboard and use it for continuous monitoring and analysis.
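For step 4, dashboard variables can be referenced inside a tile’s DQL query with a $ prefix. As a minimal sketch, assuming you have defined a dashboard variable named Host:
fetch logs
| filter dt.entity.host == $Host
| summarize count()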
Security Insights from Log Analysis
The log data analyzed in this blog highlights potential security threats, such as repeated failed SSH login attempts and suspicious IP addresses. Modern Dashboards enable proactive monitoring and response to such incidents. Key recommendations include:
Blocking suspicious IPs.
Enforcing strong password policies.
Enabling multi-factor authentication (MFA).
Setting up alerts for unusual login behavior.
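To make the last recommendation concrete, a query along these lines can surface source IPs with repeated failed SSH logins. This is a hedged sketch: the "Failed password" phrase and the parse pattern assume a typical Linux sshd log line, and the threshold of 10 attempts is an arbitrary starting point:
fetch logs, from:now()-1d
| filter endsWith(log.source, "ssh.service")
| filter matchesPhrase(content, "Failed password")
| parse content, "LD 'from' SPACE IPADDR:ip"
| summarize attempts = count(), by:{ip}
| filter attempts > 10
| sort attempts desc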
Dashboard Implementation in Action
Modern Dashboards offer a variety of tiles for dynamic monitoring:
Logs: Analyze raw data, like connection attempts, with DQL for filtering and graphing.
Metrics: Monitor system performance (CPU, memory, etc.) for trends and optimization.
Events: Track real-time incidents, such as service failures, to prevent escalation.
Problem: Visualize detected issues for faster troubleshooting.
DQL: Process custom queries directly for deep analytics.
Code: Add custom scripts for tailored data handling.
These tiles empower users to create interactive and insightful dashboards tailored to their needs.
Now click on the Metrics tile to chart system performance data.
Next, add a DQL tile with the following query:
fetch logs
| summarize authFailures = count(), by:{dt.entity.process_group, bin(timestamp, 1h)}
This query counts the total number of authentication failures (authFailures) in the logs and groups them by process group (dt.entity.process_group) and hourly intervals (bin(timestamp, 1h)). It provides insight into when and where the failures occur, broken down by time and process group.
fetch logs, from:now()-1d
| summarize count(), by:{dt.entity.host, bin(timestamp, 1h)}
Short Explanation: This query fetches log data from the last 24 hours (from:now()-1d), counts the total number of logs (count()), and groups them by host (dt.entity.host) and hourly intervals (bin(timestamp, 1h)). It helps analyze log activity on an hourly basis for each host.
fetch logs
| summarize count(), by:{dt.entity.azure_region}
Short Explanation: This query retrieves log data, counts the total number of logs (count()), and groups them by Azure region (dt.entity.azure_region). It helps analyze log activity based on the geographical regions of your Azure infrastructure.