Building Interactive Dashboards with Dynamic Data Using Streamlit - Part 1

In today's data-driven world, the ability to harness real-time data is crucial. Whether tracking key metrics, monitoring operational efficiency, or understanding trends as they unfold, having a dynamic dashboard that updates live can make all the difference. Our goal is to create interactive dashboards that not only visualize data but also provide real-time insights as the data changes. This capability can bring tremendous value to various sectors, enhancing decision-making and performance.

Using Streamlit, we are building a solution that lets users engage with their data interactively, with views that update in real time and give a live pulse on the most important metrics. Here's why this is a game changer and how we plan to implement it.

Why Real-Time Dashboards Matter

Imagine having access to a dashboard that reflects the most up-to-date information on any process, operation, or activity—no delays, no manual intervention. Whether monitoring trends or key performance indicators (KPIs), our solution enables you to:

  • Act in real-time: React instantly to emerging trends or issues.

  • Optimize performance: Spot inefficiencies or bottlenecks quickly.

  • Make informed decisions: Access insights from live data, rather than waiting for periodic reports.

Streamlit offers a perfect platform for us to build such solutions quickly and efficiently, ensuring flexibility to adapt to any scenario.

How We Build Real-Time Interactive Dashboards

Step 1: Setting Up the Dashboard Framework

We start by creating an intuitive Streamlit dashboard with a well-structured layout. This allows for seamless interaction, easy navigation, and future expansion. At the core, the dashboard’s entry point lays the foundation for various visualizations and metrics.

import streamlit as st

# Entry point for the dashboard
def app():
    st.title("Real-Time Interactive Dashboard")

app()

Step 2: Simulating Real-Time Data Feeds

In cases where live data is not yet available, we simulate real-time updates using the existing dataset. This helps demonstrate the power of real-time dashboards. By simulating a live data feed, we ensure that the dashboard updates continuously, mimicking how the final product will behave when connected to an actual live data source (from sensors, logs, or external systems).

Here’s how we simulate real-time data:

import pandas as pd
import numpy as np
import time

def simulate_data():
    # One snapshot of random metric values, stamped with the current time
    return {
        'metric1': np.random.randint(100, 500),
        'metric2': np.random.randint(50, 100),
        'time': pd.Timestamp.now()
    }

for i in range(200):  # Simulate for 200 seconds
    data = simulate_data()
    print(data)  # Placeholder: Step 3 wires this into the dashboard
    time.sleep(1)  # Simulating a 1-second delay

Step 3: Auto-Refreshing Components for Dynamic Data

To ensure the dashboard refreshes seamlessly with new data, we leverage Streamlit’s placeholder functionality. By using st.empty(), we create a container that holds all the visual elements like charts, metrics, and KPIs. As new data becomes available, the container dynamically updates the contents, ensuring the most recent information is displayed in real-time.

import streamlit as st
import numpy as np
import time

# Placeholder for dynamic updates
placeholder = st.empty()

# Simulating real-time updates
for i in range(200):  # Simulate for 200 seconds
    with placeholder.container():
        st.write("### Live Data Feed")
        st.metric("Metric 1", np.random.randint(100, 500))
        st.metric("Metric 2", np.random.randint(50, 100))
        st.metric("Revenue", np.random.randint(1000, 5000))

    time.sleep(1)  # Simulating a 1-second delay

In this example, the dashboard refreshes its key metrics automatically every second, providing a live view of the data. The same placeholder pattern extends to charts, as the sketch below shows.
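
Here is a minimal sketch of the chart case (the column names and random values are illustrative assumptions, not part of the original example): the placeholder holds a line chart that is redrawn as readings accumulate.

import streamlit as st
import pandas as pd
import numpy as np
import time

placeholder = st.empty()
rows = []  # Running history of simulated readings

for i in range(200):  # Simulate for 200 seconds
    rows.append({
        'metric1': np.random.randint(100, 500),
        'metric2': np.random.randint(50, 100),
    })

    with placeholder.container():
        st.write("### Live Trend")
        st.line_chart(pd.DataFrame(rows))  # Redrawn with each new row

    time.sleep(1)  # Simulating a 1-second delay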


Caching for Optimal Performance

When dealing with large datasets or frequent data updates, it’s important to optimize performance. Streamlit has a built-in caching system that prevents redundant data fetching, ensuring the dashboard runs smoothly even with large datasets or frequent updates.

Using the @st.experimental_memo decorator (renamed @st.cache_data in Streamlit 1.18 and later), we store data in the cache after the first load. Instead of repeatedly fetching the data, we retrieve it from the cache, speeding up the process and making the experience more efficient.

import streamlit as st
import pandas as pd

@st.experimental_memo  # On Streamlit >= 1.18, use @st.cache_data
def load_data():
    # Runs once; subsequent calls return the cached result
    data = pd.read_csv('large_dataset.csv')
    return data

This ensures faster loading times and reduced computational overhead when using large datasets or frequent real-time updates.
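
As a usage sketch building on load_data() above (the "value" column and slider bounds are hypothetical, chosen only for illustration), the cached frame can back interactive widgets without re-reading the file:

df = load_data()  # First call reads the CSV; later reruns hit the cache

# Widget changes re-run the script, but the data stays cached, so
# filtering stays fast even on a large file. "value" is a hypothetical
# column name used only for this example.
threshold = st.slider("Minimum value", 0, 100, 50)
st.dataframe(df[df["value"] >= threshold])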

A few reference links to help you see how this works:

Superstore Dashboard

Creating Google Sheets with Python and Streamlit

Part 2 of this blog will cover deploying the dashboard on a local server.

Written by Vanshika Nagarajan