Building a Data Mart in Amazon Redshift: A Practical Guide

Aaron Rose
7 min read

Problem

Teams using Amazon Redshift often struggle with a paradox: they know Redshift excels at analytical workloads and can support focused data marts, but when it's time to build one, they face decision paralysis. Should they create a full enterprise data warehouse? Use flat tables? Which Redshift features actually matter? The lack of clear, practical guidance leads to projects that either become overly complex with unnecessary ETL pipelines and modeling layers, or overly simplistic with denormalized tables that become unmaintainable as data grows.

Clarifying the Issue

A data mart is a subject-focused subset of your organization's data designed to serve specific analytical needs—think sales performance, customer behavior, or financial reporting. Unlike a comprehensive data warehouse that tries to model everything, a data mart is deliberately narrow in scope, making it faster to build, easier to understand, and more performant to query.

Amazon Redshift provides excellent building blocks for data marts through its columnar storage, parallel processing, and integration with the AWS ecosystem. However, AWS documentation focuses on individual features rather than providing an opinionated approach to assembling these pieces into a cohesive data mart architecture. This leaves teams to figure out the optimal combination of star schemas, distribution strategies, materialized views, and data loading patterns on their own.

The result is often either analysis paralysis or solutions that don't leverage Redshift's strengths effectively.

Why It Matters

A well-designed Redshift data mart delivers significant business value by providing fast, reliable access to analytical data without the complexity of a full data warehouse. Teams can answer critical business questions in seconds rather than minutes, data analysts spend less time wrestling with complex joins across unrelated domains, and the focused scope makes the system easier to maintain and evolve.

Conversely, a poorly designed data mart becomes a bottleneck that slows down decision-making. Queries run slowly due to poor distribution key choices, maintenance becomes expensive because of unnecessary complexity, and data quality issues multiply when the underlying structure doesn't match the analytical use cases.

Getting the data mart design right from the start prevents these problems and creates a foundation that can scale with your business needs.

Key Terms

  • Data Mart: A focused analytical database serving a specific subject area or business function

  • Star Schema: A dimensional model with a central fact table connected to dimension tables

  • Fact Table: Contains measurable business events (sales, clicks, transactions)

  • Dimension Table: Contains descriptive attributes (customer details, product information, dates)

  • Distribution Key (DISTKEY): Column used to distribute rows across Redshift nodes for parallel processing

  • Sort Key (SORTKEY): Column(s) used to physically order data for faster query performance

  • Materialized View: Pre-computed query results stored as a table for improved performance

  • COPY Command: Redshift's optimized method for loading data from S3, EMR, or other sources

  • Compression Encoding: Algorithm applied to columns to reduce storage space and improve I/O performance

Steps at a Glance

  1. Define your data mart's subject area and key metrics

  2. Design a star schema with proper fact and dimension tables

  3. Choose optimal distribution and sort keys for Redshift performance

  4. Set up your Redshift cluster with appropriate node types

  5. Implement data loading processes using COPY commands

  6. Create materialized views for frequently accessed aggregations

  7. Establish monitoring and maintenance procedures

  8. Connect business intelligence tools and enable users

Detailed Steps

Step 1: Define Your Subject Area

Start by clearly defining what your data mart will cover. Resist the temptation to include everything—a focused scope is a feature, not a limitation. Examples of good data mart subjects include sales performance, marketing campaign effectiveness, or customer support metrics.

Document your key metrics upfront. For a sales data mart, these might include revenue by region, product performance trends, and sales rep productivity. Having clear metrics helps guide schema design and ensures you're building something users actually need.

Step 2: Design Your Star Schema

Create a star schema with one central fact table surrounded by dimension tables. For a sales data mart:

-- Central fact table
CREATE TABLE fact_sales (
    date_key INTEGER NOT NULL,
    product_key INTEGER NOT NULL,
    customer_key INTEGER NOT NULL,
    sales_rep_key INTEGER NOT NULL,
    quantity INTEGER NOT NULL,
    unit_price DECIMAL(10,2) NOT NULL,
    total_amount DECIMAL(12,2) NOT NULL,
    discount_amount DECIMAL(10,2) DEFAULT 0
);

-- Supporting dimension tables
CREATE TABLE dim_date (
    date_key INTEGER NOT NULL PRIMARY KEY,
    full_date DATE NOT NULL,
    day_of_week VARCHAR(10),
    month_name VARCHAR(10),
    quarter INTEGER,
    fiscal_year INTEGER
);

CREATE TABLE dim_product (
    product_key INTEGER NOT NULL PRIMARY KEY,
    product_name VARCHAR(100) NOT NULL,
    category VARCHAR(50),
    brand VARCHAR(50),
    unit_cost DECIMAL(10,2)
);

Keep dimensions relatively small and avoid unnecessary hierarchies that complicate queries.
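The fact table above also references customer and sales rep keys, so those dimensions need tables too. A minimal sketch following the same pattern (the column choices here are illustrative assumptions, not prescriptive):

CREATE TABLE dim_customer (
    customer_key INTEGER NOT NULL PRIMARY KEY,
    customer_name VARCHAR(100) NOT NULL,
    segment VARCHAR(50),
    region VARCHAR(50)
);

CREATE TABLE dim_sales_rep (
    sales_rep_key INTEGER NOT NULL PRIMARY KEY,
    rep_name VARCHAR(100) NOT NULL,
    territory VARCHAR(50)
);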

Step 3: Optimize for Redshift Performance

Choose distribution keys based on your most common join patterns; they can be declared in the CREATE TABLE statement or changed later with ALTER TABLE. For the sales fact table, distribute on the key you join most frequently:

-- Distribute the fact table on customer_key if customer analysis is the primary use case
ALTER TABLE fact_sales ALTER DISTKEY customer_key;

-- Set a compound sort key for common filter patterns
ALTER TABLE fact_sales ALTER COMPOUND SORTKEY (date_key, product_key);
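To confirm these choices avoid data shuffling, inspect the query plan. A quick check using the dim_customer table sketched in Step 2: join steps labeled DS_DIST_NONE (or DS_DIST_ALL_NONE) mean no rows are redistributed at query time.

EXPLAIN
SELECT c.region, SUM(f.total_amount)
FROM fact_sales f
JOIN dim_customer c ON f.customer_key = c.customer_key
GROUP BY c.region;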

For dimension tables, consider using DISTSTYLE ALL if they're small (under 1 million rows) to ensure they're replicated on all nodes:

ALTER TABLE dim_product ALTER DISTSTYLE ALL;
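Distribution choices are easy to sanity-check once data is loaded. One way is the SVV_TABLE_INFO system view, where skew_rows reports the ratio of rows on the largest slice to the smallest:

SELECT "table", diststyle, skew_rows, unsorted
FROM svv_table_info
WHERE "table" IN ('fact_sales', 'dim_product');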

Apply appropriate compression encoding to reduce storage costs and improve I/O performance:

-- Note: product_name belongs to dim_product, not the fact table
ALTER TABLE dim_product ALTER COLUMN product_name ENCODE lzo;
ALTER TABLE fact_sales ALTER COLUMN quantity ENCODE delta;
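If you're unsure which encodings suit your data, Redshift can recommend them. ANALYZE COMPRESSION samples the table and reports a suggested encoding per column; it takes an exclusive table lock while running, so schedule it off-hours:

ANALYZE COMPRESSION fact_sales;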

Step 4: Set Up Your Redshift Environment

Choose the right node type for your workload. For most data marts, RA3 nodes provide good performance with the flexibility to scale storage independently:

# Clusters are created through the console, API, or AWS CLI rather than SQL.
# Example two-node RA3 cluster (identifiers and credentials are placeholders):
aws redshift create-cluster \
    --cluster-identifier sales-datamart \
    --node-type ra3.xlplus \
    --number-of-nodes 2 \
    --master-username admin \
    --master-user-password 'ChooseAStrongPassword1' \
    --db-name salesmart
For development or smaller data marts, consider Redshift Serverless to avoid managing cluster capacity.

Configure security groups and IAM roles to enable secure data access from your ETL processes and BI tools.

Step 5: Implement Data Loading

Use COPY commands for efficient bulk data loading from S3:

-- Bulk load fact data from Parquet files in S3
COPY fact_sales FROM 's3://your-bucket/sales-data/'
IAM_ROLE 'arn:aws:iam::account:role/RedshiftCopyRole'
FORMAT AS PARQUET
COMPUPDATE ON
STATUPDATE ON;

For ongoing data loads, implement incremental loading strategies:

-- Load only new data by pointing COPY at the latest S3 partition prefix
COPY fact_sales FROM 's3://your-bucket/sales-data/year=2024/month=08/'
IAM_ROLE 'arn:aws:iam::account:role/RedshiftCopyRole'
FORMAT AS PARQUET;
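Incremental COPY alone can double-load rows if a file is redelivered. A common guard is a staging table: COPY into staging, delete any overlapping rows from the fact table, then append. A minimal sketch of that pattern (the staging table name and the join keys used for deduplication are illustrative):

BEGIN;

CREATE TEMP TABLE stage_sales (LIKE fact_sales);

COPY stage_sales FROM 's3://your-bucket/sales-data/year=2024/month=08/'
IAM_ROLE 'arn:aws:iam::account:role/RedshiftCopyRole'
FORMAT AS PARQUET;

-- Remove any rows we're about to reload, then append the fresh batch
DELETE FROM fact_sales
USING stage_sales s
WHERE fact_sales.date_key = s.date_key
  AND fact_sales.customer_key = s.customer_key
  AND fact_sales.product_key = s.product_key
  AND fact_sales.sales_rep_key = s.sales_rep_key;

INSERT INTO fact_sales SELECT * FROM stage_sales;

COMMIT;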

Set up automated processes using AWS Lambda, Step Functions, or your preferred orchestration tool to run these loads on schedule.
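Whichever tool runs the loads, have it check for rejected rows afterward. STL_LOAD_ERRORS records recent COPY failures (system-table retention is limited to a few days, so check soon after each load):

SELECT starttime, filename, line_number, colname, err_reason
FROM stl_load_errors
ORDER BY starttime DESC
LIMIT 20;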

Step 6: Create Materialized Views

Build materialized views for commonly requested aggregations to improve query performance:

CREATE MATERIALIZED VIEW sales_by_month AS
SELECT 
    d.fiscal_year,
    d.month_name,
    p.category,
    SUM(f.total_amount) as total_sales,
    COUNT(*) as transaction_count
FROM fact_sales f
JOIN dim_date d ON f.date_key = d.date_key
JOIN dim_product p ON f.product_key = p.product_key
GROUP BY d.fiscal_year, d.month_name, p.category;

Schedule regular refreshes of materialized views to keep data current:

REFRESH MATERIALIZED VIEW sales_by_month;
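Once the view exists, analysts query it like any table, and Redshift can also rewrite some qualifying queries against the base tables to use it automatically. (Eligible views can additionally be created with AUTO REFRESH YES to skip manual refreshes.) A simple usage example:

SELECT fiscal_year, category, total_sales
FROM sales_by_month
WHERE fiscal_year = 2024
ORDER BY total_sales DESC;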

Step 7: Establish Monitoring and Maintenance

Set up monitoring for query performance, storage usage, and data freshness. Use CloudWatch metrics to track cluster utilization and set up alerts for unusual patterns.
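CloudWatch covers cluster-level metrics; inside the database, the system tables expose per-query behavior. As a sketch, this surfaces the slowest recent statements via STL_QUERY (again subject to short system-table retention):

SELECT query, starttime,
       DATEDIFF(ms, starttime, endtime) AS elapsed_ms,
       TRIM(querytxt) AS sql_text
FROM stl_query
ORDER BY elapsed_ms DESC
LIMIT 10;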

Implement regular maintenance procedures:

-- Vacuum tables to reclaim space and re-sort data
VACUUM fact_sales;

-- Update table statistics for optimal query planning
ANALYZE fact_sales;

Create procedures to monitor data quality, such as checking for missing dimension keys or unexpected null values in critical fields.
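For example, a referential check for orphaned fact rows (the same pattern applies to each dimension key):

-- Flag fact rows whose product_key has no matching dimension row
SELECT f.product_key, COUNT(*) AS orphan_rows
FROM fact_sales f
LEFT JOIN dim_product p ON f.product_key = p.product_key
WHERE p.product_key IS NULL
GROUP BY f.product_key;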

Step 8: Enable Business Users

Connect your preferred BI tools (Amazon QuickSight, Tableau, Power BI) using JDBC/ODBC connections. Create database users with appropriate permissions for different user types:

-- Create a read-only user for analysts
-- (USAGE on the schema is required; SELECT grants alone are not enough)
CREATE USER analyst_user WITH PASSWORD 'secure_password';
GRANT USAGE ON SCHEMA sales_mart TO analyst_user;
GRANT SELECT ON ALL TABLES IN SCHEMA sales_mart TO analyst_user;

Provide documentation on the data mart structure, available metrics, and example queries to help users get started quickly.
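A starter query in that documentation goes a long way. For example, top product categories for a given fiscal year (the literal 2024 is a placeholder):

SELECT p.category, SUM(f.total_amount) AS revenue
FROM fact_sales f
JOIN dim_date d ON f.date_key = d.date_key
JOIN dim_product p ON f.product_key = p.product_key
WHERE d.fiscal_year = 2024
GROUP BY p.category
ORDER BY revenue DESC;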

Conclusion

Building an effective data mart in Amazon Redshift doesn't require months of planning or complex ETL pipelines. Success comes from focusing on a specific subject area, designing a clean star schema optimized for Redshift's architecture, and leveraging native features like materialized views and efficient data loading.

The key principles are selectivity in scope, thoughtful performance optimization through proper distribution and sort keys, and maintaining simplicity in the overall design. When done right, a Redshift data mart becomes a powerful analytical asset that grows with your business while remaining fast and maintainable.

Start small with one subject area, prove the value, and then expand to additional data marts as needed. This approach builds organizational confidence in the platform while avoiding the complexity trap that derails many data warehouse projects.


Aaron Rose is a software engineer and technology writer at tech-reader.blog.
