Building a Simple AI DAW, Part 2: MCP and Agents

Johannes Naylor

1. Introduction and Recap

In my first post, I showed how a local FFmpeg wrapper could turn AI into a basic audio agent. But that was just a simple demo. The prompting was simplistic, the output was unreliable, the agent frequently got stuck in error states it couldn't recover from, and the quality of the results was subpar. It wasn't very useful outside of being an educational tool.

Since that post's publication, a number of AI DAWs or DAW-like products and startups have emerged. Udio released their new Sessions View UI, Suno acquired WavTool (a browser-based DAW), and Mozart AI and Riff both launched on Product Hunt, the latter even being YC-backed! It's exciting that all these companies seem to be exploring the "co-producer" model of music creation. This surge of activity points to a potential shift in priorities: AI models aren't just generating music wholesale; they're becoming an indispensable part of the production workflow itself. That highlights a need for flexible, robust, and collaborative tools that can seamlessly connect the expressive power and depth of these big AI models with the precise demands of audio engineering.

“We’ve built Sessions to seamlessly integrate into those workflows, making it easier to visualize, edit, and experiment with tracks in one unified place.” - Udio CEO, Andrew Sanchez

Screenshot of Udio's Session View

This post will be a bit different from Part 1 and from my normal technical blog posts in general. I'll call it a semi-technical blog post: it's intended for people interested in the music/tech space, with varying levels of technical skill, rather than being a straightforward tutorial like usual. If you're someone who likes music and is curious about how to make it, this is for you. If you're an engineer who wants the nitty gritty, I've included a Technical Appendix that goes into the details of how I built this (beware, it's all in Rust 🦀).

Or if you’d prefer to go straight into the code: https://github.com/jonaylor89/freqmoda

2. Evolving the Audio Workflow

Traditional DAWs are good stuff. They’ve powered music production for decades. From classic studios to bedroom setups, they’ve enabled remarkable creativity. And as workflows become more distributed and AI-integrated, a new layer of abstraction is emerging.

Instead of relying solely on timeline-based editing or rigid software UIs, we can now think of audio tools as modular, programmable, and composable services. This shift doesn’t replace the old—it builds on it. It allows AI agents and collaborators to work in tandem with artists more fluidly.

“Having worked on creator platforms at TikTok and advising music-tech startups at Abbey Road Red, I’ve seen a recurring pattern: DAWs give artists precision, but they also demand translation. Every creative impulse - ‘make this darker,’ ‘bring out the vocal’ - has to be turned into a series of technical steps. That friction can pull people out of flow” - Pal Chohan

AI-driven workflows are shifting that. Instead of engineering every move, artists can express intent in natural language or quick sketches and let the system interpret. It feels less like operating a machine and more like collaborating with an assistant producer who understands what you mean. The biggest impact is psychological: creators stay in creative mode, not technical mode. The biggest productivity killer for me, for example, is when I get back into a DAW or Blender or whatever and can't remember any of the keyboard shortcuts. That's much less of an issue now, and the activation energy needed to get into a flow state is almost zero.

A screenshot of Mozart AI's tool thinking about how to add a guitar solo to a song

3. A New Architecture: Artists Working with AI Conductors

Modern AI music systems work best when each part of the process has a clearly defined role. At the top is the AI “conductor,” responsible for deciding which tools to use, when to use them, and how to sequence their outputs. The tools themselves function like specialized instruments — one might handle audio processing, another metadata extraction, another mastering — each optimized for a specific task.

And with MCP, there is a shared language between these components. It defines the capabilities of each tool and the precise parameters they accept, allowing the AI to communicate with them in a predictable, structured way. This creates a workflow that is both flexible and robust: artists can describe creative intent in natural language, while engineers have a clear technical interface for execution.

In practice, this architecture bridges the gap between creative direction and technical execution. As an artist, I don't need to translate every idea into a manual sequence of edits; I can express intent, and the AI orchestrates the technical steps. Engineers, meanwhile, can design and improve tools independently, confident they will integrate smoothly into the broader system. The result is a collaborative production environment where human creativity and machine precision reinforce each other, rather than competing for control.

An image of a robot behind a studio board taking commands from human listeners

4. Improvements from Part 1

💡
For the engineers and tech savvy people, there’s a more detailed Technical Appendix at the end of this post, complete with code walkthroughs and architectural insights. You can jump straight there, or continue reading for the high-level overview.

One of the main issues with the previous blog post's toy agent was that it was extremely unreliable. Sometimes it would hallucinate FFmpeg parameters that didn't exist, get confused about where to save temporary files, or accidentally get itself into infinite loops. The root cause is that the agent had too much to do that wasn't part of the actual music editing process: file management, remembering documentation, error correction, etc. My solution was a separate server/tool that automates all of this for the agent, which I call the Streaming Engine. The Streaming Engine is a server that wraps FFmpeg and exposes its functions via an HTTP API, letting the AI call tools over the network with no local setup.

The next major improvement was switching from legacy LangChain-style prompting and output parsing for running tools to the new, fancy Model Context Protocol (MCP). The previous agent was built before MCP became the standard, so it relied on custom ReAct prompting. There are plenty of articles explaining how MCP works, so I won't go into detail here other than to say that the Streaming Engine could be easily connected to Claude, giving it access to a tool definition that looks like:

{
   name: "process_audio",
   description: "Process audio with various effects and transformations",
   inputSchema: {/*the potential input params to the Streaming-Engine*/}
}

The end result is that I could edit audio files directly in Claude Desktop.

A screenshot of Claude Desktop calling the tool

5. Demo

6. The Vision: Modular, Collaborative

Fixing reliability issues and adopting MCP weren't just engineering upgrades; they were steps toward a bigger goal. Every improvement moves us closer to a setup where AI doesn't just edit audio but actively collaborates in the creative process. This is where the conversation shifts from tool design to the broader vision for how these automations can contribute to the music-making process. Take autocorrect, introduced in Microsoft Word 6.0 in 1993, which started as Dean Hachamovitch's script to fix common errors like typing "teh" instead of "the". The stickiest solutions will be the ones that address the quiet inconveniences that slow the creative process down.

“I’m most excited about things that solve the ‘annoying stuff’. Think of a delay compensator that uses a predictor model to anticipate latency. Or a deep learning model that can moderate between LUFS and perceived loudness. I also like the idea of giving models technical data like vocal formants to suggest complementary instrumentals.” - Collette Tibbetts

Royalties

One useful tool would be live embedded royalty tracking. In recent years, great solutions for audio tagging have emerged, such as the open-source project musicnn by Jordi Pons and metadata automation platforms like Musiio, founded by Hazel Savage and acquired by SoundCloud in 2022. If developers in related fields integrated real-time session tracking with contract and rights management, they could build tools that extend directly into the studio workflow, saving significant time on backend royalty attribution. This could help avoid the angry “he said, she said” email chains about who did what during sessions.

“I’d love to see intelligent stem separation combined with mood-based mastering - tell the AI, ‘make this more cinematic and wider,’ and have it reshape the mix dynamically. That’s not just assistive tech; it’s creative direction embodied. That’s exactly the frontier we’re playing with - giving creators flexible, expressive control without forcing them to learn another UI.” - Pal Chohan

Mastering

As AI evolves and speeds up the process of creating music, the number of songs created and released will increase accordingly. As more and more producers rely on prompting to shape their music, they don't necessarily possess a deep technical understanding of audio anymore, which makes it important to fix technical issues behind the scenes autonomously. Only then can producers be certain that their tracks will sound good on all sorts of speaker systems and won't suffer from acoustic issues.

AI mastering can solve this by handling the technical aspects of music production in the background, freeing artists and engineers to focus on the creative parts. Services like Masterchannel have perfected this by developing proprietary AI models that can process a huge number of songs, treating them with the same or better quality as a human engineer—and at a fraction of the cost.

“Just as computational photography allows us to take great pictures on our phones, AI Mastering can iron out technical issues in audio tracks that are not apparent to the creator or that would be too complicated to fix with explicit prompting. We’ve reached a tipping point now in the music industry, where people realize that this is an area where it makes sense to rely on AI’s strong technical capabilities.” - Christian Ringstad Schultz

Speed, Control, and FUN

In the last 50 years, developments in music creation technologies can generally be bucketed into two categories: more efficient ways to create sounds we imagine and love, and granular controls to craft sounds we’ve never heard before. But one thing remains paramount in music-making: FUN. We create because we enjoy finding riffs and rhymes over sounds that give us a dopamine rush and let us express emotions words can’t capture. The future of DAWs will combine speed and control. Today’s AI models — like text-to-song generators — expedite the process, but often lack the nuance to fully tell our story. Cutting-edge, open-source research is providing building blocks for that control: stem splitters to extract individual instruments from full mixes, models like ACE-step for stem-based generation, tools like Synplant to extract and tweak synthesis parameters, and voice swap platforms such as Controlla Voice to simulate any vocal in any style. These nuanced controls, paired with the inspiration of AI-generated sounds, will make music creation more personal than ever.

Eventually, anything you imagine in music will be possible to create instantly — but the fun lies in the process, not just the result. By spending less time navigating plugins, artists can focus on what they want to say and how they want to say it. No AI piano solo will feel the same as playing an old upright from your childhood home; direct physical interaction adds a personal touch that can’t be replaced. New tactile interfaces like MPE keyboards, motion-controlled apps like Controlla XYZ, and yet-to-be-invented instruments will merge AI’s possibilities with human expression. This extends beyond AI into brain interfaces, AR/VR sound control, and other tools that give music-making its playful, exploratory nature. Once creation is instant, the value will shift to the tools that remind us why we make music at all — for the joy of discovery and the thrill of shaping sound with our own hands.

7. Technical Appendix

This appendix provides a deep dive into the technical architecture and implementation details of the three-tier system. I chose Rust because the error handling is excellent, it's performant, and it produces a single binary. More importantly, building AI applications shouldn't be gatekept to just Python and JavaScript like most tutorials suggest - any language that can make HTTP requests can integrate with LLMs.

7.1. System Architecture Overview

The repo includes three services:

Gateway Service (Port 9000): AI orchestration layer that manages Claude integration, conversation persistence, and tool orchestration between the AI and audio processing services.

Streaming Engine (Port 8080): Core audio processing service that wraps FFmpeg with a production-ready HTTP API, supporting multiple storage backends and caching strategies.

MCP Server: Model Context Protocol bridge that translates high-level AI requests into specific Streaming Engine API calls.

The services communicate through HTTP APIs, allowing for independent scaling and deployment. This architecture separates concerns cleanly: the Gateway handles AI logic and user sessions, while the Streaming Engine focuses purely on audio processing performance.

7.2. Streaming Engine: Core Audio Processing Service

The Streaming Engine is the heart of this demo’s audio processing capabilities. Built in Rust for performance and memory safety, it provides a robust HTTP API wrapper around FFmpeg.

Request Processing Pipeline

// freqmoda/streaming-engine/src/routes/streamingpath.rs#L11-35
pub async fn streamingpath_handler(
    State(state): State<AppStateDyn>,
    params: Params,
) -> Result<impl IntoResponse, (StatusCode, String)> {
    let params_hash = suffix_result_storage_hasher(&params);
    let result = state.storage.get(&params_hash).await.inspect_err(|_| {
        info!("no audio in results storage: {}", &params);
    });
    if let Ok(blob) = result {
        return Response::builder()
            .header(header::CONTENT_TYPE, blob.mime_type())
            .body(Body::from(blob.into_bytes()))
            .map_err(|e| {
                (
                    StatusCode::INTERNAL_SERVER_ERROR,
                    format!("Failed to build response: {}", e),
                )
            });
    }
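    // Cache miss: the rest of the handler (elided in this excerpt) fetches the
    // source audio and runs it through the FFmpeg pipeline before responding.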
}

The handler first checks if the requested audio transformation is already cached. If not, it proceeds to fetch the source audio and process it through the FFmpeg pipeline.

FFmpeg Integration and Effect Pipeline

The core audio processing happens in the FFmpeg integration layer:

// freqmoda/streaming-engine/src/processor/ffmpeg.rs#L12-40
pub async fn process_audio(
    input: &AudioBuffer,
    params: &Params,
    temp_dir: TempDir,
    additional_tags: &HashMap<String, String>,
) -> Result<AudioBuffer> {
    let output_format = params.format.unwrap_or(AudioFormat::Mp3);

    let input_path = temp_dir
        .path()
        .join(format!("in.{}", input.format().extension()));
    let output_path = temp_dir
        .path()
        .join(format!("out.{}", output_format.extension()));

    // Write input file
    tokio::fs::write(&input_path, input.as_ref()).await?;

    // Build FFmpeg command
    let mut cmd = Command::new("ffmpeg");
    cmd.args(["-i", input_path.to_str().unwrap(), "-y"]);

    // Add quiet mode flags to reduce log noise
    cmd.args(["-loglevel", "quiet", "-nostats", "-nostdin"]);

    // Add optional metadata
    if let Some(tags) = &params.tags {
        for (k, v) in tags {
            cmd.args(["-metadata", &format!("{}={}", k, v)]);
        }
    }
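    // The remaining steps (elided in this excerpt) append the effect filters,
    // execute the FFmpeg command, and read the output file back into an AudioBuffer.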
}

The system creates temporary files for processing, builds FFmpeg commands dynamically based on the requested parameters, and handles both simple effects (volume, speed) and complex filter chains (echo, chorus, reverb).
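To make the dynamic command building concrete, here is a minimal, illustrative sketch (not the repo's actual code) of how simple parameters might be folded into a single FFmpeg -af filter chain; the speed, volume, and echo names mirror the kinds of fields in Params, but the mapping details are assumptions:

// Illustrative only: map simple parameters onto an FFmpeg "-af" filter chain.
fn build_filter_chain(speed: Option<f64>, volume: Option<f64>, echo: Option<&str>) -> Option<String> {
    let mut filters = Vec::new();
    if let Some(s) = speed {
        // atempo accepts 0.5–2.0 per instance; real code chains instances for larger factors
        filters.push(format!("atempo={}", s));
    }
    if let Some(v) = volume {
        filters.push(format!("volume={}", v));
    }
    if let Some(e) = echo {
        // e.g. "0.8:0.88:60:0.4" — the in_gain:out_gain:delays:decays string aecho expects
        filters.push(format!("aecho={}", e));
    }
    (!filters.is_empty()).then(|| filters.join(","))
}

// When a chain exists, it gets appended as: cmd.args(["-af", &chain]);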

Storage Abstraction Layer

One of the most important architectural decisions was implementing a pluggable storage backend system. The Streaming Engine supports multiple storage types through a common interface:

- Filesystem Storage (default): For local development and simple deployments

- Google Cloud Storage: For production cloud deployments

- AWS S3: For S3-compatible storage solutions
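As a rough sketch of what that common interface can look like (this trait is hypothetical and simplified, not the repo's exact definition), each backend only needs async get/put/delete by key:

// Hypothetical, simplified storage interface; the real trait in the repo is richer.
#[async_trait::async_trait]
pub trait Storage: Send + Sync {
    async fn get(&self, key: &str) -> anyhow::Result<AudioBuffer>;
    async fn put(&self, key: &str, blob: AudioBuffer) -> anyhow::Result<()>;
    async fn delete(&self, key: &str) -> anyhow::Result<()>;
}

// The handlers hold this as a trait object (note the state.storage.get(...) call
// in the route above), so filesystem, GCS, and S3 backends are interchangeable.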

Concurrent Processing Architecture

The Streaming Engine implements sophisticated concurrent processing using Rust's async/await and semaphore-based limiting:

// freqmoda/streaming-engine/src/processor/processor.rs#L22-35
#[tracing::instrument(skip(self, blob, params))]
async fn process(&self, blob: &AudioBuffer, params: &Params) -> Result<AudioBuffer> {
    let _permit = self.semaphore.acquire().await?;
    info!(params = ?params, "Processing with FFmpeg");

    let temp_dir = TempDir::new()?;

    let processed_audio = process_audio(blob, params, temp_dir, &self.tags).await?;
    info!("Audio processing completed successfully");

    Ok(processed_audio)
}

The semaphore ensures that only a configured number of FFmpeg processes run concurrently, preventing resource exhaustion while maximizing throughput.

Caching Strategy

The system implements a two-tier caching strategy:

Redis Cache: For distributed deployments, storing processed audio metadata and small audio buffers.

Filesystem Cache: For local caching of processed audio files, with automatic cleanup based on size and age limits.

Cache keys are generated using content-based hashing of the input audio and processing parameters, ensuring that identical requests always hit the cache.
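A sketch of what that content-based keying can look like (the repo's suffix_result_storage_hasher may use a different hash and format; the sha2 and hex crates here are assumptions):

use sha2::{Digest, Sha256};

// Illustrative: derive a deterministic cache key from the source identifier
// plus the serialized processing parameters.
fn cache_key(source_id: &str, params_json: &str) -> String {
    let mut hasher = Sha256::new();
    hasher.update(source_id.as_bytes());
    hasher.update(params_json.as_bytes());
    // Identical (source, params) pairs always hash to the same key, so repeats hit the cache.
    hex::encode(hasher.finalize())
}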

7.3. Gateway Service: AI Orchestration Layer

While the Streaming Engine handles audio processing, the Gateway Service manages the AI integration and user experience. Built in Rust for consistency with the rest of the stack, it orchestrates communication between Claude AI and the audio processing pipeline.

Claude Integration and Tool Definition

The Gateway Service defines audio processing tools for Claude using a structured schema:

// freqmoda/gateway-service/src/services/claude.rs#L80-120
ClaudeTool {
    name: "process_audio".to_string(),
    description: "Process audio with various effects and transformations".to_string(),
    input_schema: json!({
        "type": "object",
        "properties": {
            "audio_name": {
                "type": "string",
                "description": "URL/URI/filename to audio file or sample name like 'Sample 1'"
            },
            "format": {
                "type": "string",
                "description": "Output format (mp3, wav, etc.)",
                "enum": ["mp3", "wav", "flac", "ogg", "m4a"]
            },
            "speed": {
                "type": "number",
                "description": "Playback speed multiplier (e.g., 0.5 = half speed, 2.0 = double speed)"
            },
            "reverse": {
                "type": "boolean",
                "description": "Reverse the audio"
            },
            "echo": {
                "type": "string",
                "description": "Echo effect - use simple values like 'light', 'medium', or 'heavy'"
            }
        },
        "required": ["audio_name"]
    }),
}

The tool definitions abstract complex FFmpeg parameters into simple, AI-friendly options like "light", "medium", and "heavy" for effects.

Conversation Management and Persistence

The Gateway Service implements full conversation persistence using PostgreSQL:

// freqmoda/gateway-service/src/handlers/chat.rs#L15-35
pub async fn chat(
    State(state): State<AppState>,
    Json(request): Json<ChatRequest>,
) -> Result<Json<ChatResponse>> {
    tracing::info!("Starting chat request processing");

    // Get or create conversation
    let conversation = if let Some(conversation_id) = request.conversation_id {
        tracing::debug!("Looking up existing conversation: {}", conversation_id);
        match get_conversation(&state.db, &conversation_id).await {
            Ok(Some(conv)) => {
                tracing::debug!("Found existing conversation: {}", conversation_id);
                conv
            }
            Ok(None) => {
                tracing::warn!(
                    "Conversation not found: {}, creating new one",
                    conversation_id
                );
                create_conversation(&state.db, None, None).await?
            }
        }
    }
}

This allows users to have persistent conversations with context maintained across sessions.

Tool Orchestration

When Claude decides to use an audio processing tool, the Gateway Service translates the tool call into Streaming Engine API requests, handles the response, and formats it appropriately for the AI and end user.
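A minimal sketch of that translation, assuming a reqwest client and the same /unsafe/{audio}?{params} route the MCP server calls (shown in the next section); the Gateway's real implementation presumably layers on auth, error mapping, and result formatting for Claude:

// Hypothetical translation of a Claude tool call into a Streaming Engine request.
async fn run_process_audio_tool(
    client: &reqwest::Client,
    engine_base: &str,            // e.g. "http://localhost:8080"
    audio_name: &str,
    query: &[(String, String)],   // tool arguments flattened into query parameters
) -> anyhow::Result<Vec<u8>> {
    let url = format!("{}/unsafe/{}", engine_base, urlencoding::encode(audio_name));
    let resp = client
        .get(url)
        .query(query)
        .send()
        .await?
        .error_for_status()?;
    // The processed audio comes back as raw bytes; the Gateway can then store or
    // link it and summarize the outcome for Claude and the end user.
    Ok(resp.bytes().await?.to_vec())
}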

7.4. MCP Integration: Bridging AI and Audio Tools

The Model Context Protocol server acts as a bridge between Claude Desktop and the Streaming Engine, providing a clean interface for direct AI interaction.

Tool Schema Definition

The MCP server exposes three primary tools:

// freqmoda/streaming-engine/mcp-server/index.js#L25-60
{
  name: "process_audio",
  description: "Process audio with various effects and transformations",
  inputSchema: {
    type: "object",
    properties: {
      audio_name: {
        type: "string",
        description: "URL/URI/filename to the audio file to process",
      },
      format: {
        type: "string",
        description: "Output format (mp3, wav, etc.)",
        enum: ["mp3", "wav", "flac", "ogg", "m4a"],
      },
      speed: {
        type: "number",
        description: "Playback speed multiplier (e.g., 0.5 = half speed, 2.0 = double speed)",
      },
      reverse: {
        type: "boolean",
        description: "Reverse the audio",
      },
      fade_in: {
        type: "number",
        description: "Fade in duration in seconds",
      }
    },
    required: ["audio_name"]
  }
}

Parameter Translation and Effect Presets

The MCP server translates simple, AI-friendly parameters into the more detailed parameter strings that the Streaming Engine ultimately hands to FFmpeg:

// freqmoda/streaming-engine/mcp-server/index.js#L150-180
buildQueryParams(args) {
  const queryParams = new URLSearchParams();

  for (const [key, value] of Object.entries(args)) {
    if (key === 'audio_name') continue; // Handled separately

    if (key === 'echo' && typeof value === 'string') {
      // Map simple presets to complex FFmpeg parameters
      const echoPresets = {
        'light': '0.8:0.88:60:0.4',
        'medium': '0.8:0.88:40:0.6',
        'heavy': '0.8:0.88:20:0.8'
      };
      queryParams.append('echo', echoPresets[value.toLowerCase()] || value);
    } else if (key === 'chorus' && typeof value === 'string') {
      const chorusPresets = {
        'light': '0.7:0.9:55:0.4:0.25:2',
        'medium': '0.6:0.9:50:0.4:0.25:2',
        'heavy': '0.5:0.9:45:0.4:0.25:2'
      };
      queryParams.append('chorus', chorusPresets[value.toLowerCase()] || value);
    } else {
      queryParams.append(key, value.toString());
    }
  }

  return queryParams.toString();
}

This abstraction allows Claude to use natural language like "add heavy echo" while the underlying system receives the precise FFmpeg parameters it needs.

Binary Data Handling

Processing audio requires careful handling of binary data streams:

// freqmoda/streaming-engine/mcp-server/index.js#L200-220
async processAudio(args) {
  try {
    const audioUrl = args.audio_name;
    const queryParams = this.buildQueryParams(args);
    const streamingEngineUrl = `${DEFAULT_SERVER_URL}/unsafe/${encodeURIComponent(audioUrl)}?${queryParams}`;

    const response = await axios.get(streamingEngineUrl, {
      responseType: 'arraybuffer',
      timeout: 30000
    });

    const audioBuffer = Buffer.from(response.data);
    const base64Audio = audioBuffer.toString('base64');

    return {
      content: [{
        type: "text",
        text: `Audio processed successfully. Length: ${audioBuffer.length} bytes`
      }],
      isError: false
    };
  } catch (error) {
    return {
      content: [{ type: "text", text: `Error: ${error.message}` }],
      isError: true
    };
  }
}

The responseType: 'arraybuffer' is crucial for correctly receiving binary audio data from the Streaming Engine.

7.5. Production Engineering Challenges

Async Rust Patterns for Audio Processing

The system leverages Rust's async ecosystem extensively. Key patterns include:

Semaphore-based Concurrency Control: Prevents resource exhaustion during heavy audio processing loads.

Structured Concurrency: Using tokio::spawn and proper error handling to manage concurrent audio processing tasks.

Stream Processing: Using tokio_util::io::ReaderStream for efficient binary data streaming.
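On that last point, here is a minimal sketch of streaming a file off disk as an HTTP response body (assuming axum 0.7, which matches the handler style shown earlier):

use axum::body::Body;
use tokio_util::io::ReaderStream;

// Illustrative: stream a processed file without buffering the whole thing in memory.
async fn stream_file(path: &std::path::Path) -> std::io::Result<Body> {
    let file = tokio::fs::File::open(path).await?;
    // ReaderStream turns an AsyncRead into a Stream of byte chunks,
    // which Body::from_stream consumes lazily.
    Ok(Body::from_stream(ReaderStream::new(file)))
}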

Configuration Management

The system uses a layered configuration approach with environment-specific YAML files:

# freqmoda/streaming-engine/config/base.yml
application:
  port: 8080
  host: "127.0.0.1"
  hmac_secret: "your-secret-key"

processor:
  concurrency: 4
  max_cache_files: 1000
  max_cache_mem: 256

storage:
  base_dir: "/path/to/storage"
  path_prefix: "audio/"

Environment variables override file-based configuration using the APP_SECTION__KEY pattern (for example, APP_PROCESSOR__CONCURRENCY=8 would override processor.concurrency), enabling deployment flexibility.

Database Design for Conversations

The PostgreSQL schema supports rich conversation management:

CREATE TABLE conversations (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id UUID REFERENCES users(id),
    title TEXT,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    updated_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE messages (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    conversation_id UUID REFERENCES conversations(id),
    role TEXT NOT NULL CHECK (role IN ('user', 'assistant')),
    content TEXT NOT NULL,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

This gives us full conversation history, user session management, and audit trails.

7.6. Performance and Scalability

Caching Architecture

The multi-tier caching system significantly improves performance:

1. Content-based Cache Keys: Generated from the input audio hash plus the processing parameters

2. Cache Size Management: Automatic cleanup based on LRU eviction and size limits

3. Cache Warming: Pre-processing common audio samples during startup

Resource Management

Memory Management: Careful use of streaming I/O prevents loading entire audio files into memory.

Process Isolation: Each FFmpeg operation runs in a separate process with proper cleanup.

Connection Pooling: Database connections are pooled using SQLx for efficient resource utilization.
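As a sketch of the pooling setup (the pool size and exact options here are assumptions, not the repo's values):

use sqlx::postgres::PgPoolOptions;

// Illustrative: one shared, pooled Postgres handle for the Gateway Service.
async fn make_db_pool(database_url: &str) -> Result<sqlx::PgPool, sqlx::Error> {
    PgPoolOptions::new()
        .max_connections(10) // cap concurrent connections to the database
        .connect(database_url)
        .await
}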

Metrics and Monitoring

The system exposes Prometheus metrics for monitoring:

  • Audio processing latency histograms

  • Cache hit/miss ratios

  • Concurrent processing gauge

  • Error rate counters
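For illustration, registering one of these with the prometheus crate could look like the following (the project may use a different metrics library, so treat the names here as assumptions):

use prometheus::{Histogram, HistogramOpts, Registry};

// Illustrative: a processing-latency histogram like the one listed above.
fn register_latency_histogram(registry: &Registry) -> prometheus::Result<Histogram> {
    let hist = Histogram::with_opts(HistogramOpts::new(
        "audio_processing_latency_seconds",
        "Time spent processing a single audio request",
    ))?;
    registry.register(Box::new(hist.clone()))?;
    // Call hist.observe(elapsed_secs) after each FFmpeg run; an HTTP endpoint
    // then serves the encoded registry to the Prometheus scraper.
    Ok(hist)
}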

7.7. Development Experience and Tooling

Just-based Build System

Based on feedback from Armin Ronacher (Flask creator), I use Justfiles as part of my AI development workflow. The Justfile contains all the commands I would ever want Claude Code to run, which reduces cognitive overhead during AI pair programming - it's one less thing for the AI to worry about when helping with development tasks:

# freqmoda/justfile#L15-25
# Run both services in parallel with auto-reload
dev-all:
    #!/usr/bin/env bash
    trap 'kill 0' INT
    just dev-gateway &
    just dev-streaming &
    wait
# Full check: format, lint, build, test
check:
    just fmt-check
    just lint
    just build
    just test

This approach streamlines AI-assisted development by providing a single interface for all development commands, making it easier for AI coding assistants to understand and execute the right tasks.

7.8. Future Technical Directions

The modular architecture enables several exciting expansion possibilities:

Offline Generative Music Composition: Beyond processing existing audio, there's compelling potential to evolve toward generative music composition, i.e., moving from "add reverb to Sample 1" to "create a chill trap beat." Instead of tool calls triggering effects, the AI would output structured JSON music projects that synthesize entirely offline using the Web Audio API. Generated projects could export to the DAWProject format, allowing musicians to open AI-generated ideas in Logic Pro or Ableton Live for professional refinement. This bridges AI creativity with traditional production workflows while maintaining our offline-first philosophy.

Real-time Processing: Extending to handle live audio streams rather than just file-based processing would enable live performance tools.

Advanced AI Models: Integration points exist for specialized models doing automatic mixing, stem separation, or intelligent sound design.

GPU Acceleration: The FFmpeg integration can be extended to use GPU-accelerated processing for computationally intensive effects.

Plugin Ecosystem: The tool definition system could be extended to support user-defined audio processing plugins, creating a marketplace for custom effects.

The clean separation between AI orchestration, audio processing, and storage makes these extensions straightforward to implement without disrupting the core architecture.

Project GitHub: https://github.com/jonaylor89/freqmoda
