Spatial Audio, Music Systems, and Production Applications with Web Audio API


The Web Audio API has evolved into a powerful platform for creating immersive audio experiences directly in the browser. In this third part of our series, we'll explore advanced concepts that elevate web audio applications from simple sound players to professional-grade audio environments.
3D Spatial Audio: Creating Immersive Soundscapes
Understanding Spatial Audio Fundamentals
Spatial audio creates the illusion that sounds exist in three-dimensional space around the listener. This technology leverages our brain's ability to localize sound based on subtle timing, level, and spectral differences between our ears.
The Web Audio API provides robust tools for creating spatial audio experiences through the PannerNode and related interfaces. Let's explore how to bring sounds to life in 3D space.
Implementing 3D Positioning with PannerNode
The PannerNode is your gateway to positioning sounds in 3D space. Here's a simple example:
// Create an audio context
const audioContext = new AudioContext();
// Create a sound source
const oscillator = audioContext.createOscillator();
oscillator.frequency.value = 440;
// Create a panner node
const panner = audioContext.createPanner();
panner.panningModel = 'HRTF'; // Use Head-Related Transfer Function for realistic 3D
panner.distanceModel = 'inverse';
panner.refDistance = 1;
panner.maxDistance = 10000;
panner.rolloffFactor = 1;
// Set the initial position (x, y, z)
panner.positionX.value = 0;
panner.positionY.value = 0;
panner.positionZ.value = -5; // 5 units in front of the listener
// Connect the nodes
oscillator.connect(panner);
panner.connect(audioContext.destination);
// Start the oscillator
oscillator.start();
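The panner positions a source relative to the listener, which sits at the origin facing down the negative Z axis by default. As a minimal sketch of repositioning the listener (modern browsers expose AudioParam properties on audioContext.listener; older ones only offer the legacy setPosition()/setOrientation() methods, hence the feature check):
const listener = audioContext.listener;

if (listener.positionX) {
  // Place the listener slightly above the origin, facing down -Z with +Y up
  listener.positionX.value = 0;
  listener.positionY.value = 1.7; // roughly ear height, in scene units
  listener.positionZ.value = 0;
  listener.forwardX.value = 0;
  listener.forwardY.value = 0;
  listener.forwardZ.value = -1;
  listener.upX.value = 0;
  listener.upY.value = 1;
  listener.upZ.value = 0;
} else {
  // Legacy fallback
  listener.setPosition(0, 1.7, 0);
  listener.setOrientation(0, 0, -1, 0, 1, 0);
}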
Distance-Based Effects and Attenuation
In real-world acoustics, sounds attenuate (reduce in volume) as they move farther from the listener. The Web Audio API models this naturally through the PannerNode's distance models:
// Choose a distance attenuation model
panner.distanceModel = 'inverse'; // Options: 'linear', 'inverse', 'exponential'
panner.refDistance = 1; // Distance at which the volume reduction begins
panner.maxDistance = 10000; // No further reduction beyond this distance (only used by the 'linear' model)
panner.rolloffFactor = 1; // How quickly the volume reduces with distance
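For intuition, the 'inverse' model applies roughly the following gain curve. This small helper just mirrors the formula from the Web Audio specification so you can sanity-check parameter choices; it isn't part of the API:
// Gain applied by the 'inverse' distance model, per the Web Audio spec
function inverseDistanceGain(distance, refDistance = 1, rolloffFactor = 1) {
  const d = Math.max(distance, refDistance);
  return refDistance / (refDistance + rolloffFactor * (d - refDistance));
}

console.log(inverseDistanceGain(1));  // 1 (no attenuation at the reference distance)
console.log(inverseDistanceGain(5));  // 0.2
console.log(inverseDistanceGain(10)); // ~0.1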
Creating Moving Sound Sources with Doppler Effect
To create the sensation of a moving sound source, we animate its position over time. Note that the velocityX/Y/Z properties and the automatic Doppler effect were removed from the Web Audio API specification, so any Doppler-style pitch shift has to be simulated manually (see the sketch after this example):
// Set the initial position
panner.positionX.value = -10;
panner.positionY.value = 0;
panner.positionZ.value = 0;
// Animate the position over time
function animateSound() {
// Get current position
const x = panner.positionX.value;
// Update position
panner.positionX.value = x + 0.1;
// Continue animation if still in range
if (x < 10) {
requestAnimationFrame(animateSound);
}
}
// Start animation
animateSound();
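Because the built-in Doppler handling is gone, a common workaround is to approximate the pitch shift yourself by modulating the source's frequency (for an oscillator) or playbackRate (for an AudioBufferSourceNode) from its radial velocity. A rough sketch, assuming you already track the source-to-listener distance each frame:
const SPEED_OF_SOUND = 343; // m/s, assuming 1 scene unit = 1 meter

// Approximate Doppler factor from how quickly the distance is changing
function dopplerFactor(previousDistance, currentDistance, deltaSeconds) {
  const radialVelocity = (currentDistance - previousDistance) / deltaSeconds;
  // Moving away (positive radial velocity) lowers the pitch, approaching raises it
  return SPEED_OF_SOUND / (SPEED_OF_SOUND + radialVelocity);
}

// Inside the animation loop, for the oscillator from the earlier example:
// oscillator.frequency.value = 440 * dopplerFactor(prevDistance, distance, dt);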
Enhanced Spatial Realism with HRTF
The Head-Related Transfer Function (HRTF) models how sounds reach our ears based on their origin in 3D space. Enabling HRTF in the Web Audio API significantly improves spatial realism:
// Enable HRTF for more realistic 3D audio
panner.panningModel = 'HRTF'; // Other option: 'equalpower' (less realistic)
Creating Directional Sound Sources
Sound sources can be directional, projecting sound primarily in one direction:
// Configure the sound cone for directional audio
panner.coneInnerAngle = 40; // Full volume within this angle
panner.coneOuterAngle = 180; // Reduced volume outside inner angle, up to this angle
panner.coneOuterGain = 0.1; // Volume multiplier outside the outer angle
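The cone angles are measured around the source's orientation vector (which defaults to the positive X axis), so a directional source also needs to be aimed. For example, to point the panner from the earlier example (positioned at z = -5) back toward the listener at the origin:
// The direction from (0, 0, -5) toward the origin is the +Z axis
panner.orientationX.value = 0;
panner.orientationY.value = 0;
panner.orientationZ.value = 1;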
Integrating with WebXR for Immersive Experiences
For truly immersive experiences, we can combine spatial audio with WebXR:
if (navigator.xr) {
navigator.xr.requestSession('immersive-vr').then(session => {
// Connect audio context with XR session
// Update audio listener position based on headset position
session.addEventListener('inputsourceschange', e => {
// Update audio based on controller input
});
// Further XR integration code...
});
}
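The essential piece is copying the headset pose into the audio listener on every frame. A hedged sketch of that per-frame update, assuming you already have an XR reference space named refSpace and the audioContext from earlier (a complete implementation would also rotate the listener's forward/up vectors by the pose's orientation quaternion):
function onXRFrame(time, frame) {
  const pose = frame.getViewerPose(refSpace);
  if (pose) {
    const { position } = pose.transform;
    const listener = audioContext.listener;
    listener.positionX.value = position.x;
    listener.positionY.value = position.y;
    listener.positionZ.value = position.z;
  }
  frame.session.requestAnimationFrame(onXRFrame);
}
// Kick off the loop once the session is running:
// session.requestAnimationFrame(onXRFrame);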
Building a Complete Music System
Tempo and Beat Management
A foundational element of any music system is precise timing. Let's create a tempo manager:
class TempoManager {
constructor(audioContext, bpm = 120) {
this.audioContext = audioContext;
this.bpm = bpm;
this.quarterNoteTime = 60 / this.bpm;
this.events = []; // Time-based events
}
scheduleAt(callback, beatPosition) {
const timeInSeconds = beatPosition * this.quarterNoteTime;
const deadline = this.audioContext.currentTime + timeInSeconds;
this.events.push({
callback,
deadline
});
return deadline;
}
setBpm(newBpm) {
this.bpm = newBpm;
this.quarterNoteTime = 60 / this.bpm;
}
// More methods for managing musical time...
}
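A quick usage sketch (note that the class above only records events; actually firing them would live in the additional methods hinted at by the final comment):
const context = new AudioContext();
const tempoManager = new TempoManager(context, 90);

// Ask for the audio-clock deadline two beats from now
const deadline = tempoManager.scheduleAt(() => console.log('beat two!'), 2);
console.log(`Scheduled for ${deadline.toFixed(3)}s on the audio clock`);

tempoManager.setBpm(120); // later beats will now be spaced 0.5s apart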
Building a Precise Metronome
A metronome demonstrates how to achieve rock-solid timing in Web Audio:
class Metronome {
constructor(audioContext, tempo = 120) {
this.audioContext = audioContext;
this.isPlaying = false;
this.tempo = tempo;
this.lookahead = 25.0; // How often the scheduler runs (ms)
this.scheduleAheadTime = 0.1; // How far ahead to schedule (sec)
this.nextNoteTime = 0; // When the next note is due
this.currentBeat = 0;
this.tempoManager = new TempoManager(audioContext, tempo);
this.clickBuffer = this.createClickBuffer();
}
createClickBuffer() {
// Create a short percussive click sound
const buffer = this.audioContext.createBuffer(
1,
this.audioContext.sampleRate * 0.1,
this.audioContext.sampleRate
);
const channelData = buffer.getChannelData(0);
// Fill the first 10% of the buffer with a quickly decaying sine; the rest stays silent
for (let i = 0; i < buffer.length * 0.1; i++) {
channelData[i] = Math.sin(i * 0.1) * (1 - i / (buffer.length * 0.1));
}
return buffer;
}
nextNote() {
// Advance current note and time by a quarter note
this.nextNoteTime += 60.0 / this.tempo;
// Advance the beat number
this.currentBeat = (this.currentBeat + 1) % 4;
}
scheduleNote(beatNumber, time) {
// Create click sound
const clickSource = this.audioContext.createBufferSource();
clickSource.buffer = this.clickBuffer;
// Emphasized click on the first beat
const clickVolume = this.audioContext.createGain();
clickVolume.gain.value = beatNumber % 4 === 0 ? 1.0 : 0.5;
// Connect and play
clickSource.connect(clickVolume);
clickVolume.connect(this.audioContext.destination);
clickSource.start(time);
}
scheduler() {
// Schedule notes until the lookahead period
while (this.nextNoteTime < this.audioContext.currentTime + this.scheduleAheadTime) {
this.scheduleNote(this.currentBeat, this.nextNoteTime);
this.nextNote();
}
// Call scheduler again
this.timerId = setTimeout(() => this.scheduler(), this.lookahead);
}
start() {
if (this.isPlaying) return;
this.isPlaying = true;
this.currentBeat = 0;
this.nextNoteTime = this.audioContext.currentTime;
this.scheduler();
}
stop() {
this.isPlaying = false;
clearTimeout(this.timerId);
}
setTempo(bpm) {
this.tempo = bpm;
this.tempoManager.setBpm(bpm);
}
}
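Wiring it up is straightforward; just remember that browsers keep an AudioContext suspended until a user gesture, so resume it inside a click handler (this sketch assumes a button with id 'start' exists on the page):
const ctx = new AudioContext();
const metronome = new Metronome(ctx, 100);

document.getElementById('start').addEventListener('click', async () => {
  await ctx.resume(); // unlock audio on the user gesture
  metronome.start();
});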
Musical Theory Integration
Let's implement a simple chord and scale generator:
class MusicTheory {
// Chromatic scale starting from C
static NOTES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B'];
// Scale patterns (semitone steps)
static SCALES = {
major: [0, 2, 4, 5, 7, 9, 11],
minor: [0, 2, 3, 5, 7, 8, 10],
pentatonicMajor: [0, 2, 4, 7, 9],
pentatonicMinor: [0, 3, 5, 7, 10],
blues: [0, 3, 5, 6, 7, 10]
};
// Chord patterns (scale degree positions)
static CHORD_PATTERNS = {
major: [0, 2, 4], // 1, 3, 5
minor: [0, 2, 4], // 1, ♭3, 5
diminished: [0, 2, 4], // 1, ♭3, ♭5
augmented: [0, 2, 4], // 1, 3, #5
sus2: [0, 1, 4], // 1, 2, 5
sus4: [0, 3, 4], // 1, 4, 5
major7: [0, 2, 4, 6], // 1, 3, 5, 7
dominant7: [0, 2, 4, 6] // 1, 3, 5, ♭7
};
static getScale(root, type) {
const rootIndex = this.NOTES.indexOf(root);
if (rootIndex === -1) throw new Error('Invalid root note');
const pattern = this.SCALES[type];
if (!pattern) throw new Error('Invalid scale type');
return pattern.map(step => {
const noteIndex = (rootIndex + step) % 12;
return this.NOTES[noteIndex];
});
}
static getChord(root, type) {
// First get the appropriate scale for this chord type
let scaleType;
switch(type) {
case 'major':
case 'major7':
case 'dominant7':
case 'sus2':
case 'sus4':
scaleType = 'major';
break;
case 'minor':
case 'diminished':
scaleType = 'minor';
break;
case 'augmented': {
// Custom handling for augmented: major triad with the 5th raised a semitone
const majorScale = this.getScale(root, 'major');
const fifthIndex = (this.NOTES.indexOf(majorScale[4]) + 1) % 12;
return [majorScale[0], majorScale[2], this.NOTES[fifthIndex]];
}
default:
throw new Error('Invalid chord type');
}
const scale = this.getScale(root, scaleType);
const pattern = this.CHORD_PATTERNS[type];
// Map chord pattern to notes from the scale
const chordNotes = pattern.map(degree => scale[degree]);
// Apply the chromatic alterations the parent scale can't provide
if (type === 'diminished') {
// Lower the 5th a semitone for the ♭5
chordNotes[2] = this.NOTES[(this.NOTES.indexOf(chordNotes[2]) + 11) % 12];
} else if (type === 'dominant7') {
// Lower the 7th a semitone for the ♭7
chordNotes[3] = this.NOTES[(this.NOTES.indexOf(chordNotes[3]) + 11) % 12];
}
return chordNotes;
}
static noteToFrequency(note, octave = 4) {
// A4 = 440Hz
const A4 = 440;
const A4_INDEX = this.NOTES.indexOf('A') + (4 * 12);
const noteIndex = this.NOTES.indexOf(note) + (octave * 12);
const semitoneDistance = noteIndex - A4_INDEX;
// Each semitone is a factor of 2^(1/12)
return A4 * Math.pow(2, semitoneDistance / 12);
}
}
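A few quick examples of what this class returns, which you can feed straight into the oscillators and sequencers from earlier:
console.log(MusicTheory.getScale('A', 'minor'));  // ['A', 'B', 'C', 'D', 'E', 'F', 'G']
console.log(MusicTheory.getChord('C', 'major'));  // ['C', 'E', 'G']
console.log(MusicTheory.getChord('A', 'minor'));  // ['A', 'C', 'E']
console.log(MusicTheory.noteToFrequency('A', 4)); // 440
console.log(MusicTheory.noteToFrequency('C', 4).toFixed(2)); // "261.63"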
Building a Sequencer
Now let's create a step sequencer for pattern-based music creation:
class StepSequencer {
constructor(audioContext, steps = 16, tracks = 4) {
this.audioContext = audioContext;
this.steps = steps;
this.tracks = tracks;
this.currentStep = 0;
this.isPlaying = false;
this.tempo = 120;
this.stepTime = 60 / this.tempo / 4; // Sixteenth notes
this.nextStepTime = 0;
this.patterns = Array(tracks).fill().map(() => Array(steps).fill(false));
this.soundBuffers = [];
this.scheduleAheadTime = 0.1;
this.lookahead = 25;
}
loadSample(url, trackIndex) {
return fetch(url)
.then(response => response.arrayBuffer())
.then(arrayBuffer => this.audioContext.decodeAudioData(arrayBuffer))
.then(audioBuffer => {
this.soundBuffers[trackIndex] = audioBuffer;
});
}
toggleStep(trackIndex, stepIndex) {
this.patterns[trackIndex][stepIndex] = !this.patterns[trackIndex][stepIndex];
}
nextStep() {
this.nextStepTime += this.stepTime;
this.currentStep = (this.currentStep + 1) % this.steps;
}
playSample(trackIndex, time) {
if (!this.soundBuffers[trackIndex]) return;
const source = this.audioContext.createBufferSource();
source.buffer = this.soundBuffers[trackIndex];
source.connect(this.audioContext.destination);
source.start(time);
}
scheduler() {
while (this.nextStepTime < this.audioContext.currentTime + this.scheduleAheadTime) {
// Check each track for this step
for (let track = 0; track < this.tracks; track++) {
if (this.patterns[track][this.currentStep]) {
this.playSample(track, this.nextStepTime);
}
}
this.nextStep();
}
if (this.isPlaying) {
this.timerId = setTimeout(() => this.scheduler(), this.lookahead);
}
}
start() {
if (this.isPlaying) return;
this.isPlaying = true;
this.currentStep = 0;
this.nextStepTime = this.audioContext.currentTime;
this.scheduler();
}
stop() {
this.isPlaying = false;
clearTimeout(this.timerId);
}
setTempo(bpm) {
this.tempo = bpm;
this.stepTime = 60 / this.tempo / 4;
}
}
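Here's how the sequencer might be wired up for a basic kick-and-snare pattern (the sample URLs are placeholders; substitute your own files):
const ctx = new AudioContext();
const sequencer = new StepSequencer(ctx, 16, 4);

// Hypothetical sample paths - substitute your own files
Promise.all([
  sequencer.loadSample('/samples/kick.wav', 0),
  sequencer.loadSample('/samples/snare.wav', 1)
]).then(() => {
  // Kick on every quarter note, snare on beats 2 and 4
  [0, 4, 8, 12].forEach(step => sequencer.toggleStep(0, step));
  [4, 12].forEach(step => sequencer.toggleStep(1, step));
  sequencer.setTempo(100);
  sequencer.start();
});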
Audio Analysis and Visualization
Building a Frequency Analyzer
The Web Audio API's AnalyserNode provides powerful tools for real-time audio analysis:
class AudioAnalyzer {
constructor(audioContext, fftSize = 2048) {
this.audioContext = audioContext;
this.analyser = audioContext.createAnalyser();
this.analyser.fftSize = fftSize;
this.analyser.smoothingTimeConstant = 0.85;
this.frequencyData = new Uint8Array(this.analyser.frequencyBinCount);
this.timeData = new Uint8Array(this.analyser.fftSize);
}
connectSource(source) {
source.connect(this.analyser);
return this; // For chaining
}
getFrequencyData() {
this.analyser.getByteFrequencyData(this.frequencyData);
return this.frequencyData;
}
getTimeData() {
this.analyser.getByteTimeDomainData(this.timeData);
return this.timeData;
}
// Utility to get dominant frequency
getDominantFrequency() {
this.analyser.getByteFrequencyData(this.frequencyData);
let maxIndex = 0;
let maxValue = 0;
for (let i = 0; i < this.frequencyData.length; i++) {
if (this.frequencyData[i] > maxValue) {
maxValue = this.frequencyData[i];
maxIndex = i;
}
}
// Convert bin index to frequency
return maxIndex * this.audioContext.sampleRate / this.analyser.fftSize;
}
}
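The analyser only taps the signal, so route it onward to the destination if you also want to hear the audio. A small sketch using an oscillator as the source:
const ctx = new AudioContext();
const analyzer = new AudioAnalyzer(ctx);

const osc = ctx.createOscillator();
osc.frequency.value = 330;

// Pass the signal through the analyser and on to the speakers
analyzer.connectSource(osc);
analyzer.analyser.connect(ctx.destination);
osc.start();

// Should log a value near 330 (limited by the FFT bin resolution)
setInterval(() => console.log(analyzer.getDominantFrequency().toFixed(1)), 500);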
Creating a Spectrum Visualizer
Now let's build a visualizer that uses Canvas to display frequency data:
class SpectrumVisualizer {
constructor(audioAnalyzer, canvasElement) {
this.analyzer = audioAnalyzer;
this.canvas = canvasElement;
this.canvasCtx = this.canvas.getContext('2d');
this.isAnimating = false;
// Set canvas size to match display size
this.resizeCanvas();
window.addEventListener('resize', () => this.resizeCanvas());
}
resizeCanvas() {
this.canvas.width = this.canvas.clientWidth;
this.canvas.height = this.canvas.clientHeight;
}
start() {
if (this.isAnimating) return;
this.isAnimating = true;
this.draw();
}
stop() {
this.isAnimating = false;
}
draw() {
if (!this.isAnimating) return;
// Get frequency data
const frequencyData = this.analyzer.getFrequencyData();
// Clear canvas with semi-transparent black for trail effect
this.canvasCtx.fillStyle = 'rgba(0, 0, 0, 0.2)';
this.canvasCtx.fillRect(0, 0, this.canvas.width, this.canvas.height);
// Draw spectrum
const barWidth = this.canvas.width / frequencyData.length;
let x = 0;
for (let i = 0; i < frequencyData.length; i++) {
const barHeight = frequencyData[i] / 255 * this.canvas.height;
// Create gradient
const hue = i / frequencyData.length * 360;
this.canvasCtx.fillStyle = `hsl(${hue}, 100%, 50%)`;
this.canvasCtx.fillRect(x, this.canvas.height - barHeight, barWidth, barHeight);
x += barWidth;
}
requestAnimationFrame(() => this.draw());
}
}
Creating a Waveform Display
Similarly, we can visualize the time-domain data:
class WaveformVisualizer {
constructor(audioAnalyzer, canvasElement) {
this.analyzer = audioAnalyzer;
this.canvas = canvasElement;
this.canvasCtx = this.canvas.getContext('2d');
this.isAnimating = false;
this.resizeCanvas();
window.addEventListener('resize', () => this.resizeCanvas());
}
resizeCanvas() {
this.canvas.width = this.canvas.clientWidth;
this.canvas.height = this.canvas.clientHeight;
}
start() {
if (this.isAnimating) return;
this.isAnimating = true;
this.draw();
}
stop() {
this.isAnimating = false;
}
draw() {
if (!this.isAnimating) return;
// Get time data
const timeData = this.analyzer.getTimeData();
// Clear canvas
this.canvasCtx.fillStyle = 'rgba(0, 0, 0, 0.2)';
this.canvasCtx.fillRect(0, 0, this.canvas.width, this.canvas.height);
// Draw waveform
this.canvasCtx.lineWidth = 2;
this.canvasCtx.strokeStyle = '#00FFFF';
this.canvasCtx.beginPath();
const sliceWidth = this.canvas.width / timeData.length;
let x = 0;
for (let i = 0; i < timeData.length; i++) {
const v = timeData[i] / 128.0; // Byte samples are 0-255 centered on 128, so this gives roughly 0-2 centered on 1
const y = v * this.canvas.height / 2;
if (i === 0) {
this.canvasCtx.moveTo(x, y);
} else {
this.canvasCtx.lineTo(x, y);
}
x += sliceWidth;
}
this.canvasCtx.lineTo(this.canvas.width, this.canvas.height / 2);
this.canvasCtx.stroke();
requestAnimationFrame(() => this.draw());
}
}
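Both visualizers can share a single analyzer. Wiring them up together might look like this (assuming two canvas elements with ids 'spectrum' and 'waveform' exist on the page):
const ctx = new AudioContext();
const analyzer = new AudioAnalyzer(ctx);

const spectrum = new SpectrumVisualizer(analyzer, document.getElementById('spectrum'));
const waveform = new WaveformVisualizer(analyzer, document.getElementById('waveform'));

const osc = ctx.createOscillator();
analyzer.connectSource(osc);
analyzer.analyser.connect(ctx.destination);
osc.start();

spectrum.start();
waveform.start();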
Integrating with MIDI and External Hardware
Connecting to MIDI Devices
The Web MIDI API brings hardware integration to web applications:
class MidiController {
constructor() {
this.inputs = [];
this.outputs = [];
this.onNoteOn = null;
this.onNoteOff = null;
this.onControlChange = null;
}
async initialize() {
try {
const midiAccess = await navigator.requestMIDIAccess();
// Get inputs and outputs
this.inputs = Array.from(midiAccess.inputs.values());
this.outputs = Array.from(midiAccess.outputs.values());
// Set up connection state change listener
midiAccess.addEventListener('statechange', this.handleStateChange.bind(this));
// Set up message listeners
this.inputs.forEach(input => {
input.addEventListener('midimessage', this.handleMidiMessage.bind(this));
});
return true;
} catch (error) {
console.error('MIDI access denied:', error);
return false;
}
}
handleStateChange(event) {
console.log('MIDI connection state change:', event.port.name, event.port.state);
}
handleMidiMessage(event) {
const [status, data1, data2] = event.data;
// Determine message type from status byte
const messageType = status >> 4;
const channel = status & 0xF;
switch (messageType) {
case 0x9: // Note On (144-159)
if (data2 > 0 && this.onNoteOn) {
this.onNoteOn(data1, data2, channel, event);
} else if (data2 === 0 && this.onNoteOff) {
// Note On with velocity 0 is equivalent to Note Off
this.onNoteOff(data1, data2, channel, event);
}
break;
case 0x8: // Note Off (128-143)
if (this.onNoteOff) {
this.onNoteOff(data1, data2, channel, event);
}
break;
case 0xB: // Control Change (176-191)
if (this.onControlChange) {
this.onControlChange(data1, data2, channel, event);
}
break;
// Handle other message types as needed
}
}
sendNoteOn(note, velocity = 64, channel = 0) {
this.outputs.forEach(output => {
output.send([0x90 | channel, note, velocity]);
});
}
sendNoteOff(note, velocity = 0, channel = 0) {
this.outputs.forEach(output => {
output.send([0x80 | channel, note, velocity]);
});
}
sendControlChange(controller, value, channel = 0) {
this.outputs.forEach(output => {
output.send([0xB0 | channel, controller, value]);
});
}
}
Building a MIDI Synthesizer
Let's create a synthesizer controlled by MIDI:
class MidiSynthesizer {
constructor(audioContext) {
this.audioContext = audioContext;
this.activeOscillators = new Map();
this.masterGain = audioContext.createGain();
this.masterGain.gain.value = 0.5;
this.masterGain.connect(audioContext.destination);
// Synth parameters
this.waveform = 'sawtooth';
this.attackTime = 0.05;
this.releaseTime = 0.1;
// Create MIDI controller
this.midiController = new MidiController();
this.setupMidiHandling();
}
async initialize() {
return this.midiController.initialize();
}
setupMidiHandling() {
this.midiController.onNoteOn = (note, velocity, channel) => {
this.noteOn(note, velocity / 127);
};
this.midiController.onNoteOff = (note) => {
this.noteOff(note);
};
this.midiController.onControlChange = (controller, value) => {
// Handle CC messages - example: mod wheel
if (controller === 1) { // Mod wheel
// Apply modulation effect
}
};
}
noteToFrequency(note) {
// A4 (MIDI note 69) = 440Hz
return 440 * Math.pow(2, (note - 69) / 12);
}
noteOn(note, velocity = 0.7) {
// If note is already playing, stop it first
if (this.activeOscillators.has(note)) {
this.noteOff(note);
}
const frequency = this.noteToFrequency(note);
// Create oscillator
const oscillator = this.audioContext.createOscillator();
oscillator.type = this.waveform;
oscillator.frequency.value = frequency;
// Create envelope
const envelope = this.audioContext.createGain();
envelope.gain.value = 0;
// Connect nodes
oscillator.connect(envelope);
envelope.connect(this.masterGain);
// Apply attack
envelope.gain.setValueAtTime(0, this.audioContext.currentTime);
envelope.gain.linearRampToValueAtTime(
velocity,
this.audioContext.currentTime + this.attackTime
);
// Start oscillator
oscillator.start();
// Store active oscillator and its envelope
this.activeOscillators.set(note, { oscillator, envelope });
}
noteOff(note) {
const activeNote = this.activeOscillators.get(note);
if (!activeNote) return;
const { oscillator, envelope } = activeNote;
const releaseEnd = this.audioContext.currentTime + this.releaseTime;
// Apply release envelope (cancel any in-progress attack ramp first)
envelope.gain.cancelScheduledValues(this.audioContext.currentTime);
envelope.gain.setValueAtTime(envelope.gain.value, this.audioContext.currentTime);
envelope.gain.linearRampToValueAtTime(0, releaseEnd);
// Stop oscillator after release
oscillator.stop(releaseEnd);
// Remove from active notes after release
setTimeout(() => {
this.activeOscillators.delete(note);
}, this.releaseTime * 1000);
}
setWaveform(waveform) {
this.waveform = waveform;
}
setAttack(time) {
this.attackTime = time;
}
setRelease(time) {
this.releaseTime = time;
}
}
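Hooking the synthesizer up might look like this (again assuming a user gesture to unlock audio, here a hypothetical button with id 'enable-midi'):
const ctx = new AudioContext();
const synth = new MidiSynthesizer(ctx);

document.getElementById('enable-midi').addEventListener('click', async () => {
  await ctx.resume();
  const ok = await synth.initialize();
  if (!ok) {
    console.warn('Web MIDI unavailable or permission denied');
    return;
  }
  synth.setWaveform('square');
  synth.setAttack(0.02);
  synth.setRelease(0.3);
  // Playing notes on a connected MIDI keyboard will now trigger the synth
});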
Building Professional Audio Applications
Creating a Multi-track Mixing Console
Let's develop a mixing console for professional audio applications:
class AudioTrack {
constructor(audioContext, name = 'Track') {
this.audioContext = audioContext;
this.name = name;
// Track routing nodes
this.input = audioContext.createGain(); // Track input
this.output = audioContext.createGain(); // Track output
this.fader = audioContext.createGain(); // Volume fader
this.panner = audioContext.createStereoPanner(); // Pan control
// Effects
this.eqLow = audioContext.createBiquadFilter();
this.eqLow.type = 'lowshelf';
this.eqLow.frequency.value = 250;
this.eqLow.gain.value = 0;
this.eqMid = audioContext.createBiquadFilter();
this.eqMid.type = 'peaking';
this.eqMid.frequency.value = 1000;
this.eqMid.Q.value = 1;
this.eqMid.gain.value = 0;
this.eqHigh = audioContext.createBiquadFilter();
this.eqHigh.type = 'highshelf';
this.eqHigh.frequency.value = 4000;
this.eqHigh.gain.value = 0;
// Sends
this.sends = new Map();
// Connect the main signal path
this.input
.connect(this.eqLow)
.connect(this.eqMid)
.connect(this.eqHigh)
.connect(this.panner)
.connect(this.fader)
.connect(this.output);
// Initial settings
this.fader.gain.value = 0.75;
this.panner.pan.value = 0;
this.muted = false;
this.soloed = false;
}
// Volume control (0-1)
setVolume(value) {
this.fader.gain.linearRampToValueAtTime(
value,
this.audioContext.currentTime + 0.01
);
}
// Pan control (-1 to 1)
setPan(value) {
this.panner.pan.linearRampToValueAtTime(
value,
this.audioContext.currentTime + 0.01
);
}
// EQ controls
setLowEQ(gain) {
this.eqLow.gain.value = gain;
}
setMidEQ(gain) {
this.eqMid.gain.value = gain;
}
setHighEQ(gain) {
this.eqHigh.gain.value = gain;
}
// Mute control
setMute(mute) {
this.muted = mute;
if (mute) {
// Remember the current level so unmuting can restore it
this.unmutedVolume = this.fader.gain.value;
}
this.fader.gain.linearRampToValueAtTime(
mute ? 0 : this.unmutedVolume || 0.75,
this.audioContext.currentTime + 0.01
);
}
// Add a send to an effects bus
addSend(name, destination, level = 0.5) {
const sendGain = this.audioContext.createGain();
sendGain.gain.value = level;
// Connect from pre-fader (after EQ)
this.eqHigh.connect(sendGain);
sendGain.connect(destination);
this.sends.set(name, sendGain);
}
// Set send level
setSendLevel(name, level) {
const send = this.sends.get(name);
if (send) {
send.gain.linearRampToValueAtTime(
level,
this.audioContext.currentTime + 0.01
);
}
}
}
class MixingConsole {
constructor(audioContext) {
this.audioContext = audioContext;
this.tracks = new Map();
// Master bus
this.masterBus = audioContext.createGain();
this.masterBus.connect(audioContext.destination);
// Effects buses (reverb, delay, etc.)
this.effectsBuses = new Map();
// Create some standard effects buses
this.createReverbBus();
this.createDelayBus();
}
createTrack(name) {
const track = new AudioTrack(this.audioContext, name);
track.output.connect(this.masterBus);
// Connect to standard effect sends
track.addSend('reverb', this.effectsBuses.get('reverb'), 0);
track.addSend('delay', this.effectsBuses.get('delay'), 0);
this.tracks.set(name, track);
return track;
}
removeTrack(name) {
const track = this.tracks.get(name);
if (track) {
track.output.disconnect();
this.tracks.delete(name);
}
}
setMasterVolume(value) {
this.masterBus.gain.linearRampToValueAtTime(
value,
this.audioContext.currentTime + 0.01
);
}
createReverbBus() {
const reverbBus = this.audioContext.createGain();
// We'll create a convolver for reverb
const convolver = this.audioContext.createConvolver();
// Generate an impulse response algorithmically
// (or you could load a real IR file)
this.createImpulseResponse().then(buffer => {
convolver.buffer = buffer;
});
// Connect the reverb chain with a dry/wet mix
const dryGain = this.audioContext.createGain();
const wetGain = this.audioContext.createGain();
dryGain.gain.value = 0.5;
wetGain.gain.value = 0.5;
reverbBus.connect(dryGain);
reverbBus.connect(convolver);
convolver.connect(wetGain);
dryGain.connect(this.masterBus);
wetGain.connect(this.masterBus);
this.effectsBuses.set('reverb', reverbBus);
}
createDelayBus() {
const delayBus = this.audioContext.createGain();
// Create a stereo delay effect
const delayLeft = this.audioContext.createDelay(2.0);
const delayRight = this.audioContext.createDelay(2.0);
const feedback = this.audioContext.createGain();
delayLeft.delayTime.value = 0.25;
delayRight.delayTime.value = 0.5;
feedback.gain.value = 0.3;
// Create a stereo split
const splitter = this.audioContext.createChannelSplitter(2);
const merger = this.audioContext.createChannelMerger(2);
// Connect the delay network
delayBus.connect(splitter);
splitter.connect(delayLeft, 0);
splitter.connect(delayRight, 1);
delayLeft.connect(merger, 0, 0);
delayRight.connect(merger, 0, 1);
// Feedback loop
merger.connect(feedback);
feedback.connect(delayLeft);
feedback.connect(delayRight);
// Connect to master
merger.connect(this.masterBus);
this.effectsBuses.set('delay', delayBus);
}
// Create a simple algorithmic impulse response for reverb
async createImpulseResponse() {
const sampleRate = this.audioContext.sampleRate;
const length = 2 * sampleRate; // 2 seconds
const decay = 2.0;
const buffer = this.audioContext.createBuffer(2, length, sampleRate);
// Fill both channels with noise that decays exponentially
for (let channel = 0; channel < 2; channel++) {
const channelData = buffer.getChannelData(channel);
for (let i = 0; i < length; i++) {
// Random noise
const white = Math.random() * 2 - 1;
// Exponential decay
channelData[i] = white * Math.pow(1 - i / length, decay);
}
}
return buffer;
}
}
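Putting the console to work might look like the sketch below; drumBuffer stands in for an AudioBuffer you've already decoded, and any AudioNode can feed a track's input:
const ctx = new AudioContext();
const mixer = new MixingConsole(ctx);

const drums = mixer.createTrack('drums');
const vocals = mixer.createTrack('vocals');

// Feed a source into the drum track (drumBuffer is assumed to be decoded elsewhere)
const drumSource = ctx.createBufferSource();
drumSource.buffer = drumBuffer;
drumSource.connect(drums.input);

drums.setVolume(0.8);
drums.setLowEQ(3);                  // +3 dB low shelf
vocals.setSendLevel('reverb', 0.4); // send some vocal signal to the reverb bus
mixer.setMasterVolume(0.9);

drumSource.start();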
Creating Transport Controls for Audio Projects
Let's create a transport system for precise control in audio applications:
class TransportController {
constructor(audioContext) {
this.audioContext = audioContext;
this.isPlaying = false;
this.isPaused = false;
this.startTime = 0;
this.pauseTime = 0;
this.tempo = 120;
this.timeSignature = { numerator: 4, denominator: 4 };
this.loopRegion = { start: 0, end: 0, enabled: false };
this.markers = new Map();
// Callbacks
this.onPlay = null;
this.onPause = null;
this.onStop = null;
this.onPositionChange = null;
// Scheduler
this.scheduledEvents = [];
this.nextScheduledEventId = 0;
}
// Convert between different time formats
secondsToBeats(seconds) {
return seconds / 60 * this.tempo;
}
beatsToSeconds(beats) {
return beats * 60 / this.tempo;
}
secondsToMeasures(seconds) {
const beats = this.secondsToBeats(seconds);
const beatsPerMeasure = this.timeSignature.numerator;
return beats / beatsPerMeasure;
}
// Transport controls
play() {
if (this.isPlaying) return;
if (this.isPaused) {
// Resume from pause
const elapsedTime = this.pauseTime - this.startTime;
this.startTime = this.audioContext.currentTime - elapsedTime;
this.isPaused = false;
} else {
// Start from beginning or current position
this.startTime = this.audioContext.currentTime;
}
this.isPlaying = true;
if (this.onPlay) this.onPlay();
this.scheduleEvents();
this.updatePosition();
}
pause() {
if (!this.isPlaying || this.isPaused) return;
this.pauseTime = this.audioContext.currentTime;
this.isPaused = true;
this.isPlaying = false;
if (this.onPause) this.onPause();
}
stop() {
if (!this.isPlaying && !this.isPaused) return;
this.isPlaying = false;
this.isPaused = false;
this.startTime = 0;
this.pauseTime = 0;
// Clear all scheduled events
this.scheduledEvents.forEach(event => {
if (event.timeoutId) {
clearTimeout(event.timeoutId);
}
});
this.scheduledEvents = [];
if (this.onStop) this.onStop();
}
// Get current playback position in seconds
getCurrentTime() {
if (this.isPaused) {
return this.pauseTime - this.startTime;
} else if (this.isPlaying) {
return this.audioContext.currentTime - this.startTime;
} else {
return 0;
}
}
// Seek to a specific position in seconds
seek(time) {
if (this.isPlaying) {
this.startTime = this.audioContext.currentTime - time;
} else if (this.isPaused) {
this.pauseTime = this.startTime + time;
}
// Reschedule events from new position
if (this.isPlaying) {
// Clear existing scheduled events
this.scheduledEvents.forEach(event => {
if (event.timeoutId) {
clearTimeout(event.timeoutId);
}
});
this.scheduledEvents = [];
this.scheduleEvents();
}
if (this.onPositionChange) this.onPositionChange(time);
}
// Set loop region
setLoopRegion(start, end) {
this.loopRegion.start = start;
this.loopRegion.end = end;
}
// Enable/disable looping
setLooping(enabled) {
this.loopRegion.enabled = enabled;
}
// Add marker at specific time
addMarker(name, time) {
this.markers.set(name, time);
}
// Jump to marker
jumpToMarker(name) {
const markerTime = this.markers.get(name);
if (markerTime !== undefined) {
this.seek(markerTime);
}
}
// Schedule an event at specific time
scheduleEvent(callback, time) {
const id = this.nextScheduledEventId++;
const event = {
id,
callback,
time,
timeoutId: null
};
if (this.isPlaying) {
const now = this.getCurrentTime();
const delay = Math.max(0, (time - now) * 1000);
event.timeoutId = setTimeout(() => {
callback();
// Remove from scheduled events
this.scheduledEvents = this.scheduledEvents.filter(e => e.id !== id);
// Handle looping
if (this.loopRegion.enabled && time >= this.loopRegion.end) {
this.seek(this.loopRegion.start);
}
}, delay);
}
this.scheduledEvents.push(event);
return id;
}
// Reschedule all events (called when play starts or after seeking)
scheduleEvents() {
// Create a copy to avoid modification issues during iteration
const eventsToSchedule = [...this.scheduledEvents];
// Clear existing timeouts
eventsToSchedule.forEach(event => {
if (event.timeoutId) {
clearTimeout(event.timeoutId);
event.timeoutId = null;
}
});
// Reschedule
const now = this.getCurrentTime();
eventsToSchedule.forEach(event => {
if (event.time >= now) {
const delay = (event.time - now) * 1000;
event.timeoutId = setTimeout(() => {
event.callback();
this.scheduledEvents = this.scheduledEvents.filter(e => e.id !== event.id);
// Handle looping
if (this.loopRegion.enabled && this.getCurrentTime() >= this.loopRegion.end) {
this.seek(this.loopRegion.start);
}
}, delay);
}
});
}
// Periodically update position (for UI)
updatePosition() {
if (!this.isPlaying) return;
const currentTime = this.getCurrentTime();
if (this.onPositionChange) {
this.onPositionChange(currentTime);
}
// Handle loop region
if (this.loopRegion.enabled && currentTime >= this.loopRegion.end) {
this.seek(this.loopRegion.start);
}
// Update again in about 16ms (~60fps)
requestAnimationFrame(() => this.updatePosition());
}
}
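A short sketch of driving a project with the transport; the scheduled callback here just logs, but in a real application it would trigger clips or automation:
const ctx = new AudioContext();
const transport = new TransportController(ctx);

transport.onPositionChange = seconds => {
  const measure = Math.floor(transport.secondsToMeasures(seconds)) + 1;
  console.log(`position: ${seconds.toFixed(2)}s (measure ${measure})`);
};

// Loop the first four seconds and drop a marker at the chorus
transport.setLoopRegion(0, 4);
transport.setLooping(true);
transport.addMarker('chorus', 2);

transport.scheduleEvent(() => console.log('hit at 1s'), 1);
transport.play();

// Later: transport.jumpToMarker('chorus'); or transport.stop();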
Conclusion: The Future of Web Audio
The Web Audio API has transformed browsers into powerful audio workstations capable of professional-grade sound processing. From spatial audio and complex synthesizers to complete DAW-like applications, the capabilities continue to expand.
The integration with other web technologies like WebXR, Canvas, and Web MIDI extends the potential even further, enabling immersive audio experiences that were once only possible with native applications.
As we look to the future, technologies like AudioWorklet and WebAssembly are unlocking new performance frontiers, while creative developers continue to push the boundaries of what's possible. The Web Audio API has matured into a robust platform for audio programming that can support everything from games and virtual reality to serious music production tools.
By mastering these advanced concepts, you're well-equipped to create remarkable audio applications that run directly in the browser, accessible to users across devices and platforms without installation. The web is increasingly becoming the universal platform for audio experiences, and the tools we've explored in this article give you the power to be at the forefront of this evolution.