SynthQuest 3: Implementing MIDI input

Harshit Sarma

Welcome back to the SynthQuest series, where we’re building a VST from scratch. This is Episode 3, and today’s quest is to create a juce::Synthesiser object. If this is your first time here, make sure to check out the previous episodes to catch up. Alright, let’s begin. Why do we need a juce::Synthesiser object? Because it handles a lot of things for us, and MIDI input is one of them (when we press a key on a MIDI keyboard, a message describing that key press is sent to the plugin, and that message is what we call MIDI input).

Additionally, our current synth is only producing one note continuously. To change this and play different notes when keys are pressed, we need a juce::Synthesiser object. Then we will define two classes in the PluginProcessor.h file: SynthSound and SynthVoice. Why do we need these? Because we have to tell our juce::Synthesiser which sounds can be played and how those sounds should be generated; these two classes are how we feed that logic to the juce::Synthesiser. Generally, we create separate files for these classes, but since our project is relatively small, we are not going to do it that way. Let’s get started:

First, we’ll define the SynthSound class and override its member functions, since we are deriving it from the abstract juce::SynthesiserSound class.

class SynthSound : public juce::SynthesiserSound
{
public:
    // Allow this sound to be triggered by every MIDI note.
    bool appliesToNote(int midiNoteNumber) override { return true; }

    // Allow this sound to be triggered on every MIDI channel.
    bool appliesToChannel(int midiChannel) override { return true; }
};

The SynthSound class acts as a filter. For example, if you want to block certain MIDI notes from being played or restrict specific MIDI channels (which you can loosely think of as being like tracks in a DAW), this is where you do it. The appliesToNote() function decides which notes this sound responds to, and here we are returning true for all of them. The appliesToChannel() function decides which channels are allowed, and here we are allowing all of them. Right now this is the default state where everything is allowed.
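As a quick hypothetical example (not something we need in this project), if you wanted this sound to respond only to MIDI channel 1, you could change the second function to:

bool appliesToChannel(int midiChannel) override { return midiChannel == 1; } // only channel 1 triggers this sound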

Similarly, for the SynthVoice class:

class SynthVoice : public juce::SynthesiserVoice
{
public:
    bool canPlaySound(juce::SynthesiserSound*) override;

    void startNote(int midiNoteNumber, float velocity, juce::SynthesiserSound* sound, int currentPitchWheelPosition) override;
    void stopNote(float velocity, bool allowTailOff) override;

    void controllerMoved(int controllerNumber, int newControllerValue) override;
    void pitchWheelMoved(int newPitchWheelValue) override {}

    void prepareToPlay(double sampleRate, int samplesPerBlock, int outputChannels);
    void renderNextBlock(juce::AudioBuffer<float>& outputBuffer, int startSample, int numSamples) override;

private:
};

The SynthVoice class is where we are going to write the logic for producing sound. For example, if we want a gain slider that increases or decreases the volume, the logic for that will live here. In short, this is where the audio processing happens.

First, let’s understand what a voice is. If I play a C major chord, C E G, we will need three voices, and each voice will be responsible for generating its own sound and waveform. If you recall, our oscillator was producing only one sound: it had a single voice, and we had no control over it. But with juce::Synthesiser we can create polyphonic sound as well.

The canPlaySound() function checks whether this voice can play a given sound. The startNote() function is called when a MIDI note is pressed, and the stopNote() function is called when that note is released; these are the places where we usually apply the ADSR envelope logic. The controllerMoved() function handles changes from MIDI controllers like mod wheels or expression pedals. The pitchWheelMoved() function handles pitch bending, but it is currently not doing anything (just vibing for now). Now, you might be wondering why there's a prepareToPlay() function here when we already have one in the main class. The difference is that the main prepareToPlay() sets up the entire plugin, while this one prepares an individual voice. Finally, we have the renderNextBlock() function. This is where our oscillators, gain, ADSR, and so on will actually generate the audio for this voice. Previously we handled those in the main processor class, but the more standard way is to do it inside the voice itself.
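To make this a little more concrete, here is a minimal sketch of how canPlaySound() is commonly written in JUCE (we will fill in the real definitions in the next episode): the voice simply checks whether the sound it was handed is one of our SynthSound objects.

bool SynthVoice::canPlaySound(juce::SynthesiserSound* sound)
{
    // This voice only plays sounds of our SynthSound type.
    return dynamic_cast<SynthSound*>(sound) != nullptr;
}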

So, to sum it all up, SynthSound acts as the filter deciding which notes and channels are allowed, and you can think of each SynthVoice as an individual musician in an orchestra or band, generating sound from the MIDI input it receives. Now we are going to create a juce::Synthesiser object in the main processor class in PluginProcessor.h, and we'll name it synth.

juce::Synthesiser synth;
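Just to give you a rough idea of where this is heading (we will wire everything up properly in the coming episodes), a juce::Synthesiser is typically used like this: register one sound and a handful of voices in the processor's constructor (more voices means more notes can sound at once, which is the polyphony we talked about), tell it the sample rate in prepareToPlay(), and let it render the incoming MIDI in processBlock(). A minimal sketch, assuming the standard Projucer-generated buffer and midiMessages parameters:

// In the constructor: one sound, several voices for polyphony.
synth.addSound(new SynthSound());
for (int i = 0; i < 5; ++i)
    synth.addVoice(new SynthVoice());

// In prepareToPlay(): the synth needs to know the sample rate.
synth.setCurrentPlaybackSampleRate(sampleRate);

// In processBlock(): let the synth turn incoming MIDI into audio.
synth.renderNextBlock(buffer, midiMessages, 0, buffer.getNumSamples());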

So now the PluginProcessor.h file should look like this:
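Since every Projucer project names its processor class differently, here is only a rough sketch of the layout you should end up with; SynthQuestAudioProcessor is a placeholder for whatever your generated processor class is called, and the generated boilerplate is omitted:

class SynthSound : public juce::SynthesiserSound { /* as defined above */ };
class SynthVoice : public juce::SynthesiserVoice { /* as defined above */ };

class SynthQuestAudioProcessor : public juce::AudioProcessor // placeholder name
{
public:
    // ... Projucer-generated declarations (prepareToPlay, processBlock, createEditor, etc.) ...

private:
    juce::Synthesiser synth; // our new synthesiser object

    // ... rest of the generated code ...
};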

You can check out the synth-quest-3 branch in the GitHub repository for the code till here.

In the next episode, we will move some code to the classes we created today and organize it. Grazie, and see you in the next one!
