SynthQuest 2: Building an oscillator in JUCE

Harshit Sarma
4 min read

Welcome back to the SynthQuest series, where we’re building a VST from scratch. This is Episode 2, and today’s quest is to create an oscillator. If you are new here, make sure to check out the first episode to catch up. Before we begin with our code, let’s take a moment to talk about oscillators. An oscillator is the heart of a synthesizer: it generates a continuous waveform, which is the basis of all synthesized sound. In simpler words, it’s the basic component that produces sound. We'll start by implementing the sine waveform for now and add the others later.

Let’s explore some audio concepts that will help you understand what’s happening under the hood. They’re fundamental to how digital audio works and will come up often as we build our synth:

Sample Rate: The number of audio samples captured per second, measured in Hertz (Hz). It's like FPS in video—higher sample rate means smoother and more detailed sound.

Bit Depth: Determines the resolution of each audio sample, affecting the sound's dynamic range and precision. It's like the division of the Y-axis (amplitude); more divisions mean greater dynamic range, clearer sound, and less noise.

Buffer Size: The number of audio samples processed at a time before sending the data to the output. The computer processes audio data in chunks, and we can define the size of the chunks. A simple analogy for this would be packing boxes for delivery. Smaller boxes (with a low buffer size) are sent faster but require more frequent packing (resulting in more CPU usage), while larger boxes (with a high buffer size) take longer but reduce the number of trips.
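To make the buffer-size trade-off concrete, here is a quick standalone sketch (plain C++, no JUCE; the function name and the example values are just illustrative) of how buffer size translates into latency:

```cpp
// Hypothetical helper, not part of JUCE: how much delay a given
// buffer size adds at a given sample rate.
double bufferLatencyMs(int bufferSize, double sampleRate)
{
    // A buffer of N samples holds N / sampleRate seconds of audio,
    // so that's the minimum delay it adds before reaching the output.
    return 1000.0 * bufferSize / sampleRate;
}
```

At 44.1 kHz, for example, a 512-sample buffer adds roughly 11.6 ms of latency, while a 64-sample buffer adds roughly 1.5 ms: smaller boxes ship faster, but the CPU has to pack them far more often.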

Now let’s begin with our project. Open Projucer and click on the IDE icon to open Visual Studio. We will go to the BasicOscAudioProcessor class in the PluginProcessor.h file and initialize an Oscillator as a private member.

juce::dsp::Oscillator<float> osc{ [](float x) {return std::sin(x); } };

To explain this snippet: we are using the Oscillator class from the juce::dsp module, specifying float as the template parameter, so the oscillator generates waveform values in 32-bit floating-point precision. float is more precise than integer samples (which reduces quantization distortion) and uses half the memory of double, since double is 64-bit and float is 32-bit. The lambda we pass in maps phase to amplitude with std::sin, which makes this a sine wave oscillator. The file should look like this:
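To see what that lambda is actually doing, here is a standalone sketch (plain C++, no JUCE; renderSine is a made-up name) of the same idea: sweep a phase value through each cycle and map it to an amplitude with std::sin:

```cpp
#include <cmath>
#include <vector>

// Generate numSamples of a sine wave the way an oscillator does:
// advance a phase each sample, and map phase -> amplitude with sin().
std::vector<float> renderSine(float frequencyHz, double sampleRate, int numSamples)
{
    const double twoPi = 6.283185307179586;
    const double phaseIncrement = twoPi * frequencyHz / sampleRate;

    std::vector<float> out(numSamples);
    double phase = 0.0;
    for (int i = 0; i < numSamples; ++i)
    {
        out[i] = static_cast<float>(std::sin(phase)); // the "waveform function"
        phase += phaseIncrement;
    }
    return out;
}
```

Swapping std::sin for a different phase-to-amplitude function is exactly how we'll get other waveforms in later episodes.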

It’s time to give the oscillator the data it needs before it can process any sound. We’ll head over to the PluginProcessor.cpp file, and in the prepareToPlay function, we’ll write a few lines of code. First, we’ll declare a ProcessSpec object; let’s call it spec.

juce::dsp::ProcessSpec spec;

After that, we’ll set the maximum block size (the buffer size), the sample rate, and the number of output channels.

spec.maximumBlockSize = samplesPerBlock;

spec.sampleRate = sampleRate;

spec.numChannels = getTotalNumOutputChannels();

After defining all of this, we will prepare the oscillator.

osc.prepare(spec);

Now, we have fed all the necessary data that the oscillator requires before it can play. This is how the prepareToPlay function should look:
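As a standalone analogy (plain C++, no JUCE; TinySineOsc is a made-up type), here is why this prepare/process split matters: prepare() stores the session settings once, and process() can then be called block after block while the phase stays continuous across calls:

```cpp
#include <cmath>
#include <vector>

// Minimal sketch of the prepare/process pattern, not JUCE's actual class.
struct TinySineOsc
{
    double sampleRate  = 44100.0;
    double frequencyHz = 220.0;
    double phase       = 0.0;

    void prepare(double newSampleRate)
    {
        sampleRate = newSampleRate; // stored once, like ProcessSpec
        phase = 0.0;
    }

    void process(std::vector<float>& block)
    {
        const double twoPi = 6.283185307179586;
        const double inc = twoPi * frequencyHz / sampleRate;
        for (auto& sample : block)
        {
            sample = static_cast<float>(std::sin(phase));
            phase += inc;
            if (phase >= twoPi) phase -= twoPi; // wrap to keep precision
        }
    }
};
```

Because phase is a member rather than a local, the waveform continues smoothly from one block to the next instead of clicking at every block boundary.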

Let’s put audio into the buffer so that it can be heard. In the processBlock function (same file), we’ll wrap the incoming buffer in an AudioBlock object, let’s call it audioBlock. An AudioBlock is a lightweight wrapper that lets us work with the buffer at a very high level. It looks like this:

juce::dsp::AudioBlock<float> audioBlock{ buffer };

Now, we can deal with the buffer very easily. We pass audioBlock to osc.process(), instructing the oscillator to generate a waveform and overwrite the buffer with its output. In our case, the buffer is empty because no previous audio data is present.

osc.process(juce::dsp::ProcessContextReplacing<float>(audioBlock));

With the osc.process() function, we pass the buffer to the oscillator. It is like saying to the oscillator, “Hey, this is a buffer, please fill this with your waveform data”.

What is ProcessContextReplacing? We wrap audioBlock inside ProcessContextReplacing<float>, which tells the oscillator to overwrite the buffer with new waveform data, replacing whatever was there before. Its counterpart is ProcessContextNonReplacing, which takes separate input and output blocks; it is used when we want to keep the original audio intact, for example to mix the oscillator’s output with existing audio instead of replacing it. Here, we pass audioBlock as an argument to the ProcessContextReplacing constructor.
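Here is a loose standalone illustration (plain C++, no JUCE; the function names are made up) of the difference in spirit: a replacing process overwrites whatever is in the buffer, while mixing keeps the existing audio and adds the new signal on top:

```cpp
#include <cstddef>
#include <vector>

// Overwrite the buffer with the oscillator's output ("replacing").
void fillReplacing(std::vector<float>& buffer, const std::vector<float>& oscOut)
{
    for (std::size_t i = 0; i < buffer.size(); ++i)
        buffer[i] = oscOut[i];   // previous contents are discarded
}

// Add the oscillator's output on top of existing audio ("mixing").
void mixIn(std::vector<float>& buffer, const std::vector<float>& oscOut)
{
    for (std::size_t i = 0; i < buffer.size(); ++i)
        buffer[i] += oscOut[i];  // previous contents are kept
}
```

Note that this only illustrates the replace-versus-keep distinction; JUCE’s actual non-replacing context reads from one block and writes to another rather than accumulating in place.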

Now that the buffer is filled, let’s try to run the project. Warning: please turn down your system volume before building, since we haven’t done anything about gain in the program yet. Once the program is built, it will produce a sine wave tone.

Here’s the GitHub link. Just clone the repo and open the .jucer file in Projucer.

In the next episode, we will shift our code to the juce::SynthesiserSound and juce::SynthesiserVoice classes. Check these out in the documentation here. Till then, Gracias, see you in the next one!
