Disclaimer: There’s going to be some serious theory in this article.
Ah yes, the big scary ‘t’ word. The bane of many musicians’ existence. We just want to get into the music-making, into the studio and chip away at our next masterpiece without having to worry about concepts like frequency bands, key changes or bit-rate.
But unless we want to pay other people to do the work for us — which, I must say, is not necessarily a bad strategy — these are bridges we simply must cross. Burning them would be satisfying, but ultimately crippling on your journey to becoming the next *insert influential songwriter here*.
By learning about some of the main audio effects — their intended purposes and potential uses — you give yourself a launching pad from which to sky-rocket your home studio's productions.
Whether you want to learn to mix your own music, delve into the wide world of VSTs to enhance your studio's toolbox, or are just purely interested in the theory behind it — surely not! — it is my sincere hope that this article sets you well on your way to achieving your goals.
I am no professional engineer and am not pretending this piece will turn you from a total novice into a master of production, but it will hopefully serve as an easy-to-understand overview of the most commonly used audio effects.
You can then apply this knowledge to your own creative works, learning through practice and application, and slowly become a fairly proficient audio engineer — one who's comfortable using most of the generic audio effects on their tracks.
Of course, for a more in-depth mixing tutorial, there are literally hundreds of books on the matter that will kickstart your music studio’s campaign, attacking the ‘problem’ (that is, how to make a song sound good) from a number of angles.
Entire books have been written on the individual audio effects I’m only going to spend a few hundred words discussing below — so if you want extra resources beyond what I provide, be my guest.
I won’t get jealous.
The Big 5
Generally speaking, there are 5 main audio effects that are present on nearly every song recorded or played. Even beyond the world of mixing — when you're just messing around with some guitar pedals, or even your hi-fi system — it is likely that at least some of The Big 5 will become involved in some way.
Due to their prominence, it is of vital importance that we get a basic understanding of their functions before we embark on any sort of home studio related venture, even if we want to stick with vanilla VST/audio effects going forward.
The Big 5 are:
- Equalization
- Compression
- Reverb
- Delay
- Saturation
These will be the class’ focus today, so sit up, pay attention and get ready to take notes. If you’re lucky — and well-behaved — after class I might mention a few more fun audio effects that can wow your productions.
Okay, ready?
Let’s go.
EQs (Equalizers)
What are they?
Equalizers (better known as EQs) are one of the most common forms of audio processing, and probably the only one of the 5 that consumers will have used without ever producing a song.
You can find EQs on your phone, often with a number of presets, on your hi-fi system, on Spotify, on a guitar pedal and even in your car.
Beyond these household uses, EQs find themselves within amplifiers and speakers, used in radio and TV production rooms and in theatres.
The main types of EQ are graphic and parametric. There are a few others, though at a basic functional level they all do the same thing — adjust the volume (electrical signal) of a specific 'frequency band', thus altering the overall sound.
This can simply be used to make something sound more to your liking, such as turning up the ‘bass’ knob in your car when a specific song comes on, or to remove unwanted frequencies (sounds), such as electrical hums and squeals.
Most EQs that you’ll come across in your basic mixing and music-creating journeys will also include a Q value, which dictates the width of the frequency band being altered.
For example, let's say you're lowering the volume of the bass in a track you're making. A low Q value indicates a wide band, meaning that your cut to the bass end might stretch from 100–400 Hz.
On the flipside, if you were to employ a much higher Q value, the change in volume might only apply to a range of 250–270 Hz.
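If you like numbers, the relationship is easy to sketch: Q is roughly the centre frequency of the band divided by its bandwidth. Here's a minimal Python illustration — the little helper function is mine, not from any particular EQ plugin:

```python
# Rough sketch of the Q/bandwidth relationship: Q = centre frequency
# divided by bandwidth, so a low Q means a wide band.

def q_factor(low_hz: float, high_hz: float) -> float:
    """Approximate Q for a band stretching from low_hz to high_hz."""
    centre = (low_hz * high_hz) ** 0.5   # geometric mean of the band edges
    bandwidth = high_hz - low_hz
    return centre / bandwidth

print(q_factor(100, 400))  # ~0.67 -> wide, gentle change
print(q_factor(250, 270))  # ~13.0 -> narrow, surgical change
```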
Each way of manipulating the audio signal is called a 'filter', and there are many different kinds. As you go further into your career as the next greatest musician you will learn many of these, but for now I will keep it simple and give a basic overview of the most common and accessible filtering methods.
Parametric EQs involve a few other functions, most of which I won't dive too far into for fear of getting sucked into the never-ending vacuum (keep a keen eye out for more vacuum-related references along this series!) of the intricacies of audio engineering.
Okay. That’s enough of that. Let’s power onto the next section, where I’ll give you a little taste of what you can do with the sheer power afforded to you by a simple little graph-looking-thingy on your screen.
How and when should I use it?
I must admit — as I ironically attempt to instruct my loyal readers through scripture — learning and employing EQ is better done visually (through image) and sonically (through song) than the written word.
While hopefully the images included will grant some perspective, I wouldn’t be shocked if a few newcomers to EQ have been left baffled by the previous section.
Not to worry — there's a lot more to EQ than knowing what it does. In fact, expecting to be told specifically what to do with any mix is a toxic mindset to have in the first place. The best way to do anything is (this isn't going to shock you, and may in fact aggravate you…) to do it for yourself.
There's a lot of: 'Well, someone on YouTube said that if you EQ vocals with a shelf filter at 5 kHz and a slight boost at 450 Hz then BAM! Your song will be radio-ready. But when I did it to my spanking-new track, it made it sound even worse!'
Play around with it! Grab a random song. Drag and drop various EQ points. See how each little change you make manipulates the audio, altering it in ways that could either be completely unnoticeable or totally game-changing.
With that said, there are a number of oft-employed ways that equalization is used to best enhance a song’s mix. I’ll briefly touch on some of these techniques before moving onto audio effect number 2 of 5.
Sweeping
Used to find frankly disgusting sounds, such as room resonances or malevolent echoes, sweeping is performed by taking a bell filter, boosting it by an absurd amount (often 15 dB or more), and then sweeping (I know, right?!) through the entire frequency spectrum, noting particularly offensive sounds.
How you define particularly offensive is actually a point of contention for this method and is one of its major drawbacks, but at a basic level it can be effective for EQ newbies.
Once done, you go back to each frequency range that you noted down and add a notch filter.
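For the curious, here's a rough Python sketch of the whole sweep-then-cut routine, built on the well-known RBJ Audio EQ Cookbook peaking-filter formulas. The centre frequencies, Q values and the 315 Hz 'offender' are all placeholders — your ears pick the real ones:

```python
# Sweep-then-cut sketch using the standard RBJ Audio EQ Cookbook
# peaking-filter formulas. `audio` is placeholder noise standing in
# for a real recording.
import numpy as np
from scipy.signal import lfilter

def peaking_biquad(f0, gain_db, q, fs):
    """(b, a) coefficients for a bell boost/cut centred at f0 Hz."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 44100
audio = np.random.randn(fs)  # stand-in for a real track

# 'Sweep': audition an exaggerated +15 dB bell at various centres,
# listening for the frequencies that turn ugly.
for f0 in [120, 250, 500, 1000, 2000, 4000]:
    b, a = peaking_biquad(f0, gain_db=15, q=8, fs=fs)
    boosted = lfilter(b, a, audio)  # listen to each boosted version

# ...then tame the offender (315 Hz is a made-up culprit) with a
# narrow cut.
b, a = peaking_biquad(315, gain_db=-9, q=10, fs=fs)
cleaned = lfilter(b, a, audio)
```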
Mirror EQing
Sometimes called complementary EQing, this is a method of equalization that is focussed on 'big' musical projects with a number of different tracks and layers that have some overlap in sonic qualities.
When mixing a track that feels muddy, or a song where instruments simply disappear into the chaos, mirror eqing is a popular method to add some space to the mix.
This is done by boosting a particular frequency signal within one track, and then cutting that exact same frequency band in another, thus carving out some space in the auditory spectrum for the sounds to coexist peacefully.
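As a tiny illustration — assuming the hypothetical `peaking_biquad()` helper and `fs` from the sweeping sketch above are still in scope — mirror EQing might look something like this; the 3 kHz centre and ±3 dB gains are purely illustrative:

```python
# Mirror EQ sketch. Assumes peaking_biquad() and fs from the sweeping
# example above are in scope; the two 'tracks' are placeholder noise.
import numpy as np
from scipy.signal import lfilter

vocal = np.random.randn(fs)   # stand-in lead vocal
guitar = np.random.randn(fs)  # stand-in rhythm guitar

b, a = peaking_biquad(3000, gain_db=3, q=2, fs=fs)
vocal_out = lfilter(b, a, vocal)    # boost vocal presence around 3 kHz

b, a = peaking_biquad(3000, gain_db=-3, q=2, fs=fs)
guitar_out = lfilter(b, a, guitar)  # carve the same band from the guitar
```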
Pass filtering
As touched on earlier, pass filtering — using high-pass and low-pass filters to remove frequencies above or below a chosen cutoff — is an essential element of any mixing exercise and is one of the very few cut-and-dried pieces of advice: you should do this on everything.
Of course, experience comes into the picture when deciding where to cut, the answer to which is best informed by listening to the song/audio and not by what that random bloke off reddit said.
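To make that concrete, here's a minimal high-pass sketch in Python with scipy — the 80 Hz cutoff is just a common starting point for non-bass tracks, not a rule:

```python
# Minimal high-pass sketch: strip everything below ~80 Hz from a track.
# The cutoff is a common starting point, not a rule -- use your ears.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100
audio = np.random.randn(fs)  # stand-in for a real recording

sos = butter(4, 80, btype="highpass", fs=fs, output="sos")
filtered = sosfilt(sos, audio)  # sub-bass rumble removed
```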
Other uses
The above are only the tip of the iceberg when it comes to employing EQ to your home studio works.
It can be used creatively, replicating a loudspeaker/radio type sound, or used to manipulate where certain tracks sit within a sonic field. You can make vocals sound tinny and bright, or keyboards sound boomy and wall-shakingly big.
What you do with EQ is up to you, but the options are near unlimited. No other effect is quite as vital to the sound of a complete song.
An example of a guitar piece with and without a high-pass filter. You can use high-pass filters to create lo-fi sounding sections to add a ‘build-up’ or change in sound without significantly altering volume
Compression
What is it?
Have you ever listened to a softer, acoustic song on Spotify while driving your car, turned it up so you could groove with it a bit better, only to nearly crash seconds later when the next song in the queue announces itself at a ridiculous volume, blasting from your speakers? Well, I have.
A compressor exists to avoid this very scenario from occurring, but generally within the vacuum (I’ve done it again!) of an individual song, or even individual recording.
Much like equalization, compression has important functionality beyond that of your next hip-hop project in Ableton Live. It is an essential component in many audio-related fields, such as radio, film and live performance.
The primary purpose of a compressor is to reduce the dynamic range of audio, allowing for a smooth, consistent listening experience.
In a general sense, there are two main types of compression — downward, which reduces signals above a threshold, and upward, which boosts signals below one — though under these categories fall independent subcategories that have their own popular uses.
How and when should I use it?
Whereas the technical properties of equalizers — not to dismiss their overall complexity — can be summarized rather briefly, compressors have a number of specific functions and controls that would be painful for me to express and equally boring for you to read in this instance.
If you would like to learn more, I would recommend reading an article, book chapter or watching a video dedicated to compression, as it is an in-depth topic and takes a little while to get the hang of.
The most common adjustable parameters for compression are: threshold, ratio, gain/makeup gain, attack, release, knee, peak/RMS detection and look-ahead.
For extremely basic purposes, the only three of these you need to know are threshold, ratio and gain.
The threshold sets the level past which the compressor starts working (above it for downward compression, below it for upward); the ratio determines how much the signal beyond the threshold is altered; and gain is adjusted to pull the overall volume of the track back to its original place, as compressors will often increase or decrease the volume of the entire recording.
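To make those three knobs concrete, here's a toy Python compressor that applies only threshold, ratio and makeup gain. Real units add attack/release smoothing and detection modes; the numbers here are arbitrary starting points:

```python
# Toy downward compressor in the dB domain: threshold, ratio and makeup
# gain only. Real compressors smooth the gain with attack/release.
import numpy as np

def compress(audio, threshold_db=-18.0, ratio=4.0, makeup_db=6.0):
    level_db = 20 * np.log10(np.abs(audio) + 1e-10)  # per-sample level
    over = np.maximum(level_db - threshold_db, 0.0)  # dB above threshold
    gain_db = -over * (1 - 1 / ratio) + makeup_db    # reduce the overshoot
    return audio * 10 ** (gain_db / 20)

fs = 44100
audio = 0.5 * np.random.randn(fs)  # stand-in for a real recording
squashed = compress(audio)
```

A signal 8 dB over the threshold at a 4:1 ratio comes out only 2 dB over — that's the whole trick.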
When to use a compressor
When mixing your song (or another artist's, for that matter), compressors are almost always going to be useful. Having too much dynamic range makes for a jarring auditory experience, and compression is essential in getting your song to sit together nicely in a mix.
Many engineers will actually prioritize compressing the track before they even get into basic equalization measures.
With that said, you want to be very careful not to overcompress your song, unless of course it is an artistic decision (overcompressed drums are a common creative choice, e.g. the intro to Live Forever by Oasis). Overuse of this effect can result in a tinny, boxy sound that seems too processed and non-musical.
How to avoid this of course comes with experience (oh don’t act so surprised!), but there is still a certain logic that even a novice can anticipate and execute.
It makes sense that a pop or electronic song with more virtual performances and a focus on being louder would require far more compression than an indie folk ballad.
The same concept applies to individual instruments — overdriven electric guitars or heavy metal drumming are bigger and have far less dynamic range than fingerpicking an acoustic guitar or playing a solo viola. The guitar and drums would likely take a lot more compression before they lost their magic than the acoustic or the viola.
Much like EQs, compressors have the unfair reputation of being functionally important but not artistically exciting, when the contrary is true.
There are all sorts of crazy musical ideas that can be implemented with a compressor — you just gotta have the flair to try.
A creative use of compression — over-pumping drums to add a level of aggression and intensity.
Reverb
What is it?
Reverb is essentially a mirror for sound.
Sound travels out from a source in a weird (though maybe not so weird if you're a physics major) fashion: it radiates in all directions, not just the direction it's aimed. When those sonic waves hit a solid object, they bounce off it, causing a reflection — or an echo.
This effect often goes unnoticed by the naked ear in smaller rooms, as the reflections arrive so close together that they are practically imperceptible. That said, reverb is of course magnified in larger spaces, such as cathedrals or designated studio rooms.
Bathrooms are often high-reverb environments due to the tiled floors — I’m sure some of you reading have spent many an hour in one of these environments singing operatically, the echoes morphing your terrible vocal performance into one mistakable for that of a virtuoso.
Such an effect was traditionally replicated by positioning a microphone (and often a loudspeaker replaying the performance) in a large recording chamber.
Reverb has since become the holy grail of virtual effects, with thousands of digital emulations of nearly any conceivable recording environment available as a simple (albeit oft expensive) download onto your DAW.
Want to replicate the Abbey Road studio where The Beatles recorded, well, Abbey Road? You can do that. Want to pretend your vocals were recorded in Notre-Dame? That option is available to you.
Reverb processing is also available in hardware, commonly contained within units such as guitar amplifiers, PA systems, FX pedals and analog studio equipment.
There are 4 main types of reverb.
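If you're curious what's under the hood, most algorithmic reverbs are built from networks of delayed, decaying reflections. Here's a toy Python version of the simplest building block, a feedback comb filter — a deliberately crude sketch of the idea, not how any commercial plugin actually sounds:

```python
# Toy feedback comb filter -- the crudest building block of classic
# algorithmic (Schroeder-style) reverbs. Each pass around the loop is
# one decaying 'reflection'; real reverbs run several combs plus
# all-pass filters together.
import numpy as np

def comb_reverb(audio, fs, delay_ms=50.0, decay=0.5):
    d = int(fs * delay_ms / 1000)     # reflection travel time, in samples
    out = audio.astype(float).copy()
    for n in range(d, len(out)):
        out[n] += decay * out[n - d]  # add a quieter copy of the past
    return out

fs = 44100
audio = np.random.randn(fs)  # stand-in for a dry recording
wet = comb_reverb(audio, fs)
```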
How and when should I use it?
What's interesting about reverb is that its use is often associated with experimentation and artistic venturing, relatively dissimilar to compression and equalization, which are primarily thought of as being functional. This is completely untrue…
Such a perception exists simply because reverb is a more common tool for sculpting unique soundscapes, not because it allows for any greater level of musical creativity.
Genres such as post-rock, shoegaze, dream pop and ambient all revolve around the heavy use of reverb effects, which feeds into this train of thought.
The reality is that reverb is an essential functional tool for any mix. When you record in a 'dry' environment — that is, your microphone is positioned so that it only picks up your vocal/acoustic performance — the resultant sound is, well, just bizarre. It sounds lifeless and dull, and the only way to bring a natural feel to the recording is, ironically, to add artificial reverb.
Additionally, reverb is an indispensable element when it comes to spacing a track. A well-tuned combination of EQ and reverb is able to create the illusion of space, sending certain supporting elements of a song to the 'back' of the auditory spectrum, while bringing primary tracks to the forefront.
It is vital both as a way of making your recordings sound natural and as a way of making your mixes deeper, thicker and better balanced. This is, of course, not to mention its many creative uses.
It is important not to overuse reverb VSTs in your musical works, unless you are intending to create a certain vibe or sound. Too much echo can cause confusion in the listener’s auditory spectrum, as elements of individual tracks designed to stand out blend together, muddying the song, morphing it into one big vacuum (it’s been a while) of sound.
BE SUBTLE.
An example of how light vs heavy reverb usage on guitars can impact a mix. You can hear the drastic impact on spacing even though no volumes have been changed.
So, you ask, how do I avoid creating this overlap of echo and fuzziness in my songs, without my acoustic recordings sounding dry and unnatural?
Enter…
Delay
What is it?
Delay is pretty much exactly what it sounds like (funny about that, huh?). Similar to reverb, the delay effect revolves around an echo of a portion of the audio — one that can repeat continuously, decay over time, and leave a set amount of time between each repetition as the signal feeds back.
Where delay is distinct from reverb is that reverb is intended as a natural replication of soundwaves continuously bouncing off surfaces, creating a fuller, thicker sound.
In contrast, delay is an unnatural replication of a sound source: each echo arrives as a discrete, separate copy rather than a continuous wash. This is why a basic delay echo sounds almost exactly the same as its source audio, whereas a reverb echo appears more like a blended soundscape than an individual source.
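To make that distinction concrete, here's a toy feedback delay in Python — every `delay_ms` you hear a discrete, quieter copy of the source. All values are illustrative (375 ms happens to be a dotted eighth at 120 BPM, a classic rhythmic delay time):

```python
# Toy feedback delay: a discrete echo every delay_ms, each repeat
# quieter than the last.
import numpy as np

def delay_effect(audio, fs, delay_ms=375.0, feedback=0.4, tail_repeats=8):
    d = int(fs * delay_ms / 1000)
    out = np.zeros(len(audio) + d * tail_repeats)
    out[:len(audio)] = audio             # the dry signal
    for n in range(d, len(out)):
        out[n] += feedback * out[n - d]  # each echo feeds the next one
    return out

fs = 44100
audio = np.random.randn(fs)  # stand-in for a real recording
echoed = delay_effect(audio, fs)
```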
Tape delay was one of the earliest easily replicable forms of delay that musicians employed, with mainstream use beginning around the 1950s, though other forms existed prior. This method worked by sending the audio signal being recorded onto a second tape machine.
The gap between the receiving of the initial audio source and its replication at the second tape's playback head created an echo, which became a popular effect among musicians.
Solid-state delays briefly entered the professional market in the 1970s as an alternative to tape delays; however, they were quickly usurped by the dominant force in echo today — digital delay.
Galaxy Tape Echo plugin
Digital delay works pretty much the same way as digital reverb, in that it processes a signal through various, uh… let’s say electronic-related means, allowing for extreme versatility, portability and functionality.
This method of creating an echo was of course ported to computer software with the birth and hostile takeover of DAWs as the predominant medium for music production, leading to the myriad of delay VSTs at our disposal in today's market.
How and when should I use it?
Considering I’ve said nearly the exact same thing for each particular VST element, it’s not going to shock you when I say:
‘Delay has a far wider range of function than just, well, being a delay’.
When you think of delay, I'm sure your mind immediately casts to one of your favorite songs — the last line of a vocalist's stanza repeated a quarter-beat after they've finished singing, or a track's guitar-driven intro built around rhythmic echoes (just listen to one song by U2 and you'll hear it).
Delay can be used for a number of interesting effects, both creatively and in helping a mix sit well. From an artistically experimental perspective, delay can act somewhat like a reverb with clarity when rigged to operate as a looper.
Many an indie guitarist has sat outside of whatever 'hip' neighbourhood they're from, playing a C major chord that sounds unfairly magical due to the presence of a looper.
Delay has the ability to completely alter the pitch and time structure of an individual track or even an entire song, send the audio input back in reverse, and add filters such as phasing or EQ modulation to the echo.
However, I bet you’re now asking: ‘You promised us an answer to the question about how to avoid using too much reverb, but where is it?’
Yes, a few hundred words and the simplest explanation-that-still-turned-out-relatively-convoluted of delay, and I'm still yet to fulfill that promise. Well, you can stop holding your breath, because here it is:
Delay.
That’s it. The answer is just: Delay.
This can be — and in pop and rock music often is — achieved by using a 'slapback' delay: an echo with a single repetition that occurs very shortly after the source material, giving the impression of a reverb without the mud. Pretty neat, huh?
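For the tinkerers, a slapback is trivially simple to sketch in Python — one delayed, slightly quieter copy and nothing else. The ~100 ms gap and 0.5 level are placeholders to tune by ear:

```python
# Slapback sketch: ONE short, slightly quieter repeat and no feedback.
import numpy as np

def slapback(audio, fs, delay_ms=100.0, level=0.5):
    d = int(fs * delay_ms / 1000)
    out = np.pad(audio.astype(float), (0, d))
    out[d:] += level * audio  # the single 'slap'
    return out

fs = 44100
audio = np.random.randn(fs)  # stand-in for a lead vocal or guitar
slapped = slapback(audio, fs)
```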
There are a number of YouTube tutorials on how to properly implement slapback delays into your mixes, but using delay instead of reverb is one (of admittedly very few) tips I’m willing to dish out that is applicable to the majority of your creative works.
Here’s a slapback delay on a lead guitar. You can hear how it sounds as though it has been ‘spaced’ backward in the mix, but without cluttering up the low-mid-range frequency bands like putting a reverb on it would.
Saturation (Distortion)
What is it?
Though saturation found use — albeit often organically — prior to contemporary methods of music production, saturation VSTs are the modern engineer’s response to the digitization of hardware.
It is an attempt to replicate the pleasant sonic qualities of the original, analog methods of recording and processing music — way back in The Dark Age — giving a more musically powerful and satisfying tonal quality to digital productions that involve little physical equipment.
When you deliberately run sounds — be it an entire mix or an individual track — through hardware such as transistor amps, tube amps and tape machines hard enough to overload it, the result is a warm, colorful distortion that can range from barely audible to completely commandeering in its effect on the audio.
The sweet result that comes from using saturators is due to 'harmonics' that are added to a digital recording or sound. When a waveform is distorted, its original frequency response is altered, with new layers and textures added that weren't initially present.
While this can have some more extreme and unwanted effects, this is the basic process of a saturator.
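You can actually watch the harmonics appear with a few lines of Python: push a pure sine through a soft clipper (tanh is a generic toy waveshaper here, not any specific plugin's curve) and new frequencies show up that weren't in the input:

```python
# Toy saturator: a tanh soft clipper. Feed it a pure 220 Hz sine and
# brand-new odd harmonics (660 Hz, 1100 Hz, ...) appear in the output
# -- the added 'character' described above.
import numpy as np

fs = 44100
t = np.arange(fs) / fs
sine = np.sin(2 * np.pi * 220 * t)  # a pure 220 Hz tone

drive = 4.0
saturated = np.tanh(drive * sine) / np.tanh(drive)  # soft-clipped copy

spectrum = np.abs(np.fft.rfft(saturated))           # 1 Hz per bin here
print(sorted(np.argsort(spectrum)[-3:]))            # ~[220, 660, 1100]
```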
By adding character to your mixes and recordings through the introduction of new, idiosyncratic musical inconsistencies, you recover the organic musicality that can otherwise be lost in the digital process.
How and when should I use it?
I must say, saturators are not one of the obvious effects your mind shoots to when you start thinking (okay fine, daydreaming) about VSTs you want to start adding to your DAW’s collection.
It’s not that they’re not ‘sexy’, they’re just kind of, unthought of? I’m not too sure why, other than that a lot of entry-level producers and composers don’t seem to know of their existence.
That said — every new ‘distortion’ pedal is technically a form of saturation (or vice versa) so most musicians will have been exposed to it on some level.
Yet even with their relative anonymity, saturators are, simply put, AWESOME. They just make things sound good. Their presence on every track, every mix, every recording — if carefully operated and selected — is welcome. They can be (and often are) the missing piece of the puzzle, the final VST that transforms your track from a finished draft into a radio-ready production.
I can hear you scoffing right now, even from inside my house in Australia. Yes, I can. Don’t ask how. What’s that? You think it’s a ridiculous idea to put distortion onto an instrument like an acoustic guitar, or a heartfelt cello solo, do you?
I admit — it may indeed seem that way, but in practice that assertion couldn't be further from the truth. Saturation can literally (yes, I mean literally) be applied to any recording, instrumentation or mix, so long as it is done with a keen ear and a purpose.
This applies to all of the above-mentioned VSTs, but it is a particularly important lesson when considering a tool as powerful as saturation — one that can be plug-and-played on anything and make that individual recording sound better.
While that one track might sound better, you may be taking away focus from the vocals, or the guitar solo, or paradoxically muddying up your final song by granting each individual recording a new level of clarity.
And as my parting gift to you for this section, I’m going to include a piece of advice you might remember from before. I’m sure you’ll be overjoyed to hear it.
BE SUBTLE.
Tape distortion is one of the most popular and pleasant-sounding saturation effects. Listen to its effect on the same guitars from before — you can hear its warmth come in about halfway through the recording.
Other cool effects
I don’t want to keep you here all day (and all night, and all week, etc.) so I won’t even come close to going through an exhaustive list of audio effects that can be sourced for your DAW/home studio.
That said, I will briefly mention a few other popular ones that aren't as vital to your mix as The Big 5, but can be just as impactful and exciting.
Analyzers/Meters: A generally visual plugin which measures the amplitude of an input signal and maps it onto a specific measure — often the frequency spectrum. This is useful for comparing your mix's balance to others within the genre.
Chorus: While chorus can be created using delay, many plugins devoted to emulating the effect with more malleability exist. Chorus is essentially an extremely short delay and minor pitch alteration between an original sound and its output (see the sketch after this list).
Filters: Filters are just pre-automated EQ shifts — a bit like what I demonstrated in the EQ section but generally constantly moving/evolving. That classic sound of a jet engine starting up could be done with an AutoFilter.
Flangers: Flangers operate almost identically to choruses, though with a much shorter delay time and added feedback, which emphasizes resonances in the higher frequencies and results in their distinct, resonant sound. Chorus sounds more like a detuned sheen — flangers feel more like movement within the audio.
Phasers: Once again based on copying a signal (like choruses and flangers), however this time without using any delay. The copy is instead passed through a series of all-pass filters, which creates a series of peaks and dips in the frequency spectrum — sounding similar to a flanger but with an even greater sense of dynamic drive.
Panning plugins: These usually just automate panning creatively — using gates and thresholds to dictate when and where a designated sound moves within the left-right audio spectrum. Certain panning plugins allow for '3D processing' instead of just 50L and 50R.
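As promised above, here's a toy Python chorus to tie a few of these ideas together: a copy of the signal delayed by around 20 ms, with the delay time gently wobbled by an LFO. The moving tap causes the tiny pitch shifts behind that detuned 'sheen'. All values are illustrative:

```python
# Toy chorus: mix the dry signal with a copy delayed ~20 ms, where the
# delay time is slowly wobbled by an LFO.
import numpy as np

def chorus(audio, fs, base_ms=20.0, depth_ms=5.0, rate_hz=0.8, mix=0.5):
    n = np.arange(len(audio))
    lfo = np.sin(2 * np.pi * rate_hz * n / fs)       # slow wobble
    delay = (base_ms + depth_ms * lfo) * fs / 1000   # delay in samples
    idx = np.clip(n - delay, 0, len(audio) - 1)      # moving read position
    lo = idx.astype(int)
    hi = np.minimum(lo + 1, len(audio) - 1)
    frac = idx - lo
    wet = (1 - frac) * audio[lo] + frac * audio[hi]  # interpolated tap
    return (1 - mix) * audio + mix * wet

fs = 44100
audio = np.random.randn(fs)  # stand-in for a real recording
chorused = chorus(audio, fs)
```

A flanger is the same trick with a far shorter base delay (a few milliseconds) and some of the output fed back into the input.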
Conclusion
Hopefully now you have a handy overview of some of the more popular audio effects that are available to you through both your stock DAW as well as third-party VST plugins.
Many of these audio manipulations are digital emulations of old hardware, allowing amateur musos such as myself to play around with them (perhaps too much) without having to sell an entire house.
The world of audio effects — and by extension, mixing — is so dense and never-ending that it is impossible to learn everything there is to learn. Even the greats are constantly experimenting, constantly finding new ways to manipulate sound waves to please their audiences.
But hey, if we knew everything already, what would be the point?
So good luck class, and I look forward to seeing you back here next time where we learn about the atomic structure of CDs.
Just kidding.