There's an elephant in the room. Or that's what you want anyway. You want fat, satisfying bass that has all the firmness and richness that you hear in your head yet seems so elusive in your mix.
Your journey to amazing low end is not a one-stop trip. No, this blog article is not about how to get that perfect low end, but it covers one major step in preventing a mix rife with problems. If you struggle with your low end sounding inconsistent, awesome now but disappointing only seconds later, this article is for you.
It's time to stop sabotaging your own mix.
The Fundamentals of Fundamentals
There was a very clever man who lived back in the 19th century. His name was Joseph Fourier. The work he is most famous for is the Fourier series and the Fourier transform. In a nutshell, these show that any waveform can be represented as a sum of sine waves at different frequencies, amplitudes, and phase shifts. Let's not get into a big lesson on that right now. What you do need to know is this: any musical sound can be broken down into a series of sine waves. We refer to these as the Fundamental Frequency and Harmonics.
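To make the idea concrete, here's a minimal sketch in Python that builds a complex tone by summing sine waves. The 1/n amplitude weights are an assumption chosen to approximate a sawtooth wave; a real instrument has its own harmonic weights.

```python
import math

def partial(freq, amp, phase, t):
    """One sine-wave component (a 'partial') at time t seconds."""
    return amp * math.sin(2 * math.pi * freq * t + phase)

def sawtooth_ish(f0, t, n_harmonics=8):
    """Approximate a sawtooth by summing the fundamental plus harmonics
    at integer multiples of f0, each with amplitude 1/n (its Fourier series)."""
    return sum(partial(n * f0, 1.0 / n, 0.0, t) for n in range(1, n_harmonics + 1))

# Sample roughly one cycle of a 55 Hz 'A1' tone at 44.1 kHz (~802 samples)
samples = [sawtooth_ish(55.0, i / 44100.0) for i in range(802)]
```

Change the amplitude of any single harmonic and you change the timbre; change f0 and the whole stack of partials moves with the note.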
The Fundamental Frequency tells our brain what note or pitch is being played at any given moment. The Harmonics of that note build the complexity of that sound and depending on the mathematical relationship of the intervals and their amplitudes, you get the timbre of the instrument or sound.
This is crucially important to understand when you apply EQ to any sound.
The Problem of using EQ to adjust Timbre
EQ was never designed to adjust the timbre of an instrument in the first place. However, we've grown so accustomed to using it that way that we never question whether it's a wise practice.
EQ, or Equalization, was originally designed to compensate for the uneven frequency response of an instrument, room, speakers, or other audio equipment. It was not designed to change the character of an instrument's sound; it was used to make all frequencies equal in loudness. Get it? However, we get pretty good results using it to shape an instrument's sound, so we assume it's the right tool for the job. But it has limitations, and we had better be very aware of them if we want to use it properly and effectively.
When you adjust the EQ on an instrument, you are effectively changing the relationship between the fundamental and the harmonics and often the relationship between separate groups of harmonics. The problem is that our fundamental frequencies and all the harmonics change with every different note while your EQ settings are static.
If you look at how a spectrum analyzer lays out the frequencies, it's a logarithmic scale. The far left of the scale is 20 Hz, the centre frequency is usually 1 kHz, and the far right of the scale is 20 kHz. Between the left boundary and the centre there are only 980 Hz; from the centre to the right, there are 19,000 Hz. The difference in hertz between octave notes also grows with frequency: A sits at 55 Hz, 110 Hz, 220 Hz, 440 Hz, 880 Hz, and so on, doubling with every octave. Our perception of pitch is logarithmic, but frequency in hertz grows exponentially with pitch. Consequently, down at the bottom of the spectrum, the fundamental frequencies of neighbouring notes sit only a few hertz apart.
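You can verify the doubling and the bunching yourself with a few lines of Python. The only formula assumed here is standard equal temperament: each semitone multiplies the frequency by 2^(1/12).

```python
A1 = 55.0  # Hz, our reference note

def note_freq(semitones_above_a1):
    """Equal-tempered pitch: each semitone multiplies frequency by 2**(1/12)."""
    return A1 * 2 ** (semitones_above_a1 / 12)

# Octaves double in Hz each time...
for octave in range(5):
    print(f"A{octave + 1}: {note_freq(12 * octave):.2f} Hz")

# ...so at the bottom of the spectrum, adjacent notes are only a few Hz apart:
print(f"B1: {note_freq(2):.2f} Hz")  # ~61.74
print(f"C2: {note_freq(3):.2f} Hz")  # ~65.41
```

Two whole semitones at the bottom of the bass range span less than 7 Hz, which is narrower than many EQ bells.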
You may be wondering what this has to do with low end EQ. Well, let's take a close look again at those fundamental frequencies at the bottom of the spectrum. A1 is 55 Hz, as we just mentioned, right? Well, B1 is 61.74 Hz, C2 is 65.41 Hz, and D2 is 73.42 Hz. Are you starting to see the issue?
Check out those EQ settings above. Seems innocent enough, right? Let's say you want that nice zone where the bass sounds heavy and rich around 60 Hz. You boost that. Let's say you don't like that muffled-sounding characteristic that comes in at 180 Hz. Let's cut that. When you do this you are thinking about frequencies, and you probably aren't connecting those frequencies to the notes they represent. Because if you start thinking that way, you'd realize that what you've actually done is made the note B1 louder and its third harmonic quieter. But what happens when B1 isn't the note the bass is playing? What if E2, at about 82.41 Hz, is played next? Now that note is quieter than the B1 just played before, which is disappointing, and the cut at 180 Hz is acting more on the second harmonic than the third, so the timbre doesn't match the B1 we just heard either! Now let's say for the third note, the bassist slides up to play E3 at 164.81 Hz. This one sounds the quietest of them all, and since none of its harmonics are being EQ'd, it doesn't match in loudness or timbre either. What a mess!
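Here's a rough way to put numbers on that scenario. This sketch models each EQ bell as a Gaussian curve over distance in octaves, which is an idealization (real biquad bells differ in detail), with an assumed +/-4 dB gain and one-octave width. It then evaluates the combined boost-and-cut at the three fundamentals from the example.

```python
import math

def bell_gain_db(f, center_hz, gain_db, width_octaves=1.0):
    """Idealized peaking-EQ bell: full gain at the center frequency,
    falling off as a Gaussian over distance measured in octaves.
    (A simplified model, not any particular plugin's curve.)"""
    octaves_away = math.log2(f / center_hz)
    return gain_db * math.exp(-(octaves_away / width_octaves) ** 2)

boost = lambda f: bell_gain_db(f, 60.0, +4.0)   # the 60 Hz boost
cut   = lambda f: bell_gain_db(f, 180.0, -4.0)  # the 180 Hz cut

for name, f in [("B1", 61.74), ("E2", 82.41), ("E3", 164.81)]:
    print(f"{name} fundamental: {boost(f) + cut(f):+.2f} dB")
```

Even under this simplified model, B1's fundamental comes out several dB louder than E3's, whose fundamental lands almost dead centre in the 180 Hz cut: the static curve treats every note differently.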
Disappearing notes up high on the bass is one of my biggest pet peeves as a mastering engineer. And I always know it's because there's a nice big scoop at about 200-300 Hz, with no regard given to the notes that are being played up there.
You can see what trouble you can get into once you start tinkering with the EQ down low. It can be a disaster. As a mastering engineer, there's not a lot I can do about this since there are kick drum, snare, guitar, and synth elements working down there as well. Sure, I have tricks up my sleeve like multiband compression and mid-side techniques, but realistically, by the time mastering comes, it's too late for this situation to be salvaged.
Ok, So When DO I Use EQ On The Low End?
Great question. Remember we mentioned that EQ is used for equipment or acoustics that don't have an even frequency response? Rooms are horribly uneven when it comes to low-frequency acoustics. You will have modes and nodes that make bass notes either way louder or way quieter when they are played in that room. If you've DI'd your bass guitar or used a synth, the room is removed from the equation (which is one reason why DI bass is so popular). But if you've mic'd up a low end instrument in a room, you'll find that there certainly are frequencies that are louder and quieter. And because those frequencies are based on the dimensions of your room, they are also static. Using EQ to even out the low end response due to room acoustics or actual instrument flaws is a great application, and it's the intended application for that lovely EQ you whipped out on that track. A good way to know is by listening for the offending frequencies across several notes of different pitches. If a frequency is too strong or too weak every single time, then please, go ahead and use the EQ there.
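Those static room frequencies are easy to estimate. The sketch below uses the standard axial-mode formula f_n = n·c/(2·L) for hypothetical room dimensions; real rooms also have tangential and oblique modes, so treat this as a first approximation.

```python
SPEED_OF_SOUND = 343.0  # m/s at room temperature

def axial_modes(dimension_m, count=4):
    """Axial room-mode frequencies for one room dimension:
    f_n = n * c / (2 * L). These stay fixed as long as the room does."""
    return [n * SPEED_OF_SOUND / (2 * dimension_m) for n in range(1, count + 1)]

# A hypothetical 4 m x 3 m x 2.4 m room
for name, dim in [("length", 4.0), ("width", 3.0), ("height", 2.4)]:
    freqs = ", ".join(f"{f:.1f}" for f in axial_modes(dim))
    print(f"{name}: {freqs} Hz")
```

Unlike the note an instrument plays, these frequencies never move, which is exactly why a static EQ is the right tool for taming them.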
Also, have you ever noticed that most EQs have a shelf on the low band? That's so you can grab all those fundamentals and shift them up or down by the same amount. While that approach is also imperfect, since on the low notes the shelf will grab some harmonics as well, it's much better than using bell shapes to cut and boost specific frequencies.
This leaves us with one lingering question:
So How Do I Make That Bass Sound Awesome?
I hate to tell you this, but the correct answer is: at the source. Your bass instrument should sound how you want it to sound before you even hit record on your transport. You should be setting the instrument up to sound the way you want and then setting up the recording path to at least capture the sound if not enhance it.
Did I just say enhance?! Yes I did. Not all processes are frequency-static. One process you can apply to bass tracks that follows the changes of fundamental and harmonic frequencies is harmonic distortion of some kind. Ever wonder why it's called "harmonic distortion"? Because it creates harmonics based on the frequencies of the signal being distorted. Lucky for us, these harmonics often follow the natural harmonics that the instrument itself would generate. Therefore, if you get the right kind of distortion happening, you can increase the loudness of the harmonics in relation to the fundamental in a way that always sounds nice and consistent across the range of the instrument.
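You can see this harmonic-generating behaviour directly. The sketch below uses a tanh soft-clipper as a simple stand-in for saturation (real tube or transistor stages behave differently, and this symmetric curve adds only odd harmonics), then measures harmonic levels with a single-bin DFT.

```python
import math

SR = 48000
F0 = 110.0   # an A2 fundamental
N = 4800     # 0.1 s analysis window, an integer number of cycles of F0

# A pure sine, then a soft-clipped ("saturated") copy of it
clean = [math.sin(2 * math.pi * F0 * i / SR) for i in range(N)]
driven = [math.tanh(3.0 * s) for s in clean]  # tanh: one simple saturation model

def magnitude_at(signal, freq):
    """Single-bin DFT magnitude at the given frequency."""
    re = sum(s * math.cos(2 * math.pi * freq * i / SR) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / SR) for i, s in enumerate(signal))
    return math.hypot(re, im) / len(signal)

for h in (1, 2, 3, 5):
    print(f"harmonic {h} ({h * F0:.0f} Hz): "
          f"clean {magnitude_at(clean, h * F0):.4f}, "
          f"driven {magnitude_at(driven, h * F0):.4f}")
```

The clean sine has energy only at 110 Hz; the driven copy grows new partials at odd multiples of 110 Hz. Play a different note and the new partials move with it, which is exactly the frequency-tracking behaviour a static EQ can't give you.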
Distortion can come from many different sources: tubes, compression, solid state components, etc. They all have their own quirks and sounds. You may need to play with many different kinds to get the right effect. Specific uses of compression and distortion are for another blog entry, since they are monster topics of their own.
So, in summary: First: set up your instrument and recording path well. Spend a lot of time on that! This is something you should spend the most effort on regardless of the instrument, genre, or budget.
Second, play with small amounts of different kinds of distortion if you want to make that instrument sound more rich.
Third: Always ask yourself if you are correcting for the static frequency imbalances that come from the instrument itself or the room it was recorded in.
Have fun with it. Looking forward to hearing your next batch of mixes!