
Sound is at the core of our careers. It informs nearly every decision we make, from the subtle inflection in a winning audition to the placement of an acoustic panel in our home studio. Yet many voice actors never learn the core science behind sound, despite the central role it plays in our work.
Understanding how sound works gives you a strong foundation. You will understand how the frequencies in your voice behave, which gives you the freedom to experiment with new timbres and textures. It will be easier to make informed decisions (or at least more informed guesses) when setting up or modifying your home studio. And it will help demystify the jargon you encounter when looking at microphones, plug-ins, and post-processing in your DAW. Let’s dive in!
Fundamentals of Sound
Sound is the transference of vibrations through “sound waves.” When anything (such as a string, hard surface, or the air in a wind instrument) vibrates, it produces sound waves. A sound wave is a repeating pattern of alternating pulses of compression and rarefaction – literally molecules squishing together and pulling apart repeatedly. It is somewhat more complicated than this, but for voice-over purposes, just picture a classic, two-dimensional squiggly line.
Wavelength, Frequency, and Intensity
Sound waves are measured by wavelength, frequency, and intensity. Each “pulse” or compression (and corresponding rarefaction) in a sound wave is known as a cycle.
The wavelength is the distance between two adjacent peaks (top point of the wave) or troughs (bottom point of the wave) in one cycle. As a voice actor, wavelength mainly matters for acoustic treatment.
The frequency, measured in hertz (Hz), is the number of cycles that pass a point in one second. Frequency is directly linked to wavelength by the formula: wavelength = speed ÷ frequency. This means that high-frequency sounds have short wavelengths and low-frequency sounds have long wavelengths. Frequency corresponds to pitch on a musical scale: higher frequencies have a higher pitch and vice versa.
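To make the formula concrete, here is a minimal sketch in Python, assuming a speed of sound of roughly 343 m/s (air at about 20°C):

```python
SPEED_OF_SOUND = 343.0  # meters per second, approximate value in air at ~20°C

def wavelength(frequency_hz):
    """Wavelength in meters: speed ÷ frequency."""
    return SPEED_OF_SOUND / frequency_hz

# A low 100 Hz hum has a long wavelength...
print(round(wavelength(100), 2))    # 3.43 meters
# ...while a sibilant 8,000 Hz sound has a very short one.
print(round(wavelength(8000), 4))   # 0.0429 meters
```

Notice the difference in scale: the low hum spans over three meters, while the sibilance fits in a few centimeters. This gap is exactly why acoustic treatment behaves so differently across the frequency range.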
Intensity is how much energy is transmitted through the sound wave – in simple terms, how loud or soft a sound is. For our purposes, it is measured in decibels (dB). The decibel is a logarithmic scale starting at 0 dB (the threshold of human hearing), and every jump of 10 dB multiplies the intensity by 10. So 20 dB is 10× more intense than 10 dB, and 30 dB is 100× more intense.
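The decibel math above can be sketched directly, using the standard formula for an intensity ratio (10 raised to the dB difference divided by 10):

```python
def intensity_ratio(db_a, db_b):
    """How many times more intense a sound at db_a is than one at db_b."""
    return 10 ** ((db_a - db_b) / 10)

print(intensity_ratio(20, 10))  # 10.0  (one 10 dB jump = 10x the intensity)
print(intensity_ratio(30, 10))  # 100.0 (two 10 dB jumps = 100x)
```

This is why a seemingly small number on a meter can represent a huge change in acoustic energy.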
In case your head isn’t spinning yet, here’s a complication to throw in the mix: Since sound waves are a physical disturbance of molecules, they are affected by the environment. Temperature and density of the air will change how fast the molecules can move, which changes the actual speed of sound, affecting our formula! The change is generally minimal for voice-over, but useful to know.
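As an illustration, here is a sketch using a common linear approximation for the speed of sound in dry air (roughly 331.3 m/s at 0°C, rising about 0.6 m/s per degree Celsius). The coefficients are an approximation, not a precise acoustic model:

```python
def speed_of_sound(temp_c):
    """Approximate speed of sound in dry air (m/s), linear approximation."""
    return 331.3 + 0.606 * temp_c

# The same 200 Hz tone has a slightly different wavelength
# in a cold room versus a warm booth:
print(speed_of_sound(0) / 200)   # wavelength in a 0°C room, in meters
print(speed_of_sound(30) / 200)  # wavelength in a 30°C booth, in meters
```

The shift is only a few centimeters across that 30-degree swing, which is why you can safely ignore it in a typical home studio.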
What You Sound Like
Now, how do you actually use that knowledge? While performing, you are manipulating your vocal apparatus (also called your vocal anatomy) to affect the timbre and pitch of your voice. This comes down to frequency: the “fundamental frequency” and “overtones” (resonant frequencies).
If you look at the full frequency spectrum (human hearing ranges from about 20 Hz to 20,000 Hz) and pluck out a single frequency, you get what’s called a “sine wave.” This is the purest representation of a frequency: a sound consisting of ONLY a fundamental frequency. When a singer or instrument produces a note, however, the sound contains a fundamental frequency along with additional, quieter frequencies. These are the “overtones” or “resonant frequencies.”
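To make the idea concrete, here is a sketch in Python that builds a pure sine wave and then a more voice-like tone by stacking quieter overtones at whole-number multiples of the fundamental. The 220 Hz fundamental and the 0.5/0.25 amplitudes are illustrative choices, not measurements of any real voice:

```python
import math

SAMPLE_RATE = 44100  # CD-quality samples per second

def sine_wave(freq_hz, num_samples, amplitude=1.0):
    """Samples of a pure sine wave: a single frequency, nothing else."""
    return [amplitude * math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE)
            for n in range(num_samples)]

# A pure 220 Hz sine: only the fundamental.
pure = sine_wave(220, 1000)

# A more voice-like tone: the same fundamental plus quieter
# overtones at whole-number multiples (440 Hz and 660 Hz).
fundamental = sine_wave(220, 1000, amplitude=1.0)
overtone_2 = sine_wave(440, 1000, amplitude=0.5)
overtone_3 = sine_wave(660, 1000, amplitude=0.25)
complex_tone = [f + o2 + o3
                for f, o2, o3 in zip(fundamental, overtone_2, overtone_3)]
```

Both signals have the same fundamental, so they would be heard as the same pitch – but the second one has a richer timbre because of the added overtones.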
Pitch is the most straightforward. It’s the fundamental frequency. How high or low is the core of the sound you are producing? In music, certain frequencies are targeted as notes, with the most common example in Western music being the pitch A4 at 440 Hz. You can affect your performance drastically by changing the pitch, or fundamental frequency, you are producing.
Timbre, pronounced “tam-burr,” is all about those extra frequencies. It is what distinguishes one voice from another, or a voice from a trombone, a kazoo, or even a squeaky chair. In standard speech and singing, these additional frequencies generally have a lower intensity than the fundamental, but they can also be manipulated. When you equalize, or EQ, in post-production, you are raising or lowering the intensity of different frequencies, which changes the timbre.
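When an EQ band is labeled in dB, that gain maps to an amplitude multiplier via the standard conversion (10 raised to the gain divided by 20). A quick sketch:

```python
def db_to_amplitude_gain(gain_db):
    """Convert an EQ gain in dB to an amplitude multiplier."""
    return 10 ** (gain_db / 20)

print(round(db_to_amplitude_gain(6), 2))   # ~2.0: +6 dB roughly doubles amplitude
print(round(db_to_amplitude_gain(-6), 2))  # ~0.5: -6 dB roughly halves it
print(db_to_amplitude_gain(0))             # 1.0: 0 dB leaves the signal untouched
```

So when you nudge a high-shelf filter up 6 dB, you are roughly doubling the amplitude of those frequencies relative to the rest of the voice – a direct, measurable change to the timbre.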
As a voice-over performer, manipulating the timbre of your voice is an invaluable tool. For example, if you focus your voice forward and talk through your nose, you are amplifying the mid to high frequencies. When you open up and talk from the back of your throat, you are diminishing those higher frequencies and highlighting the lower ones instead. This is why you can sing the same note and have it sound darker or brighter: the fundamental frequency stays the same, but the other frequencies change in intensity, change altogether, or both. This is also true of your equipment: analog gear is often said to “color” a sound, which simply means it adds or emphasizes certain frequencies.
As An Audio Engineer (Still You!)
Every voice actor is, to some extent, an audio engineer. Whether it’s setting up a home studio, editing an audition, or producing an audiobook, you will need to dip your toes in the engineering world. [This isn’t a full guide—just key concepts to keep in mind.]
First, when building a recording space, you are focusing heavily on the wavelength and intensity of sound waves. Different materials reflect and absorb sound waves with different efficiency, and the main variable is wavelength. For example, acoustic foam mainly absorbs short wavelengths (high frequencies), which is why relying on it alone is usually discouraged. This is also why your room tone is usually dominated by low frequencies – the longer the wavelength, the thicker and denser the material needed to stop or absorb it.
Second, sound waves interact with each other. They interfere, increasing or decreasing in intensity, especially in smaller spaces where they bounce around and cross paths more often. This is the prime culprit behind “boxy”-sounding rooms: small rooms resonate at frequencies whose wavelengths match the room’s dimensions, many of which sit right in the range of the human voice.
Third, sound changes when it is captured by a microphone and travels through an analog circuit. Different equipment adds or emphasizes different frequencies, which is why microphones sound different from one another. It is also behind the “proximity effect” – most directional microphones emphasize the lower frequencies of your voice at close range.
In Conclusion
The world of sound and audio is vast, and this blog post only scratches the surface. If you are interested in learning more about audio, I highly recommend checking out https://audiouniversityonline.com. There are a ton of free resources and informative videos on nearly every audio topic, and it’s where I learned a lot of what I put in this post. I am also available for consultations. You can connect with me through my website peterreinvo.com. Good luck on your voice-over adventure. You now charge forward armed with more sonic knowledge, and as we all know, knowledge is power!
VO Pro Members: I’ll be presenting on compression and limiting on April 12th! Be there or be a square wave! If you’re not in VO Pro and would like to attend, Such A Voice is offering a one-month trial, which you can find out more about here.
P.S. If you haven’t yet taken our introductory voice-over class, where we go over everything one needs to know about getting started in the voice-over industry, sign up here!
P.P.S. If you want to learn more from VO experts and grow the knowledge you already have, join our VO Pro group!




