By: Ken Humphreys
After a sound wave reaches your eardrum, the real magic begins. When you’re aware of a sound, your ear and brain are working together on the difficult task of selecting which sound to pay attention to, what might be making it, where it’s located and much more. Here are a few of the important “processing” jobs you routinely but unconsciously accomplish:
You can hear a sound at 0 dB (but just barely) yet handle sounds with a trillion times the energy at 120 dB! The price you pay for being able to pull off this remarkable feat is that you’re fairly insensitive to changes in sound energy levels. For example, a speaker receiving 100 watts of power will sound only four times as loud as when it’s receiving 1 watt. One side-effect of this phenomenon is that you don’t need to concern yourself with amplifier power nearly as much as you might think. 70 watts—100 watts—what’s the difference? Only about 1½ dB. Not much, right?
This graph shows the sound intensity range that you’re able to make sense of. Each 10 dB increase represents 10 times the energy, but only twice the loudness.
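The decibel arithmetic above is easy to check for yourself. Here’s a short Python sketch, using the two rules of thumb from the graph (10 dB = 10 times the energy, but only about twice the loudness), that runs the 70-watt-vs.-100-watt comparison:

```python
import math

def db_difference(power_a_watts, power_b_watts):
    """Difference in decibels between two power levels."""
    return 10 * math.log10(power_b_watts / power_a_watts)

def energy_ratio(db_change):
    """Each 10 dB step is 10x the energy."""
    return 10 ** (db_change / 10)

def loudness_ratio(db_change):
    """Rule of thumb: each 10 dB step sounds about twice as loud."""
    return 2 ** (db_change / 10)

# The 70-watt vs. 100-watt amplifier question:
print(round(db_difference(70, 100), 2))   # 1.55 dB -- barely noticeable

# 1 watt vs. 100 watts is a 20 dB jump...
print(db_difference(1, 100))              # 20.0
# ...which is 100x the energy but only about 4x the loudness:
print(energy_ratio(20))                   # 100.0
print(loudness_ratio(20))                 # 4.0
```

So a hundredfold increase in amplifier power buys you roughly a quadrupling of perceived loudness—exactly the insensitivity described above.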
Another neat loudness-related trick your ear performs: it becomes increasingly sensitive to bass when the sound is loud, and more sensitive to the midrange when everything quiets down. The “loudness” button on your receiver is designed to compensate for this by boosting the bass at lower listening levels. This context-sensitivity was probably quite useful for cavemen, allowing them to take in the bass rumble of stampeding woolly mammoths—yet still tune into the slight rustle of a skulking saber-toothed tiger. Dinner is served!
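If you’re curious what a “loudness” button does under the hood, here’s a toy sketch. The curve shape and all the numbers here are invented for illustration only; real receivers use their own compensation curves derived from equal-loudness-contour measurements:

```python
def loudness_boost_db(listening_level_db, reference_level_db=85,
                      db_per_db=0.3, max_boost_db=10):
    """Illustrative 'loudness button' behavior: the quieter you listen
    relative to a reference level, the more bass boost gets applied.
    All constants are made up for illustration, not taken from any
    real receiver or from equal-loudness-contour data."""
    deficit = max(0, reference_level_db - listening_level_db)
    return min(max_boost_db, db_per_db * deficit)

print(loudness_boost_db(85))   # 0    -- at reference level, no boost needed
print(loudness_boost_db(65))   # 6.0  -- quiet listening gets bass help
```

The point isn’t the exact numbers—it’s that the compensation grows as the volume drops, mirroring how your ear loses bass sensitivity at low levels.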
When you hear a sound, you can immediately turn your head and face it. You probably take this for granted, but you might not if you knew the number of hard-to-believe calculations your brain just performed. Scientists are discovering that you construct a spatial model in your brain that updates constantly and uses sound as well as sight. Yes, very similar to bats.
To locate sounds in this model you constantly gather information from a variety of sources:
What kind of space are you in? Every sound you hear arrives with a unique “signature.” It reaches your ear and gets “fingerprinted,” and a few milliseconds later a family of other sounds bearing the same signature arrive in the form of reflections. First, these are associated with the first arrival that created the signature, so that the cacophony of unrelated sounds around you can be ignored. Then, by calculating the direction of these delayed arrivals, how long they were delayed, and the way their signature has been “smeared” (whew!), you can tell a lot about what kind of environment you’re in. For example, you may now know that you’re in a small room with large, hard surfaces. To do this, you had to determine the direction of the original sound and the way it echoed around the room. This was no simple task: your brain just weighed at least three different kinds of information to calculate that direction.
- One ear heard the sound as louder, simply because your head casts a “sound shadow” that blocks some of the sound to the farther ear.
- The part of your ear that sticks out from your head modified the sound in ways that clue you in about the direction it came from.
- And then there’s the phase thing: your brain measured the delay between the wave arriving at the left ear versus the right ear.
Your brain works through all of this unconsciously, instantly.
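That left-ear-versus-right-ear delay is tiny but very real. Here’s a rough Python sketch of the arithmetic, assuming a straight-line sound path and a made-up 18 cm ear spacing (real heads curve, so actual delays run somewhat longer):

```python
import math

SPEED_OF_SOUND_M_S = 343   # in air at roughly 20 C
EAR_SPACING_M = 0.18       # assumed ear-to-ear distance; illustrative

def interaural_time_difference(angle_deg):
    """Approximate extra travel time to the far ear for a sound
    arriving angle_deg off dead-center (0 = straight ahead,
    90 = directly to one side). Simplified straight-line model
    that ignores the head's curvature."""
    path_difference = EAR_SPACING_M * math.sin(math.radians(angle_deg))
    return path_difference / SPEED_OF_SOUND_M_S

# A sound directly to your right reaches the right ear first by:
print(f"{interaural_time_difference(90) * 1e6:.0f} microseconds")  # 525
```

Your brain resolves direction from delays of a few hundred *millionths* of a second—and you thought trigonometry homework was hard.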
And you thought you were bad at math! Your brain is a whiz.
For speaker designers, some of the important points are:
- For familiar sounds, you are very sensitive to “tonal balance”—that is, are the treble, bass and midrange parts in the right proportion to one another? If a speaker’s frequency response graph is “flat,” that tells you it’s reproducing the sound with the right balance (at least at the position of the measuring microphone). This has been shown to be the biggest contributor to what listeners perceive as “accuracy” in the entire audio system.
- How long does a delayed arrival have to lag before it’s identified as a reflection rather than part of the original sound’s signature? The jury’s still out on this one, but if the delayed arrival comes soon enough, say from reflections off the grille frame or speakers that aren’t mounted flush, it is heard as part of the signature and you will perceive it as a distortion of the tonal balance.
- Your brain mostly ignores reflected signals when evaluating the balance of sound. Bass reflections get treated a little differently.
- You cannot locate low-bass sounds directly; instead, you associate a bass note’s overtones with it and locate those in space. This allows speakers that specialize in low bass—subwoofers—to be placed away from the main speakers and still successfully fool you into believing the bass is coming from the small speakers that reproduce the overtones.
- Here is a neat little experiment you can try at home: put your amp in “mono” mode and notice that if you are even slightly off-center, the sound will appear to be coming completely from the speaker closest to you. This is because your brain is a detective: the sound from the nearer speaker gets “fingerprinted,” identified as the first arrival, and then used as the sole source of information about the sound’s location. You are still hearing the other speaker (try unplugging it); your brain just isn’t using it for location information.
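You can put rough numbers on that mono experiment. This sketch, with an invented room geometry (speakers 2 m apart, listener sitting 3 m back—your room will differ), estimates how much later the far speaker’s sound arrives when you slide off-center:

```python
import math

SPEED_OF_SOUND_M_S = 343

def arrival_delay_ms(listener_offset_m, speaker_separation_m=2.0,
                     listening_distance_m=3.0):
    """How much later the far speaker's sound arrives when you sit
    listener_offset_m off the center line between two speakers.
    Room geometry values are illustrative, not from any real setup."""
    half = speaker_separation_m / 2
    near = math.hypot(half - listener_offset_m, listening_distance_m)
    far = math.hypot(half + listener_offset_m, listening_distance_m)
    return (far - near) / SPEED_OF_SOUND_M_S * 1000

# Sitting just 30 cm off-center:
print(f"{arrival_delay_ms(0.3):.2f} ms")  # 0.55 ms
```

Even half a millisecond is plenty for the “fingerprint and first arrival” mechanism described above to declare the nearer speaker the sole source.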
The study of how the brain processes sound—psychoacoustics—is a huge and very interesting body of knowledge. If you’d like to read more, there’s a lot to dig into. You may want to bring a good shovel and start at the Wikipedia article on psychoacoustics. Have fun exploring science!