How Directionality Works
Localizing sounds, or identifying the direction they come from, is made possible primarily by having two ears separated by our head. Just as having two eyes improves depth perception, input from two ears improves our ability to understand speech in noise. A person with normal hearing in both ears has a better chance of understanding speech in noise than a person with perfect hearing in only one ear.
One ear, or one microphone, can only provide directional information with the help of a reflector of some sort. Because of the shape of our outer ears, a sound from the front is about 3 dB louder to humans than the same sound from behind. Turning your back on a sound reduces its volume and increases the volume of sounds in front of you. Turning toward a sound therefore improves the signal-to-noise ratio (SNR), which in turn improves our ability to understand speech in noise.
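A little arithmetic shows why that ~3 dB front/back difference matters twice over: turning toward a talker both boosts the speech and attenuates noise that ends up behind you. The levels below are hypothetical round numbers chosen for illustration, not measurements:

```python
# Hypothetical round numbers to illustrate the ~3 dB front/back
# difference described above; these are not measured values.
FRONT_BACK_DIFF_DB = 3.0   # sounds from the front are ~3 dB louder

speech_db = 65.0  # talker's level when you are facing them
noise_db = 65.0   # an equally loud noise source

# Facing the talker: speech keeps full level, noise behind loses ~3 dB
snr_facing = speech_db - (noise_db - FRONT_BACK_DIFF_DB)
# Back turned: the advantage flips to the noise
snr_back = (speech_db - FRONT_BACK_DIFF_DB) - noise_db

print(snr_facing, snr_back)  # 3.0 -3.0
```

Simply turning around swings the SNR by about 6 dB, which is why the advice at the end of this article starts with "face the person you want to hear."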
Digital hearing aid microphones work the same way, with microphone inlets pointing front and back. Around the turn of the century, dual-microphone hearing aids went mainstream. Front and rear microphones and a digital signal processor (DSP) gave hearing aids the ability to separate sounds in front of the wearer from sounds behind. The hearing aid's brain, the DSP, chooses which to amplify and which to ignore.
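One classic way a two-microphone system does this separation is "delay and subtract": delay the rear microphone's signal by the time sound takes to travel between the two inlets, then subtract it from the front microphone's signal. Sound arriving from behind lines up with itself and cancels; sound from the front does not. The sketch below is a simplified illustration of that idea, not any manufacturer's actual algorithm, and the sample rate and microphone spacing are assumed values:

```python
import numpy as np

FS = 16_000            # sample rate in Hz (assumed)
SPACING = 0.01         # front-to-rear inlet spacing in meters (assumed)
SPEED_OF_SOUND = 343.0 # m/s
DELAY = SPACING / SPEED_OF_SOUND * FS  # travel time between mics, in samples

def cardioid(front: np.ndarray, rear: np.ndarray) -> np.ndarray:
    """Delay-and-subtract directional response.

    Sound from behind reaches the rear mic first; after delaying the
    rear signal by the acoustic travel time, it matches the front mic's
    copy and cancels. Sound from the front survives the subtraction."""
    n = np.arange(len(rear))
    # Fractional-sample delay via linear interpolation
    rear_delayed = np.interp(n - DELAY, n, rear, left=0.0)
    return front - rear_delayed

# Simulate a 500 Hz tone arriving from behind vs. from the front
t = np.arange(FS) / FS
tone = np.sin(2 * np.pi * 500 * t)
n = np.arange(len(tone))
tone_delayed = np.interp(n - DELAY, n, tone, left=0.0)

# From behind, the rear mic hears the tone first; from the front,
# the front mic does.
out_behind = cardioid(front=tone_delayed, rear=tone)
out_front = cardioid(front=tone, rear=tone_delayed)
print(np.max(np.abs(out_behind)), np.max(np.abs(out_front)))
```

Running this, the tone from behind is cancelled almost completely while the tone from the front passes through, which is exactly the front/back separation described above. (Real hearing aids also re-equalize the output, since the subtraction weakens low frequencies.)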
Directionality, choosing which signal to focus on, makes advanced noise reduction possible. By wirelessly connecting the microphones in both hearing aids, the DSP can continuously map the wearer's sound environment to identify speech sounds. Advanced wireless hearing aids benefit both from doubling the number of microphones in the array and from the separation between the left and right ears, which lets the DSP steer its focus toward a speech source and away from noise.
By constantly mapping the wearer's sound environment, the DSP also decides which of its stored algorithms will yield the best SNR for the current type of noise. Automatic switching lets hearing aids adapt to changing sound conditions and select the optimal environmental program without input from the wearer.
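In spirit, automatic switching is a classifier: estimate a few features of the current scene and map them to a stored program. The toy sketch below makes the idea concrete; the features, thresholds, and program names are all invented for illustration, and real DSP classifiers are far more sophisticated:

```python
# Toy sketch of automatic program switching. The thresholds and
# program names below are invented for illustration only.

def choose_program(speech_level_db: float, noise_level_db: float) -> str:
    """Pick an environmental program from two estimated scene features."""
    snr = speech_level_db - noise_level_db
    if speech_level_db < 40 and noise_level_db < 40:
        return "quiet"            # omnidirectional, reduced gain
    if snr >= 10:
        return "speech-in-quiet"  # omnidirectional, normal gain
    if snr >= 0:
        return "speech-in-noise"  # directional mode plus noise reduction
    return "noise"                # comfort mode, maximum noise reduction

print(choose_program(65, 50))  # → speech-in-quiet
print(choose_program(65, 62))  # → speech-in-noise
```

Because the classifier re-evaluates the scene continuously, the program can change the moment the wearer walks from a quiet hallway into a busy restaurant, which is also what produces the sudden "plugged" sensations described in the next section.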
What to expect
Whether we realize it or not, our brains constantly compare the inputs from each ear. When we face the source of a sound, we expect to hear it at equal volume in both ears. Most pairs of hearing aids are programmed to the individual's hearing loss in each ear to maintain a balanced input from left and right, making the soundscape as natural as possible for the wearer.
However, it is important for wearers to understand that, by design, directional technology can focus on and amplify one sound while nulling out another. When hearing aids automatically null a loud noise, it can feel as if one ear, usually the one toward the noise, is plugged or suddenly not hearing as well.
For new wearers, or anyone who hasn't been counseled on what to expect, a noticeable drop in hearing level can be disconcerting. Experienced wearers come to expect these plugged feelings to come and go in noisy environments as their hearing aids automatically switch between environmental programs. Some people are more sensitive to program changes than others, especially when hearing aids move in and out of a directional mode. Reducing the number of environmental programs available in the hearing aid's DSP helps alleviate this sensitivity.
Doing these three things will improve speech comprehension in noise:
- Face the person or subject you want to hear
- Stay within ten feet of your subject
- Position yourself so that noise comes from a different direction than the subject