Binaural Recording 1.0
Binaural recording is one of the most underrated areas of audio. It not only creates an immersive experience for the listener, it creates an unforgettable one.
In the upcoming series of posts, I'll be exploring binaural audio and the recording techniques behind it. Before we dive in, however, there are a few things we need to cover to make sure everyone is on the same page.
To understand binaural audio, we'll start by discussing the HRTF (head-related transfer function), less commonly known as the ATF (anatomical transfer function).
According to Starkey Labs:
“A microphone placed at the entrance of the ear canal can measure the filtering function of the ear (the HRTF). As the system works linearly, any sound may then be filtered using the HRTF to get an estimate of the actual perceived spectrum. An HRTF set, the aggregation of HRTFs from every direction around the head, summarizes all the location-dependent variation of an acoustic signal. Systematic location-dependent variations of HRTFs in a given set help identify the acoustic cues available for human listeners to use for localizing sounds” (1)
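The "HRTF set" the quote describes is, in practice, a table that maps each measured direction to a pair of impulse responses, one per ear. The toy sketch below illustrates the idea with made-up two-tap responses and a nearest-direction lookup; real sets (e.g. measured on a dummy head) contain hundreds of directions, and every number here is invented for illustration.

```python
# Hypothetical toy "HRTF set": each measured direction (azimuth, elevation
# in degrees) maps to a pair of head-related impulse responses (left, right).
# The values are made up; real sets hold long measured responses.
hrtf_set = {
    (0, 0):   ([1.0, 0.5], [1.0, 0.5]),   # straight ahead: both ears match
    (90, 0):  ([1.0, 0.8], [0.3, 0.1]),   # far left: left ear louder
    (-90, 0): ([0.3, 0.1], [1.0, 0.8]),   # far right: right ear louder
}

def nearest_hrtf(az, el, hrtf_set):
    """Return the measured HRIR pair closest to the requested direction."""
    key = min(hrtf_set, key=lambda d: (d[0] - az) ** 2 + (d[1] - el) ** 2)
    return hrtf_set[key]

# A source at 80 degrees azimuth falls back on the 90-degree measurement.
left_ir, right_ir = nearest_hrtf(80, 5, hrtf_set)
```

Real renderers interpolate between neighbouring measurements instead of snapping to the nearest one, but the lookup-by-direction structure is the same.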
(Fig. 1: HRTF azimuth)
The HRTF describes how our ears receive sound coming from a specific point in a medium. Many variables contribute to how a sound will be perceived, e.g. the shape and density of the head, the ears, and the nasal/oral cavities.
As Simon Carlile says “All sounds arrive at the ear drum as a combined stream of pressure changes that jointly excite the inner ear.” (2), (3)
Sound arrives at our ears and passes through the outer ear on its way to the middle ear. As sound travels along the auditory canal, its overall energy is increased before being transmitted to the middle ear (more on this transformation below).
The structure of the outer ear guides sound into the auditory canal, and the complex interaction between the incoming sound and small reflections inside the ear results in spectral filtering; the direction of the incoming sound dictates what kind of filtering is applied. In addition, the human body scatters and filters incoming sound, which serves as an auditory cue to its location.
The HRTF describes the frequency response, along with the time and level differences, of sound arriving at both ears. (4)
How does this work?
It's obvious that every human body differs (head and ear shape).
Sound is presented from a specific point in space, then filtered by diffraction and reflection before the eardrum and the inner ear transduce it.
The difference in arrival time between the two ears helps the brain determine the azimuth of a sound, together with its high-frequency response, its relative loudness, and the spectral reflections introduced by the body.
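The arrival-time cue can be put in numbers with Woodworth's spherical-head formula, a standard textbook approximation (not taken from this post): ITD = (r/c)(θ + sin θ), where r is the head radius, c the speed of sound, and θ the azimuth. The average head radius used below is an assumption.

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Woodworth's spherical-head estimate of the interaural time difference.

    azimuth_deg: source angle measured from straight ahead, 0-90 degrees
    to one side. 0.0875 m is an assumed average head radius.
    """
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A source directly to one side (90 degrees) gives the largest delay,
# on the order of a few hundred microseconds.
max_itd = itd_seconds(90)
```

Delays this small are far below anything we notice consciously, yet the brain resolves them reliably, which is why time-of-arrival is such a strong azimuth cue.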
The loss of high-frequency content, the loss of intensity, and the ratio between the dry and reverberated signal cue our brain to the distance of the sound source.
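The dry-to-reverberated ratio mentioned above is usually quantified as the direct-to-reverberant energy ratio of a room impulse response. A minimal sketch, using made-up toy responses and an assumed split point between the direct spike and the tail:

```python
import math

def direct_to_reverb_ratio_db(impulse_response, split_index):
    """Energy ratio (dB) between the direct path and the reverberant tail.

    split_index is where the direct sound ends and the tail begins
    (chosen by hand here; real analysis picks it from the first arrival).
    A lower ratio cues the brain that the source is farther away.
    """
    direct = sum(x * x for x in impulse_response[:split_index])
    reverb = sum(x * x for x in impulse_response[split_index:])
    return 10.0 * math.log10(direct / reverb)

# Toy responses: the "far" source has a weaker direct spike but the same
# reverberant tail, so its ratio comes out lower.
near = [1.0, 0.0, 0.1, 0.1, 0.1]
far  = [0.4, 0.0, 0.1, 0.1, 0.1]
```

Moving a source away weakens the direct path while the room's reverberant field stays roughly constant, so this ratio drops with distance, exactly the cue the paragraph describes.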
Recently, some developers have explored the idea of matching headphone sound to external loudspeakers, on the basis that if an acoustic signal is processed through an HRTF and played on headphones, applying the same characteristics as speakers in a free field, then the listening experience should be the same. (5)
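At its core, that headphone-rendering idea is a pair of convolutions: filter one mono signal through a left and a right HRIR and play the result over headphones. A minimal sketch with NumPy, using invented two-tap HRIRs in place of measured ones:

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Filter a mono signal through a left/right HRIR pair.

    Returns a (num_samples, 2) stereo array for headphone playback.
    """
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=1)

# Made-up HRIRs for a source on the listener's left: the right ear gets a
# quieter, one-sample-delayed copy (level and time differences combined).
mono = np.array([1.0, 0.5, 0.25])
stereo = render_binaural(mono, hrir_left=[1.0, 0.0], hrir_right=[0.0, 0.4])
```

A real renderer would swap in measured HRIRs for the desired direction; the convolution itself, and the resulting interaural time and level differences, work exactly as in this sketch.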
1. Moller, H. (1992). "Fundamentals of binaural technology." Applied acoustics 36(3): 171-218.
2. Bregman, A. S. (1990). Auditory Scene Analysis: The Perceptual Organization of Sound. Cambridge, MA: MIT Press.
3. S. Carlile and D. Pralong, "The location-dependent nature of perceptually salient features of the human head-related transfer function".
4. Alain C., Arnott S. R., Picton T. W. (2001). Bottom-up and top-down influences on auditory scene analysis: evidence from event-related brain potentials.
5. Durin, V., Carlile, S., Guillon, P., Best, V., and Kalluri, S. (2014). "Acoustic analysis of the directional information captured by five different hearing aid styles." J Acoust Soc Am 136(2): 818-828.