Everything You've Ever Wanted to Know About Immersive Audio
Well, not really everything, since I can't pack everything about immersive audio into one article, but I promise to give you a good starting point.
The term recently gained widespread adoption with Apple's announcement in June 2021, and lots of music mixers who primarily work in 2.0 or maybe 5.1 started exploring formats like Dolby ATMOS. Hey, that's cool and everything; however, the information about immersive audio can be overwhelming for the faint of heart, and not easy to sift through if you have no idea what you are looking for.
So what is immersive audio, and why should you care? The definitions are plenty, so I will pick one and move on🏃.
The term describes a number of varying experiences; more accurately, it is a deep mental involvement in which the listener may experience a disconnect from the physical space. Please don't confuse this with terms like presence or envelopment. We will skip the history lesson, and I will assume you understand the concept of spatialization in audio; if not, read this by Francis Rumsey.
While 2.0 has gotten us this far, some have explored surround mixing for music. This was somewhat successful; however, there are limitations to surround reproduction. The format was not intended for accurate 360° phantom imaging, not to mention the front sound stage is narrower than you would expect compared to 2.0. Lastly, the centre channel is a pain for balancing music; keep in mind that panning laws and dual-mic techniques aren't optimised for a 3-speaker layout, since they were built for 2. One more downside to conventional surround is that huge gap in the image behind the listener. Some argue that these various limitations, particularly for music, led to non-standard use of the available channels. 🤔Then what is the solution to all these limitations, and is there a way to personalise the experience for each listener?
Enter object-based audio.
Some of you probably guessed it: object audio might hold the answer to those limitations of a channel-based approach. Also, there isn't really a conversation about immersive audio without explaining what the object-based approach is.
In the channel-based approach, the loudspeaker signals themselves are what gets saved as the result of spatial audio production. What the f**k does that mean? Well, simply put, each channel corresponds to a signal fed to a loudspeaker located in a particular position. If the loudspeaker or speakers aren't where the system expects them to be, the results will be unpredictable and maybe a pile of s***t. This approach is cool and everything provided you know exactly how people are going to play back those signals; that was alright in 2.0, but the more formats were added, the more the problems grew.
Object audio treats signals differently. An object can be thought of as a virtual source, which carries an audio signal plus metadata; that metadata could be position and gain. To play back objects, a renderer is needed. It generates speaker signals based on information like the positions of the loudspeakers and the listener's position in a virtual space; this works for headphones as well.
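To make that concrete, here's a toy sketch of what rendering a single object could look like. This is my own illustration, not any real ATMOS or MPEG-H renderer: a mono signal plus position metadata is turned into feeds for a specific speaker layout, here just a stereo pair at ±30°, using a constant-power panning law.

```python
import math

def render_object(samples, azimuth_deg, gain=1.0):
    """Toy renderer: turn a mono object (signal + metadata) into
    feeds for two speakers at ±30°, using constant-power panning."""
    # Clamp the object's azimuth metadata to the layout's range
    az = max(-30.0, min(30.0, azimuth_deg))
    # Map [-30°, +30°] onto a pan angle of [0°, 90°]
    theta = math.radians((az + 30.0) / 60.0 * 90.0)
    gain_l = math.cos(theta) * gain   # full left at azimuth -30°
    gain_r = math.sin(theta) * gain   # full right at azimuth +30°
    return ([s * gain_l for s in samples],
            [s * gain_r for s in samples])

# An object dead centre lands equally in both speaker feeds
left, right = render_object([1.0, 0.5, -0.5], azimuth_deg=0.0)
```

The key point is that only the signal and its metadata are stored; the speaker feeds are computed at playback time, so a different layout (or headphones) just means running a different renderer over the same objects.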
So now that you understand the difference between the channel and object approaches, let's look at your options. You've got Dolby ATMOS, DTS:X, and Fraunhofer, among others like Sony 360, offering personalised audio delivery systems based around MPEG-H audio. The idea of using objects makes it possible to deliver content where the end-user can adjust the balance between content elements, effectively offering a personalised audio experience.
So what do you need to mix in one of those formats?!
I will talk about Dolby ATMOS since it is gaining momentum right now and a number of music streaming platforms support it. Keep in mind you will need to check whether your distribution service accepts immersive formats for release.
Dolby ATMOS can use up to 64 channels. The format builds upon conventional surround formats by adding overheads and gives you the option to use up to 118 objects; remember, we can move those objects anywhere in a 3D virtual space.
To summarise, an ATMOS mix has two main elements:
· Beds: This is the channel-based part, built on your main outs, which can be 7.1.2 or the recommended 7.1.4. Keep in mind that your overheads have no back/front separation, much like your stereo bus.
· Objects: Since objects are managed by metadata, positioning is more accurate and allows placement anywhere within a 3D space, which is rendered at the playback stage.
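The bed/object split above can be sketched in data terms. The class names and channel labels below are my own illustration, not Dolby's actual session format: bed channels get their positions implicitly from the speaker layout, while each object carries its position and gain as explicit metadata.

```python
from dataclasses import dataclass

@dataclass
class BedChannel:
    # Channel-based: the position is implied by the speaker layout,
    # so the channel only needs a label.
    name: str

@dataclass
class AudioObject:
    # Object-based: position and gain travel with the audio as metadata
    # and are interpreted by the renderer at playback time.
    name: str
    x: float              # left/right within the room, -1.0 .. 1.0
    y: float              # back/front within the room, -1.0 .. 1.0
    z: float              # height: 0.0 = listener plane, 1.0 = ceiling
    gain_db: float = 0.0

# A 7.1.2 bed (illustrative labels) plus a couple of objects
bed = [BedChannel(n) for n in
       ["L", "R", "C", "LFE", "Lss", "Rss", "Lrs", "Rrs", "Ltf", "Rtf"]]
objects = [AudioObject("lead vocal", 0.0, 1.0, 0.0),
           AudioObject("synth pad", 0.0, 0.0, 1.0, gain_db=-6.0)]
```

Notice that only the objects can sit at an arbitrary height like `z=1.0`; the bed's two overheads are just another fixed channel pair.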
To mix in ATMOS for music, you've got two options:
· Run the ATMOS renderer on your workstation alongside your DAW; it's CPU hungry though, so be careful. You can get it from AVID or Dolby's portal.
· Get the Mastering Suite, which has a few more controls and features; it runs on a separate workstation, usually hooked up to your DAW over MADI or Dante.
At the moment, ATMOS-supported DAWs include Ableton Live, Apple Logic Pro, Avid Pro Tools, and Steinberg Nuendo. That said, some DAWs make it easier to work in Dolby ATMOS than others: Avid Pro Tools and Steinberg Nuendo are the only two DAWs with native Dolby ATMOS integration, and you will need Pro Tools Ultimate 2018 or above to access those native workflows. Don't forget to pick up the free Dolby Atmos Music Panner plug-in.
You can use up to 22 loudspeakers in the Dolby Atmos suite; the more common config is 7.1.4, and some do 9.1.6 as well. While you can use 5.1.2 or monitor binaurally over headphones, I highly recommend monitoring over 7.1.4 so you can clearly understand how your mix will translate.
For an ideal monitoring environment, your speakers should be the same make and model, though you can use a smaller model for your overhead speakers. Don't forget you will need an LFE speaker and a monitor controller that can manage all this. Feel free to check Dolby's recommendations here.
Now that you know how to start working in ATMOS, you might ask yourself: what is the point of creating great content in Dolby ATMOS if we cannot get it in front of consumers? The audience actually can enjoy immersive audio, including ATMOS: Apple brought ATMOS to consumers in June 2021.
If you have AirPods or Beats, your device will automatically play back in ATMOS. Note that ATMOS is also supported with any other type of headphones.
So now it's up to you: if you are on board with object-based audio, what are the challenges you are facing? How do you decide what becomes an object and what remains in the beds? How do you go about making those decisions about what goes where?
Please share your thoughts and observations in the comments section below.