Spatial audio has been one of the hottest phrases in the music-lover scene, especially among listeners in the Apple ecosystem. Apple's AirPods Pro earphones, the AirPods Max headphones and even some Macs can stream and play audio in a spatial format. It is no secret that I have also been experimenting in the domain of “Atmos mastering” (more on this later in this post), and having done that and completed training curated by Dolby themselves, I can proudly say that Sjötta Online Mastering now offers “Atmos mastering”. (You will soon see why the quotation marks, I promise.)
Spatial Audio: A New Buzzword to Sell With
Spatial has become such a buzzword that players outside of the Apple ecosystem have started offering some sort of spatialization: Beyerdynamic, for example, have announced earphones that are spatial audio capable, and after Apple Music, Tidal have also entered the ring with the ability to stream music in Dolby Atmos. Apple have even stated that they would bump royalties for songs streamed in Atmos, just to make the format more popular among artists and labels.
So it is definitely worth releasing in Atmos, but how does Sjötta Sound come into the picture?
What is “Atmos mastering”?
There is a long-running misuse of terminology around one particular workflow in spatial audio engineering, and I really do not know if I should be the one to say this, but I will say it anyway:
Atmos mastering per se is basically nothing. It is nothing more than arranging already mastered songs for a DDP print when a CD release is the objective. (Something that is on offer at Sjötta Sound, by the way.)
What, then, do people colloquially mean when they say “mastered in Atmos”?
They usually mean – and that is what Sjötta Sound offers – spatialized stem mixing, resulting in a Dolby Atmos format file package (typically an ADM BWF master) that is suitable for submission to distributors and later becomes the Atmos version of the given release.
Dolby Atmos Stem Mastering thus always starts with the song broken down into stereo stems – preferably functional groups of instruments rather than recordings of the same kind of instrument. Those stems are then given some mastering treatment without being worked into a single stereo whole, and placed into the virtual space that Atmos objects and the bed provide, where each stem is adjusted according to its place in that space and, if need be, automated as well.
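For the technically curious, here is a tiny, purely conceptual sketch of that idea in Python. It is not Dolby's renderer, panner or the ADM file format – every class name, file path, coordinate convention and timing below is invented for illustration – it merely models the notion that each stem becomes a positioned object whose place in the virtual space can be automated over the course of the song.

```python
from dataclasses import dataclass, field


@dataclass
class PositionKeyframe:
    """One automation point: where an object sits at a given moment."""
    time_s: float  # position in the song, in seconds
    x: float       # left (-1.0) to right (+1.0)
    y: float       # back (-1.0) to front (+1.0)
    z: float       # ear level (0.0) up to the ceiling (+1.0)


@dataclass
class AtmosObject:
    """A stereo stem treated as a positionable object in the virtual space."""
    name: str
    stem_file: str
    keyframes: list = field(default_factory=list)

    def automate(self, time_s: float, x: float, y: float, z: float) -> None:
        """Record where this stem should sit at a given point in the song."""
        self.keyframes.append(PositionKeyframe(time_s, x, y, z))


# Functional groups of instruments, not individual tracks, become the objects.
scene = [
    AtmosObject("drums", "stems/drums.wav"),
    AtmosObject("bass", "stems/bass.wav"),
    AtmosObject("vocals", "stems/vocals.wav"),
    AtmosObject("pads", "stems/pads.wav"),
]

# Static placement: the lead vocal stays anchored front and centre.
scene[2].automate(time_s=0.0, x=0.0, y=1.0, z=0.0)

# Automated movement: the pads drift from the left side up into the
# height layer during the final chorus (timings are made up).
scene[3].automate(time_s=150.0, x=-0.8, y=0.0, z=0.0)
scene[3].automate(time_s=180.0, x=-0.3, y=0.4, z=0.9)
```

In practice all of this happens inside the DAW and the Dolby Atmos renderer rather than in code, but the mental model is the same: stereo stems go in, positioned and automated objects (plus the bed) come out.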
That workflow is how it goes 99 percent of the time – but Dolby, sadly, does not see it that way.
Atmos First: Reasons It Does Not Work
What Dolby emphasizes is an “Atmos first” principle: since Atmos is the most complex format, offering the most possibilities for expression and technical detail, they believe all mixing and mastering should be conducted in an Atmos-capable environment using an Atmos renderer, and all other formats, stereo included, should be mixed down by the renderer.
I think this mindset is a bit off, and there are three important reasons why:
The vast majority of people do not, in the end, listen to music in Atmos. They listen in stereo, so the stereo version of a given recording simply has to take the primary place in the workflow.
Opposite the consumer side lies the production side: stereo mixdowns created automatically by the renderer are simply subpar compared with what a person with experience and knowledge – and not artificial intelligence – can create on either the mixing or the mastering side.
The third reason is a cruel technicality: time. Since Atmos stem mastering projects take a lot of tinkering and automation, it is often much more efficient to spend a shorter but more fruitful stretch of time on the stereo master.
Is Atmos Wrong Altogether?
Not at all. Atmos (Stem) Mastering, or spatial stem mixing, can do wonderful things with your stereo stems and give a song a lot of emotional and contextual edge. It is just that the Atmos version should not come first, because that can be detrimental to reaching the widest possible audience.