All about producing and mastering audio for disc, the web and beyond
Sunday, April 8, 2012
I’d like to talk a little bit about stereo processing and mastering.
So what does it mean, ‘stereo processing’?
Maybe the place to start is with what it means to think about stereo – there are really two different ways of thinking about it. One is left channel / right channel: you have two mono channels feeding two channels of an amplifier and two speakers, and the arrival time of a sound coming out of one speaker, the other, or both at once will determine its perceived direction. If you delay one speaker a little bit, everything will sound as if it is coming from the other speaker because of the precedence effect, or the Haas effect as we know it. So that’s one way of thinking about it – as dual mono.
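To hear that precedence effect for yourself, here is a minimal numpy sketch (the sample rate and the one-millisecond delay are my assumptions for illustration, not fixed rules):

```python
import numpy as np

SAMPLE_RATE = 44100  # assumed CD-style sample rate

def delay_one_channel(stereo, delay_ms, channel=1):
    """Delay a single channel of an L/R buffer (columns 0 and 1).

    Even a delay of just a few milliseconds pulls the perceived
    image toward the *undelayed* speaker - the precedence (Haas)
    effect described above.
    """
    n = int(SAMPLE_RATE * delay_ms / 1000.0)
    out = stereo.copy()
    if n > 0:
        out[:n, channel] = 0.0          # pad the start with silence
        out[n:, channel] = stereo[:-n, channel]
    return out
```

Play the output of something like this against the original and the whole image appears to shift toward the earlier speaker.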
The second way of thinking about it is something called M/S, or Mid-Side, processing. It’s a style of processing audio that has come a little more into vogue in some of the plug-ins, compressors, equalizers and analog equipment introduced to the market over the last several years. Going way back, it is best known as a microphone pickup technique.
The principle of M/S, or Mid-Side, is to think of everything that’s coming from both speakers at once (that’s common to both channels) as being one component of the sound, and everything that’s different – that’s arriving at a different time from one speaker or the other, the ‘difference channel’ – as being the other component of the sound. This is sometimes known as ‘A minus B’. If you want to listen to the difference signal in a mix or a recording, you can take the two channels, pan them both to mono, flip one of them out of phase, and what you end up hearing is the ‘difference signal’ – everything that’s not common to both channels in the recording.
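In code, the Mid/Side split and the ‘flip one channel and sum’ trick look something like this (a numpy sketch; the 0.5 scaling is one common convention, not the only one):

```python
import numpy as np

def ms_encode(left, right):
    """Mid = what's common to both channels; Side = the 'A minus B' difference."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    return mid, side

def ms_decode(mid, side):
    """The split is lossless: L = M + S, R = M - S."""
    return mid + side, mid - side

def difference_signal(left, right):
    """The listening trick from the text: sum the channels to mono
    with one flipped out of phase - everything common cancels."""
    return left + (-right)
```

Running `difference_signal` on a purely mono recording returns silence, which is exactly what the listening trick predicts.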
So that’s a different way of thinking about the stereo field and the way that we think about processing stereo – what’s important, what’s interesting, what’s meaningful about this?
If you think about it, when you are listening to two speakers, in order to determine direction you are relying very heavily on this concept of phase – or arrival time, shall we say. If something shifts in the arrival time between the two speakers, then suddenly you’ll have a very, very different sense of its location. If we’re processing the left channel and the right channel separately and we introduce an equalizer that’s not a linear-phase equalizer, for instance, there’s a strong possibility that that instrument or that part of the spectrum is going to change its perspective in the stereo image. If that’s what’s desired – great – maybe that’s a desirable effect. But more often than not, we want to do the same thing to both channels at the same time in order to keep the phase coherency between the two channels and maintain that sense of stereo image. What’s important about stereo image is not just the localization of an instrument – not just that the trumpet is coming from right field – but also that the integrity of the reverb stays the same, and that the width of the stereo field and the sense of openness and space and top end remain the same as what was present in the mix when it came into the mastering studio.
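One simple way to keep that phase coherency is to make sure any filter is applied identically to both channels. A sketch with a hypothetical FIR filter in numpy:

```python
import numpy as np

def eq_both_channels(stereo, fir):
    """Apply the *same* FIR filter to L and R. Both channels see
    identical phase shift, so their relative phase - and with it
    the stereo image - is preserved."""
    n = len(stereo)
    left = np.convolve(stereo[:, 0], fir)[:n]
    right = np.convolve(stereo[:, 1], fir)[:n]
    return np.column_stack([left, right])
```

Because the filtering is linear and identical per channel, a mono signal stays dead-center after EQ, and whatever phase relationship the two channels had going in survives coming out.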
Processing in the mid-side paradigm carries a similar kind of concern. In other words, if you change the arrival time – the phase relationship between the mid and the side channel – you are likely to drastically change the sense of space in a recording. If you exaggerate the side component too much, you are also likely to find that everything that was in the center of the mix begins to recede for the listener. I think most people would agree that the material in the center of the stereo image is very often the most important. For a pop mix, it’s usually the kick drum, the snare drum, the vocal, the bass – the main players in a mix. So you have to be careful about how much you change things in the mid channel and the side channel with respect to each other.
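The usual mastering move here is a ‘width’ control that scales Side against Mid. A minimal sketch (the `width` parameter name is mine):

```python
import numpy as np

def adjust_width(left, right, width=1.0):
    """Scale the Side component relative to Mid.

    width = 0.0 collapses the mix to mono, 1.0 leaves it untouched,
    and values much above 1.0 start pushing the center material
    (vocal, kick, snare, bass) back, as described above.
    """
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right) * width
    return mid + side, mid - side
```

Small moves on a control like this go a long way; the failure mode is exactly the receding center described above.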
What to watch for with stereo processing
As you start to think about mid-side, it’s important to note that there are some other areas of audio processing – other things playing in the M/S sandbox, if you will – where these ideas become important.
The first thing I want to point to is the stereo compressor. If you are feeding the left channel and the right channel separately into the two sides of a stereo compressor and you allow the detector circuit for each side to behave independently, you can end up with some very strange behavior at the speakers. When, for instance, the drummer hits a rack tom that appears in one speaker and jumps out of the mix a little bit, one channel of the compressor might compress heavily while the other hardly compresses at all, and the whole stereo image can begin to steer in one direction or the other. So most of the time when we are compressing a stereo mix, we’ll work in a mode where the two channels are linked together and the compressor is really paying attention primarily to the mid component of a mix and not so much to the sides.
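The difference between independent and linked detection can be shown with a toy peak compressor (a static gain computer with no attack/release smoothing; threshold in linear amplitude, all names mine):

```python
import numpy as np

def toy_compress(stereo, threshold=0.5, ratio=2.0, linked=True):
    """Toy stereo compressor illustrating linked vs independent detection.

    With linked=True both channels share a single detector (the louder
    of the two), so a rack tom jumping out of one speaker turns both
    channels down equally and the image does not steer.
    """
    mags = np.abs(stereo)
    if linked:
        det = np.maximum(mags[:, 0], mags[:, 1])[:, None]  # one shared detector
    else:
        det = mags                                         # per-channel detectors
    over = np.maximum(det - threshold, 0.0)
    gain = (threshold + over / ratio) / np.maximum(det, 1e-12)
    gain = np.minimum(gain, 1.0)                           # never boost
    return stereo * gain
```

In linked mode the L/R level ratio of every sample is preserved (both channels get the same gain); in unlinked mode a one-sided peak changes that ratio and the image moves.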
How codecs affect stereo processing
In a similar sense, MP3 encoding is based on the idea that you want to do as little damage as possible to the most important part of the signal – the mid, the part that is common to both channels – and throw away as much as possible of the information in the, quote, “less important part of the signal”: low-level signals, very high and very low frequencies, signals that are out of phase. So if you create a mix where you exaggerate the side information and de-accentuate the center channel, when you pass that audio through an MP3 encoder you may find that you hear a much more pronounced effect from the encoder on your mix.
So there again, you want to be careful about exaggerating the side channel – the sense of space – too much, because given that we are all having to deliver material through lossy codecs (MP3, AAC), you may end up with some unintended consequences from your processing.
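A quick sanity check before encoding is to compare the energy in Side against Mid. A rough numpy diagnostic (what counts as ‘too much’ Side is a judgment call; this sketch only measures):

```python
import numpy as np

def side_to_mid_db(left, right):
    """Energy of the Side component relative to the Mid, in dB.

    A mix that skews heavily toward Side tends to fare worse through
    lossy codecs, per the behavior described above. Strongly negative
    values mean mostly-mono material; positive values mean the
    difference channel dominates.
    """
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    e_mid = np.sum(mid ** 2) + 1e-12   # tiny floor avoids log of zero
    e_side = np.sum(side ** 2) + 1e-12
    return 10.0 * np.log10(e_side / e_mid)
```

If a master that sounded fine in the studio falls apart after encoding, a number like this can tell you at a glance whether an over-hyped side channel is the likely culprit.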