All about producing and mastering audio for disc, the web and beyond

Tuesday, May 8, 2012

Multiband compression

Today I’d like to spend just a little bit of time talking about multi-band compression. Multi-band compression has been at play in audio for decades. It first came into use in radio station broadcasting. More recently, beginning in the early 1990s as DSP became cheaper, multi-band compression could be deployed affordably enough for recording, mixing and mastering engineers to use it.

What’s good about multi-band compression?

Probably the main advantage multi-band compression gives you, as opposed to using one single band of compression across the whole spectrum, is that you can set independent attack and release times for different parts of the spectrum. That’s important because a bass waveform has a much longer period. The fundamental frequency of a kick drum in a hip-hop tune may take 50–70 ms for its low-frequency transient to get through. If you don’t want your compressor chomping at the very beginning of that low-frequency transient, you need to set the attack time long enough that the compressor doesn’t kick in and restrain the low end. Of course, 50–70 ms is a very long time for a compressor to wait if you are also trying to compress a jangly, bright acoustic guitar or something like that. You might have a tune with a deep kick drum and a jangly guitar, and it’s hard to find a single compression setting that works equally well in both parts of the spectrum. A multi-band compressor lets you divide the spectrum into typically three or four sections, apply different time constants – different attacks and releases – to each part, and optimize each segment of the compressor for the various instruments in a mix.

What does this mean for your audio?

In theory, it means you can get a compressed signal that’s tailored more to the program. And because you can restrict the dynamics a little more effectively without hearing the compression per se, it theoretically offers the possibility of a louder-sounding result in your mastering work.

Potential problems with multi-band compression

Multi-band compression is problematic for a couple of reasons that most people don’t really think about. First of all, in order to divide the audio you have to run it through a series of filters – high-pass, low-pass and so on. Every time you run audio through a filter, you lose something in terms of fidelity. That goes a little against the credo of the mastering engineer: the idea of mastering, at least from one perspective, is to always make something come out sounding better than it went in. Running through crossovers that possibly add a little ringing, a little distortion and a little noise flies in the face of trying to make things sound better. The minute you turn on a multi-band compressor, even one with well-designed linear-phase filters, you’re still going to get some change to the audio, and it’s usually not a flattering change. That’s problem number one to my way of thinking. Problem number two is that I believe there’s something important about the proportion of harmonics in the sound of any given instrument.
For a moment, I’d like to set one case aside so I’m being clear: if you’re talking about dance music or musique concrète – electronic music that doesn’t refer to an instrument you might hear played acoustically – then all bets are off in terms of maintaining the integrity of the original instrument. But let’s say you’ve got a recording with a bass, a guitar and a vocal, and assume I’m correct in saying that there’s something important about the proportion of harmonics to the fundamental frequency of any instrument. The minute you put a crossover in a compressor and start doing something different with one range of an instrument than with another range of that same instrument, you start to skew the relationship between the fundamental and the early harmonics – which give the instrument the warm, full, clear part of its sound – and the higher harmonics, which give it some edge or brilliance. At first the increased sense of brilliance and control can be a very seductive phenomenon, but to my ear the instruments that come out of a multi-band compressor usually don’t sound as good as they did going in.

So keeping that in mind, I think most mastering engineers who work at the top of the craft use multi-band compression sparingly, if at all. When you turn on a multi-band compressor, just for a moment focus your attention on the individual instruments in the mix. Listen to the bass, the guitar, the vocal before and after, and make sure you aren’t harming any instrument in a way that ultimately means the song won’t sound as good as it did going in.
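If it helps to see the band-splitting idea concretely, here is a minimal sketch in Python. The crossover points, thresholds, ratios and time constants are placeholders I’ve chosen for illustration, and a real multi-band unit would use properly matched crossovers (Linkwitz-Riley or linear-phase designs) rather than these plain Butterworth filters, but the structure is the same: split, compress each band with its own timing, and sum – which is also exactly where the filtering cost described above gets paid.

```python
# A minimal sketch of a three-band compressor, not production code.
# Assumes numpy and scipy; all settings are illustrative.
import numpy as np
from scipy.signal import butter, sosfilt

def split_bands(x, fs, f_lo=120.0, f_hi=2500.0):
    """Divide a mono signal into low / mid / high with Butterworth
    crossovers. Real multi-band crossovers are matched to sum flat."""
    lo = sosfilt(butter(4, f_lo, 'lowpass', fs=fs, output='sos'), x)
    mid = sosfilt(butter(4, [f_lo, f_hi], 'bandpass', fs=fs, output='sos'), x)
    hi = sosfilt(butter(4, f_hi, 'highpass', fs=fs, output='sos'), x)
    return lo, mid, hi

def compress(x, fs, thresh_db, ratio, attack_ms, release_ms):
    """One-pole envelope follower driving a simple downward compressor."""
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    out = np.empty_like(x)
    for n, s in enumerate(np.abs(x)):
        a = a_att if s > env else a_rel
        env = a * env + (1.0 - a) * s
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over = max(level_db - thresh_db, 0.0)
        out[n] = x[n] * 10.0 ** (-over * (1.0 - 1.0 / ratio) / 20.0)
    return out

fs = 44100
x = np.random.randn(fs) * 0.1                # stand-in for real audio
lo, mid, hi = split_bands(x, fs)
# Slow attack on the lows so the kick's front edge gets through;
# much faster timing up top for that jangly guitar.
y = (compress(lo, fs, -24, 4, 60, 200)
     + compress(mid, fs, -24, 3, 15, 120)
     + compress(hi, fs, -24, 3, 3, 60))
```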

Sunday, April 8, 2012

Stereo Processing - Twice as nice?



Stereo Processing

I’d like to talk a little bit about stereo processing and mastering.

So what does it mean, ‘stereo processing’?

Maybe the place to start is to talk about what it means to think about stereo – there are really two different ways of thinking about it. One is left channel / right channel. You have two mono channels feeding two channels of an amplifier and two speakers, and the arrival time of a sound coming out of one speaker or the other, or both at once, determines its perceived direction. If you delay one speaker a little bit, everything will sound as if it is coming from the other speaker because of the precedence effect, or the Haas effect as we know it. So that’s one way of thinking about it – as dual mono.
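You can hear that precedence effect for yourself with almost no code. In this little sketch (the function name and the 1.5 ms figure are just my choices for demonstration), delaying one channel of a stereo file by a millisecond or two pulls the whole image toward the earlier speaker, even though both channels stay at equal level:

```python
import numpy as np

def delay_right(stereo, fs, delay_ms=1.5):
    """Delay the right channel of an (N, 2) float array by delay_ms.
    On playback the image shifts toward the (earlier) left speaker."""
    d = max(1, int(fs * delay_ms / 1000.0))
    out = stereo.copy()
    out[d:, 1] = stereo[:-d, 1]   # right channel now arrives late
    out[:d, 1] = 0.0
    return out
```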

The second way of thinking about it is something called M/S, or Mid-Side, processing. It is a style of processing audio that has come a little bit more into vogue in the plug-ins, compressors, equalizers and analog equipment introduced to the market over the last several years. Going way back, it is best known as a microphone pickup technique.

The principle of M/S, or Mid-Side, is to think of everything that’s coming from both speakers at once (that’s common to both channels) as one component of the sound, and everything that’s different – that’s arriving at a different time from one speaker or the other, the ‘difference channel’ – as the other component of the sound. This is sometimes known as ‘A minus B’. If you want to listen to the difference signal in a mix or a recording, you can take the two channels, pan them both to mono, flip one of them out of phase, and what you end up hearing is the difference signal: everything that’s not common to both channels in the recording.
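In code, the sum-and-difference idea is almost nothing at all. Here is a sketch (the 0.5 scaling is one common convention; some tools use 1/√2 instead so mid/side and left/right carry equal energy):

```python
import numpy as np

def ms_encode(left, right):
    mid = 0.5 * (left + right)    # what's common to both channels
    side = 0.5 * (left - right)   # the 'A minus B' difference signal
    return mid, side

def ms_decode(mid, side):
    return mid + side, mid - side  # back to left, right

# The mono-with-one-channel-flipped listening trick described above
# is just the side signal, up to a gain: left + (-right).
```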

So that’s a different way of thinking about the stereo field and about processing stereo. What’s important, what’s interesting, what’s meaningful about this?

If you think about it, when you are listening to two speakers, you rely very heavily on this concept of phase – or arrival time, shall we say – to determine direction. If something shifts in the arrival time between two speakers, you’ll suddenly have a very different sense of its location. If we process the left channel and the right channel separately and introduce an equalizer that’s not a linear-phase equalizer, for instance, there’s a strong possibility that an instrument, or a part of the spectrum, is going to change its perspective in the stereo image. If that’s what’s desired, great – maybe that’s a desirable effect. But more often than not, we want to do the same thing to both channels at the same time, to keep the phase coherency between the two channels and maintain the sense of stereo image. What’s important about stereo image is not just the localization of an instrument – not just that the trumpet is coming from right field – but also that the integrity of the reverb stays the same, and that the width of the stereo field and the sense of openness, space and top end remain the same as what was present in the mix when it came into the mastering studio.
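To put a rough number on the phase-coherency concern, here is a sketch: design an ordinary minimum-phase peaking boost (the well-known RBJ-cookbook biquad, standing in for whatever EQ you’d actually reach for), imagine applying it to the left channel only, and look at the phase offset that introduces between the channels. The 1 kHz / +6 dB / Q of 1 settings are arbitrary:

```python
import numpy as np
from scipy.signal import freqz

def peaking_eq(f0, gain_db, q, fs):
    """RBJ-cookbook peaking EQ biquad (minimum phase, like most EQs)."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 44100
b, a = peaking_eq(1000.0, gain_db=6.0, q=1.0, fs=fs)   # +6 dB boost at 1 kHz
w, h = freqz(b, a, worN=4096, fs=fs)
# The unprocessed channel's response is H = 1, so the filter's own
# phase response IS the inter-channel phase offset, frequency by frequency.
skew = np.angle(h, deg=True)
print(f"worst-case inter-channel phase offset: {np.max(np.abs(skew)):.1f} degrees")
```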

M/S Paradigm

Processing in the mid-side paradigm carries a similar kind of concern. In other words, if you change the arrival time – the phase relationship – between the mid and the side channel, you are likely to drastically change the sense of space in a recording. If you exaggerate the side component too much, you are also likely to find that everything in the center of the mix begins to recede for the listener. I think most people would agree that the material in the center of the stereo image is very often the most important. For a pop mix, that’s usually the kick drum, the snare drum, the vocal, the bass – the main players in a mix. So you have to be careful about how much you change the mid channel and the side channel with respect to each other.
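In M/S terms, the basic width move is nothing more than a gain on the side channel before decoding, which is exactly why it is so easy to overdo. A sketch (the function name and the single ‘width’ knob are my own framing, but this is how most such controls behave):

```python
import numpy as np

def stereo_width(left, right, width=1.0):
    """Scale the side channel, then decode back to left/right.
    width > 1 widens, width < 1 narrows; push it too far past
    unity and the center of the mix starts to recede, as above."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right) * width
    return mid + side, mid - side
```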

What to watch for with stereo processing

As you start to think about mid-side, it’s important to note that there are some other arenas – other things that happen to audio – that play in the M/S sandbox, if you will, and where these concerns become important.

The first thing I want to point to is using a stereo compressor. If you feed the left channel and the right channel separately into the two sides of a stereo compressor and allow the detector circuit for each side to behave independently, you can end up with some very strange behavior at the output of the speakers. When, for instance, the drummer hits a rack tom that appears in one speaker and it jumps out of the mix a little bit, one channel of the compressor might be compressing heavily while the other hardly compresses at all, and the whole stereo image can begin to steer in one direction or the other. So most of the time when we compress a stereo mix, we work in a mode where the two channels are linked together and the compressor is paying attention primarily to the mid component of the mix and not so much to the sides.
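Here is a sketch of the difference between linked and unlinked detection. I’ve linked by taking the per-sample maximum of the two channels, which is one common approach; as noted above, some designs key the detector off the mid (sum) signal instead. Thresholds, ratios and time constants are arbitrary:

```python
import numpy as np

def detector(x, fs, attack_ms=10.0, release_ms=150.0):
    """One-pole envelope follower over the absolute signal."""
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env, out = 0.0, np.empty_like(x)
    for n, s in enumerate(np.abs(x)):
        a = a_att if s > env else a_rel
        env = a * env + (1.0 - a) * s
        out[n] = env
    return out

def gain_db(env, thresh_db=-20.0, ratio=3.0):
    lev = 20.0 * np.log10(np.maximum(env, 1e-9))
    return -np.maximum(lev - thresh_db, 0.0) * (1.0 - 1.0 / ratio)

def compress_stereo(left, right, fs, linked=True):
    if linked:
        # One shared detector, one shared gain: the image stays put.
        env = detector(np.maximum(np.abs(left), np.abs(right)), fs)
        g = 10.0 ** (gain_db(env) / 20.0)
        return left * g, right * g
    # Independent detectors: an off-center tom hit compresses one
    # channel harder than the other and the image can steer.
    gl = 10.0 ** (gain_db(detector(left, fs)) / 20.0)
    gr = 10.0 ** (gain_db(detector(right, fs)) / 20.0)
    return left * gl, right * gr
```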

How codecs affect stereo processing

In a similar sense, MP3 encoding is based on the idea that you want to do as little damage as possible to the most important part of the signal – the mid, the part that is common to both channels – and throw away as much as possible of the ‘less important’ information: low-level signals, out-of-phase content, and very high and very low frequencies. So if you create a mix where you exaggerate the side information and de-accentuate the center channel, when you pass the audio through an MP3 encoder you may find that you hear a much more pronounced effect from the encoder on your mix.

So there again, you want to be careful about exaggerating the side channel, the sense of space, too much. Given that we all have to deliver material through lossy codecs (MP3, AAC), you may otherwise end up with some unintended consequences of your processing.
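If you want a number to keep an eye on before delivering to a lossy codec, a crude diagnostic is the ratio of side energy to mid energy across the whole mix. There is no magic threshold, and this is my own quick check rather than anything the encoders publish, but if widening moves have pushed this figure way up, it is a hint that the encoder will have less mid to protect and more side to mangle:

```python
import numpy as np

def side_to_mid_db(left, right):
    """Total side energy relative to total mid energy, in dB."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    return 10.0 * np.log10((np.sum(side ** 2) + 1e-12) /
                           (np.sum(mid ** 2) + 1e-12))
```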

Sunday, March 18, 2012

Reverb and Mastering



Reverb

Today, I’d like to talk a little bit about using reverb in mastering.

The applications for reverb in mastering are in some cases, I think, quite obvious, but in other cases a little less so, and some of you may be surprised to learn that reverb comes into play in mastering at all.

Application One – Reverb Tails

The first is using reverb to lengthen the tail of a song that’s been cut off. One of my pet peeves is when an overeager engineer, artist or producer decides to save a little time in the mastering studio by trimming the very beginning or ending of a song closely, so that it won’t have to be done in the mastering studio. They think they’re going to save themselves money in the mastering session, but invariably they cut the beginning too close, or they cut off something at the very end because their studio is noisy or they aren’t listening carefully, and suddenly you end up with a song where you wish the end of the piece could go on another couple of seconds. Or there might be a noise at the end of a mix, and you have to pull a fade on the tail to get a ringing note down before the noise comes in and causes an interruption in the attention of the listener. In any case, we might use reverb to extend the tail, and here it’s a pretty simple thing: apply reverb just to the very end, have it gradually rise as the last note decays, and try to create some sense of a natural extension to the last note for as long as need be.
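Here is one way the tail-extension trick can look in code, as a rough sketch: reverberate only the last stretch of the file, with a wet level that ramps up as the ending decays. The impulse response here is synthetic exponentially decaying noise purely to keep the example self-contained; in practice you’d use a real reverb and set the wet gain and ramp length by ear.

```python
import numpy as np

def extend_tail(x, fs, tail_seconds=2.0, rt_seconds=3.0, wet_gain=0.05):
    """Convolve a rising-wet copy of the ending with a synthetic
    reverb tail, leaving the rest of the program untouched."""
    rng = np.random.default_rng(0)
    n_ir = int(fs * rt_seconds)
    t = np.arange(n_ir) / fs
    ir = rng.standard_normal(n_ir) * np.exp(-6.9 * t / rt_seconds)  # ~60 dB decay
    ramp = np.zeros(len(x))
    n_tail = int(fs * tail_seconds)
    ramp[-n_tail:] = np.linspace(0.0, 1.0, n_tail)  # wet swells into the ending
    wet = np.convolve(x * ramp, ir) * wet_gain      # rings past the original end
    out = np.zeros(len(wet))
    out[:len(x)] = x                                # dry signal, untouched
    return out + wet
```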

Application Two – Acoustic Space

The second place reverb comes into play in mastering is when you are trying to match the sound of two different recordings, made in two different acoustic spaces – one might be dry and one might be ambient – that are going to be included in a single album’s worth of material. This comes into play most often when I’m working with material from various orchestras around the world. I’ve received recordings made in Seattle, Bratislava, Warsaw and Prague, and each engineering crew has its own aesthetic about how much reverb or ambience it likes to let creep into the sound of the recording. In some cases they are bound by the dimensions of the space the orchestra is playing in.

So if I’m doing a record of a single composer’s works and it’s recorded in multiple places, I might apply some reverb to some of the recordings to bring them into a similar sonic universe when going from one to the next.

Application Three – Creating A Sense Of Depth

The third instance where reverb might come into play in mastering is when I want to create a little bit of a sense of depth in a recording. There are those moments when I’ve done everything I can think of with EQ and compression that seems to make a recording sound better, and I just want a very tiny sense of warmth and a sort of widening and deepening of the soundstage. More EQ is just making it worse, more compression is just making it worse, and so I’ll try adding a hint of a very short, not very bright reverb to the sound of the recording.

My recipe is usually rolling off the top end of the reverb, setting the roll-off somewhere around 2.5 or 3 kHz, having a decay of about 2/3 of a second and just a little touch of it – sometimes that gives me just that sense of depth that I’m after in a very natural way, in a way that an EQ or a compressor is not able to do.
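As a sketch, that recipe might look like this: a short synthetic reverb with the top rolled off around 2.5–3 kHz, a decay of roughly two thirds of a second, and the wet level well down. The -24 dB here is just a placeholder, and a real short plate or room program would replace the noise-burst impulse response:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def depth_reverb(x, fs, decay_s=0.66, rolloff_hz=2800.0, wet_db=-24.0):
    """A hint of short, dark reverb mixed in at a very low level."""
    rng = np.random.default_rng(0)
    n_ir = int(fs * decay_s)
    t = np.arange(n_ir) / fs
    ir = rng.standard_normal(n_ir) * np.exp(-6.9 * t / decay_s)  # ~60 dB decay
    # Roll off the top of the reverb around 2.5-3 kHz.
    ir = sosfilt(butter(2, rolloff_hz, 'lowpass', fs=fs, output='sos'), ir)
    wet = np.convolve(x, ir)[:len(x)] * 10.0 ** (wet_db / 20.0)
    return x + wet
```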

You have to be careful, because if you add reverb to a heavy metal tune, a punk rock tune or something else that really needs to maintain its immediacy and edge, reverb will usually soften the general sense of the program. But it can come in handy.

Sunday, January 15, 2012

Another record we are proud of...

What a joy to be able to help bring so much expression, joy and, well, music into the world.

Here's a review of just one example of a project we are proud to be a part of. Powerhouse modern big band with some inspired performing....and dynamics to boot!

http://www.jazmuzic.com/2011/12/cd-review-new-world-jazz-composers.html

Tuesday, January 10, 2012

Another vote for good sound....

This piece gives some nice ammunition to those of us who believe that good sound is an important part of musical expression, and that regular ole people, even the mp3 generation (!) can discern differences in sound quality....and they might even prefer higher resolution.

http://tinyurl.com/7t7s4m3

Provides just a tiny bit of job security!

As I have said many times, I don't have much of a gripe about mp3....or cassette or whatever, so long as it is where it belongs: in the domain of the consumer. In fact mp3 represents a leap forward for people in many parts of the globe.

Where I DO have a problem with it is when it is confused with professional production tools. That is a non-starter in every way. It is tough sometimes to distinguish between a professional tool and a hobbyist toy when they both reside on the same platform: laptop, desktop or tablet. There is indeed a world of difference, however!