All about producing and mastering audio for disc, the web and beyond
Showing posts with label compression. Show all posts

Tuesday, May 8, 2012

Multiband compression

Today I’d like to spend a little bit of time talking about multi-band compression. Multi-band compression has been at play in audio for decades; it first came into use in radio broadcast processing. More recently, beginning in the early 1990s as DSP became more affordable, multi-band compression could be deployed cheaply enough for recording, mixing and mastering engineers to use.

What’s good about multi-band compression?

Probably the main advantage multi-band compression offers over a single band of compression across the whole spectrum is that you can set independent attack and release times for different parts of the spectrum. That matters because a bass waveform has a much longer period. The fundamental of a kick drum in a hip-hop tune may take 50–70 ms for its low-frequency transient to get through. If you don’t want your compressor chomping at the very beginning of that low-frequency transient, you need to set the attack time long enough that the compressor doesn’t kick in and restrain the low end. Of course, 50–70 ms is a very long time for a compressor to wait if you are also trying to compress a jangly, bright acoustic guitar or something like that. In a tune with a deep kick drum and a jangly guitar, it’s hard to find a single compression setting that works equally well in both parts of the spectrum. A multi-band compressor lets you divide the spectrum into typically three or four sections, apply different time constants – different attack and release settings – to each part, and optimize each segment of the compressor for the various instruments in a mix.

What does this mean for your audio?
In theory, it means you can get a compressed signal that’s tailored more closely to the program, and because you can restrict the dynamics more effectively without hearing the compression per se, it theoretically offers the possibility of a louder-sounding result in your mastering work.

Potential problems with multi-band compression

Multi-band compression is problematic for a couple of reasons that most people don’t really think about. First of all, in order to divide the audio you have to run it through a series of filters – high-pass, low-pass and so on. Every time you run audio through a filter, you lose something in terms of fidelity. That goes a little against the credo of the mastering engineer: the idea of mastering is to make something come out sounding better than it went in, at least from one perspective. Running through crossovers that may be adding a little bit of ringing, a little bit of distortion and a little bit of noise flies in the face of trying to make things sound better. The minute you turn on a multi-band compressor, even one with well-designed linear-phase filters, you’re still going to get some change to the audio, and it’s usually not a flattering change. That’s problem number one, to my way of thinking.

Problem number two is that I believe there’s something important about the proportion of harmonics in the sound of any given instrument. For a moment I’d like to set one case aside so I’m being clear: if you’re talking about dance music or musique concrète – electronic music that doesn’t refer to an instrument you might hear played acoustically – then all bets are off in terms of maintaining the integrity of the original instrument. But let’s say you’ve got a recording with a bass, a guitar and a vocal, and assume I’m correct that there’s something important about the proportion of harmonics to the fundamental frequency of any instrument.
The minute you put a crossover in a compressor and start doing something different with one range of an instrument and another range of that same instrument, you start to skew the relationship between the fundamental and the early harmonics – which give the instrument the warm, full, clear part of its sound – and the higher harmonics, which give it edge or brilliance. At first the increased sense of brilliance and control can be a very seductive phenomenon, but to my ear the instruments that come out of a multi-band compressor usually don’t sound as good as they did going in. Keeping that in mind, I think most mastering engineers who work at the top of the craft use multi-band compression sparingly, if at all. When you turn on a multi-band compressor, take a moment to focus your attention on the individual instruments in the mix. Listen to the bass, the guitar and the vocal before and after, and make sure you aren’t harming an instrument in a way that ultimately means the track won’t sound as good as it did going in.
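To make the signal flow concrete, here is a minimal three-band compressor sketch in Python with NumPy. Everything in it is an illustrative assumption rather than a model of any particular unit: the one-pole crossovers are far gentler than the filters in a real multi-band compressor (which is exactly where the crossover concerns above come from), and the crossover points, thresholds and time constants are arbitrary.

```python
import numpy as np

def onepole_lowpass(x, fc, fs):
    """First-order IIR low-pass; the matching high band is x minus this."""
    a = np.exp(-2.0 * np.pi * fc / fs)
    y = np.empty_like(x)
    state = 0.0
    for n, s in enumerate(x):
        state = (1.0 - a) * s + a * state
        y[n] = state
    return y

def compress_band(x, fs, threshold_db, ratio, attack_ms, release_ms):
    """Feed-forward compressor with its own attack/release time constants."""
    atk = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    out = np.empty_like(x)
    for n, s in enumerate(x):
        level = abs(s)
        coeff = atk if level > env else rel   # attack when rising, release when falling
        env = coeff * env + (1.0 - coeff) * level
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over = level_db - threshold_db
        gr_db = over * (1.0 - 1.0 / ratio) if over > 0.0 else 0.0
        out[n] = s * 10.0 ** (-gr_db / 20.0)
    return out

def multiband_compress(x, fs, xover=(200.0, 2000.0)):
    """Split into low/mid/high, compress each band on its own terms, sum."""
    low = onepole_lowpass(x, xover[0], fs)
    lowmid = onepole_lowpass(x, xover[1], fs)
    mid, high = lowmid - low, x - lowmid
    # Slow attack down low so kick transients pass; faster up top.
    low_c = compress_band(low, fs, -20.0, 2.0, attack_ms=60.0, release_ms=300.0)
    mid_c = compress_band(mid, fs, -20.0, 1.5, attack_ms=10.0, release_ms=150.0)
    high_c = compress_band(high, fs, -20.0, 1.5, attack_ms=2.0, release_ms=80.0)
    return low_c + mid_c + high_c
```

Note the low band’s 60 ms attack – roughly two to three cycles of a 40 Hz kick fundamental – against the 2 ms attack up top; that per-band difference is the whole point of the topology.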

Monday, August 15, 2011

Analog Versus Digital Signal Processing (dynamics)




Limiters (peak limiters, protection circuits)


Most common is a digital plugin. Plugins tend to be much faster and cleaner, with less overshoot, than what you get in the analog domain.
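One concrete reason digital limiters can avoid overshoot entirely is lookahead: compute the gain from samples that haven’t reached the output yet, something an analog circuit cannot do. A minimal sketch of the idea – the ceiling and window length here are arbitrary illustration values, and a real limiter would also smooth the gain curve to avoid distortion:

```python
import numpy as np

def lookahead_limit(x, ceiling=0.5, lookahead=32):
    """Brickwall: the gain at sample n already 'sees' the next `lookahead`
    samples, so it is fully down before a peak arrives -- no overshoot."""
    pad = np.concatenate([x, np.zeros(lookahead)])
    out = np.empty_like(x)
    for n in range(len(x)):
        peak = np.max(np.abs(pad[n:n + lookahead + 1]))
        gain = min(1.0, ceiling / peak) if peak > 0.0 else 1.0
        out[n] = x[n] * gain
    return out
```

Because the window that sets the gain always includes the current sample, the output is mathematically guaranteed never to exceed the ceiling.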


Compressors


When you look at the equipment roster of a high-end mastering studio, you will more likely than not find a compressor among the studio’s analog gear. DSP (digital signal processing) plugins do an excellent job of recreating the dynamic-range control that happens in an analog compressor, yet they don’t quite seem to sound as good.


What could be the reasons for this? Let’s speculate a little.


When audio passes out into the analog domain, you get added distortion and noise. These are not necessarily characteristics that get programmed into the digital algorithms. As a result, you get a subtly different overall presentation of the sound.
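A crude sketch of the kind of “analog color” a plugin might choose to model: gentle saturation plus a low noise floor. The drive and noise figures are purely illustrative assumptions, not taken from any real circuit or plugin.

```python
import numpy as np

def analog_color(x, drive=1.5, noise_floor_db=-90.0, seed=0):
    """Toy analog-character model: tanh adds odd harmonics like a gently
    overdriven gain stage; the noise term stands in for the analog floor."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(len(x)) * 10.0 ** (noise_floor_db / 20.0)
    # normalize so a full-scale input still peaks near full scale
    return np.tanh(drive * x) / np.tanh(drive) + noise
```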


The detection circuit (the part that tells the compressor when to compress the audio passing through it) is what really drives the action of a compressor. Another possibility, proposed to me by George Massenberg, is that the sample rate for the detector circuit in a digital compressor needs to be much higher than the typical sample rates we use now, because of the nuances that appear at the output of a compression stage. 44.1 kHz may be sufficient for the audio passing through the compressor, but it may not present enough detail in the audio feeding the detection circuit for the compressor to do as good a job as its analog counterpart. It is speculation, but it is an interesting point to consider.
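Massenberg’s point can be put in sample terms. With an assumed detector attack of 0.1 ms (an illustration, not a figure from that conversation), CD rate gives the detector only a handful of samples to describe the transient, where a higher rate gives it several times more:

```python
# How many samples a 0.1 ms detector attack spans at each rate (illustrative).
fs_cd, fs_hi = 44100, 192000
t_attack = 0.0001                    # 0.1 ms, an assumed detector time constant
print(round(fs_cd * t_attack, 2))    # 4.41 samples at 44.1 kHz
print(round(fs_hi * t_attack, 2))    # 19.2 samples at 192 kHz
```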


A lot of the time you will find mastering engineers using an analog circuit not because they’re going to use an EQ to equalize, or a compressor to compress, but because there is something about the filtering that takes place when running audio through that analog gear that changes the sound in a desirable way. So, a great compressor may be used not to compress, but simply due to the tone-shaping sound that is imparted to the program. This seems to be a common factor missing from many current DSP equivalents.


Monday, July 11, 2011

Compression in Mastering (Part 3)

Welcome to Part 3, ‘Compression’, of the new video-blog series in which Jonathan Wyner of M-WORKS Mastering discusses various aspects of the mastering process. Let us know your thoughts, questions and opinions! Stay tuned for a new video and post next week.



Part 3 – Compression


How much compression to use?


Mastering engineers generally don’t use a lot of compression. If any compression is applied during the mastering process, it is usually very subtle: low ratios (1.2:1 to 2:1) with high thresholds, yielding around 2–3 dB of gain reduction at most, are common.
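The static math behind those numbers, with illustrative settings from the range just quoted (threshold at −10 dBFS is an assumed figure, not prescriptive):

```python
# Static gain-reduction curve of a compressor above threshold.
def gain_reduction_db(input_db, threshold_db, ratio):
    over = input_db - threshold_db
    return over * (1.0 - 1.0 / ratio) if over > 0 else 0.0

print(round(gain_reduction_db(-4.0, -10.0, 1.5), 2))  # 2.0 dB on a -4 dBFS peak
print(round(gain_reduction_db(-4.0, -10.0, 2.0), 2))  # 3.0 dB at a 2:1 ratio
print(gain_reduction_db(-12.0, -10.0, 1.5))           # 0.0 -- below threshold
```

With a high threshold, only the loudest few dB of program ever cross it, which is why these gentle ratios land in the 2–3 dB range.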


Compression and audio fidelity.


In an absolute audiophile sense, compression never sounds good! Compressing loses depth, adds noise and reduces dynamic range, all of which make a recording sound worse. To learn to use compression effectively, focus on whether it makes the music sound better. One needs to be able to differentiate between the music and the recording.


The idea of using compression – usually – is to reduce the dynamic range so that the different elements in an arrangement come through more clearly to the listener.


Should the mix engineer send a compressed or uncompressed 2-Mix?


If you are a more experienced mix engineer and/or you feel you’ve got the compression sounding just how you want it, then print the mix with the compression and send it to the mastering engineer (ME). Every compressor behaves and reacts differently, and the characteristic nuances that you (the artist and/or mixing engineer) have learned to love in the mix may not be so easily replicated by the ME.


However, if you’re nervous that your compressor is ‘misbehaving’, or you are unsure whether you are using too much compression, it is a good idea to send two versions of the mix. Send the ME both the uncompressed mix and the compressed mix for reference. This way, the ME can decide whether to improve the uncompressed mix or to work with your compressed mix and take it a step further!


Hope you enjoyed this. Please let me know your thoughts, and what you may like to see in future here on the blog.