Tuesday, May 8, 2012
Multiband compression
Sunday, April 8, 2012
Stereo Processing - Twice as nice?
I’d like to talk a little bit about stereo processing and mastering.
So what does it mean, ‘stereo processing’?
Maybe the place to start is to talk about what it means to think about stereo – there are really two different ways of thinking about it. One is left channel / right channel. You have two mono channels feeding two channels in an amplifier and two speakers, and the relative arrival time of a sound from one speaker or the other, or both at once, will determine its perceived direction. If you delay one speaker a little bit, everything will sound as if it is coming from the other speaker because of the precedence effect, or the Haas effect as we know it. So that’s one way of thinking about it – as dual mono.
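That precedence effect is easy to demonstrate in code. Here is a minimal sketch (the sample values and the 1 ms figure are just hypothetical numbers for illustration) of delaying one channel so that everything appears to come from the other speaker:

```python
# Sketch: shifting one channel later in time, as described above.
# At 44.1 kHz, a 1 ms delay is about 44 samples -- enough for the
# precedence (Haas) effect to pull the image toward the earlier channel.

def delay_channel(samples, delay_samples):
    """Prepend silence so this channel arrives later than the other."""
    return [0.0] * delay_samples + list(samples)

sample_rate = 44100
delay_samples = round(sample_rate * 1.0 / 1000)  # 1 ms -> 44 samples

left = [0.5, 0.4, 0.3]                      # arrives first
right = delay_channel(left, delay_samples)  # arrives 1 ms later
```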
The second way of thinking about it is something called M/S or Mid-Side processing. It is a style of processing audio that’s coming a little bit more into vogue in some plug-ins, compressors, equalizers and analog equipment that is being introduced to the market over the last several years. Going way back, it is best known as a microphone pickup technique.
The principle of M/S, or Mid-Side, is to think about everything that’s coming from both speakers at once (that’s common to both channels) as being one component of the sound, and everything that’s different – that’s arriving at a different time from one speaker or the other, the ‘difference channel’ – as being the other component. This is sometimes known as ‘A minus B’. If you want to listen to the difference signal in a mix or a recording, you can take the two channels, pan them both to mono, flip one of them out of phase, and what you end up hearing is the ‘difference signal’ – everything that’s not common to both channels in the recording.
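The sum-and-difference idea above can be written out directly. This is just a sketch of the arithmetic, not any particular plug-in’s implementation:

```python
# Mid = (L + R) / 2 (what's common to both channels),
# Side = (L - R) / 2 (the 'A minus B' difference signal).
# Decoding back is simply L = M + S, R = M - S.

def ms_encode(left, right):
    mid = [(l + r) / 2 for l, r in zip(left, right)]
    side = [(l - r) / 2 for l, r in zip(left, right)]
    return mid, side

def ms_decode(mid, side):
    left = [m + s for m, s in zip(mid, side)]
    right = [m - s for m, s in zip(mid, side)]
    return left, right

# A mono signal (identical L and R) has no side component at all:
mid, side = ms_encode([0.5, -0.2], [0.5, -0.2])
```

Note that the listening trick described above – panning both channels to mono and flipping one out of phase – is exactly the L minus R sum: you are listening to the side channel.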
So that’s a different way of thinking about the stereo field and the way that we think about processing stereo. What’s important, what’s interesting, what’s meaningful about this?
If you think about it, when you are listening to two speakers, in order to determine direction you are relying very heavily on this concept of phase, or arrival time shall we say. If something shifts in the arrival time between two speakers, then suddenly you’ll have a very different sense of its location. If we’re processing the left channel and the right channel separately and we introduce an equalizer that’s not a linear-phase equalizer, for instance, there’s a strong possibility that an instrument or a part of the spectrum is going to change its perspective in the stereo image. If that’s what’s desired – great – maybe that’s a desirable effect. But more often than not, we want to do the same thing to both channels at the same time in order to keep the phase coherency between the two channels and maintain the sense of stereo image. What’s important about stereo image is not just the localization of an instrument – not just that the trumpet is coming from right field – but also that the integrity of the reverb stays the same, and that the width of the stereo field and the sense of openness, space and top end remain the same as what was present in the mix when it came into the mastering studio.
M/S Paradigm
Processing in the mid-side paradigm has a similar kind of concern about it. In other words if you change the arrival time, the phase relationship between the mid and the side channel, you are likely to drastically change the sense of space in a recording. If you exaggerate the side component too much, you are also likely to find that everything that was in the center of the mix might begin to recede for the listener. I think most people would agree that the stuff that appears in the center of the stereo image is very often the most important stuff. For a Pop mix, it’s usually the kick drum, the snare drum, the vocal, the bass – the main players in a mix. So you have to be careful about how much you change things in the mid channel and the side channel with respect to each other.
What to watch for with stereo processing
As you start to think about mid-side, it’s important to note that there are some other areas of audio processing – playing in the M/S sandbox, if you will – where this way of thinking becomes important.
The first thing I want to point to is the stereo compressor. If you are feeding the left channel and the right channel separately into the two sides of a stereo compressor and you allow the detector circuit for each side to behave independently, you can end up with some very strange behavior when listening to the output of a stereo mix. If, for instance, the drummer hits a rack tom that appears in one speaker and jumps out of the mix a little bit, one channel of the compressor might be compressing heavily while the other hardly compresses at all, and the whole stereo image can begin to steer in one direction or another. So most of the time when we are compressing a stereo mix, we’ll work in a mode where the two channels are linked together and the compressor is really paying attention primarily to the mid component of the mix and not so much to the sides.
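Here is a rough sketch of what a linked detector does. The threshold, ratio and sample values are hypothetical, and the gain computation is a simple static curve rather than a model of any real compressor:

```python
import math

def compress_gain(level, threshold_db, ratio):
    """Static-curve gain (linear) for a peak level above threshold."""
    level_db = 20 * math.log10(max(level, 1e-9))
    if level_db <= threshold_db:
        return 1.0
    out_db = threshold_db + (level_db - threshold_db) / ratio
    return 10 ** ((out_db - level_db) / 20)

def linked_stereo_compress(left, right, threshold_db=-10.0, ratio=2.0):
    """Both channels share one detector, so the image cannot steer."""
    out_l, out_r = [], []
    for l, r in zip(left, right):
        # Linked detector: one gain for both sides, driven by the louder one
        g = compress_gain(max(abs(l), abs(r)), threshold_db, ratio)
        out_l.append(l * g)
        out_r.append(r * g)
    return out_l, out_r

# A rack tom hit that only appears in the left channel:
out_l, out_r = linked_stereo_compress([1.0], [0.1])
```

Because the same gain is applied to both channels, the left/right balance of that hit is preserved; with independent detectors, the left channel alone would be turned down and the image would steer toward the right.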
How codecs affect stereo processing
In a similar sense, MP3 encoding is based on the idea that you want to do as little processing as possible to the most important part of the signal – the mid, the thing that is common to both channels – and throw away as much as possible of the “less important” information: low-level signals, out-of-phase signals, and very high-frequency and very low-frequency content that is out of phase. So if you create a mix where you exaggerate the side information and de-accentuate the center, you may find when you pass the audio through an MP3 encoder that you hear a much more pronounced effect from the encoder on your mix.
So there again, you want to be careful about exaggerating the side channel too much, exaggerating the sense of space too much because given that we are all having to deliver things with lossy codecs (MP3, AAC), you may end up with some unintended consequences of your processing.
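As a rough diagnostic, you can measure how hot the side channel is relative to the mid before sending a master off for encoding. This is just a rule-of-thumb sketch – the point at which any given codec starts to struggle is not something this code knows about:

```python
import math

def rms(samples):
    """Root-mean-square level of a list of samples."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def side_to_mid_db(left, right):
    """Level of the side (difference) channel relative to the mid, in dB."""
    mid = [(l + r) / 2 for l, r in zip(left, right)]
    side = [(l - r) / 2 for l, r in zip(left, right)]
    # Floor the side RMS so a pure mono signal doesn't divide log by zero
    return 20 * math.log10(max(rms(side), 1e-12) / rms(mid))
```

A mix with completely unrelated channels measures around 0 dB (side as loud as mid), while a mono mix measures far into the negatives. If the number creeps toward 0 dB after processing, the side channel has probably been exaggerated.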
Wednesday, August 24, 2011
Metadata, ISRC, UPC and QC
What is metadata?
Any information that is included within a program, whether it is for a download or creating a disc, that is not the program itself. It is embedded within the digital file (as a download or when burned to disc). Examples are track IDs, start and stop IDs, ISRCs, UPCs, and CD Text information. One of the things that the mastering engineer is responsible for is understanding what these are and including all this information in a master.
What is an ISRC?
It stands for International Standard Recording Code. It is a code allocated to any publisher (record label, artist, or anybody that owns a catalogue of music). It is registered in the USA with the RIAA (Recording Industry Association of America) and in Europe with GEMA (a performance rights organisation). The code is a unique identifier that gets attached to every single piece of music (each song within a record has its own ISRC). Any time the music is played over the air, downloaded or streamed, the identifier is logged. This is vital in the payment process.
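A small sketch for checking that an ISRC at least has the right shape – a two-letter country code, a three-character alphanumeric registrant code, a two-digit year of reference, and a five-digit designation code. This only checks the format; it cannot tell you whether a code is actually registered:

```python
import re

# 12 characters total, often written with dashes: CC-XXX-YY-NNNNN
ISRC_PATTERN = re.compile(r"^[A-Z]{2}[A-Z0-9]{3}\d{2}\d{5}$")

def is_valid_isrc(code):
    """True if the code matches the standard ISRC format."""
    return bool(ISRC_PATTERN.match(code.replace("-", "").upper()))
```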
What is a UPC?
It stands for Universal Product Code. It is a number assigned to a product. Traditionally this has been a physical item, such as a cereal box in a grocery store, which has a bar code (and a correlating number) to scan at the checkout to identify the product. The same is true of CDs or DVDs. But UPCs are also used to track downloads in some cases, so you should register one and include it in your product.
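The bar-code number itself carries a simple checksum. Here is a sketch of the standard UPC-A check-digit calculation, handy for catching a mistyped code before it goes on a master:

```python
def upc_check_digit(first_11_digits):
    """Check digit for a 12-digit UPC-A code, computed from the first 11."""
    digits = [int(c) for c in first_11_digits]
    odd = sum(digits[0::2])   # 1st, 3rd, ..., 11th digit
    even = sum(digits[1::2])  # 2nd, 4th, ..., 10th digit
    # The check digit rounds 3*odd + even up to the next multiple of 10
    return (10 - (3 * odd + even) % 10) % 10

def is_valid_upc(code):
    return (len(code) == 12 and code.isdigit()
            and int(code[-1]) == upc_check_digit(code[:11]))
```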
What is QC?
This is what is known as Quality Check. Mastering is the final process before distribution. As a result, it is the mastering engineer’s job to make sure that there are absolutely no flaws in the program (a dropout or a click for example). The mastering engineer should give the client assurance that there is no problem with the audio.
Monday, August 22, 2011
On Stereo-Imaging
Stereo-imaging tools are often included in DIY mastering packages, or in equalisers with stereo spreading facilities (most commonly offering some kind of mid-side processing option), so it is important to understand their purpose and their limits.
As with many of the specialist tools we have for processing audio, they are great at solving specific problems. If you have something mono or largely in mono, for example, and you need to try and widen it, you can add reverb or perhaps exaggerate the little stereo information that already exists in the track.
But what exactly is the ‘stereo information’?
Well, it has to do with the relationship between the ‘out-of-phase’ information, and the ‘in-phase’ information. Anything that is in-phase happens at exactly the same time in both channels, and that information will appear to be centered. If anything is slightly delayed off to one side or the other, compared to the center of the image, it is ‘out-of-phase’ and is one of the things that creates a stereo sense of spread.
So when you are mixing, you are using pan pots, delays, reverbs and so on to create a stereo image of individual elements within an overall stereo mix. However, when you go in during mastering and increase the out-of-phase component, you are changing the relationship between the out-of-phase and in-phase parts of the entire mix. You are therefore able to radically change the sense of the stereo image and the placement of each individual instrument in a song, which can be very dangerous if not treated with care. And although you may increase the perceived wideness, this comes at the expense of the in-phase components in the song. That is to say, the elements that are dead-center – which also tend to be the most important elements of most productions: vocals, snare, kick, bass – are weakened.
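Exaggerating the out-of-phase component boils down to scaling the side channel before decoding back to left/right. A minimal sketch (the width values are hypothetical examples):

```python
def adjust_width(left, right, width):
    """width = 1.0 leaves the mix alone, < 1.0 narrows it,
    > 1.0 exaggerates the side channel (use with caution)."""
    out_l, out_r = [], []
    for l, r in zip(left, right):
        mid = (l + r) / 2
        side = (l - r) / 2 * width
        out_l.append(mid + side)
        out_r.append(mid - side)
    return out_l, out_r
```

A width of 0 collapses the mix to mono, and widths much above 1 weaken the dead-center elements exactly as described above, since the mid component stays fixed while the side grows around it.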
So sometimes – in its various forms – stereo-imaging can be used to good effect. However, one should err on the side of caution.
Friday, July 29, 2011
Should I (the Artist) Attend The Mastering Session
Having the artist attend the session can often be useful for both parties, and make for effective and fast communication. However, it is not absolutely essential that the artist attends the session and it is commonplace that mastering sessions and communications are dealt with via FTP (file transfer protocol) and email/telephone respectively.
What If I Can’t Attend?
As mentioned, it is not essential that the artist attends the mastering session. There are certain things that should be provided to the mastering engineer, however:
· The sequence – This is the song order of your album. There are many possible orders and this should be decided and given to the mastering engineer before the session.
· Notes – any questions and concerns you may have. For example, perhaps you feel one song is too quiet, or the vocal is not quite bright enough, or you want a warmer and darker master. Let the mastering engineer know this, as he will otherwise take his lead from what he is hearing and assume that it is the creative intention of the artist and producer.
Wednesday, July 27, 2011
The Cost and Value of Mastering
What can one realistically expect to pay for mastering?
The range is huge and one tends to get what one pays for. Here is a good guide (for a 10-12 track record):
· $200-$300 - Cheaper options /mainly online
· $700-$2000 - An engineer with a fair amount of experience; this usually allows for a full day in the mastering studio and a couple of revisions.
· $2000 + - usually multiple days in the mastering studio, or one of the top mastering engineers around (Bob Ludwig, Doug Sax)
Fitting Mastering Into The Budget
Mastering is the final stage of the record-making process, and it is by no means the least important. If one considers all the time spent making the record, the money for the recording, the mixing, hiring musicians, and the number of CDs/downloads to be sold (etc), paying a little more for quality mastering amounts to not much extra cost per unit. One should try to budget for this at the beginning stages of the recording process, even though mastering is the final step!
Monday, July 11, 2011
Compression in Mastering (Part 3)
Welcome to Part 3: 'Compression' of the new video-blog series in which Jonathan Wyner of M-WORKS Mastering will be discussing various aspects of the mastering process. Let us know your thoughts, questions and opinions! Stay tuned for a new video and post next week.
Part 3 – Compression
How much compression to use?
Mastering engineers generally don’t use a lot of compression. If any compression is applied during the mastering process, it is usually very subtle. Low ratios (1.2:1 to 2:1) with high thresholds that yield around 2-3 dB of gain reduction – at most – are common.
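Those numbers are easy to sanity-check on a static compression curve. This sketch ignores attack and release behavior and just applies the textbook input/output relationship:

```python
def gain_reduction_db(input_db, threshold_db, ratio):
    """Gain reduction (in dB) from a static compression curve."""
    if input_db <= threshold_db:
        return 0.0
    over = input_db - threshold_db
    # Input overshoot minus the (smaller) output overshoot above threshold
    return over - over / ratio

# A peak 4 dB over threshold at 2:1 is turned down by 2 dB;
# the same peak at a gentle 1.2:1 is turned down by well under 1 dB.
```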
Compression and audio fidelity.
In an absolute audiophile sense: compression never sounds good! When compressing one loses depth, gains noise and loses dynamic range, all of which make a recording sound worse. To learn to use compression effectively, one should focus on whether it makes the music sound better. One needs to be able to differentiate between the music and the recording.
The idea of using compression – usually – is to reduce the dynamic range so that the different elements in an arrangement come through more clearly to the listener.
Should the mix engineer send a compressed or uncompressed 2-Mix?
If you are a more experienced mix engineer and/or you feel like you’ve got the compression sounding just how you want it, then print the mix with the compression and send it to the mastering engineer (M.E). Every compressor behaves and reacts differently, and those characteristic nuances that you (the artist and/or mixing engineer) have learned to love in the mix may not be so easily replicated by the M.E.
However, if you’re nervous that your compressor is ‘misbehaving’ or you are unsure whether you are using too much compression, it is a good idea to send two versions of the mix. Send the M.E the uncompressed mix and the compressed mix so that he has both for reference. This way, the M.E will be able to decide if he can improve the uncompressed mix or work with your compressed mix and take it a step further!
Hope you enjoyed this. Please let me know your thoughts, and what you may like to see in future here on the blog.
Tuesday, July 5, 2011
Equalisation in Mastering (Part 2)
This is Part 2: 'Equalization' of the new video-blog series in which Jonathan Wyner of M-WORKS Mastering will be discussing various aspects of the mastering process. Let us know your thoughts, questions and opinions! Next week, compression.
Part 2: Equalization
Why were equalizers created?
Equalizers were invented to compensate for deficiencies in recording mediums (for example, to increase intelligibility over phone-lines). This idea of a corrective equalizer is very much at play in mastering.
An example is if a mixing engineer is perhaps mixing in an overly dull environment. In this case, he will produce overly bright mixes (to compensate). It is then the mastering engineer’s job to try and figure out the inverse EQ to get the mixes sounding more like the mix engineer thought they sounded.
To Cut? Or to Boost?
I think mastering engineers in general find themselves cutting more than boosting.
Listen for areas that sound cloudy, or that contain unpleasant harmonic content and don’t contain much of the fundamental frequency of the instrument. These areas can be gently and carefully carved out.
Older-style equalizers tend to have narrow-bandwidth cuts and broader-bandwidth boosts. This tends to sound better and is a safe, general rule to follow when EQ’ing.
Are there common areas you (the Mastering Engineer) find yourself working on?
There are no set rules. However, if you find yourself doing the same thing for each master you work on – you may be compensating for a deficiency in your room/listening environment. So try to be aware of this.
There are a few common areas that one can focus on, though:
· Usually some clearing out (cutting) can be done in the low-midrange (focus on the relationship with the bass and the vocal, or try to reveal the bass more clearly for example).
· Low-frequency information also tends to be a common area that requires attention at mastering (focus on the relationship between the kick drum and the bass, for example).
· Use small adjustments, and constantly check back with the original. The goal is simply to make the recording sound better! If you improve it, even slightly, then you are doing well!
Small EQ moves to make Big changes.
Most of the boosts and cuts that I am doing are no more than 0.5-1 dB. The reasons for that are:
· You are working with a complex waveform that is a balanced recording. Thus, big changes are likely to alter the balance in a way that may not reflect the artist’s intention.
· An EQ filter sounds better – that is, it has less distortion and less ringing – if you use broad bandwidths (‘Q’s) and are making small moves (in dBs) with it.
So sometimes in mastering you will use up to 12 different EQ filters, but each one will be doing just a little bit. That is pretty typical of a mastering engineer’s use of an equalizer.
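The ‘broad Q, small dB’ point can be seen in a filter implementation. Here is a sketch of a standard peaking-EQ biquad (the coefficient formulas follow the widely used Audio EQ Cookbook form; the sample rate, center frequency and Q are just example values):

```python
import cmath
import math

def peaking_eq_coeffs(sample_rate, center_hz, q, gain_db):
    """Biquad (b, a) coefficients for a peaking EQ filter."""
    amp = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * center_hz / sample_rate
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * amp, -2 * math.cos(w0), 1 - alpha * amp]
    a = [1 + alpha / amp, -2 * math.cos(w0), 1 - alpha / amp]
    return b, a

def magnitude_db(b, a, sample_rate, freq_hz):
    """Filter magnitude response at one frequency, in dB."""
    z = cmath.exp(1j * 2 * math.pi * freq_hz / sample_rate)
    h = (b[0] + b[1] / z + b[2] / z ** 2) / (a[0] + a[1] / z + a[2] / z ** 2)
    return 20 * math.log10(abs(h))

# A broad (Q = 0.7), gentle (-0.5 dB) cut at 300 Hz -- a typical
# low-midrange clearing-out move of the kind described above.
b, a = peaking_eq_coeffs(44100, 300, 0.7, -0.5)
```

With a broad Q and a small gain, the response barely deviates from flat outside the target region, which is exactly why many such filters can be stacked without wrecking the balance of a mix.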
Join us next week for Compression!
Monday, June 27, 2011
The Mastering Mindset (Part 1)
Part 1: The Mastering Mindset
What is mastering?
· It is the last creative step, and the first step in distribution.
· It’s the last chance to catch any mistakes. When it leaves mastering, there cannot be any flaws in the master.
· It needs to function well in all common formats: mp3s, CDs etc.
· Getting it ready for distribution in these formats is key.
What are the responsibilities of the mastering engineer?
· In the case of a whole album, to take all the disparate pieces and unify them sonically (level and tone-wise).
· Pacing between the tracks should reflect the mood of the songs and allow each one to breathe or run-on to each other, depending on the desired effect.
· Enhance the sound – create a more open sound, a deeper sound, a fuller sound or a warmer sound etc – in a way that benefits the sound of the record.
· Sometimes you can enhance the dynamic range by turning up some sections and turning down others, or reduce the dynamic range to create a louder master.
· Thus, the mastering engineer has some creative input. However, there is only so much that he can do in mastering, so the mastering engineer relies on the mixing engineer to do a good job and get the mix as close as possible to the desired sound.
What mastering isn’t...
· Mastering is not about making everything bright and loud!
· In this overcrowded and noisy world, the temptation to create a louder and brighter mix in the hope of drawing attention is strong. However, such recordings tend to be hard to listen to for a sustained period of time, and people may be reluctant (consciously or subconsciously) to come back and listen to them again and again.
Knowing what the artist wants…
· It’s important to know what the artist wants, so that you don’t end up going in a different direction.
· Have a discussion with the artist. You are more likely to keep him happy and achieve his vision. Sometimes, indeed, you learn from doing something you may not have considered before.
· At the end of the day, it’s art. And there are no recipes for what it ‘has’ to sound like. Always try to support the song’s meaning and artist’s vision.
Different styles of music...
· There will always be similarities – there is such a thing as ‘too much bass’ or ‘too much treble’ regardless of the style.
· However, different styles of music will require a different approach. For example, you want the low-end to lead in reggae, you want a wide stereo-image with depth (hearing into the reverb tails) and aggression through guitars with metal, a wide and true dynamic range with a classical recording...
· So, it is important to be informed about different styles of music and what each style wants.
What are the prerequisites for mastering at a high level?
A great monitoring system.
· Relatively neutral – no part of the spectrum is exaggerated.
· It’s phase accurate – the sound from each speaker arrives at the listener at the same time.
· Low distortion – so that nothing is being introduced by the monitoring system that is not in the recording. It also allows for longer listening periods without much fatigue.
A room that can support the monitoring system
· A quiet environment – so that you can ensure what you are hearing is directly from the speakers, and not ambient sound pollution.
· Size – the room needs to be large enough to allow the low-frequency information to be heard properly, for the waves to propagate.
What if I’m mastering at home?
· Get a good pair of headphones, with good low-frequency response.
· Listen to a lot of recordings you know and like, and become accustomed to the way that those sound in your listening environment(s).
· Have multiple monitoring environments so that you can get different points of reference (computer speakers, headphones, studio monitors, car speakers etc).
· Different speaker-systems will exaggerate different parts of the sound differently – so at least you’re not hearing a single distortion of the sound.