All about producing and mastering audio for disc, the web and beyond

Friday, June 7, 2013

Book is out, and landing page is ready so...

Audio Mastering: Essential Practices is now on the shelves of the internet near you.  I sincerely hope some find it interesting and informative.  I've attempted to give some perspective on what mastering is (at least as I see it).  In my mind it isn't mysterious or magical, but it IS a discipline that requires practice and understanding.  This is my contribution toward pointing people in that direction.

If you are so inclined, I would be pleased to see reviews posted on Amazon and elsewhere, and happy to receive feedback.

Here's a short introduction:


Saturday, March 23, 2013

Dynamic Range Day 2013 - What is Loud?

Once again the folks from the Dynamic Range Day website are doing the good work of making people aware of the damage done by applying hyper-compression and limiting to audio mixes, and reminding everyone that they have a level control on their playback system! I applaud those efforts.

Meanwhile, changes in playback tools and platforms are creating a 'level playing field' when it comes to matching the loudness of various material. Those changes will require mix engineers to re-evaluate their work and reconsider what goes into making a track sound 'good and loud'. Back in the day, before the advent of the digital limiter, it was awfully clear who was good at making loud records: hyper-compression was obvious and brickwall limiting sounded terrible. I suspect we're heading back in that direction, so here's a video with a brief overview of the subject. There's much more to be said, of course... What makes a mix good and loud?

Sunday, March 18, 2012

Reverb and Mastering



Reverb

Today, I’d like to talk a little bit about using reverb in mastering.

Some applications for reverb in mastering are, I think, quite obvious; others are a little less so, and some of you may be surprised to learn that reverb comes into play in mastering at all.

Application One – Reverb Tails

The first is using reverb to lengthen the tail of a song that's been cut off. One of my pet peeves is the overeager engineer, artist or producer who decides to save a little time in the mastering studio by cutting the very beginning or the very end of a song closely, so that it doesn't have to be done in the mastering studio. They think they're going to save themselves money in the mastering session, but invariably they cut the beginning too close, or they clip something at the very end because their studio is noisy or they aren't listening carefully, and suddenly you end up with a song where you wish the ending could go on another couple of seconds. Or there might be a noise at the end of a mix, and you have to pull a fade on the tail to get a ringing note down before the noise comes in and interrupts the listener's attention. In any of these cases we might use reverb to extend the tail, and it's a pretty simple thing to apply reverb just to the very end, have it gradually rise as the last note decays, and create the sense of a natural extension of the last note for as long as need be.
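Here's a rough sketch of that move in code, using Python with Spotify's pedalboard library and numpy. The file name, the two-second tail, and every setting are placeholders – a starting point for experiment, not my actual process:

```python
# Sketch: extending a cut-off tail with reverb (hypothetical settings).
# Requires: pip install pedalboard numpy
import numpy as np
from pedalboard import Reverb
from pedalboard.io import AudioFile

with AudioFile("mix.wav") as f:          # placeholder input file
    sr = f.samplerate
    audio = f.read(f.frames)             # shape: (channels, samples)

tail_len = int(2.0 * sr)                 # treat the last 2 s as the "tail"
tail = audio[:, -tail_len:]

# 100% wet reverb on just the ending; pad with silence so it can ring out.
pad = np.zeros((audio.shape[0], int(3.0 * sr)), dtype=audio.dtype)
wet = Reverb(room_size=0.6, wet_level=1.0, dry_level=0.0)(
    np.concatenate([tail, pad], axis=1), sr
)

# Fade the wet signal in as the last note decays, so the reverb
# "rises" under the ending rather than appearing abruptly.
fade_in = np.linspace(0.0, 1.0, tail_len, dtype=np.float32)
wet[:, :tail_len] *= fade_in

out = np.concatenate([audio, pad], axis=1)
out[:, -wet.shape[1]:] += wet * 0.5      # blend to taste

with AudioFile("mix_extended.wav", "w", sr, out.shape[0]) as f:
    f.write(out)
```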

Application Two – Acoustic Space

The second place reverb comes into play in mastering is when you're trying to match the sound of two recordings made in two different acoustic spaces – one might be dry and one might be ambient – that are going to be included on a single album's worth of material. This comes up most often when I'm working with material from various orchestras around the world. I've received recordings made in Seattle, Bratislava, Warsaw and Prague, and each engineering crew has its own aesthetic about how much reverb or ambience they like to let creep into the sound of the recording. In some cases they're bound by the dimensions of the space the orchestra is playing in.

So if I'm doing a record of a single composer's works that was recorded in multiple places, I might apply some reverb to some of the recordings to bring them into a similar sonic universe when going from one to the next.

Application Three – Creating A Sense Of Depth

The third instance where reverb might come into play in mastering is when I want to create a little bit of a sense of depth in a recording. There are moments when I've done everything I can think of to make a recording sound better with EQ and compression, and I just want a very slight sense of warmth and a sort of widening and deepening of the soundstage. More EQ is just making it worse, more compression is just making it worse, so I'll try adding a hint of a very short, not very bright bit of reverb to the sound of the recording.

My recipe is usually to roll off the top end of the reverb, setting the roll-off somewhere around 2.5 or 3 kHz, with a decay of about 2/3 of a second and just a little touch of it – sometimes that gives me just the sense of depth I'm after in a very natural way, a way that an EQ or a compressor cannot.
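For those who like to experiment, here's a rough sketch of that recipe using Python's pedalboard library. pedalboard's Reverb has no explicit decay-time control, so room_size stands in for the ~2/3-second decay; all values are starting points, not gospel:

```python
# Sketch: a dark, short "depth" reverb, roughly following the recipe above.
# Requires: pip install pedalboard
from pedalboard import Pedalboard, Mix, Chain, Reverb, LowpassFilter, Gain
from pedalboard.io import AudioFile

board = Pedalboard([
    Mix([
        Gain(gain_db=0.0),                            # dry path, untouched
        Chain([
            Reverb(room_size=0.25,                    # short decay (proxy for ~0.66 s)
                   wet_level=1.0, dry_level=0.0),     # reverb only on this path
            LowpassFilter(cutoff_frequency_hz=3000),  # roll off above ~3 kHz
            Gain(gain_db=-24.0),                      # "just a little touch"
        ]),
    ]),
])

with AudioFile("mix.wav") as f:                       # placeholder file name
    audio, sr = f.read(f.frames), f.samplerate

with AudioFile("mix_depth.wav", "w", sr, audio.shape[0]) as out:
    out.write(board(audio, sr))
```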

You have to be careful, though: if you add reverb to a heavy metal tune, a punk rock tune or anything else that really needs to maintain its immediacy and edge, reverb will usually soften the general sense of the program. But it can come in handy.

Monday, August 8, 2011

CDR Quality - Or Lack Thereof

While we're on the subject of good sound (how's THAT for a non sequitur?!), there's something slowly creeping into the world of music production: poor-quality CD-Rs.

While I don't want to get into a long discussion of the CIRC (Cross-Interleaved Reed-Solomon) error correction that's built into CD playback, you should know that the CD format was designed to tolerate errors. In some cases it will fix errors on playback perfectly. In other cases, if it can't reconstruct the data, it will 'approximate' the data. The implication is that when you play a CD, you don't know whether you are hearing exactly what was recorded to it. The difference is usually very subtle, and arguably fine for most consumers, but not so fine for those of us who work hard to craft recordings.
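As a toy illustration of what 'approximating' can mean – this is error concealment by interpolation, a gross simplification and not the actual CIRC decoder:

```python
# Toy sketch of error concealment: when a sample can't be corrected,
# a player may interpolate between its neighbors. Real CIRC decoding
# works on interleaved codewords; this only shows the idea.
import numpy as np

def conceal(samples: np.ndarray, bad: np.ndarray) -> np.ndarray:
    """Replace samples flagged uncorrectable with linear interpolation."""
    out = samples.astype(float).copy()
    idx = np.arange(len(out))
    out[bad] = np.interp(idx[bad], idx[~bad], out[~bad])
    return out

audio = np.array([0.0, 0.5, 0.9, 0.1, -0.4, -0.8, -0.3])
bad = np.zeros(len(audio), dtype=bool)
bad[3] = True                  # pretend sample 3 was unrecoverable
print(conceal(audio, bad))     # sample 3 becomes 0.25, midway between 0.9 and -0.4
```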

This problem is not so prevalent with replicated (pressed) CDs, but more so with duplicated (burned) CD-Rs... and it's getting worse. At my studio we routinely check every disc that's intended as a master, and in the last six months we have noticed a significant decline in the quality of burns to disc. We can still get a workable master, but sometimes we run into a batch of discs that are unusable. We wouldn't know it if we didn't test, and that makes me wonder how many people are making discs that malfunction in ways they would find unacceptable... if they knew.

The answer is probably a move away from Red Book audio toward full-resolution data transfer from local servers for consumers and studios alike... but for now we check our discs carefully, and make DDP masters when we can.

Wednesday, July 27, 2011

The Cost and Value of Mastering





What can one realistically expect to pay for mastering?


The range is huge and one tends to get what one pays for. Here is a good guide (for a 10-12 track record):

· $200-$300 - Cheaper options, mainly online

· $700-$2000 - An engineer with a fair amount of experience; this usually allows for a full day in the mastering studio and a couple of revisions.

· $2000+ - Usually multiple days in the mastering studio, or one of the top mastering engineers around (Bob Ludwig, Doug Sax)


Fitting Mastering Into The Budget


Mastering is the final stage of the record-making process, and it is by no means the least important. If one considers all the time spent making the record, the money for recording and mixing, hiring musicians, and the number of CDs/downloads to be sold (etc.), paying a little more for quality mastering adds very little cost per unit. One should try to budget for this at the beginning of the recording process, even though mastering is the final step!
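To put some purely hypothetical numbers on that: a $1,500 mastering session spread across a 1,000-copy run adds $1.50 per unit; across 10,000 units, just 15 cents.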

Monday, July 11, 2011

Compression in Mastering (Part 3)

Welcome to Part 3: 'Compression' of the new video-blog series in which Jonathan Wyner of M-WORKS Mastering will be discussing various aspects of the mastering process. Let us know your thoughts, questions and opinions! Stay tuned for a new video and post next week.



Part 3 – Compression


How much compression to use?


Mastering engineers generally don’t use a lot of compression. If any compression is applied during the mastering process, it is usually very subtle: low ratios (1.2:1 to 2:1) with high thresholds, yielding around 2-3 dB of gain reduction at most.
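In code terms, those sorts of settings might look something like this sketch (using Python's pedalboard library as a stand-in for whatever hardware or plugin is actually on hand; every value is illustrative):

```python
# Sketch: gentle mastering-style compression - low ratio, high threshold,
# aiming for only a couple of dB of gain reduction on the loudest passages.
# Requires: pip install pedalboard
from pedalboard import Pedalboard, Compressor, Gain
from pedalboard.io import AudioFile

board = Pedalboard([
    Compressor(
        threshold_db=-12.0,   # high threshold: only the peaks are touched
        ratio=1.5,            # well inside the 1.2:1 to 2:1 range
        attack_ms=30.0,       # slow enough to let transients through
        release_ms=250.0,
    ),
    Gain(gain_db=1.0),        # make up the level lost to gain reduction
])

with AudioFile("mix.wav") as f:                     # placeholder file
    audio, sr = f.read(f.frames), f.samplerate

with AudioFile("mix_compressed.wav", "w", sr, audio.shape[0]) as out:
    out.write(board(audio, sr))
```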


Compression and audio fidelity.


In an absolute audiophile sense: compression never sounds good! When compressing one loses depth, gains noise and loses dynamic range, all of which make a recording sound worse. To learn to use compression effectively, one should focus on whether it makes the music sound better. One needs to be able to differentiate between the music and the recording.


The idea of using compression – usually – is to reduce the dynamic range so that the different elements in an arrangement come across more clearly to the listener.


Should the mix engineer send a compressed or uncompressed 2-Mix?


If you are a more experienced mix engineer and/or you feel you’ve got the compression sounding just how you want it, then print the mix with the compression and send it to the mastering engineer (M.E.). Every compressor behaves and reacts differently, and those characteristic nuances that you (the artist and/or mix engineer) have learned to love in the mix may not be so easily replicated by the M.E.


However, if you’re nervous that your compressor is ‘misbehaving’, or you are unsure whether you are using too much compression, it is a good idea to send two versions of the mix. Send the M.E. both the uncompressed and the compressed mix for reference. This way, the M.E. can decide whether to improve on the uncompressed mix, or to work with your compressed mix and take it a step further!


Hope you enjoyed this. Please let me know your thoughts, and what you may like to see in future here on the blog.

Tuesday, July 5, 2011

Equalisation in Mastering (Part 2)

This is Part 2: 'Equalization' of the new video-blog series in which Jonathan Wyner of M-WORKS Mastering will be discussing various aspects of the mastering process. Let us know your thoughts, questions and opinions! Next week, compression.



Part 2: Equalization


Why were equalizers created?


Equalizers were invented to compensate for deficiencies in recording and transmission media (for example, to increase intelligibility over phone lines). This idea of a corrective equalizer is very much at play in mastering.


For example, a mix engineer working in an overly dull environment will produce overly bright mixes (to compensate). It is then the mastering engineer’s job to figure out the inverse EQ that gets the mixes sounding the way the mix engineer thought they sounded.


To Cut? Or to Boost?


I think mastering engineers in general find themselves cutting more than boosting.

Listen for areas that sound cloudy, or that contain unpleasant harmonic content and don’t contain much of the fundamental frequency of the instrument. These areas can be gently and carefully carved out.


Older-style equalizers tend to have narrow-bandwidth cuts and broader-bandwidth boosts. This tends to sound better and is a safe, general rule to follow when EQ’ing.
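As a quick sketch of that rule of thumb (using Python's pedalboard library; the frequencies, gains and Qs are made up for illustration, not a recipe):

```python
# Sketch: narrow cut, broad boost, with pedalboard's PeakFilter.
# Requires: pip install pedalboard
from pedalboard import Pedalboard, PeakFilter

board = Pedalboard([
    # Narrow cut: carve out a cloudy spot without disturbing its neighbors
    PeakFilter(cutoff_frequency_hz=315.0, gain_db=-1.5, q=4.0),
    # Broad, gentle boost: a low Q spreads the lift over a wide area
    PeakFilter(cutoff_frequency_hz=8000.0, gain_db=1.0, q=0.7),
])
```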


Are there common areas you (the Mastering Engineer) find yourself working on?


There are no set rules. However, if you find yourself doing the same thing on every master you work on, you may be compensating for a deficiency in your room/listening environment. So try to be aware of this.


There are a few common areas that one can focus on, though:


· Usually some clearing out (cutting) can be done in the low midrange (focus on the relationship between the bass and the vocal, or try to reveal the bass more clearly, for example).

· Low-frequency information also tends to be a common area that requires attention at mastering (focus on the relationship between the kick drum and the bass, for example).

· Use small adjustments, and constantly check back with the original. The goal is simply to make the recording sound better! If you improve it, even slightly, then you are doing well!


Small EQ moves to make Big changes.


Most of the boosts and cuts that I make are no more than 0.5-1 dB. The reasons for that are:


· You are working with a complex waveform that is a balanced recording. Thus, big changes are likely to alter the balance in a way that may not reflect the artist’s intention.

· An EQ filter sounds better – that is, it has less distortion and less ringing – if you use broad bandwidths (low ‘Q’ values) and make small moves (in dB) with it.


So sometimes in mastering you will use up to 12 different EQ filters, but each one will be doing just a little bit. That is pretty typical of a mastering engineer’s use of an equalizer.
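A sketch of what such a chain might look like (again using Python's pedalboard library; every frequency, gain and Q below is purely illustrative):

```python
# Sketch: many tiny, broad moves rather than one big one. Each filter
# changes the balance by 1 dB or less. Requires: pip install pedalboard
from pedalboard import (Pedalboard, PeakFilter,
                        LowShelfFilter, HighShelfFilter)

board = Pedalboard([
    LowShelfFilter(cutoff_frequency_hz=60.0, gain_db=0.5, q=0.7),
    PeakFilter(cutoff_frequency_hz=120.0, gain_db=-0.5, q=1.0),   # kick/bass relationship
    PeakFilter(cutoff_frequency_hz=300.0, gain_db=-0.5, q=1.0),   # low-mid clearing
    PeakFilter(cutoff_frequency_hz=1500.0, gain_db=0.5, q=0.8),
    PeakFilter(cutoff_frequency_hz=4000.0, gain_db=-0.5, q=1.2),
    HighShelfFilter(cutoff_frequency_hz=10000.0, gain_db=0.5, q=0.7),
])
```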


Join us next week for Compression!

Wednesday, December 15, 2010

40 years of music

Not so very long ago I had the pleasure/honor/responsibility of mastering a compilation representing SOME of the best music released on the Rounder Records (www.Rounder.com) label. For those who don't know Rounder, the label has a VERY diverse catalogue, including releases on various sub-labels (Philo, BullsEye and many more). The original focus of the label – and, I dare say, its legacy – is one of finding and releasing some of the finest 'Americana', folk music and other 'native' American musics.

The boxed set is divided into four decades of music – the 1970s, '80s, '90s and '00s – with each disc containing about 15 songs from its decade. That's a relatively small sampling of all the music Rounder released, much less the music released by all labels in the aggregate. Even so, it is absolutely remarkable how each decade shows a noticeable difference in production style. The '70s has the widest variety in style, tone and dynamic range. The '80s has some of the strangest-sounding tracks; it seems that was a decade of rampant sonic exploration, while we were beginning to figure out how to work within the new digital container. The '90s began to show signs of what we think of as current modern audio practice: still quite dynamic, but with transient detail gradually compromised in favor of low-frequency 'warmth'. Lastly, in the aughts we see the rush to the modern presentation, with a lot of limiting, slightly exaggerated bass and treble boost, and less ambient information and depth. In other words, the last decade's records are really loud!

It was an interesting challenge to make these all live together. My focus was primarily to let each disc stand on its own and not worry about level-matching the '70s to the '00s, though I gave some consideration to the above.

It was a fascinating exercise and even more, a very interesting listening experience. One that keeps showing me new things each time I hear it.....

Saturday, December 4, 2010

Plug ins - in search of the usable compressor

Many, many plug-ins in the sea, and many of them are useful. We have good plugin equalizers, limiters and dither engines... but a really good compressor plugin that can compete with hardware remains an ongoing challenge. To date the Algorithmix Splitcom seems the best of the two-channel linkable sort, and the Massenburg MDW is an excellent mono compressor.

The latest entry is a plugin compressor by Elysia (the Alpha)... can a $200 plugin hold its own? The hardware version is $10k! http://bit.ly/dpJ44Q

It certainly has all the flexibility you could ever desire, including side-chain filtering, linking/unlinking, wet/dry mix (read: parallel compression) and more.
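For the curious, that wet/dry control is doing what engineers otherwise patch up by hand. A minimal sketch of parallel compression (using Python's pedalboard library rather than the Alpha itself; settings are illustrative only):

```python
# Sketch: wet/dry (parallel) compression - sum a heavily compressed
# copy under the untouched signal. Requires: pip install pedalboard
from pedalboard import Pedalboard, Mix, Chain, Compressor, Gain

board = Pedalboard([
    Mix([
        Gain(gain_db=0.0),                             # dry path
        Chain([
            Compressor(threshold_db=-30.0, ratio=6.0,  # squash the copy hard
                       attack_ms=5.0, release_ms=150.0),
            Gain(gain_db=-6.0),                        # the "mix" knob, in effect
        ]),
    ]),
])
```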

In my tests thus far I am fairly impressed. It seems to lack some of the edginess that comes with less than ideal detector design. I'll have a better idea after another week of use and abuse.

Saturday, November 27, 2010

OK here we go, Moore's law in action again?

Moore's law – Gordon Moore's 1965 observation – states that the number of transistors per square inch on integrated circuits would double every year for the foreseeable future.

In the years since 1965 the pace has slowed a bit, but data density has doubled approximately every 18 months, and most experts, including Moore, expect this to hold for at least another two decades.

So we probably want to hold on to our hats, if not our wallets, as the next speed upgrades are inevitable. Could this be the next salvo:

http://techresearch.intel.com/ProjectDetails.aspx?Id=143

Lightpeak would be a new protocol for interconnecting devices that would allow blazingly fast data transfer. What do we get from it? Higher resolution? More tracks? Probably both, and with them comes larger capacity on our storage devices.

We should probably be prepared to upgrade our hardware, interfaces, CPUs and storage every few years or so. Painful, but true... I hope the landfills can take it!