All about producing and mastering audio for disc, the web and beyond

Wednesday, August 24, 2011

Metadata, ISRC, UPC and QC

What is metadata?

Metadata is any information included with a program, whether for a download or a disc, that is not the program itself. It is embedded within the digital file (as a download or when burned to disc). Examples are track IDs, start and stop IDs, ISRCs, UPCs, and CD-Text information. One of the things the mastering engineer is responsible for is understanding what these are and including all of this information in a master.

What is an ISRC?

It stands for International Standard Recording Code. It is a code allocated to any publisher (record label, artist, or anybody that owns a catalogue of music). It is registered in the USA with the RIAA (Recording Industry Association of America) and in Europe with GEMA (a performance rights organisation). The code is a unique identifier attached to every single piece of music (each song on a record has its own ISRC). Any time the music is played over the air, downloaded or streamed, the identifier is logged, which is vital to the payment process.
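For illustration, an ISRC is a fixed 12-character layout: a 2-letter country code, a 3-character registrant code, a 2-digit year of reference, and a 5-digit designation code. The `parse_isrc` helper below is a hypothetical sketch for checking that layout, not part of any registration tooling:

```python
import re

# ISRC layout: 2-letter country code, 3-character registrant code,
# 2-digit year of reference, 5-digit designation code.
# Hyphens are optional display formatting, not part of the code itself.
ISRC_RE = re.compile(r"^([A-Z]{2})-?([A-Z0-9]{3})-?(\d{2})-?(\d{5})$")

def parse_isrc(code: str):
    """Return (country, registrant, year, designation) or None if invalid."""
    m = ISRC_RE.match(code.upper())
    return m.groups() if m else None

print(parse_isrc("US-RC1-76-07839"))  # -> ('US', 'RC1', '76', '07839')
print(parse_isrc("not-an-isrc"))      # -> None
```

The same code with or without hyphens parses identically, which matters when copying ISRCs between a label's paperwork and a DDP or CD-Text field.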

What is a UPC?

It stands for Universal Product Code. It is a number assigned to a product. Traditionally that product has been a physical item, such as a cereal box in a grocery store, which carries a bar code (and corresponding number) scanned at the checkout to identify the product. The same is true of CDs and DVDs. But UPCs are also used to track downloads in some cases, so you should register one and include it with your product.
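As a concrete example, the final digit of a 12-digit UPC-A code is a check digit computed from the first eleven, so a scanner can catch misreads. A minimal sketch:

```python
def upc_check_digit(first11: str) -> int:
    """Compute the UPC-A check digit for an 11-digit body.

    Digits in odd positions (1st, 3rd, ...) are weighted 3, even
    positions 1; the check digit brings the total to a multiple of 10.
    """
    assert len(first11) == 11 and first11.isdigit()
    total = sum((3 if i % 2 == 0 else 1) * int(d)
                for i, d in enumerate(first11))
    return (10 - total % 10) % 10

# Known example: the full UPC 036000291452 ends in check digit 2.
print(upc_check_digit("03600029145"))  # -> 2
```

If the computed digit doesn't match the last digit printed on your artwork, the bar code will fail at the register, so it's worth verifying before a disc run.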

What is QC?

This is what is known as Quality Control (sometimes rendered as Quality Check). Mastering is the final process before distribution, so it is the mastering engineer's job to make sure there are absolutely no flaws in the program (a dropout or a click, for example). The mastering engineer should give the client assurance that there is no problem with the audio.

Monday, August 22, 2011

On Stereo-Imaging

Stereo-imaging tools are often included in DIY mastering packages, as are equalisers with stereo-spreading facilities (most commonly offering some kind of mid-side processing), so it is important to understand their purpose and their limits.

As with many of the specialist tools we have for processing audio, they are great at solving specific problems. If you have something mono or largely in mono, for example, and you need to try and widen it, you can add reverb or perhaps exaggerate the little stereo information that already exists in the track.

But what exactly is the ‘stereo information’?

Well, it has to do with the relationship between the ‘out-of-phase’ information, and the ‘in-phase’ information. Anything that is in-phase happens at exactly the same time in both channels, and that information will appear to be centered. If anything is slightly delayed off to one side or the other, compared to the center of the image, it is ‘out-of-phase’ and is one of the things that creates a stereo sense of spread.

So when you are mixing, you use pan pots, delays, reverbs and so on to create a stereo image of individual elements within an overall stereo mix. When you go in during mastering and increase the out-of-phase component, however, you are changing the relationship between the out-of-phase and in-phase parts of the entire mix. You are therefore able to radically change the stereo image and the placement of each individual instrument in a song, which can be very dangerous if not treated with care. And although you may increase the perceived wideness, this comes at the expense of the in-phase components of the song. That is to say, the elements that sit dead-center (which also tend to be the most important elements of most productions: vocals, snare, kick, bass) are weakened.
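The arithmetic behind most of these wideners is simple mid-side scaling. A minimal sketch in plain Python (the `widen` helper is hypothetical) shows how boosting the side signal changes the balance against the mid, which is exactly the trade-off described above:

```python
# Mid-side widening sketch: width > 1 boosts the side (out-of-phase)
# signal relative to the mid (in-phase, centre) signal. Once the result
# is re-normalised to avoid clipping, centre elements end up quieter.
def widen(left, right, width=1.5):
    out_l, out_r = [], []
    for l, r in zip(left, right):
        mid = (l + r) / 2.0    # in-phase (centre) component
        side = (l - r) / 2.0   # out-of-phase (stereo) component
        out_l.append(mid + width * side)
        out_r.append(mid - width * side)
    return out_l, out_r

# A hard-panned click: all energy on the left channel.
l, r = widen([1.0, 0.0], [0.0, 0.0], width=2.0)
print(l, r)  # -> [1.5, 0.0] [-0.5, 0.0]
```

Note that `width=1.0` is a no-op (the original left/right come back unchanged), which is a useful sanity check when wiring this kind of processor into a chain.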

So sometimes – in its various forms – stereo-imaging can be used to good effect. However, one should err on the side of caution.

Monday, August 15, 2011

Analog Versus Digital Signal Processing (dynamics)

Limiters (peak limiters, protection circuits)

The most common limiter today is a digital plugin. Plugins tend to be much faster and cleaner, with less overshoot than their analog-domain counterparts.


When you look at the equipment roster of a high-end mastering studio, you will more likely than not find a compressor among the studio's analog gear. DSP (digital signal processing) plugins do an excellent job of recreating the dynamic-range control that happens in an analog compressor, yet they don't quite seem to sound as good.

What could be the reasons for this? Let’s speculate a little.

When audio passes through the analog domain, you get added distortion and noise. These are not necessarily characteristics that are programmed into the digital algorithms, so you get a subtly different overall presentation of the sound.

The detection circuit (the part that tells the compressor when to compress the audio passing through it) is what really drives the action of a compressor. Another possibility, proposed to me by George Massenberg, is that the sample rate of the detector in a digital compressor needs to be much higher than the typical rates we use now, because of the nuances you typically get at the output of a compression stage. 44.1kHz may be sufficient for the audio passing through the compressor, but it may not present enough detail in the signal feeding the detection circuit for the compressor to do as good a job as its analog counterpart. It is speculation, but it is an interesting point to consider.
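A related effect is easy to demonstrate: the true waveform can peak between samples, so a detector that looks only at stored sample values underestimates the level it should be reacting to. A toy sketch using ideal sinc reconstruction (purely illustrative, not any particular compressor's detector):

```python
import math

def sinc(x):
    """Normalised sinc, the ideal reconstruction kernel."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, t):
    """Band-limited reconstruction of the signal at fractional time t."""
    return sum(s * sinc(t - n) for n, s in enumerate(samples))

# A sine at a quarter of the sample rate, sampled off-peak: every stored
# sample is ~0.707, but the underlying waveform still reaches ~1.0.
N = 64
samples = [math.sin(2 * math.pi * 0.25 * n + math.pi / 4) for n in range(N)]

sample_peak = max(abs(s) for s in samples)
true_peak = max(abs(reconstruct(samples, N / 2 + k / 8.0))
                for k in range(16))
print(round(sample_peak, 3), round(true_peak, 3))  # ~0.707 vs ~1.0
```

Here the sample values under-read the real peak by roughly 3dB, which hints at why a detector running at a higher internal rate could behave differently from one fed raw 44.1kHz samples.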

A lot of the time you will find mastering engineers running audio through an analog circuit not because they need an EQ to equalize or a compressor to compress, but because something about the filtering in that analog gear changes the sound in a desirable way. So a great compressor may be used not to compress, but simply for the tone-shaping it imparts to the program. This seems to be a common factor missing from many current DSP equivalents.

Monday, August 8, 2011

CDR Quality - Or Lack Thereof

While we're on the subject of good sound (how's THAT for a non sequitur?!), there's something slowly creeping into the world of music production: poor-quality CD-Rs.

While I don't want to get into a long discussion of the CIRC error correction built into CD playback, you should know that the CD format was built to tolerate errors. In some cases it will fix errors on playback perfectly. In other cases, if it can't reconstruct the data, it will 'approximate' the data. The implication is that when you play a CD you don't know if you are hearing exactly what was recorded to it. The difference is usually very subtle, and arguably fine for most consumers, but not so fine for those of us who work hard to craft recordings.
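To illustrate the 'approximation' step: when a sample truly cannot be recovered, a player may interpolate it from its neighbours rather than output garbage or mute. This toy sketch (a big simplification of what real players do) shows the idea:

```python
# Error concealment sketch: replace an unrecoverable sample with the
# average of its two neighbours. Real players use more sophisticated
# interpolation, but the principle is the same - plausible, not exact.
def conceal(samples, bad_index):
    left = samples[bad_index - 1]
    right = samples[bad_index + 1]
    fixed = list(samples)
    fixed[bad_index] = (left + right) / 2.0
    return fixed

audio = [0.10, 0.20, 9.99, 0.40, 0.50]   # 9.99 marks a corrupted sample
print([round(x, 2) for x in conceal(audio, 2)])  # -> [0.1, 0.2, 0.3, 0.4, 0.5]
```

The concealed sample is merely plausible, not the original data, which is exactly why the result can differ audibly from what was mastered.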

This problem is not so prevalent with replicated (pressed) CDs, but more so with duplicated (burned) CD-Rs, and it's getting worse. At my studio we routinely check every disc that's intended as a master, and in the last six months we have noticed a significant decline in the quality of disc burns. We can still get a workable master, but sometimes we run into a batch of discs that are unusable. We wouldn't know it if we didn't test, and that makes me wonder how many people are making discs that malfunction in ways they would find unacceptable... if they knew.

The answer is probably a move away from Red Book audio to full-resolution data transfer from local servers for consumers and studios alike, but for now we check our discs carefully and make DDP image (DDPi) masters when we can.

Thursday, August 4, 2011

The hype in "Compression Rules! Rick Rubin Masters Red Hot Chili Peppers Just For iTunes"

If you take the announcement about a track being mastered specifically for iTunes at face value, the idea is intriguing. We know fidelity and iTunes are somewhat at odds with each other: MP3/AAC just don't sound as good, so how could they make it sound better? Has Rick Rubin got some secret sauce?

So let's take a look at what might make an AAC/MP3 sound 'better'. First off, lossy encoders generate distortion, so to get a better-sounding MP3, turn it down. What's the likelihood that that happened in this case? Consider also that Rick Rubin and his team are renowned for creating very, very compressed masters. The comment from the audiophile website, "...but considering this production Trio's history of sonic destruction it did not shock me," goes to the point. So do we think they would compress less, and turn the level down, to make it sound better on iTunes? What are the chances? It remains to be heard, I suppose, but I really wonder: is this mostly an attention grab, a.k.a. hype?