All about producing and mastering audio for disc, the web and beyond

Monday, August 15, 2011

Analog Versus Digital Signal Processing (dynamics)




Limiters (peak limiters, protection circuits)


The most common choice is a digital plugin. Plugins tend to be much faster and cleaner, and to exhibit less overshoot, than their analog-domain counterparts.
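One reason a digital limiter can avoid overshoot entirely is lookahead: the gain computer can see a peak some samples before it arrives and pull the gain down in advance, something an analog circuit cannot do. Here is a minimal toy sketch of the idea (not any particular product's algorithm; window length and ceiling are arbitrary illustration values):

```python
import numpy as np

def lookahead_limiter(x, ceiling=0.9, lookahead=32):
    """Toy brick-wall limiter demonstrating lookahead."""
    # Per-sample gain that would keep every sample at or below the ceiling.
    gain = np.minimum(1.0, ceiling / np.maximum(np.abs(x), 1e-12))
    # Lookahead: apply the smallest gain found in the next `lookahead`
    # samples, so the gain is already reduced before a peak arrives.
    g = np.array([gain[i:i + lookahead].min() for i in range(len(x))])
    return x * g
```

Because the current sample's own gain is always included in the window, no sample can exceed the ceiling, which is why digital limiters can make an absolute guarantee that analog protection circuits can only approximate.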


Compressors


Look at the equipment roster of a high-end mastering studio and, more likely than not, you will find a compressor among the studio’s analog gear. DSP (digital signal processing) plugins do an excellent job of recreating the dynamic-range control that happens in an analog compressor, yet they don’t quite seem to sound as good.


What could be the reasons for this? Let’s speculate a little.


When audio passes out into the analog domain, you pick up added distortion and noise. These are not necessarily characteristics that get programmed into digital circuits (or algorithms). As a result, you get a subtly different overall presentation of the sound.


The detection circuit (the device that tells the compressor when to compress the audio passing through it) is what really drives the action of a compressor. Another possibility, which was proposed to me by George Massenberg, is that the sample rate for the detector circuit in a digital compressor needs to be much higher than the typical sample rates we are using now, because of the nuances you typically get at the output of a compression stage. 44.1kHz may be sufficient for the audio passing through the compressor, but it may not present enough detail in the audio feeding the detection circuit for the compressor to do as good a job as its analog counterpart. It is speculation, but it is an interesting point to consider.
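Massenberg's idea can be sketched in a few lines: run only the sidechain (detector) at a higher rate, then bring the computed gain back down to the audio rate. This is an illustrative toy, not a real compressor design; the one-pole detector, the linear-interpolation "resampler", and all the parameter values are assumptions made for the sketch:

```python
import numpy as np

def envelope(x, attack, release):
    """One-pole peak detector -- the 'detection circuit'."""
    env = np.zeros_like(x)
    e = 0.0
    for i, s in enumerate(np.abs(x)):
        coeff = attack if s > e else release  # faster rise, slower fall
        e = coeff * e + (1.0 - coeff) * s
        env[i] = e
    return env

def compress(x, threshold=0.5, ratio=4.0, oversample=4,
             attack=0.9, release=0.999):
    # Run only the detector at a higher rate (linear interpolation
    # stands in for a proper resampler here).
    n = len(x)
    hi = np.interp(np.linspace(0, n - 1, n * oversample),
                   np.arange(n), x)
    env = envelope(hi, attack, release)[::oversample]
    # Gain computer: above the threshold, reduce level by the ratio.
    over = np.maximum(env / threshold, 1.0)
    gain = over ** (1.0 / ratio - 1.0)
    return x * gain
```

The point of the structure is that the detector sees inter-sample detail the audio-rate path never does, which is one plausible reading of why an analog detector, with effectively unlimited bandwidth, might behave differently.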


A lot of the time you will find mastering engineers using an analog circuit not because they need an EQ to equalize or a compressor to compress, but because something about the filtering that takes place when running audio through that analog gear changes the sound in a desirable way. So a great compressor may be used not to compress, but simply for the tone-shaping quality it imparts to the program. This seems to be a common factor missing from many current DSP equivalents.


Monday, July 11, 2011

Compression in Mastering (Part 3)

Welcome to Part 3: 'Compression' of the new video-blog series in which Jonathan Wyner of M-WORKS Mastering will be discussing various aspects of the mastering process. Let us know your thoughts, questions and opinions! Stay tuned for a new video and post next week.



Part 3 – Compression


How much compression to use?


Mastering engineers generally don’t use a lot of compression. If any compression is applied during the mastering process, it is usually very subtle. Low ratios (1.2:1 to 2:1) with high thresholds, yielding around 2–3 dB of gain reduction at most, are common.


Compression and audio fidelity.


In an absolute audiophile sense, compression never sounds good! Compressing costs depth, adds noise, and reduces dynamic range, all of which make a recording sound worse. To learn to use compression effectively, one should focus on whether it makes the music sound better. One needs to be able to differentiate between the music and the recording.


The idea of using compression – usually – is to reduce the dynamic range so that the different elements in an arrangement come through more clearly to the listener.


Should the mix engineer send a compressed or uncompressed 2-Mix?


If you are a more experienced mix engineer and/or you feel like you’ve got the compression sounding just how you want it, then print the mix with the compression and send it to the mastering engineer (M.E). Every compressor behaves and reacts differently, and those characteristic nuances that you (the artist and/or mixing engineer) have learned to love in the mix may not be so easily replicated by the M.E.


However, if you’re nervous that your compressor is ‘misbehaving’ or you are unsure whether you are using too much compression, it is a good idea to send two versions of the mix. Send the M.E the uncompressed mix and the compressed mix so that he has both for reference. This way, the M.E will be able to decide whether he can improve the uncompressed mix, or work with your compressed mix and take it a step further!


Hope you enjoyed this. Please let me know your thoughts, and what you may like to see in future here on the blog.

Tuesday, July 5, 2011

Equalisation in Mastering (Part 2)

This is Part 2: 'Equalization' of the new video-blog series in which Jonathan Wyner of M-WORKS Mastering will be discussing various aspects of the mastering process. Let us know your thoughts, questions and opinions! Next week, compression.



Part 2: Equalization


Why were equalizers created?


Equalizers were invented to compensate for deficiencies in recording and transmission media (for example, to increase intelligibility over phone lines). This idea of a corrective equalizer is very much at play in mastering.


For example, a mixing engineer working in an overly dull environment will produce overly bright mixes (to compensate). It is then the mastering engineer’s job to figure out the inverse EQ that gets the mixes sounding more like the mix engineer thought they sounded.


To Cut? Or to Boost?


I think mastering engineers in general find themselves cutting more than boosting.

Listen for areas that sound cloudy, or that contain unpleasant harmonic content and don’t contain much of the fundamental frequency of the instrument. These areas can be gently and carefully carved out.


Older-style equalizers tend to have narrow-bandwidth cuts and broader-bandwidth boosts. This tends to sound better and is a safe, general rule to follow when EQ’ing.


Are there common areas you (the Mastering Engineer) find yourself working on?


There are no set rules. However, if you find yourself making the same move on every master you work on, you may be compensating for a deficiency in your room or listening environment – so try to be aware of this.


There are a few common areas that one can focus on, though:


· Usually some clearing out (cutting) can be done in the low-midrange (focus on the relationship between the bass and the vocal, or try to reveal the bass more clearly, for example).

· Low-frequency information also tends to be a common area that requires attention at mastering (focus on the relationship between the kick drum and the bass, for example).

· Use small adjustments, and constantly check back with the original. The goal is simply to make the recording sound better! If you improve it, even slightly, then you are doing well!


Small EQ moves to make Big changes.


Most of the boosts and cuts that I make are no more than 0.5–1 dB. The reasons for that are:


· You are working with a complex waveform that is a balanced recording. Thus, big changes are likely to alter the balance in a way that may not reflect the artist’s intention.

· An EQ filter sounds better – that is, it has less distortion and less ringing – if you use broad bandwidths (‘Q’s) and are making small moves (in dBs) with it.


So sometimes in mastering you will use up to 12 different EQ filters, but each one will be doing just a little bit. That is pretty typical of a mastering engineer’s use of an equalizer.
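The "broad Q, small move" recipe maps directly onto a standard parametric (peaking) filter. Below is a sketch using the well-known Audio EQ Cookbook coefficient formulas (the 300 Hz / −0.5 dB / Q = 0.7 values are just an example of the kind of gentle mastering move described above, not a prescription):

```python
import math

def peaking_eq(fs, f0, gain_db, q):
    """Audio EQ Cookbook peaking-filter biquad coefficients (b, a)."""
    A = 10.0 ** (gain_db / 40.0)          # sqrt of linear gain
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)      # broad Q -> large alpha
    cosw = math.cos(w0)
    b = [1 + alpha * A, -2 * cosw, 1 - alpha * A]
    a = [1 + alpha / A, -2 * cosw, 1 - alpha / A]
    # Normalize so a[0] == 1.
    return [x / a[0] for x in b], [x / a[0] for x in a]

# A typical mastering move: a broad, gentle 0.5 dB cut at 300 Hz.
b, a = peaking_eq(44100, 300.0, -0.5, 0.7)
```

With gain this small and bandwidth this broad, the filter's phase shift and ringing are minimal, which is the technical content behind the "small moves sound better" rule.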


Join us next week for Compression!

Saturday, November 27, 2010

OK here we go, Moore's law in action again?

Moore's law observed that the number of transistors per square inch on integrated circuits had doubled every year, and predicted the trend would continue for the foreseeable future.

In the years since 1965 the pace has slowed a bit, but data density has still doubled approximately every 18 months. Most experts, including Moore himself, expect this to hold for at least another two decades.
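The compounding here is easy to underestimate. Doubling every 18 months works out like this (a back-of-the-envelope calculation, nothing more):

```python
def density_growth(years, doubling_months=18):
    """Growth factor if density doubles every `doubling_months`."""
    return 2.0 ** (years * 12.0 / doubling_months)

# Two decades of 18-month doublings:
print(round(density_growth(18)))  # 4096
```

Twelve doublings in eighteen years is a factor of roughly four thousand, which is why each generation of audio hardware feels obsolete so quickly.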

So we should probably hold on to our hats, if not our wallets, as the next speed upgrades are inevitable. Could this be the next salvo:

http://techresearch.intel.com/ProjectDetails.aspx?Id=143

Light Peak would be a new protocol for interconnecting devices that would allow blazingly fast data transfer. What do we get from it? Higher resolution? More tracks? Probably both, and with it comes larger capacity on our storage devices.

We should probably be prepared to upgrade our hardware, interfaces, CPUs and storage every few years or so. Painful, but true... I hope the landfills can take it!