Why Is “Normalize” a Dirty Word?
Friday, May 17, 2019, 02:00 PM, from Sweetwater inSync
Mention “normalize” in any recording-oriented online group, and some “experts” will tell you doing normalization is an amateur move, and you should never do it. Others will tell you it’s a valuable and useful tool. So who’s right?
As with so many aspects of audio — they both are. Let's bust the myths that surround normalization.

What Is Normalization?

There are two types of normalization — peak and average. Peak normalization detects an audio file's maximum (peak) level and then raises (or lowers) it to a target peak level. That target peak level is often 0dBFS (the maximum available value) but could be a different level, like 6dB below the maximum value. Average normalization detects an audio file's average level, and similarly raises or lowers it to a target average level.

What's the Difference Between Average and Peak Levels?

Shure gives a great analogy: the Himalayan Mountains' average elevation is 18,000 feet, but Mt. Everest's peak elevation is 29,029 feet. VU meters represent average signal level, which is why some percussive sounds often don't register that high a meter reading, even though the signal may be distorting. Our ears tend to perceive level based on a signal's average level rather than its peak level. For example, a percussive sound with a rapid decay can have a high peak level but a low average level, so overall it doesn't sound that loud. On the other hand, a guitar's distorted power chord — which is pretty much maxed out all the time — may have the same peak level as the percussive sound, but because of its very high average level, it will sound much louder. Many DAWs offer metering that shows both peak and average levels (fig. 1). This article concentrates on peak normalization.

Figure 1: Magix Samplitude's level meter shows the average reading as wide solid bars, and the instantaneous peak reading as the thinner, outside bars. Two lines (outlined in blue) show the maximum peak attained by the signal.

Does Normalization Degrade the Sound?

For those who say normalization can degrade the sound, it indeed can — if you build a time machine, go back to the mid-1980s, and do your processing in Sound Designer on a Mac Plus. Back in the days of 16-bit audio engines, virtually any processing could theoretically cause degradation, because some operations would round off the digital numbers representing the audio. If you did enough processes, these errors could add up and eventually alter the sound. With today's high-resolution audio engines (e.g., 32-bit floating point), this simply isn't an issue; the math handles huge numbers easily.

Doesn't Normalization Affect an Audio File's Signal-to-noise Ratio?

No. Normalization is no different from turning up a volume control. If an audio file's noise floor is -88dB and it peaks at -12dB, peak normalizing to -6dB will make everything 6dB louder — the noise floor will be at -82dB, and the peak will be at -6dB. The signal-to-noise ratio doesn't change, even though the absolute noise floor will be 6dB higher.

Should You Normalize Tracks in Multitrack Projects?

This gets more complicated, because there are situations where it's beneficial, and situations where it isn't. The biggest issue arises if you don't do gain staging properly, because maxing out your audio can cause a ripple effect down a chain of plug-ins — which brings its own set of problems. However, normalization can be beneficial for feeding plug-ins a reasonably consistent input level. For example, someone on a recording forum said that if you're using dynamics plug-ins, you should never use normalization but instead turn up the processor's input control. Yet there's no functional difference between the two — either one increases the input level to the dynamics processor.

I've developed my own presets for dynamics processors and amp sims. These assume a nominal peak input level of -6dB, so I normalize audio feeding them to a peak level of -6dB. This way, I'm not constantly juggling the input control so that the level hits the preset's optimum input level. Ideally, though, you should be recording at levels that are consistent, so normalization is a moot point. Then again, it's not an ideal world — just ask anyone who owns a computer.
Phrase-by-phrase Vocal Normalization — Is it Really Helpful?

You don't always normalize to 0; you can "normalize" to any target level. Often with vocals, some phrases will be lower or higher in level than other phrases. By isolating these phrases and "normalizing" them to a target level (i.e., similar to the rest of the vocal), you can create a more consistent level, so subsequent dynamics processors don't have to work as hard (fig. 2). Of course, vocals have natural dynamics you don't want to neuter — so use your ears. Adjust levels only where they need adjusting, which may or may not correlate with the waveform you see onscreen, or with a meter reading.

Figure 2: The top image from Studio One shows a vocal before normalizing individual phrases, while the bottom image shows the vocal after manually normalizing the phrases to the same subjective level.

What About Situations Where You Shouldn't Normalize?

Here's why you have to be really careful about normalizing to 0dB — intersample distortion, or as one so-called professional mastering engineer called it, "technical BS." Well, it's not. Here's why. Most digital output meters measure the level of the digital samples. So if the sample reads "0" (as in every LED lit up except the red Over indicator), you're okay as long as you don't go over full scale, right? Not necessarily. After this audio goes through a digital-to-analog converter's smoothing filter, the reconstructed arc between the sample values can produce a higher level that, if you're already running up against the limits of headroom, will exceed what the unforgiving analog hardware can handle (fig. 3). Audio that appears to be safely at 0 (or under) can be as much as 3dB over the analog output's maximum headroom, which can lead to audible distortion. This may be one reason (aside from fold-over distortion from virtual synthesizers) that some people feel 96kHz sounds better — with the samples closer together, perhaps it's more difficult to create intersample distortion (at least that's my hypothesis — I haven't tested it).

Figure 3: With the analog audio waveform sampled in (A), raising the digital audio's level to the maximum available headroom (B) can, after going through the smoothing filter that reconstructs the analog waveform (C), cause the output signal to exceed the available analog headroom.

The solution for detecting this kind of distortion is True Peak metering, which takes intersample distortion into account (fig. 4). It's instructive to look at an output meter that has both conventional and True Peak metering, because you start to realize that 0dB might not be what you think it is.

Figure 4: The analytics section of Studio One's Project Page shows True Peak readings (outlined in yellow for clarity). In this case, the output will be almost a dB over 0 when reconstructed through the D-to-A converter's smoothing filter.
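To see how a sample-peak meter can be fooled, here's a rough sketch that approximates a True Peak reading by oversampling the audio before measuring it. SciPy's polyphase resampler stands in for the converter's reconstruction filter; real True Peak meters follow the ITU-R BS.1770 measurement, which this only approximates. The test tone and its phase offset are contrived so that no sample lands on the waveform's crest.

```python
import numpy as np
from scipy.signal import resample_poly

def sample_peak_db(audio):
    """Conventional meter: the largest raw sample magnitude, in dBFS."""
    return 20.0 * np.log10(np.max(np.abs(audio)))

def approx_true_peak_db(audio, oversample=4):
    """Approximate True Peak: upsample first, so the level of the
    reconstructed waveform between the original samples is included."""
    return 20.0 * np.log10(np.max(np.abs(resample_poly(audio, oversample, 1))))

sr = 44100
n = np.arange(sr)
# A sine at a quarter of the sample rate, phase-shifted so every sample
# falls on the shoulders of the waveform rather than on its crest.
audio = np.sin(2 * np.pi * (sr / 4) * n / sr + np.pi / 4)
audio /= np.max(np.abs(audio))   # sample peaks now read exactly 0 dBFS

print("sample peak:", sample_peak_db(audio))       # 0.0, looks "safe"
print("true peak  :", approx_true_peak_db(audio))  # about +3 dB over full scale
```

In other words, a file whose samples are "normalized to 0" can still come out roughly 3dB over once it's reconstructed, which is exactly the situation fig. 3 illustrates.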
How About Using Normalization to Have the Same Track-to-track Levels on Albums?

That idea is flawed from the start, because even though two songs can have the same peak level, whichever song has the higher average level will sound louder. In fact, the technology behind the loudness wars is forcing a high average level (e.g., through limiting or maximization), consistent with the peaks (if there are any left!) not going into distortion. Normalizing peak levels doesn't contribute significantly to the loudness wars.

Normalizing to average levels can actually be a useful tool with album assembly, because you'll perceive the songs to be at the same general loudness, and then you can make any needed tweaks to have them hit the same subjective level (while also making sure they don't exceed the available headroom); a rough sketch of this approach appears at the end of this article.

Regardless, despite the conventional wisdom, normalizing to peak levels can be useful with album assembly for CDs. Because you shouldn't go over True Peak 0, if all the songs are normalized to True Peak 0, then you know for sure that the perceived softest song will be as loud as it can be. At that point, you can dial back the levels of the louder songs so that they're balanced properly against the softest song.

However, today it's a streaming world, and the rules have changed, because streaming services adjust music levels based on LUFS readings. If you're not familiar with the term and its implications, check out the article What Is LUFS, and Why Should I Care? The bottom line is that you can master according to the kind of dynamics you want to hear, and the end result won't sound any louder — or softer — than anyone else's music when your music is streamed in a playlist.

In any event, it's important to remember that normalization is a tool. Like any tool, the results it delivers depend on who's using it. Don't be scared of normalization, but don't hit that "normalize" button automatically, either. As mentioned at the beginning, both sides are right — so choose the right tool for the right job, and you'll be just fine.
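Picking up the album-assembly idea above, here's a rough sketch that brings every song to a common average (RMS) level as a crude stand-in for perceived loudness (a real workflow would use LUFS metering), then checks whether any track's approximate True Peak now exceeds a chosen ceiling. The track names, target levels, and test signals are made up for illustration.

```python
import numpy as np
from scipy.signal import resample_poly

def rms_db(audio):
    """Average (RMS) level in dB relative to full scale."""
    return 20.0 * np.log10(np.sqrt(np.mean(audio ** 2)))

def approx_true_peak_db(audio, oversample=4):
    """Approximate True Peak by measuring an oversampled (reconstructed) waveform."""
    return 20.0 * np.log10(np.max(np.abs(resample_poly(audio, oversample, 1))))

def match_average_levels(tracks, target_rms_db=-18.0, ceiling_db=-1.0):
    """Bring every track to the same average level, then flag any track whose
    reconstructed (true) peak would exceed the chosen ceiling."""
    matched = []
    for name, audio in tracks:
        gain = 10.0 ** ((target_rms_db - rms_db(audio)) / 20.0)
        adjusted = audio * gain
        over = approx_true_peak_db(adjusted) - ceiling_db
        if over > 0:
            print(f"{name}: true peak {over:.1f} dB over the ceiling; back it off or limit")
        matched.append((name, adjusted))
    return matched

# Hypothetical two-song "album": a dense track and a sparse, peaky one.
sr = 44100
t = np.arange(10 * sr) / sr
dense = 0.5 * np.sign(np.sin(2 * np.pi * 110 * t))     # high average level
sparse = np.exp(-3 * t) * np.sin(2 * np.pi * 220 * t)  # high peak, low average level
match_average_levels([("dense", dense), ("sparse", sparse)])
```

Whether you then nudge a flagged song down or let a limiter catch it is the same judgment call described above; the point is simply that matching average levels and respecting True Peak are two separate checks.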
https://www.sweetwater.com/insync/?p=94029