Any recording medium has a certain dynamic range within which it can capture signals. The low end of the dynamic range, where the quiet signals live, is limited by noise and quantization errors. The high end is limited simply by running out of bits; the signal can go no higher.
But in any recording situation, you're never quite sure how high the incoming levels will go. This applies particularly to live recording, and even more so to recordings that are unrehearsed.
So you have to leave some headroom between the maximum incoming level you expect, and the highest level your recording system can handle (known as 0 dBFS).
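As a quick illustration of what this headroom arithmetic looks like, here is a short numpy sketch (numpy is an assumption here, not anything the article specifies). It measures the peak level of a signal in dBFS, where full scale is 1.0, and works out how much headroom remains below 0 dBFS:

```python
import numpy as np

def peak_dbfs(samples):
    """Peak level of a signal in dBFS, relative to full scale (1.0)."""
    peak = np.max(np.abs(samples))
    return 20 * np.log10(peak)

# A test tone peaking at half of full scale
tone = 0.5 * np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)

headroom = 0.0 - peak_dbfs(tone)  # distance below 0 dBFS, about 6 dB
```

A signal peaking at half of full scale sits about 6 dB below 0 dBFS, so that recording has roughly 6 dB of headroom in hand.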
You might regard this headroom as wasted dynamic range, but when the signal level does indeed go higher than you ever expected it to, you will be thankful that your recording is undistorted. Leave inadequate headroom, and gross distortion is the sure and certain outcome.
But then, when the recording is made, you sit looking at the screen and the meter shows you how much dynamic range you 'wasted'. And it troubles you somehow.
The solution though seems obvious - simply normalize the level. Most professional disk-based recording systems have this feature, where the level of a recording is raised so that it peaks at 0 dBFS. After normalization, it seems that no dynamic range is wasted.
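Peak normalization is simple enough that it fits in a few lines. This is a generic numpy sketch of the idea, not the code of any particular recording system: find the highest peak, then scale every sample so that peak lands at the target level.

```python
import numpy as np

def normalize(samples, target_dbfs=0.0):
    """Scale a recording so its highest peak lands at target_dbfs."""
    peak = np.max(np.abs(samples))
    target = 10 ** (target_dbfs / 20)  # 0 dBFS -> full scale (1.0)
    return samples * (target / peak)

quiet = np.array([0.1, -0.25, 0.2])   # peaks at -12 dBFS
loud = normalize(quiet)               # now peaks at exactly 1.0
```

Note that every sample is multiplied by the same gain - which is exactly why, as the next paragraphs explain, nothing about the sound actually improves.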
But there are problems...
The first problem is that it doesn't make the recording sound any better. The recording sounds exactly the same as it did before, it just occupies a higher range of levels. The noise level of the signal has also been raised, so the signal to noise ratio has not improved one little bit.
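You can verify this numerically. In the sketch below (a contrived example using numpy, with made-up signal and noise levels), the same normalizing gain is applied to both the signal and the noise, so their ratio is unchanged:

```python
import numpy as np

rng = np.random.default_rng(1)
signal = 0.25 * np.sin(2 * np.pi * np.arange(48000) / 100)
noise = 0.001 * rng.standard_normal(48000)  # the recording's noise floor
recording = signal + noise

def rms_db(x):
    """RMS level in dB relative to full scale."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

gain = 1.0 / np.max(np.abs(recording))  # normalizing gain to reach 0 dBFS

# The same gain raises signal and noise alike
snr_before = rms_db(signal) - rms_db(noise)
snr_after = rms_db(gain * signal) - rms_db(gain * noise)
```

The two SNR figures come out identical: the gain term cancels in the subtraction, so normalizing buys nothing.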
And actually, it doesn't sound the same - it sounds worse!
Normalizing works by multiplying the value of each sample by a fixed gain. The results rarely fall exactly on the quantization levels available at the file's bit depth, so they must be rounded, and this rounding error needs to be masked.
Masking of errors is done by adding dither - a noise signal that, in basic terms, covers up the problems.
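Here is a rough sketch of what that requantization step looks like, assuming 16-bit output and TPDF (triangular) dither - a common choice, though the article doesn't specify which dither type any particular system uses:

```python
import numpy as np

def requantize_16bit(x, dither=True, rng=None):
    """Quantize floats in [-1, 1] to 16-bit steps, with optional TPDF dither."""
    rng = rng or np.random.default_rng(0)
    scaled = x * 32767.0
    if dither:
        # TPDF dither: the sum of two uniform randoms, +/-1 LSB peak.
        # It randomizes the rounding error so it sounds like benign
        # noise rather than correlated, gritty distortion.
        scaled = scaled + rng.uniform(-0.5, 0.5, x.shape) \
                        + rng.uniform(-0.5, 0.5, x.shape)
    return np.round(scaled) / 32767.0
```

The dither raises the noise floor slightly - which is the point of the argument here: a normalized-and-dithered recording is a little noisier than the one you started with.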
So your normalized recording may peak at 0 dBFS, and it may look better on the meters, but it is actually noisier than it was before.
There is the further problem of inter-sample peaks: the reconstructed analogue waveform can rise above 0 dBFS between the samples, so a recording normalized right up to full scale may clip when it is converted or further processed.
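You can demonstrate an inter-sample peak with a classic contrived case (this is a numpy sketch of the general phenomenon, not anything from the article): a full-scale sine at a quarter of the sample rate, phase-shifted 45 degrees, whose samples all land at about -3 dBFS even though the waveform between them reaches 0 dBFS. Oversampling by zero-padding the spectrum reveals the true peak:

```python
import numpy as np

n, fs = 1024, 48000
t = np.arange(n) / fs
# Full-scale sine at fs/4, 45-degree phase: every sample lands at ~0.707
x = np.sin(2 * np.pi * (fs / 4) * t + np.pi / 4)

sample_peak = np.max(np.abs(x))  # ~0.707, i.e. about -3 dBFS

# 4x oversample via band-limited (FFT zero-padding) interpolation
X = np.fft.rfft(x)
Xpad = np.zeros(4 * n // 2 + 1, dtype=complex)
Xpad[:X.size] = X
true_peak = np.max(np.abs(np.fft.irfft(Xpad, 4 * n) * 4))  # ~1.0
```

Normalize that recording so its *samples* peak at 0 dBFS and the actual waveform would need to swing about 3 dB over full scale - which is where the clipping comes from.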
So the moral is not to normalize your recordings. Just don't do it: it gives you no benefit and makes the sound quality worse.
The only time it is appropriate to normalize a recording is at the mastering stage, when it is prepared for CD release, or release on any other medium. It is part of the CD specification that recordings should at some point exceed -2 dBFS, and there is no reason why you should not peak exactly at 0 dBFS.
If you want to hear what normalizing does to your recordings, pull the input fader all the way down and record some 'silence', then normalize that. (Turn down your monitors before listening to the result.) The gritty, grainy quality of the high-level noise produced will tell you all that is bad about this process.
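If you'd rather not risk your monitors, the same experiment can be simulated. This numpy sketch (the noise level is an assumption, standing in for a fader pulled all the way down) records a couple of least-significant bits' worth of 16-bit noise and then normalizes it:

```python
import numpy as np

rng = np.random.default_rng(0)
# 'Silence': a few LSBs of residual noise captured at 16-bit resolution
silence = np.round(rng.standard_normal(48000) * 2.0) / 32767.0

peak = np.max(np.abs(silence))
normalized = silence / peak                 # loudest crackle now at 0 dBFS
gain_db = 20 * np.log10(1.0 / peak)         # the enormous gain applied
```

The gain works out at well over 60 dB, and what was an inaudible noise floor becomes coarse, full-scale grit - exactly the quality the listening test above reveals.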