It's a sign of something when you start dreaming about audio. A sign of what, I don't know, but sometimes dreams offer insights that are not available in the normal waking world.
So there I was, immersed in my dream when a female tech-metal band approached me with a question. They were dressed in black with metallic detailing, but other than the fact that I remember this, which might say more about me personally than I ought to reveal, the issue is genderless and relevant to all musical genres.
The question came from the bassist of the band and it went something like this... "Would you record us please? You have 24-bit equipment. We only have 18 bits and it's causing a lot of distortion."
OK. That made me think. What on earth could be the problem with 18-bit recording that 24 bits could solve? I pondered this for a while - still in my dream - but I had to wait for a demonstration before the issue was resolved. Tech metal requires a lot of harmonic generation (that's the nice word for 'distortion'). The band were plugging their guitars straight into their 18-bit audio interface and turning up the gain so that they got distortion. But it was too much, and the wrong type. It sounded too distorted and just plain bad.
Right. I'm back in the real world now, wondering a little how I could fail to know something in my dream that would only be revealed to me later on, when I made up the whole damned thing inside my head.
But the band's concern is a real issue, in fact two issues that are independent of each other. Basic issues to be sure, but we all had to learn the basics once and both of these issues are very relevant to all kinds of recording.
One of the fundamental parameters of digital audio is how many bits are used. 24-bit is the current norm and without any shadow of a doubt whatsoever 24 bits are enough. 32 bits may be the future, but that will be another article for another day.
If I go back to the old days of analog recording, there was always a struggle between noise and distortion. Record at a low level then the signal would be clean, but you would hear the noise in quieter passages of music. Record at a high level and the noise would be less, but at the cost of clearly-audible distortion on peaks and higher levels.
My introduction to digital audio very many years ago was an Akai system, which I don't think ever came to market commercially. It was 14-bit and compared to analog tape it sounded great. No audible noise, no audible distortion. With the recording level set so that peaks came just below 0 dBFS there was no clipping and no distortion on peaks, and the noise level was significantly lower than an analog recorder could achieve. The theoretical signal-to-noise ratio of a 14-bit system is 84 dB, which means that a recording made at just the right gain setting will have a noise level so low that it is almost inaudible, and will almost certainly be masked by ambient noise in most real-world listening scenarios.
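That 84 dB figure comes from the standard rule of thumb of roughly 6 dB of dynamic range per bit (more precisely 6.02 dB per bit; a further +1.76 dB applies for a full-scale sine wave, but the 6 dB/bit figure is the one engineers usually quote). As a quick sketch - the function name here is just for illustration - the same arithmetic covers every bit depth mentioned in this article:

```python
# Rough dynamic range of an ideal N-bit converter, using the common
# rule of thumb of about 6 dB per bit (6.02 * N dB to be precise).
def dynamic_range_db(bits: int) -> float:
    return 6.02 * bits

for bits in (10, 14, 16, 18, 24):
    print(f"{bits:2d} bits -> about {dynamic_range_db(bits):.0f} dB")
```

So 10-bit NICAM sits at about 60 dB, 14 bits at 84 dB, 16-bit CD at 96 dB, and 24 bits at a theoretical 144 dB - far beyond what any converter or listening room can actually deliver.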
It's worth saying at this point that 14 bits might sound like not very many. Compare this however with the 10-bit NICAM system that was used in television before the era of digital broadcasting. It didn't sound at all bad. (Near-Instantaneous Companded Audio Multiplex, if you were wondering.)
As you may know or remember, the compact disc format uses 16 bits. Granted this is eight bits fewer than the 24-bit standard we now prefer, but with levels optimized it sounds absolutely fine. You could make a perfectly good multitrack recording with 16 bits, then mix it to a 16-bit stereo file that would satisfy any real-world listener. The 18-bit system of my dream world fiction would be fine for any practical purpose.
The reason we use 24 bits now, by the way, is simply to allow more headroom. There is a 'sweet spot' in 16-bit recording where the level is high enough to be well away from the noise, and low enough to allow enough headroom to avoid the possibility of clipping. 24-bit recording extends this sweet spot into a 'sweet ball park' so you can record without audible noise and have as much headroom as anyone could ever need.
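To put rough numbers on that 'sweet ball park' - this is back-of-envelope arithmetic using the same 6 dB/bit rule of thumb, not a measurement - compare how much signal-to-noise remains after leaving a generous 18 dB of headroom below full scale:

```python
# Hypothetical illustration: signal-to-noise remaining after leaving
# headroom, assuming the ~6 dB/bit rule of thumb for an ideal converter.
def remaining_snr_db(bits: int, headroom_db: float) -> float:
    dynamic_range = 6.02 * bits          # theoretical converter range
    return dynamic_range - headroom_db   # what's left below the peaks

print(remaining_snr_db(16, 18))  # 16-bit with 18 dB of headroom
print(remaining_snr_db(24, 18))  # 24-bit with 18 dB of headroom
```

With 16 bits you're down to around 78 dB above the noise floor - still usable, but the sweet spot is narrowing. With 24 bits there's still roughly 126 dB, which is far more than any real-world listening situation requires.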
One more point relates to the tech-metal genre. Normally this kind of music starts loud, ends loud, and is loud all the way in between. It doesn't actually require a huge signal-to-noise ratio because it is always loud enough to mask any noise. It wouldn't surprise me if a successful recording could be made using as few as 8 bits. Anyone care to try?
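For anyone who does care to try, here's a sketch of what requantizing to 8 bits does to a test tone - a crude numpy model of an ideal converter (no dither), with the tone frequency and level picked arbitrarily for illustration:

```python
import numpy as np

# Requantize a signal to 8 bits: round each sample to one of
# roughly 2**8 levels, then measure the signal-to-noise ratio
# of the result against the original.
def quantize(x: np.ndarray, bits: int) -> np.ndarray:
    levels = 2 ** (bits - 1)
    return np.round(x * levels) / levels

sr = 48000
t = np.arange(sr) / sr
tone = 0.9 * np.sin(2 * np.pi * 110 * t)   # a 110 Hz 'bass' note

q = quantize(tone, 8)
noise = q - tone
snr_db = 10 * np.log10(np.mean(tone**2) / np.mean(noise**2))
print(f"8-bit SNR is about {snr_db:.0f} dB")
```

That lands somewhere near 50 dB - roughly cassette-tape territory, which for wall-of-sound material might indeed be masked well enough to get away with.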
The other issue here is completely independent of the number of bits. That is whether it is a good thing to record a clean signal, or whether a clean signal is uninteresting without added harmonics (which as I said earlier is the same thing as distortion).
My mind goes back to a session a long time ago where I was playing bass guitar. The engineer had decided to record me through a DI (direct injection) box. The recording medium was 24-track analog tape.
The session started with learning and rehearsing a new song. Then we went for a take. On listening to this in the control room it was clear that the bass lacked texture and was completely and utterly - despite my best efforts at playing - uninteresting.
Then someone remembered that there was an old guitar amp in the basement. Yes there was - an old Marshall 50 watt amp and a cabinet with a tear in the grille cloth. I could smell the mould. But I plugged in and tried it out. There was no setting that didn't sound ridiculously distorted and clearly there was something wrong with the amp, but it was collectively decided that we should try it out. So we went for a take and then listened back in the control room.
In the context of the other instruments and guide vocal, the bass sounded great! What sounded like too much distortion in isolation was just right in the control room mix.
I feel that the time is right for a musical example. Here is a short example of bass, played directly into the instrument input of the audio interface...
So what about creating distortion through setting the gain too high so that the signal clips? Well it's worth a try. I've adjusted the level after recording so that it is approximately the same as the clean version.
Maybe I've gone a bit too far with this. Or maybe not far enough for your taste. Personally I don't like it. Perhaps it's because I learned audio in an era when the battle against distortion was constant. But turning up the preamp gain so far that the signal distorts is a technique that has been used in the past, famously by The Beatles and 10cc, and continues to be used in the present. (As an aside, these two examples from the past are of electronic distortion rather than digital distortion, but the end results are very similar.)
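For what it's worth, the gain-too-high effect can be sketched in a few lines - a hard clipper, with the level matched afterwards as described above. The numbers (110 Hz, a gain of 4) are arbitrary choices for illustration:

```python
import numpy as np

# A sketch of hard clipping, as when preamp gain is set too high:
# samples that exceed full scale are simply flattened, which adds
# strong harmonics of the harsh-sounding kind.
sr = 48000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 110 * t)       # a 110 Hz 'bass' note

gain = 4.0                               # deliberately too much gain
clipped = np.clip(gain * tone, -1.0, 1.0)

# Level-match after the fact, as described in the text
clipped *= np.max(np.abs(tone)) / np.max(np.abs(clipped))
```

The flattened tops are what make this kind of distortion sound so uncompromising: the waveform changes abruptly rather than gradually as the level rises.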
The solution is either to record through an amplifier and loudspeaker cabinet, or to use a harmonic generation plug-in. So here is my third example, which is played through a tiny 5-watt Fender Champion 600 amplifier and miked with a Shure SM57. There is no EQ, compression or any other kind of processing. The frequency balance is different to the previous examples due to the voicing of the amp and its speaker, and there's a bit of a buzz from the speaker but surprisingly not so much on the lower notes. My fourth example uses an amp emulation plug-in instead.
Clearly the winner is either of the two last examples. The amp emulation is cleaner, but the real amp has more of an organic texture. In the end, it's down to personal preference.
Electric and bass guitar need harmonic generation (distortion) to sound at their best. Simple clipping usually doesn't sound good, so an amplifier or amp emulation plug-in should be used.
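As a minimal sketch of what a harmonic generation plug-in does - and only the core waveshaping idea; real amp emulations add filtering and speaker modelling on top - a tanh soft clipper rounds the peaks rather than flattening them:

```python
import numpy as np

# A minimal waveshaper sketch: tanh bends the transfer curve
# gradually, so peaks are rounded instead of flattened, giving a
# softer blend of harmonics than hard clipping. The 'drive' value
# is an arbitrary choice for illustration.
def soft_clip(x: np.ndarray, drive: float = 3.0) -> np.ndarray:
    return np.tanh(drive * x) / np.tanh(drive)   # normalized to +/-1

sr = 48000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 110 * t)
shaped = soft_clip(tone)
```

Because the curve bends gradually, the harmonics fade in and out with playing dynamics instead of switching on abruptly, which is much of why soft clipping sounds less harsh than the simple clipping above.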
These are, as I said earlier, both very basic issues. But we all had to learn the basics once and it is useful to be reminded of them from time to time.
Just to finish off I thought it would be nice to hear a real-life tech-metal band. It will give your ears a good clean out...