16-bit vs. 24-bit - Less noise or more detail?
Comments on this video
Just watched a video by an audio engineer for a record company. He stated that the only difference between 16-bit and 24-bit is dynamic range and you would have to hear the music at insanely high volumes to notice any difference.
In the modern world of MP3, the loudness war, and music played on smartphones, I consider myself an audiophile, but 16/44.1kHz is enough for me, because the background noise present on master recordings is always higher than the noise floor of 16-bit. Then why should I bother with 24/48 or 24/96kHz tracks, sacrificing more storage space? Not to mention that it is already difficult to find records that correctly exploit the entire 16-bit range. Maybe the difference can be heard on 24-bit files of some classical or jazz music, but they are not my genre.
16 bits are adequate to make noise-free recordings. 24 bits gives us ‘slack’. For example, an engineer on a live date doesn’t get the opportunity to set gain before the converter and notes the channel is peaking at minus 30. No problem!! Feeding a 24 bit converter at -30 or even -40 is perfectly valid. Just leave it and enjoy the song. 16 bit would be deep into the noise at -40, completely unacceptable. That’s pretty much it.
Detail doesn’t really enter the equation as long as you are not in the noise. A perfectly set 14 bit converter captures the same detail as a 24 bit converter because we can’t hear more than 85dB of dynamic range on a single channel contributing to a stereo mix. This isn’t just theory, this is memory. I recall those days when success was tracking at -2. It was stressful and resulted in overloads nobody needed.
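The bit-depth arithmetic behind claims like this can be sanity-checked with the standard ideal-quantizer formula. A quick sketch, not a measurement; the 6.02 dB/bit figure assumes a full-scale sine wave and no dither:

```python
# Theoretical SNR of an ideal N-bit quantizer for a full-scale sine:
# SNR = 6.02 * N + 1.76 dB (the standard textbook approximation).
def dynamic_range_db(bits: int) -> float:
    return 6.02 * bits + 1.76

for bits in (14, 16, 24):
    print(f"{bits} bits ~ {dynamic_range_db(bits):.1f} dB")
# 14 bits works out to about 86 dB, which is why a well-set 14-bit
# converter already covers the ~85 dB mentioned in the comment above.
```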
Cactus sorbet sounds wonderful
Yes, and that prickly aftertaste.
This was fantastic when you consider that the audiophiles will in the same breath say that vinyl sounds far better than 24-bit audio, so about half the detail of their 24-bit sound file. And they will say FLAC sounds worse than WAV as there is some form of data compression in how it's encoded, but that's a separate comment. Glad we didn't take into consideration the noise generated from not using medical-grade electric plugs, and how we skimped out with a consumer USB cable to the DAC we bought off Amazon.
ah yes the master baiter
Audiophile here. I hate audiophiles. (By the way, it's both.)
If we stay in the analogue domain first, before addressing the video's question, then let's first wonder what "bandwidth" is needed for a 440Hz music note. That's a loaded question, though. Play that central A note on a piano, violin, flute, clarinet, saxophone, etc. Each time it is 440Hz, right? But none of these have perfect sines as wave shape. So there is information on the 440Hz that we can analyse so as to recognize the instrument (voice, kind of sound), and we can place it somewhere in space.
But recognising individual voices/instruments must be learned, and that takes many learning moments. If you never went to a live classical concert, then you don't know what is wrong with a recording of one (if there is something wrong).
Imagine the clarinet produces a block wave and the flute a sine wave, but a cello has a sine wave with a couple of small spikes in it.
Here we have left the 440Hz, as each of those deviations is in a much higher frequency range.
Our ears are physically limited to about 20kHz, when measured in sine waves, but that is a coarse abstraction.
Whatever the point of the 20 kHz, in electronic recording and playback the question is not what we can hear (assuming an exceptionally well trained listener) but what bandwidth is needed in an amplifier, crossover filter or speaker to play a perfect 440Hz block wave, or to reproduce the two tiny spikes that make a cello sound like a cello, with the spikes in the right place and of the right form. Because of phase error: if the spike shifts, it's no longer a cello we hear, and if the shape distorts, it's no longer a cello either.
And the same applies to aunt Lucy's voice, or uncle Stephen's.
I saw a pianist in a Steinway grand piano storage room of a concert building, with 5 model D grand pianos. She had to select the one that best matched her music in character and best matched her touch in its action. She played all five and said about number 2, "I played this one last year".
I saw a blind person in a new large room who asked, "Are there windows there?", pointing at windows, and "Is there an open door over there?", pointing at one.
The auditive brain is much faster than the visual one, and it relies on wave-shape processing in the acoustic band relevant to human survival. Electrical engineers underestimate how good it is. Listeners accept electronically mixed recordings that have left-right "micro-dynamics" but conflicting phase information. This all goes back to wave shape. If you have ever seen how an amplifier plays back a block wave at 20 kHz, you may be aware that you have never actually seen a block wave, only spiked versions with over/undershoot, or sawtooth, or sine approximations based on a perfect block wave input.
My pre-amplifier has a bandwidth of 1MHz. The power amp "only" 150kHz, with a peak current delivery capability of 60A per channel.
An electrical engineer once responded to my reasoning about wave shape that Fourier analysis could create a block wave with only five sine waves. Well, you need five half sines, each of a different frequency, to fill the positive block, and there is no amplifier in the world that can do that, I guess. And what signal source does the Fourier transformation?
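For what it's worth, the engineer's five-sine-wave point can be tried numerically. A square wave's Fourier series uses odd harmonics only; summing the first five gives a recognisable but rippled block (the Gibbs overshoot), not a perfect one. A sketch, with the 440 Hz fundamental assumed:

```python
import math

def square_approx(t: float, n_harmonics: int = 5, f: float = 440.0) -> float:
    """Sum of the first n odd harmonics of a square wave's Fourier series."""
    total = 0.0
    for k in range(n_harmonics):
        n = 2 * k + 1  # odd harmonics: 1, 3, 5, 7, 9
        total += math.sin(2 * math.pi * n * f * t) / n
    return 4.0 / math.pi * total

# At the centre of the positive half-cycle the ideal value is 1.0,
# but five harmonics give about 1.063: close, with visible ripple.
print(round(square_approx(1 / (4 * 440.0)), 3))
```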
How many bits do I need? What sample rate?
Let's go to the bandwidth requirement first. You tell me.
Oh, a big problem in life and science is that people think something is not true when they cannot imagine it to be true.
Remember the geocentric model of the cosmos?
The problems are all over the place and humans are very capable of filtering the message out of the noise, but that doesn't mean it sounds nice or correct. If we throw speakers into the debate then it becomes hopeless almost. With lousy amplifiers and loudspeakers we do not need a lot.
Good content, and Audio Phil had me laughing really hard. Well done, good sir!
I feel so sorry for audio plebs. I as an audiophile have the luxury of hearing the musician's thoughts as they play. Their thoughts are a little on the warm side, but sometimes they can be crispy or brittle but always warm.
I love your perspective on audiophiles
32bit is next!
Definitely less distortion with 24 bits. Listen to the pianissimo passages on a 16bit classical CD with the highest gain your system is capable of. If you can't hear the distortions, then you need a new set of ears. Why is that so? Because pianissimo gives us at most 8-11 bits of actual resolution and that's just not enough for high quality audio. So why does it usually not matter? Because the distortion on that pianissimo falls below the hearing threshold of the human ear. The only way one can actually perceive it is by amplifying it. In other words: 16bit PCM is plenty good enough AT ONE GAIN setting. It is not nearly good enough for professional use with high dynamic range signals.
@@nicksterj There is no resolution limit if you are oversampling, which you aren't with a 44.1kHz sampling rate. You need to read the fine print of the theory.
@@nicksterj I don't need you to prove to me again that you don't understand the theory. I got it the first time around. :-)
@@nicksterj I never believe bullshit put out by random people without sufficient knowledge. You simply don't know what you don't know and, what's worse, you don't want to learn because that would hurt your ego. ;-)
Dither is noise. Sure, you can hear noise when you increase the volume enough. But this does not mean that there is a person on earth who can hear (or feel) that noise. No one can hear the rustling of leaves in the distance while a Boeing is taking off nearby. In the end, the blood in your ears produces more noise than the flipping of the bits during dithering.
That is only true as long as you don't adjust the dynamic range of the signal. It's not true any longer if you apply any form of post-gain or dynamic compression.
If the music is good then you only need 16 bits 🙃
I don't think I'd be able to tell the difference when I listen to music. On the other hand, when I play (digital) piano I'm using an acoustic simulation program named PianoTeq that uses high resolution (32 bits, 92000 Hz), which I listen to through a 24/96 audio interface and headphones. When playing, it's important to forget that your instrument isn't real, so any realistic detail has its importance, be it the tiny reverberation of the sound once you've released the keys, or sympathetic vibrations of loose strings (PianoTeq allows you to adjust the wear of your piano to make it sound a bit less brilliant and a bit more convincing, so there may be loose strings). I should try to downsample a recording and see what it does to those tiny details that have no importance for the listener but have some for the performer. Maybe there is a placebo effect or something.
I've just recently discovered your channel within the last week and after several of your videos, I'm really enjoying the don't-take-it-so-seriously attitude you bring to your posts. Dispassionately examining the rhetoric is very helpful in dialing back the outrage on a topic like 16-bit vs. 24-bit. Even if I can hear the difference in bit rate, I'm more likely to hear glaring differences like room noise or poor mic placement, even when I'm listening analytically. Not to say there isn't a difference, but as you point out, there are some differences that don't make that much difference to the end user. But besides that, no single parameter on its own (bit rate, sampling rate, noise floor, etc.) is the silver bullet that provides the end-all definition of what makes a good recording. (To be fair, the misuse and abuse of some parameters can indeed yield a definitively bad recording, but that's not the same as the hair-splitting between 'near-perfect' and 'nearer-than-that-to-perfect'.)
That said, from time to time I still listen to -- and enjoy -- CD's, LP's (to a lesser extent 45 and 78 RPM singles), cassettes, open reel tape and even 8-track cartridges. I call it the 'fidelity-to-what?' principle. Music is such a subjective topic when it comes to nailing down what makes it enjoyable for you, and it's mainly an emotional aspiration rather than intellectual. So if I'm willing, perhaps even prefer, to listen to a cassette from the 1980's or a cartridge from the 1970's, I may not be in pursuit at that moment of 'absolute fidelity to the original sound', I may be seeking the same feeling I felt the first time I heard that song, and remembering the people I was in company with at the time. Thinking of the car I drove or the place I lived. Or in the case of media that predates my own life, experiencing the sound the way contemporaries of that technology did. Fidelity to a memory or emotion, if you will.
For example, I remember hearing certain songs on CD in the 1980's that didn't sound quite 'right' to me, because they didn't match my memory of hearing them the first time on AM radio in the early 1970's. Expanding from mono to stereo, losing the noise and static, and widening the frequency range emphasized different sounds in the recordings, but often the speed was off, seemed slow. I later learned that some Top 40 stations had a habit of running their turntables slightly faster, perhaps 46 or 47 RPM, so the music would sound a bit livelier (and by gaining a few seconds per song, you might have another minute or two per hour of advertising time to sell). But the point is, better audio fidelity actually interfered a bit with my enjoyment of some songs until I understood what was going on, simply because they didn't match my emotional memory. (I expect this generation may come to feel that way about 128K MP3's.)
By the same token, as an audio engineer (and enthusiast), I'm not against creating and listening to the best we can possibly muster in the sonic arts; in fact, I think we should. Right now I work in a small studio that produces mainly podcasts and audiobooks, and I'm always looking for ways to eliminate audio distractions, especially noise. But I take the view that while we should obviously tend to the technical parameters, we should do so in service to the content we are recording without getting distracted into making the technology the main thing.
Yet, in all things, charity. Whatever format a person likes and what they are willing to spend on it is fine with me. If I had elephant bucks to spend, I would likely purchase more classic gear, because it's fun! But I will also admit that even then, the grail for me would not be ALL about the sound alone...some of it would be visual, tactile, nostalgic, and again, emotional.
I couldn't agree more, excellent text
I do not disagree with your axioms. There will always be people who want a good quality and they are really passionate about it, so why would I wanna disappoint them?
Edit: But I do not agree that audio needs to be 24bit. My audio interface for example only supports 16bit, so I never listen to 24bit audio at home. That in turn also means I make music in 16bit. I'm fine with that for now. The music can sound great and if it was designed with this bitdepth, then it was also meant for that bitdepth. Only if someone converted a 24bit audio file to a 16bit one, while being able to listen to 24bit audio, I'd maybe feel like I might be missing out on something subtle
@@nicksterj It depends what you want to do with the signal. 16 bits is not primarily more noise. It's mostly more distortion for low amplitude signals. 16 bit is worth roughly 96dB of dynamic range. 8 bits are only 48dB, so if you are listening to signals that are -40dB or less on a 16bit system, then you are getting the distortions of a 8-9 bit system... and they are awful. We are talking less than telephone quality awful. This is usually masked by the low volume, but if you want to run audio with 16 bit quantization through a compressor, then you are in for a nasty surprise. When would you want to do this? A typical application would be extended sustain on guitar samples by using a digital compressor effect. Not much of a problem at 24bit, but a serious challenge at 16bit quantization.
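The "-40 dB on a 16-bit system behaves like an 8-9 bit system" claim above is easy to probe with a toy quantizer. This is a rough sketch (no dither, an arbitrary 997 Hz test tone), so treat the numbers as ballpark:

```python
import math

def snr_after_quantization(level_dbfs: float, bits: int,
                           n: int = 48000, freq: float = 997.0,
                           rate: int = 48000) -> float:
    """Quantize a sine at the given level with no dither and measure SNR."""
    amp = 10 ** (level_dbfs / 20)
    step = 2.0 / (2 ** bits)  # quantizer step size for a +/-1.0 range
    sig_pow = err_pow = 0.0
    for i in range(n):
        x = amp * math.sin(2 * math.pi * freq * i / rate)
        q = round(x / step) * step  # uniform mid-tread quantizer
        sig_pow += x * x
        err_pow += (q - x) ** 2
    return 10 * math.log10(sig_pow / err_pow)

# A -40 dBFS sine in a 16-bit system keeps only ~58 dB of SNR over
# the signal, i.e. roughly 9-10 bits of effective resolution.
print(round(snr_after_quantization(-40.0, 16)))
```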
@@nicksterj And I didn't argue that a dithered 16 bit system isn't useful as a distribution medium. It just doesn't work for much else in the audio chain. For that we need to do a lot better. I even gave you an example for which it does not suffice.
@@nicksterj "Detail" is not an engineering criterion. Harmonic distortion, in particular intermodulation distortion is. 8bit quantization has absolutely horrible IM components, so every time you get down to the -40-50dB level your 16bit system is going to hurt your signal audibly and not just a little. If you don't believe me... just try it.
I think comparing 16-bit to 24-bit on the dB level doesn't matter too much. The signal amplitude stays the same, but you basically get more steps of amplitude with which to recover the original signal. With a lower bit depth you have to add additional noise in the form of dither so that you don't notice the difference; good example comparisons can also be found in pictures.
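The classic demonstration of what dither buys you: a tone whose peak is below one 16-bit step simply vanishes without dither, but survives (buried in noise) with TPDF dither. A small sketch with made-up signal values:

```python
import math
import random

random.seed(1)
LSB = 2.0 / 2 ** 16  # one 16-bit quantizer step for a +/-1.0 range

def quantize(x: float, dither: bool = False) -> float:
    if dither:
        # TPDF dither: difference of two uniforms, spanning +/-1 LSB
        x += (random.random() - random.random()) * LSB
    return round(x / LSB) * LSB

# A sine with a peak of only 0.4 LSB: below the smallest 16-bit step.
sig = [0.4 * LSB * math.sin(2 * math.pi * i / 64) for i in range(64)]

plain = [quantize(x) for x in sig]
print(all(v == 0.0 for v in plain))  # True: undithered, the tone is erased

# With dither, averaging many passes recovers the tone from the noise.
avg = [sum(quantize(x, True) for _ in range(2000)) / 2000 for x in sig]
print(sum(a * s for a, s in zip(avg, sig)) > 0)  # True: output tracks the sine
```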
Devil is in the detail
I will probably have to run some more experiments, but it does not sound the same to me. I just downscaled some Led Zeppelin. Not a big difference, but clearly smoother with finer definition on 24-bit. And I listen at low volume with a notebook fan in front of me. IMO it all depends very much on the listening environment and loudness too. The louder you play, the more detail those extra bits shall reveal. As to environment, the quieter it is, the more we can focus on musical detail. Last but not least, the oversampling magic which is always there is designed to trick our hearing somehow too... That should not be forgotten. I am a professional musician, so my ears are long trained, but by no means can I relate to those "pros" stating that they cannot distinguish between 16 and 24 bits.
@@nicksterj That is simply not true. A good microphone has close to 120dB dynamic range. Very expensive condenser microphones are specified with 132dB. That's equivalent to up to 22-23bits. This gets even worse when you are producing multi-channel recordings, because now the noise averages out, but the signals may not, so you can actually get more dynamic range out of e.g. a multi-microphone orchestral recording than out of a single channel.
@@nicksterj I have an audio system on my table right now that has close to 120dB SNR. I don't know what you are talking about and neither do you, it seems.
If you actually read my posts then you will have noticed that I said that 16bits are enough AT CONSTANT GAIN. The production process is the opposite of constant gain and that's where you absolutely need higher resolution.
That I don't turn the volume knob while listening to classical music is not true, either. Do you know why? Because I have a noisy apartment. My local noise floor drowns out the pianissimo, hence I have to raise the level during those passages to hear the music and that is when 16bit artifacts become quite audible.
@@nicksterj If you are recording in 24 bit and you are producing in 24 bit, then why in the world would you ship in 16 bit??? I don't get the logic here. Is there some sort of pandemic of the inferior file format virus going on? Did people miss that one can get 512GByte USB sticks these days? What is wrong with all of you? ;-)
Let me repeat this, again. 16 bit has horrible and audible distortion during pianissimo passages. If you can't hear that because you have a stereo system where the volume control knob has been welded to the front panel, then maybe you shouldn't be in a discussion about audio formats. ;-)
@@nicksterj It's not my problem that Fauxtaku doesn't know how to design audio electronics correctly. I do. My systems are close to physics limits. I am not even using anything special. It's just the manufacturer's ADC and DAC test boards and a few home made amplifier stages with properly chosen low noise opamps. Stuff that every electronics design engineer should be able to do in his or her sleep. This ain't amateur hour over here. ;-)
@@nicksterj I gave everybody the evidence. Get a recording with pianissimo passages and turn up the volume. It's as simple an experiment as one can do and it works. At this point you are just like an obstinate kid who doesn't want to eat his dinner. ;-)
I've just moved from a 24-bit recording interface, to one that does 32-bit floating point, and my DAW software supports that. I haven't had it long, so I've only done some preliminary testing. Aside from the convenience of having so much dynamic range that you can essentially set your mic levels after the fact, there is a bit of an audible difference. I honestly wasn't expecting to hear a difference, but when changing only the interface, the 32-bit recordings sound more 3-dimensional, more dynamic, and like I can hear farther into the mix. It's a subtle difference, but enough that I noticed it when I wasn't listening critically at all.
it could be that your new interface has a better A/D or D/A conversion, which would mean it's not the bitdepth
@@BeatsbastelnI moved from Apogee to MOTU. Both interfaces have the same DAC chip. The difference in the AD stage is a 24-bit AKM chip in the Apogee vs a 32-bit floating point ESS chip in the MOTU. Both units have virtually identical THD, noise, and dynamic range specs on the input.
32 bit floating point is only 24bit mantissa... so if you think that you are doing yourself a favor there, then you weren't paying attention in your numerical methods class in university. You would be infinitely better off with proper use of a 32bit integer library for signal processing.
@@lepidoptera9337 Thankfully, I was a Computer Science major who paid attention. I’m glad to explain how 32-bit floats can describe lower-level signals, despite having a 24-bit mantissa. It’s because they can have negative exponents. So if the lowest-level signal that a 24-bit integer can describe is 1, a 32-bit float can describe a signal as low as 1.2x10^-38. A 32-bit float audio file has a theoretical dynamic range of -758 dBFS to +770 dBFS, and across that range, it can describe every amplitude with the same granularity that a 24-bit integer file can describe a dynamic range of just -144dBFS to 0 dBFS. This pushes the noise floor of a 32-bit float file off into complete irrelevance, and also means you can record at any volume level and set your mix levels later.
@@lepidoptera9337 Just to follow up on the integer vs float portion of your comment, with PCM audio, each bit of integer gets you 6 dB of dynamic range, so with 32-bit, you can have 192 dB of dynamic range if you format it as an integer, or 1528 dB of dynamic range if you format it as a float, both with the same resolution.
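The dB figures in the two comments above follow from 20·log10(2) ≈ 6.02 dB per bit, so the exact values come out a hair higher than the round 6 dB/bit numbers (193 vs 192 dB for 32-bit integer, about 1529 vs 1528 dB for the float32 span). A quick check, with the float32 limits taken as the usual largest finite (3.4×10^38) and smallest normal (1.18×10^-38) values:

```python
import math

def int_dynamic_range_db(bits: int) -> float:
    """Dynamic range of an N-bit integer format: 20*log10(2^N)."""
    return 20 * math.log10(2 ** bits)

print(round(int_dynamic_range_db(16)))  # 96
print(round(int_dynamic_range_db(24)))  # 144
print(round(int_dynamic_range_db(32)))  # 193

# float32: the 8-bit exponent slides the 24-bit mantissa window
# across the whole range from the smallest normal to the largest finite.
print(round(20 * math.log10(3.4e38 / 1.18e-38)))  # about 1529
```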
To me, using 24 bit for anything that was originally recorded on analogue tape is laughable: you are basically wasting 8 to 10 bits just to accurately sample the hiss of the master tape.
I clearly fall in the 99.9% of those who do not hear any difference between 16 and 24 bits, but my non-insulated, non-acoustically-treated listening room has a noise floor of roughly 40 dB(A) at 2 am (it gets worse in the daytime); assuming I do not want to destroy my eardrums (or be killed by my neighbours), there is no way I can get anywhere close to 96 dB above my ambient noise floor.
The hiss might be part of the recorded performance. I have a track recorded on tape that has a pause of several seconds. I tried making a version where this pause is digitally 'black'. Didn't sound right.
@@AudioMasterclass I'm considering 16 vs 24 only from the perspective of the final consumer; for any pro audio application 24 is undoubtedly the way to go. But when I see online "audiophile" streaming services offering a 24/96 stream of an album recorded in the 60's or 70's I really cannot see the point: of the (big) data stream that reaches your audio reproduction system, 30% is music, the rest is the "master tape" experience, i.e. all the noises, hiss and distortions (that you very clearly outlined in your analogue tape revival series) that are inherent to analogue magnetic tape.
You can't hear the difference because you don't know what you need to listen to. One can hear the difference very, very clearly during pianissimo passages. The real need for 24 bit arises if you want to post-process the signal. As soon as you are applying any kind of post-gain to low volume signals then 16 bit systems start sounding like a 1960s telephone line. You have to understand that 16bit was NEVER meant as a mastering format. It was a format for a distribution channel and as such it is "good enough".
24bit, Dolby C, DBX, and of course, an Eames Lounger for me!!! 😎
Genuine Eames or replica?
It always makes me laugh that most audiophiles are above the age of 50.
If you know anything about human hearing, you start losing your hearing range after the age of 50.
I would bet £100 this guy wouldn't be able to tell the difference between a 320 MP3 and a WAV put through a good DAC if he was blindfolded.
Care to take the test fella?
Your point seems to be that ageing audiophiles should downgrade their systems to match their degraded hearing. Thank you for that, I reckon it's going to make a very popular, and profitable, video for me.
I would partly agree with you because my tinnitus volume is now somewhere at the -22dB range. But since a 24bit ADC/DAC chip costs like one buck more than a 16bit chip... who gives a frell? I still have that buck. ;-)
I'm just hear for Audio Phil...
The sarcastic undertones 💀
We should all be recording in 32bit float. Digital noise is non-existent and with the right AD converters clips can be totally recovered.
I have my aged DAC with 100dB dynamic range, specified as SNR without THD. I turn it to maximum to get the maximum dynamic range (100dB), then I attenuate the output of the DAC by 30dB with an analogue potentiometer (a variable resistor) before passing it to my amp, which has more than 124dB full dynamic range (more than 99dB at 1W output). No question I don't hear my amp's noise from my listening position, since my speakers have a sensitivity of only 90dB (1W at 1m). 0dBA is the lowest sound a human can hear, while the amp's noise is 9dB below the 0dBA level, so it is 2.83 times quieter than the softest signal a human can hear. But I also don't hear my DAC's noise, since it is -100 - 30 = -130dB below the maximum signal my amp can produce, and -6dB (= -130dB + 124dB) below my amp and speakers' own noise level of -9dBA (-15dBA accumulated).
Now if I play my CD-DA player through my aged DAC, will I hear any noise? No, I will not, since the CD-DA dithering will be between the -120dB and -130dB level, way outside any human hearing capability. Do I need 24-bit music to lower the noise to the -150dB level? I don't think so, since the amp is still the main source of noise at the -9dBA level.
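Chains like the one described above are easier to check if you remember that independent noise sources add as powers, not as dB values. A sketch using the commenter's (hypothetical) -130 dB attenuated DAC noise and -124 dB amp noise figures:

```python
import math

def db_to_power(db: float) -> float:
    """Convert a dB power ratio to a linear power ratio."""
    return 10 ** (db / 10)

def power_to_db(p: float) -> float:
    """Convert a linear power ratio back to dB."""
    return 10 * math.log10(p)

# Combine independent noise sources by summing their powers.
noise_sources_db = [-130.0, -124.0]  # attenuated DAC noise, amp noise
total_db = power_to_db(sum(db_to_power(d) for d in noise_sources_db))
print(round(total_db, 1))  # -123.0: the amp noise dominates the total
```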
oh the shade! 🤣
A gauche, dilettante, opinionated twat holds forth on yootoob.
I can hear an audible difference between 16 bit CD's and 20 bit HDCD's on my home system and to my ear, 20 bit sounds slightly better.
Impressive with a dynamic resolution of 15-bit. That's all you can hear.
I have always recorded in 24 bit. However as a consumer format, 16 is good but it is so much easier to maximize the quality of 16bit if you start with 24 and then convert the final product to 16.
Are there any quality concerns when converting 16-bit tracks to 24-bit later on, or vice versa?
I find your videos very entertaining and informative. I'm the kind of idiot that would blow a load of money on high-end gear just because on paper a number is higher than another number so it has to be better, even if to my ears there's no discernible difference. Your videos keep me more grounded.
It’s an odd thing, but 24 bit recordings sound richer and fuller to me than 16 - rather like the difference between whole milk and skim milk. It’s not clear to me why this should be - I can only assume that slightly less noise and slightly more detail have a cumulative effect. 16 almost sounds ghostly compared to 24.
Whole milk all the way.
Also, consider that a very quiet room has a noise level of 20-30dB and you damage your ears if you listen louder than 80dB. That is 50-60dB of dynamic range. The 16-bit quantization noise will be more than 30dB below the ambient noise. Even without dither it can be very difficult to hear distortion 30-40dB below the noise level.
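That margin is simple arithmetic. A sketch with illustrative round numbers in the same spirit as the comment above (quiet room ~25 dB SPL, safe playback peak ~80 dB SPL, ideal 16-bit range ~96 dB):

```python
# All figures are illustrative round numbers, not measurements.
room_noise_spl = 25    # dB SPL, a very quiet room
peak_level_spl = 80    # dB SPL, a sensible playback peak
cd_dynamic_range = 96  # ideal 16-bit dynamic range in dB

quant_noise_spl = peak_level_spl - cd_dynamic_range
print(quant_noise_spl)                   # -16: below 0 dB SPL entirely
print(room_noise_spl - quant_noise_spl)  # 41: dB below the room noise
```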
Your channel would be so much better without all the ridicule for people who don't know better. Is a lot of stuff targeting "audiophile consumers" scammy? Sure. Do your videos help to convince them? I doubt it.
32-bit PCM Vs DSD
Surely there are much more important factors making a crappy recording crappy than it being in 16-bits instead of 24. Maybe focus should be elsewhere. Just because there exists some low-hanging fruit (using 24 bits instead of 16) does not necessarily make it important. My axiom #0: More bang is better than more bang-for-buck.
Have there been any properly blind tests where people could reliably identify a 24 bit musical signal against the same signal, but degraded to 16 bits and dithered? I would suspect not, but if there was, I'd be more likely to reconsider.
The difference is inaudible for all practical purposes. Redbook CD rules!!
Betty and Debbie the antidote to .....well you know what! :0)
For me it's all about headroom. This is important for the recording and production phases. I was given a raw 24bit live recording where the levels had been incorrectly set for the first few minutes and I needed to bring the levels up to match the rest of the recording ... I feared the worst whilst remembering trying such things in the old world of analogue recording where the previously inaudible noise floor would intrude unpleasantly. I therefore was blown away when I simply selected the low level section and brought it up to the rest of the recording using Audacity. To my ears it was audibly perfect with no noise floor issues at all! However, an end product distributed in a 16bit format (once all the production tweaks have been done) is fine for my ears.
Never heard the slightest difference between 16 and 24-bit. Nor did a younger audio engineer I played blind A/B comparisons to. Unlike what people have been told, 320kbps has audible differences, and with the right music you can tell one from the other. You can even pick 320kbps after it's been recorded onto cassette, because the same flaws get written onto the tape and don't go away, despite the higher noise floor and other imperfections. 24 makes a lot of sense for live mixing or mastering. But 16 is better than anything we can hear and plenty good enough for a final product.
There's a lot of music where too much information starts to detract from what could be a great song.
Back in the early digital days, I was at a friend's studio where he had a 12 bit sampler. I asked if that wasn't too poor quality to be used in making recordings. He told me (and it made sense to me) that any samples that were used would be inconsequential relative to the rest of the mix.
12-bit samples are noisy or grainy or both. But since this was all that was available at the time, producers accepted this and made music that worked with the 12-bit texture.
I've got an Eames replica (I can't afford an original since I spent all my money on cables) They're not as comfortable as they look
Thanks for this explanation. Here’s an anecdotal story of why 16 bits isn’t enough, and why, as a professional video editor, I always do my final mixing at 24 bits, even if the tracks I’m working with start at 16 bits.
A couple of decades ago, I put together a “reference” audiophile CD for an audiophile club. I ripped tracks from commercial CDs, assembled them, then burned a compilation CD. The copies sounded like the originals, but the live tracks had applause cut off, and levels of some tracks were too low. So I brought the ripped, uncompressed AIFF files into a Pro Tools project to fix the minor issues. I added fade-ins and fade-outs to the live tracks, and adjusted levels on a few tracks to make things more consistent. I spit the problem tracks out of Pro Tools at 16/44.1 and burned new CDs. I was proud of myself until I listened critically to the results on headphones. All the tracks that came out of Pro Tools sounded noticeably degraded. It was not subtle.
When doing final mixing on my video projects, I typically use EQ, noise reduction and compression/limiting. I can clearly hear my processing degrading the 16/48 tracks, which limits how far I can push the processing if working at 16 bits. At 24 bits, the same mix with the same processing sounds cleaner. To me, it’s not subtle. Would my clients care when listening to my deliverables? Probably not. But I care because I know what to listen for and can easily hear the difference.
Editing in 16 bits is sort of like editing with analog. Every operation degrades things. Maddening.
Is there a more annoying guy on YouTube?
I don't know. If you do then tell me and I'll take lessons.
@@AudioMasterclass - Maybe you could teach me?
I'm an audiophile purist. I only listen to music played back on my "Webster-Chicago Model 7 Wire Recorder". It's the only way to truly experience the musical message that the artist is trying to convey.
Audiophools think they can hear the difference - until they enter a double blind A/B test. I own a midlevel DAC, headphones amplifier, and headphones, but a person can take things so far that they aren't listening to the music, but instead they are listening to the equipment, and isn't that missing the whole point of listening to music in the first place?
The real problem is that the sample rate isn't high enough. 24-32bit is marketing bullshit.
All this wouldn't be a problem if music were still analogue!!
Loving the nice debates in the comments and proving that YouTube can be interesting on a number of different levels.
I'm sceptical about anything higher than 16/44.1 - the industry is using the higher numbers to make money out of gullible people. If people are hearing "an improvement" then it's either down to a mastering difference between the two versions, or they are possibly believing they are hearing an improvement because it's bigger numbers and the gear costs more - so it must be better! In a long-running experiment, someone has proved that there is no discernible difference between 16/44.1 and higher resolutions.
A roundabout way of saying that CD quality is all you'll ever need unless you believe in snake oil (Audiophile Oil). I concur.
The real factor is how a system is setup -- speaker positioning is crucial.
I seem to remember early CD recordings or players only supporting 14 bits because 16 bits was a little ahead of the technology. What I do remember is that the Magnavox player I bought in the mid-80's made music sound brittle. Thankfully it self-destructed after a couple years.
We have SONY to thank for raising the audio CD spec from 14 to 16 bit. Philips' original proposal for the CD spec was 14 bit. But by that time Philips was already too far advanced on the development/production of their single-channel 14-bit TDA1540 DAC. So early CD players based on the Philips TDA1540 DAC had to make do with the 14-bit DAC (+ 4 x oversampling thanks to the SAA7030 filter) for playing back 16-bit audio CDs. Being single channel it also needed to be multiplexed between the left and right channels. In 1985 Philips released their first 2-channel 16-bit Audio DAC on the market, the TDA1541, and carried forward the oversampling technology with its matching SAA7220 4 x oversampling filter.
One of the first consumer digital recorders, made by Technics/Panasonic (based on VHS tapes, no less), used 14 bits - which back in 1979/1980 was considered good enough. Sony pushed for 16 bits as it was a round number and exactly two bytes. As explained in the other comments, Philips released its first DAC, the TDA1540, as a 14-bit DAC. But thanks to the magic of noise shaping (it is not just oversampling) it could achieve true 16-bit resolution. This is the path that later led to the 1-bit DAC - what Philips did in 1985 still amazes me to this day, and I have a couple of CD players based on the TDA1540 and they do sound nice!
What about playing "cassette tapes" so much you can hear the other side of the tape playing "backwards".
You jest, and your jest is amusing.
Actually possible in theory, but it is not a tape fault; it is either extremely poor tape/head alignment (I don't mean azimuth) or a faulty/worn actuator for the rotating head in an auto-reverse mechanism. I think in theory a badly worn capstan bearing could also cause this by pulling the tape at a skewed angle through the head tape guides, enough for the adjacent (B-side) track to just touch the edge of the head gap, but that would go paired with a significant azimuth error and loss of most high frequencies.
Generally I do prefer a longer word length; 24-bit/48kHz is perfectly great, and the data, dynamics, processing and latency all just work.
Curiously, I've noticed that this 96k blah-blah trend coincides with the wider deployment of room acoustic dampening.
Bass traps are excellent, but too much high-frequency damping is not so good, as it can really mask ambient detail; realism is often provided by a sensibly tuned room.
So my theory is that as the resolution of the source goes up, the poor off-axis response of most loudspeakers becomes more noticeable in the reflective range, along with timing and image smearing and phase problems, and crazy room dampening is a band-aid (apart from bass traps, which are cool).
I guess I'm hoping loudspeaker tech will catch up with the source. DSP is pretty effective, but any great recording starts with a great source, and that needs great transducers.
This is anecdotal at best so take from it what you will. Years ago, we're talking Pro Tools 7 era, I was recording a group and they were going to get the tracks mastered elsewhere. We were about finished when they finally mentioned that their mastering guy wanted 24/48. I said, "Sorry guys, I recorded 16/44.1." This was followed by a little bit of grumbling from the mastering engineer. A few weeks later I got a call from said engineer saying, "This is the best 16/44.1 recording I've ever heard." To me that just came across as silly. I'm sure you've heard better. I'm recording in a project studio in a log cabin with very minimal gear. My only goals were the right mics in the right place and capturing the best performance. That same band came back to me for their next album as well.
Don't get me wrong here, I wouldn't argue that 16/44.1 is objectively better than 24/48 but to me the difference is minimal. Prior to that time period I was working with ADAT XT20 machines. I subscribe to the concept that if you record garbage at any bit depth and sample rate you'll still have garbage.
So, as I said, take from that what you will. I'm not saying that what I did was right/wrong or better/worse, only at the end of the day everyone was satisfied.
If you record multitrack in 16 bits you can mix to 24 and get 24-bit resolution because most or all of the tracks will be lower than 0 dB on the fader. Also reverb tails.
@AudioMasterclass Pardon me if the following question seems a bit naive, I know what sounds good but I don't really worry about the technical side of things that much.
So, you're saying that I can multi-track in 16/44.1 but bounce out a 24/48 stereo file, assuming I'm not clipping the whole thing out? (which, of course, I would not) What benefit, if any, would that provide? Just a little bit of extra wiggle room for the mastering engineer?
The benefit of 16-bit during production is that the computer doesn't work so hard, therefore you can have more tracks / more processes. That's the theory anyway but I doubt if many producers do this.
@@gabrielgodwin9953 When you apply effects there are math operations applied to samples, many involving multiplications (division is just multiplication by the reciprocal). These tend to produce values with fractions of a whole sample step. If the project is in 24 bits or higher, these fractions don't need to be rounded off (rounding produces distortion). When the 16-bit mix is made, proper dither will be applied. As mentioned, if you take a 16-bit track and halve the volume, you lose a bit if you stay in 16 bits, but in 24 bits you keep the full resolution. Or consider reducing to 75% volume. You can't reduce 65536 possible sample values to 49152 values without merging some pairs into the same output value. Distortion.
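The 65536-to-49152 merging described above is easy to verify numerically. This is a small numpy sketch of my own (not from the video): scale every possible 16-bit sample value to 75% and count how many distinct output values survive when you round back to 16-bit integers.

```python
import numpy as np

# All 65536 possible 16-bit sample values
samples = np.arange(-32768, 32768, dtype=np.int32)

# Reduce volume to 75% and round back to 16-bit integer steps.
# In a 24-bit (or float) project this rounding wouldn't be needed,
# so the full resolution would be preserved.
scaled = np.round(samples * 0.75).astype(np.int32)

print(len(np.unique(samples)))  # 65536 distinct inputs
print(len(np.unique(scaled)))   # 49152 distinct outputs: pairs have merged
```

Since consecutive inputs land only 0.75 of a step apart, some pairs of inputs must round to the same output, which is exactly the rounding distortion the comment describes.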
This reminds me of the issue of recording on a 4 track cassette and later recording digitally. The analog cassette buried a great multitude of sins and the digital put a glaring spotlight on the same. The same guitar player who objectively played better over that 20 year span was taken aback by how clearly digital exposed every mistake no matter how seemingly minor sounding during recording. 😃
It’s true that analogue does seem more forgiving. I even sing better in tune on analogue, or at least seem to.
I think there may be another factor at play in why 24-bit is sometimes perceived as better: ISC (Inter Sample Clipping), or the lack of it. 24-bit usually goes along with a higher sampling frequency, at least 88.2. At 44.1, ISC does occur on tracks mastered too hot. At double or higher sampling frequencies, thus more samples, the DAW would show the signal exceeding 0 dBFS and thus one would lower the level, giving far less ISC, and the DAC's reconstruction filter can restore the signal as it should. Obviously, at 192kHz there is even less ISC (and at frequencies where it wouldn't bother anyone). Perhaps that is the better detail, the more precise effect one hears with higher-resolution recordings? Not the bit count, but the sampling frequency is what makes the difference. Just a thought...
Intersample peaks leading potentially to intersample clips are an issue. However because it is increasingly common to use true peak metering we can soon hopefully stop worrying.
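A quick numpy illustration of the intersample-peak issue discussed above (my own sketch, not from the video): a sine at fs/4 with a 45-degree phase offset has every stored sample at about 0.707 of the true waveform amplitude, so a sample-peak meter underreads by roughly 3 dB. Oversampling via spectrum zero-padding, which is roughly what a true-peak meter does internally, reveals the real peak.

```python
import numpy as np

fs, n = 44100, 1024
t = np.arange(n)

# Sine at fs/4 with a 45-degree phase offset: every stored sample lands
# at ~0.707 of the waveform's true amplitude between the samples.
x = np.sin(2 * np.pi * (fs / 4) * t / fs + np.pi / 4)
sample_peak = np.max(np.abs(x))          # ~0.707

# 8x oversampling by zero-padding the spectrum (ideal band-limited
# interpolation, similar to what a true-peak meter does).
X = np.fft.rfft(x)
X_pad = np.zeros(4 * n + 1, dtype=complex)
X_pad[:len(X)] = X
y = np.fft.irfft(X_pad) * 8
true_peak = np.max(np.abs(y))            # ~1.0

# If this track were normalised so the *samples* hit 0 dBFS, the
# reconstructed waveform would overshoot by about +3 dB: an intersample clip.
overshoot_db = 20 * np.log10(true_peak / sample_peak)
```

This is why a track whose sample values never exceed 0 dBFS can still clip the DAC's reconstruction filter, and why true-peak metering matters.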
IME different dither types are audibly different when converting 24-bit to 16-bit. These different dither types set different low-level masking of very low-level signal and are identifiable by their individual signature timbre, kind of similar to differing oversampling noise-shaping algorithms sounding different, e.g. Panasonic MASH and the Pioneer, Yamaha, Sony etc. equivalents.
If dither is to be implemented then great care must be taken to audition for the nature of the LSB error masking and its influence on the low level sounds and the overall sound.
IME low level sounds can end up with distinctly unnatural timbre and dynamics (according to dither type/settings) and once heard cannot be unheard haha.
So if I can't have 24bit then straight 16bit it is for me, the low level programme can sound slightly coarse but at least the dirt is natural sounding and correlates with the programme unlike pseudo random psychoacoustic eq shaping of the signal noise floor.
My 2c 🙂.
Natural-sounding digital dirt. You are clearly a true audiophile.
Haha, nah I am as far from the ordinary 'audiophile' as one could get.......40+ years repairing and manufacturing and using pro live and studio audio gear....plus plenty of 'audiophile' stuff.
So a lifetime of auditioning audio stuff of all types, a lifetime of learning and understanding audio truths and discriminating the not true.
Thanks for your great and informative show and your wry humour, please keep on bashing the real 'true audiophiles' haha.
I have tinnitus. 12 bits are quite enough for me!!!
Dolby C and chrome tapes made cassettes just about acceptable to me.
What happens when you play a 16-bit or even a 24-bit file on a device that doesn't support that depth - for example, playing a 24-bit file on a device that only supports 16-bit?
Any 24 bit DAC can play 16 bit files.
Where you run into trouble with some older DACs that don't do 24bits is that playing the file as 16 bits strips away the least significant 8 bits.
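The truncation described above is just a bit shift. A tiny illustrative sketch (the sample value is arbitrary, chosen only to show the effect):

```python
# A 24-bit sample truncated to 16 bits simply loses its bottom 8 bits.
s24 = 0x12345A          # an arbitrary 24-bit sample value
s16 = s24 >> 8          # keep only the 16 most significant bits
print(hex(s16))         # 0x1234 -- the trailing 0x5A byte is discarded
```

Those discarded bits sit far below -96 dBFS, so in practice the audible penalty of plain truncation is small, though without dither the error is correlated with the signal.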
I sometimes repurchase music in 24-bit format when it becomes available, and even though I definitely have the highly accurate equipment required, I perceive very little improvement (at best). Still I purchase 24-bit here and there, because (much as with 4K in the movie realm) I expect that careful remastering has taken place as part of the process, at least usually. The satisfaction of having the definitive version (I suppose that's a third dimension, David, besides noise and detail) of something that's worth having the definitive version of, is often tempting.
Chasing the definitive version is a quest in itself. I watch Parlogram https://www.youtube.com/@Parlogram so I can quest vicariously.
I use my ears and mark my CD albums from Red Book astonishing down to utter trash. Out of my 13,000 CD collection I would have well over 10% that are trash. Another 30% at least are tolerable and maybe a few thousand are INCREDIBLE!!!!! @@AudioMasterclass
A true revolution or musical change can happen only if we make speakers and headphones with different frequency responses, giving different output results. In the current situation, with low-quality, high-end and studio sound-reproducing devices, listeners don't listen critically like we do, and the context is different: just another track, or just another story told through sound/music. More Hz enables more accurate sound manipulation and processing, but the end result will be nothing... though if devices become higher quality then maybe we will hear a noticeable advantage. Cassettes were blurry for me compared to digital discs.
Nice story. Daft arguments. No real conclusions, except sustaining the "Some might hear the difference!" myths. "Yes, I can positively taste which side of that certain hill in France the grapes in this glass of wine came from!" In the real world many other parameters are more important, like the fact that really dynamic recordings are very rare today. Loudness rules.
For digital recordings, lower noise allows greater clarity and separation to be heard or perceived on a good system. The problem is that the sound is so good that you don't need to concentrate to hear it. On vinyl recordings, however, the very high noise level forces the brain to concentrate and strain to hear the detail behind the noise, and therefore the hearer is encouraged to listen to the music more actively and often perceives this as better sound quality even when it is not. Experiment: take a top-quality 24-bit recording and mix in some snap, crackle and pop from vinyl and it instantly sounds better to an audiophile. Odd, that.
SO LISTEN TO MUSIC, DON'T LOSE YOUR TIME TRYING TO REINVENT THE WHEEL😅😅😅😅😅😅
CONGRATULATIONS ON YOUR WORK. KEEP GOING THIS WAY.👌👌👌
YOU ARE AMONG THE FEW SOUND SPECIALISTS WHO EXIST IN THE ONLINE COMMUNITY.
GO IN THE CAVE AND STAY THERE.LETS THE QUALIFIED SOUND SPECIALIST TO PRONOUNCE IN THIS MATTER.
And they publish unprofessional opinions online.
BUT THERE WILL ALWAYS BE PEOPLE WHO PRETEND THEY ARE SOUND PROS WITH DIPLOMA DEGREES BUT IN FACT THEY SELL TOMATOES😅😅😅😅
CD IS ENOUGH.
16 BITS IS ENOUGH.
EVERYTHING BEGINS WITH RECORDING QUALITY. A LOT OF STUDIOS AREN'T HIGH QUALITY.
The hierarchy is: the quality of the music, then the quality of the performance, then the quality of the recording, then the quality of the audio system. Whichever one is inadequate first affects the end result.
A LOT OF THEM DON'T HAVE MONEY TO PAY FOR THE GAS FOR THEIR CAR BUT THEY WANT 5 ZEROS IN USD FOR THEIR SYSTEM😅😅😅😅😅😅😅😅😅
PEOPLE LOSE THE POINT OF SOUND. SIMPLY LISTEN TO MUSIC.
PEOPLE WOULD WANT 96 BITS IF IT WERE POSSIBLE. PEOPLE WILL NEVER BE PLEASED.😅😅😅😅😅😅😅
15 hours old now this video, and the comment section is both as entertaining and insightful as expected (as is your video of course, David! 😎).
But in this whole audiophile debate, including the idea that microphone placement is the most important aspect, isn't the really most important aspect overlooked?
THE MUSIC (and the people/band playing it!). I have some genuinely poor quality recordings of one of my favourite blues-rock bands jamming live on stage, with the bass mixed far too loud, the singer's microphone too soft, excessive microphone distortion, analogue clipping; you name it. But both the band and the audience are having a fantastic time, and the groove is unbelievable. Then I don't care about 16 vs 24 bit, or any of the recording flaws; I still very much enjoy listening to it, almost feeling it like I was there, despite all the technical flaws...
In some ways a poor quality recording can be a help; if I still thoroughly enjoy the music, then I know there is something inherently good about that music, and the musicians performing it.
Music that needs those extra bits to make that extra bit of "magic" happen (no pun intended), I have always found a little suspect. I do have some myself, but that music often seems to have more of an other-worldly feel, rather than the 'human' touch... (and yes, that's not just electronic music; there's some classical music like that as well).
Agreed, without the music what have we got? You can take away the super expensive gear and still have something to listen to, but if you take away the music what have you got left? - nothing but expensive electronics!
@@straymusictracksfromdavoro6510 I have the impression that some audiophiles need nothing more than a function generator and spectrum analyzer; music is optional.
But who am I to judge?; if it makes them happy...
@@nrezmerski Ironically, as far as encoding is concerned, digital (e.g. PCM) is one of few encodings that can perfectly encode a true square wave. It is the pre-sample-filtering, DA-conversion and analogue amplifier circuitry that poses the limitations.
However, that has little to do with the signal energy contained in the audible bandwidth of human hearing; they need a course in harmonic/Fourier signal analysis and theory.
@@nrezmerski Fair enough, my argument was a theoretical one, thinking about PCM purely as an encoding format for encoding stepped functions of a very limited range of specific frequencies, not as a sampling methodology for a wide range of waves; I didn't make that clear...
PCM as an encoding method in conjunction with an idealised sample-hold filter, as calculated by a computer, could generate a square-wave perfectly and would include all the higher-order harmonics of the encoded square wave. But that would only work for stepped waves and only for a limited range of step-timings (in the limit) perfectly coinciding with the PCM "sample/data" timings, not for any other arbitrary waves. But obviously such an idealised sample-hold filter doesn't exist in reality.
E.g. the 44.1kHz data sequence +1,-1,+1,-1,... in conjunction with an idealised perfect sample-hold filter will generate a 22.05kHz square wave with all the correct higher-order harmonics, but only if you don't implement any other filtering. (And this would be wholly unsuitable as an audio signal, possibly damaging audio components.)
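The sample-and-hold thought experiment above can be sketched in a couple of lines of numpy (a toy illustration of my own, with an arbitrary hold factor standing in for the "idealised" filter):

```python
import numpy as np

# The +1, -1, +1, -1, ... data stream at the sample rate
seq = np.tile([1.0, -1.0], 8)

# An idealised zero-order sample-and-hold: each value is held flat.
# Holding by a factor of 4 here just makes the square shape visible.
held = np.repeat(seq, 4)

# "held" is now a perfect square wave at half the original sample rate,
# flat tops and all: the harmonics live above the audio band.
```

Any real DAC's reconstruction filter would strip those harmonics away, which is the commenter's point: the perfect square exists only in this idealised, unfiltered picture.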
If you really want "the best", shouldn't you work with 32-bit floating point (or even 64-bit FP). Why would you need that dynamic range? That is, unless you record exploding supernovas.
32-bit float recording is good if you have the convertors for it. Following that it's certainly very convenient almost never having to worry about clipping. I said 'almost'. For mixing, again yes. For mastering - er, no. It would be a complete disaster, as I shall reveal in a forthcoming video.
Some of the best DACs can only achieve 21 bits if you are lucky, and a lot reach less, so 24 bits is lost anyway. The majority of recorded music doesn't even reach 16 bits once mixed and mastered. If you were to hear the noise from a 16-bit stored recording, your system would have to be pretty damn loud, probably beyond normal listening levels; also, your analogue components would have introduced more noise and distortion before it reaches you anyway.
Ummm ... no. The "21 bit" rating is for analog clarity and noise floor... It's another confusing spec cooked up by confused people to confuse other people.
A 24 bit digital sample always uses all 24 bits and a 16 bit sample always uses all 16 bits.
Well yes. Low level noises recorded right into the files.
But those lower 4 bits are at around -120 dBFS and would be totally inaudible anywhere outside an anechoic chamber. Certainly not audible in the average listening room.
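The numbers being traded in this thread follow from two standard formulas, sketched here for convenience: the theoretical SNR of an ideal N-bit quantizer with a full-scale sine (6.02N + 1.76 dB), and the level of a single LSB relative to full scale.

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical SNR of an ideal N-bit quantizer, full-scale sine."""
    return 6.02 * bits + 1.76

def lsb_dbfs(bits: int) -> float:
    """Level of one LSB relative to full scale."""
    return 20 * math.log10(2 ** -(bits - 1))

print(round(dynamic_range_db(16), 1))  # 98.1
print(round(dynamic_range_db(24), 1))  # 146.2
print(round(lsb_dbfs(16), 1))          # -90.3
print(round(lsb_dbfs(24), 1))          # -138.5
```

So everything carried by the bottom 8 bits of a 24-bit word lives below the 16-bit noise floor, consistent with the claim above that it is inaudible at any sane listening level.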
Any recommendations for a good upgrade for my tinnitus, so I will get a noise-free experience? ¯\(ツ)/¯
Is dither worse than the noise of the audience during a live performance?
I've heard that some audiophiles are turning to Atmos to reproduce the full all-round audience experience.
Nice video. I like your distinction between ‘enthusiast’ and ‘audiophile’. I’ve come to realise I’m the former as I don’t have the money to become the latter. I like being happy enjoying the music with my tech that is good enough. 16-bit 44.1kHz for me 😊
Why do we have to have a war on the the word "audiophile" ???
I know major recording engineers (real names) and they don't mind being referred to as "audiophile" because it sends a message that they are interested in doing quality work, whereas others in their trade are just in it for turnover and money, and that brought us the compression wars, otherwise known as the loudness wars. There is some really badly recorded stuff out there, or good stuff that was then wrecked in the mastering process. Trust me, I am speaking from experience.
I don't mind being called an "audiophile" and I have good ears and claim no more. Don't make war on words, make war on bad recordings and rubbish.
@@D800Lover Agree, I don't even know where this urge to attack the people comes from in the first place. Why not just leave them alone? Sure, you can tell them that they get ripped off, but why make fun of them all the time?
@@johannalvarsson9299 - I can be as critical and skeptic as anybody, just don't throw the baby out with the bathwater, right?
@@nrezmerski - And are you a music lover? What great sin to call yourself an "audiophile" because you like music to sound like real music in your own home? I say, keep it up!!! And audiophiles often spend within their budgets and still call themselves audiophiles.
@@nrezmerski - The reason people laugh at other people is bad behaviour. My mother thankfully taught me right. I am not rich; can't I describe myself as an audiophile without somebody making videos at my expense? That's pompous and clickbait and deposits in the bank. So I decided that deserves some pushback. Most audiophiles are not rich dudes.
dither is needed any time the bit depth is reduced to avoid quantization noise. combining multiple 16-bit pure digital signals into a single 16-bit output needs dithering if the bit depth is being reduced. I would love to have a listening environment available where -96dB against a max reference level (100dB SPL?) is even detectable; maybe I need to find a local audiophile who has this.
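The "dither any time the bit depth is reduced" rule is easy to demonstrate. This is a minimal sketch of my own using standard TPDF (triangular probability density function) dither of +/-1 LSB, applied to a very low-level sine before requantizing to 16 bits: without dither the rounding error is correlated with the signal (distortion), with dither it becomes benign noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def requantize(x: np.ndarray, bits_out: int, dither: bool = True) -> np.ndarray:
    """Reduce a float signal in [-1, 1) to bits_out quantization steps.
    TPDF dither (sum of two uniforms, +/-1 LSB) decorrelates the
    rounding error from the signal."""
    step = 2.0 ** -(bits_out - 1)    # size of one LSB
    if dither:
        d = (rng.uniform(-0.5, 0.5, x.shape) +
             rng.uniform(-0.5, 0.5, x.shape)) * step
    else:
        d = 0.0
    return np.round((x + d) / step) * step

# A 1 kHz sine only a few LSBs above the 16-bit floor: the worst case
# for undithered truncation.
t = np.arange(48000)
x = 1e-4 * np.sin(2 * np.pi * 1000 * t / 48000)
q_plain = requantize(x, 16, dither=False)   # stepped, signal-correlated error
q_dith = requantize(x, 16, dither=True)     # error is now noise-like
```

The dithered version carries slightly more total error energy, but that error no longer follows the waveform, which is exactly the trade being discussed in the thread.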
from my experience, dolby C does a decent job on cassettes, and dolby S is comparable to noise levels on CDs (we'll ignore wow and flutter), but I'll still take dolby B over nothing at all, especially on type I tapes. I blame the current unavailability of new competent cassette decks and limited availability of used ones coupled with misplaced nostalgia for the sad state of no noise reduction at all on most new cassette releases. ("but it's analog!" yes, but my otari has a lot more hit points than your nakamichi.)
Is there ANYONE who can tell the difference in a blind test? I consider higher bit depths to be probably useful before mastering but 16-bit is perfect mastering for our senses. Maybe my cat can tell the difference but I don't really care.
What? You don't care about your cat?
@@AudioMasterclass The slippery slope of cat abuse starts with this and proceeds to removing them from my best listening chair after only a couple minutes.
Both Sony and Philips said they were going to improve the CD, including the sample rate... but CDs are no longer popular; the .mp3 put the kibosh on that.
Any audible differences between 24 and 16 bits are likely caused by the lowpass filter of the dac @44.1 kHz sampling rate, not by the difference in bit depth.
In this day and age, there's a lot of room correction, digital crossovers, surround sound or other processing, which is applied to the final recording. Not just during production. God knows what other things there will be in the future. 24 bit recordings provide an extra layer of "rounding" to make sure the 14 or 16 bits' payload will still get to us as well preserved as possible, even thirty years from now.
Those systems can take the 16-bit input and work in 24 bits from there on, including for output. This gives them the room for rounding etc. The only issue would be the dithering added to 16-bit, which would become noise that the system would also treat like signal. Once you add dithering you're basically saying that it's finalized and should only be sent straight to a DAC. It's putting assumptions about the output system into the recording.
If it looks like bullshit, sounds like bullshit, and quacks like bullshit then congratulations, those speaker and cable up grades can be yours for a bargain TV only special price.
But wait, if you're one of the first 6 million suckers , you can get double your purchase for free, but wait.....
I did listen to some cassettes, but I was too young to even have the idea of copying music.
This is a good example of why audiophiles are not worth pissing on. Here we get the truth.
I did see a video claiming DC caused transformers to buzz.
Watching the video, I just wanted to scream and throw the phone. He was on about fixing the humming and buzzing of the mains transformer by using power line purification. I left a comment pointing out the stupidity.
DC saturation of transformers. I'll throw that to the audiophiles and see what comes back.
Which sounds better, a dirty record, a dirty cd, or a dirty lover? Not just bits but naughty bits. So many variables, so little time. XOXO
You can comment on this video at YouTube