Adventures In Audio

Audiophiles - You're wasting your bits!

Comments on this video

You can comment on this video at YouTube

@AudioMasterclass:  I said in the video that I would pin a comment. I've read all of the comments so far and the consensus of those that seem to be by experts is that it won't work. This is fine but I think it's worth considering because if we understand why it won't work then we understand digital audio better. The impression I've formed so far is that if this could be done it would involve a lot of messing around with mapping and, when done, we'd be back exactly where we were before. Please add your own comment if you haven't already.

@bocaratonopera replies to @AudioMasterclass: Quantization of a band-limited signal does not directly change the waveform if we use more bits. What more bits do is simply lower the noise floor. You could argue that the result of different noise floors added to the signal does change the waveform, but the perceptual difference is in the noise floor level. Quantization does not simply trace a freeform waveform by joining the dots or drawing a staircase graphic from the dots (sample values). Reconstruction actually calculates and retrieves the only possible mathematical function that runs through those dots while containing only frequencies within the allowed bandwidth, and rebuilds that unique solution. That function, whether you code it at 8 bits, 16 bits or 24 bits, is the same. The only way you could get a different shape would be if you had aliases outside the band limits.
So using more bits to encode a certain, preferential dynamic range, even if it were possible, would bring no benefit, or it would only raise the noise floor.
A lower noise floor has benefits when using DSP for room correction, for instance, and when you stack up processing, which as time goes by seems to be more and more probable.
Cheers!
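
A quick way to see that point in numbers, for anyone who wants to try it: quantize the same tone at 16 and 24 bits and measure the residual error. (A rough Python sketch, assuming NumPy is installed and using plain rounding as a stand-in for a real converter; the figures are illustrative, not from the video.)
------
import numpy as np

fs = 48000                                  # assumed sample rate for the sketch
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)      # a 1 kHz tone at half of full scale

def quantize(signal, bits):
    """Round to the nearest step of a +/-1.0 full-scale uniform quantizer."""
    step = 2.0 / (2 ** bits)
    return np.round(signal / step) * step

for bits in (16, 24):
    err = quantize(x, bits) - x
    print(bits, "bit, RMS error:", 20 * np.log10(np.sqrt(np.mean(err ** 2))), "dB re full scale")

# The reconstructed tone is the same in both cases; only the error floor moves,
# by roughly 8 bits x ~6 dB = ~48 dB between the 16-bit and 24-bit versions.
------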

@jackhastings9800 replies to @AudioMasterclass: You are wrong. Is this some type of Spanish Inquisition? You say to move BITS; that would be stealing bits. I only steal lupins! You turn bits from beautiful little bits into naughty bits. And we do not want to hear about Naughty Bits!
Dennis Moore

@jackhastings9800 replies to @AudioMasterclass: As an audiophile, you are wrong. We like big cans over small cans. DM

@Planardude:  High-bit audio may be a bit of a scam. Can you really hear a difference? Perhaps: I prefer FLAC files over the Spotify MP3, yet I fail every comparison test to differentiate. Whether this is placebo or an actual difference I can't comment. Back in the early 80s when I first embraced PCs I paid thousands of dollars for a 5 MB hard drive; now hard drives are measured in gigabytes and terabytes and are relatively cheap. I have the privilege of storing music at the highest resolutions possible. I use the highest-resolution FLAC files I can get. Can I hear the difference above CD quality? I strongly doubt it. Storage is now cheap, so why not use the best available now and see what the future holds.

@Chrisspru:  on decibels and bits:

decibels are a logarithmic scale: equal dB steps describe equal ratios, not equal amounts.

so at the top of the scale each step maps to a big absolute jump, while at the bottom each step maps to a tiny one. 90 to 93 dB is a far bigger absolute change than 0 to 3 dB.

so the lowest-level bits are important for smoothness.

the dB figure quoted for a bit depth is just the level at which quantization noise would become audible if a wider dynamic range were encoded. stretching those dynamics just makes the noise audible.


"using the 8 silent bits" is already done. it does not mean the music gets silent, but that the music is already 8 bits smoother in fine detail.

@mynameizmaineimis1880 replies to @Chrisspru: So do bits in PCM encode a value that corresponds to logarithmic decibels and not values directly proportional to the captured signal's strength?

@Chrisspru replies to @Chrisspru: @@mynameizmaineimis1880 Basically. Higher and higher bits encode a wider and wider range, with the small bits added allowing for very fine steps, the finest step being way below audible. Then dithering is applied, turning the volume steps into a truly smooth volume curve but adding noise related to the smallest volume step used. With 24 bits, and setting the max volume to the discomfort threshold, the noise floor is far into the inaudible and way below a regular "silent room" noise floor.
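
For reference, plain linear PCM (as used on CD and in WAV files) stores values directly proportional to signal amplitude, not decibel values; the familiar ~6 dB-per-bit rule is just the decibel value of a factor of two, and it is where the 96 dB and 144 dB figures quoted elsewhere in this thread come from. A back-of-envelope check in Python (ignoring the extra ~1.76 dB term for a full-scale sine):
------
import math

print("dB per bit:", 20 * math.log10(2))        # ~6.02
for bits in (16, 24):
    levels = 2 ** bits
    print(f"{bits}-bit: {levels:,} levels, ~{20 * math.log10(levels):.1f} dB range")

# 16-bit: 65,536 levels, ~96.3 dB range
# 24-bit: 16,777,216 levels, ~144.5 dB range
------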

@paulstubbs7678:  A better way is to keep the same data rate but reduce the sample size to 16 bits at a higher sample rate, resulting in fewer artefacts from the reconstruction filter.

@MartinMaynard:  Again you compare 16-bit against 24-bit. Why not 16-bit against 18-bit as a perceivable difference in hearing, and use the other 6 bits as recording overhead? I agree that if you were to listen to the music with a 'bit knob' you could adjust, you would hear big changes in quality as you turn it up, but not much more once you get to 17/18, though I believe you would certainly hear it on classical solos. By the way, we talk of Audiophiles with their expensive kit but you don't talk much about Recorderphiles paying silly prices for old-school analogue mics, desks, outboard devices and plugins!

@AlucardNoir:  I think the way 24 bit is used now makes both oversampling and dithering pointless - which is why it's used in the industry. Technically we could do what you're saying, but I don't know if that would help much industry-wise. I'm not talking about audiophiles and the companies that sell to them but about the actual music producers and studios out there. 16 bit wasn't selected by picking a ball from a box. There's math behind it, math that pretty much ensures that everything we can hear can be recorded in 16 bits. 24 bits was selected because working in 16 bits can degrade the signal. If we encoded just what we can hear in 24 bits, that wouldn't do much if anything for listeners, but it would have a devastating impact on the industry since they'd need to move to higher bit depths for mastering, which would require complete equipment changes.

That all being said, I'd rather the industry stopped re-encoding their masters to 16 bit. 16-bit files made from 24-bit seem a lot like cutting the crust off your bread when you make a sandwich. And I know, I know, I'm not going to hear what's left on the cutting room floor, but if space isn't an issue, why process something more than it needs to be?

@CraigHollabaugh:  Us old gray hairs understand this. Thanks for the video.

@ЛюбомирРусев-ф6в:  My view is we can't simply truncate / zeroize the 8 least significant bits of a 24-bit quantized signal for analysis and comparison. This will cause huge distortion. Instead, we should compare a full-scale signal evenly quantized with 65,536 steps (2^16) against a full-scale signal evenly quantized with 16,777,216 steps (2^24).

@TWEAKER01:  Another way of putting it: encoding what we "can't hear" prevents truncation errors accumulating at levels that we otherwise can hear.

@TWEAKER01:  Higher word lengths (bit depths) are all about minimizing your (cumulative) losses right through to the end result. And then, even within the noise of 16-bit dither, audio signal (i.e. music detail and depth) is preserved.
Some people mistake "resolution" for precision – the precision of best representing the audio after capture or DSP. Note: DSP also happens during playback, within any software player's volume control or EQ.

@paul-francislaw9774:  Great idea. As an AUDIOPHILE I would be so happy if it worked. 👂🦻

@MARTIN201199:  No matter what you say, hi-res audio with proper equipment sounds much better than regular CD quality.

@kennethvalbjoern:  All the sampling I've ever done was 16 bit, 44.1kHz, and nobody ever complained about noise or crunchy sound. So I'll continue that way.

@diegocanale1124:  At the end of the day what really matters is proper mastering. It's high compression that is killing music.

@AlanSwain-k7p:  This guy's presentation style is sensational. In another universe he is presenting Tomorrow's World on TV

@AudioMasterclass replies to @AlanSwain-k7p: Tomorrow's World - I watched every episode.

@nudebaboon4874 replies to @AlanSwain-k7p: @@AudioMasterclass Raymond Baxter, William Woolard. 👌

@carlitomelon4610:  MQA?

@AudioMasterclass replies to @carlitomelon4610: No thank you. Lenbrook Group, owner of MQA, seems to want to own audio. I resist.

@carlitomelon4610 replies to @carlitomelon4610: Acknowledged, but you seem to be touting a similar concept?
Personally HR Qobuz (more like 20 bit) does it for me. Sounds more spacious than CD, but who knows why?

@AudioMasterclass replies to @carlitomelon4610: @@carlitomelon4610 If by chance my idea were possible, then if it were proprietary I would dislike it as much as MQA. Anything that works by mystery can't be trusted.

@carlitomelon4610 replies to @carlitomelon4610: @@AudioMasterclass
We're missing the point of capitalism, no?
Ok, develop it and release it as open source?
Good man!
🎶

@nitram419:  I found the 24 bits resolution of a centre channel (in a multi-channel 5.1 flac) extremely useful !
The audio in the centre channel was mastered at least 16 dB lower than the main left/right channels.
So I raised the audio in that centre channel (using Audacity's 32-bit floating-point mode) back up to the level of the L & R mains, and re-exported the 5.1 FLAC as 24-bit. Through my five Tannoy Revolution Signature surround speakers it all sounded perfect. Summary: thanks to the high bit depth I successfully 'recovered' perfect audio without any audible distortion. Had I raised the centre channel from 16 bits, the result would probably have revealed audible 'grain'.

@myyoutubepremiumchannel:  I turn to Rainbow Books for my explanation. Thank you.

@phomchick:  16 bits gives us approximately 65,000 discrete steps between 0 dB and -96 dB. No one can hear the difference between step 32,768 and step 32,767. If we increased the loudness resolution by 16 million more steps, that would just give us 256 times more steps we couldn’t hear at the cost of more complexity and noise.

@randolfwerner2117:  Accepting 16-bit resolution as good enough basically means accepting that the smallest difference you can notice in e.g. a 2 V signal is ~0.000015 V. If you want more resolution, you just add bits in the usual way; 24-bit would give you ~0.00000006 V resolution. To me 24-bit is beneficial for mastering and 16-bit is good enough for transport. However, if you use any kind of equalization during playback for fixing room, loudspeaker or headphone issues, some extra headroom in your playback devices can help a little to avoid clipping or losing detail in the noise floor. So having perhaps ~18-bit resolution in your playback devices can make a difference when correctly equalizing your 16-bit signal.

On the other hand, some argue that if you want to reproduce the full dynamic range of a live event at home you need the same range in the entire chain. Sometimes 115 dB is mentioned as a reasonable limit, since it looks like even concerts of moderate loudness can reach this range (which sounds surprising to me; search for “Music: how loud is loud?” from AudioScienceReview on YouTube).
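
The step sizes quoted in several comments here are easy to check; they differ only in what is taken as full scale (peak, peak-to-peak, 1 V or 2 V). A short Python check, assuming a 2 V full-scale span purely for illustration:
------
span_volts = 2.0                       # assumed full-scale span, illustration only
for bits in (16, 24):
    print(f"{bits}-bit: {2 ** bits:,} steps of {span_volts / 2 ** bits:.3e} V")

# 16-bit: 65,536 steps of about 30.5 microvolts
# 24-bit: 16,777,216 steps of about 0.12 microvolts (~119 nV)
------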

@TrevorDodd-ev1sx:  I wonder what external influences can change the way you listen to music and the enjoyment you get from it.
I recently had Covid and my hearing was so sensitive that I couldn't listen to anything regardless of how I listened to it.
This led me to thinking whether mood or emotions were more important than the quality or equipment you are using.
Sorry that this isn't directly related to this video.

@bba935:  I'm with you on the same page. Ironically I do buy 24 bit (aka hi-res audio) FLAC files because some of them are remasters that are aimed at the audiophile crowd and are actually mastered better than many studio CD releases. It's not the case every time and I know that same master would sound the same at 16 bits, but that's not how they are selling it. One of the best examples of this is the High Def Tape Transfer release of Bill Evans Trio - At Shelly Manne-Hole. It's easily the best sounding version of that album. Again, I don't chalk that up to it being 24 bits.

@AudioMasterclass replies to @bba935: This is correct. The differences are down to what kind of messing about (or none) went on in the mastering studio.

@mariokrizan1400:  Really, do we listen beyond human possibility, everything inaudible that marketing sells us??? Probably not. And everything ends as always, in the quality of the original recording. After that, 16 Bits are enough for our human ability to hear. Around here we have a saying that says “short and to the point.” . . That was your comment. Wonderful 👏👏👏👏👏 Greetings 🙋‍♂️

@JohnnyFocal:  Are you feeling alright? It's about the first time you have talked sense and with authority. Like all systems it's only as good as the guy or girl using it. Good 16 bits used well is better than badly used 24 bits. A bad workman always blames his tools, and that applies here.

@Roosville1:  I'll just wade back in. I think some are taking bits = dB, and mixing power terms and voltage terms. For a voltage DAC there will be a reference voltage, e.g. say 2.5 V. Maximum output is then 2.5 V, minimum is 0 V (in reality this is where DAC non-linearity / offset hits and zero isn't zero).
You are just dividing the reference voltage by the step count, either 2^16 (65,536 steps, ~38 uV per step) or 2^24 (16,777,216 steps, ~150 nV per step). Now don't run off with all this additional resolution; there are problems, principally in realising this resolution once all the noise sources, component tolerances etc. are taken into account.

@geoff37s57:  Concentrate on speaker quality, set up and room acoustics. This will yield real audible effects and you can decide if it is actually better or just different. Pretending you can hear differences in Bit depth is the road to insanity and misery.

@JBlueVan:  Great so they screwed up 24bit

@richardmarkham8369:  Surely the 'bottom' eight bits are the least significant bits? If, for argument's sake, you have a 16-bit encoded signal and the maximum possible level is say 1 V (equivalent to a binary value of 65535), each step therefore represents about 15 uV, assuming linear decoding. If you then encode to 32 bits, you can resolve down to about 0.23 nV. 8 bits can resolve about 4 mV.

Big assumption that digital audio is encoded linearly. Maybe it's µ-law (mu-law) encoded?

The more bits, the better you can resolve small changes in amplitude. The more samples per second, the higher the frequencies you can resolve.

@glennlove461:  You lost me on the last bit

@AudioMasterclass replies to @glennlove461: Very amusing.

@theundertaker5963:  Shitting on Audiophiles is a mainstay of this channel, and one of the main reasons I come back to see, and hear, the many creative ways he does the aforementioned shitting 😂😂

@AudioMasterclass replies to @theundertaker5963: My interpretation would be 'gently teasing', as they often do to me.

@theundertaker5963 replies to @theundertaker5963: @@AudioMasterclass keep it up, I absolutely love it!

@martineyles:  What you described sounds like NICAM, but with 24 bits.

@AudioMasterclass replies to @martineyles: As I've mentioned in previous videos, for its time, I'm a fan of NICAM. It's kind of blown out of the water now by 32-bit float, but in its day it was a fantastic step forward for TV.

@fabiosantesarti4081:  Isn't the thing you are proposing in this video similar to the "bit mapping" procedure that Sony used when restoring and remastering old recordings for re-release?

@andrewbrazier9664 replies to @fabiosantesarti4081: Super Bit Mapping was sometimes referenced on CD album covers. I have a few, though I can't remember which artists.
🇬🇧

@shpater:  1) Your suggestion is already built into 24-bit audio, as follows:
If you take a full-swing (0 dB) 100 Hz 16-bit digitized sine wave and sum it with a -97 dB 10 kHz digitized signal, the 10 kHz signal will disappear, as it has no power to change the level of any bit of the digitized 100 Hz. However, if you digitize the 10 kHz at 24 bits and add it to the digitized 100 Hz signal at 24-bit resolution, then between each bit step of the 100 Hz digitized signal you will find a 10 kHz signal with a resolution of 8 bits.
2) On another of your videos I mentioned that I use 20-bit dither for a 24-bit recording and you wondered whether that would be necessary. The answer is that a good DAC output and a very good preamp input can have no more than 120 dB to 129 dB of SNR. This means that the "precious" lower bits of 24-bit content are masked by the input noise level of the analogue electronics, therefore a dither level of 20 bits provides a -120 dB modulated noise level which allows the lower bits to overcome the internal noise level of the equipment.
Thanks a lot for your videos and for bringing up this topic.
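
Point 1 is easy to demonstrate in a simplified form: a tone at about -97 dBFS is smaller than half of one 16-bit step, so plain rounding to 16 bits (no dither) removes it entirely, while at 24 bits roughly 8 bits' worth of its shape survives. A rough sketch, assuming NumPy, with the tone on its own rather than summed with a 0 dB signal:
------
import numpy as np

fs = 48000
t = np.arange(fs) / fs
tone = 10 ** (-97 / 20) * np.sin(2 * np.pi * 10000 * t)   # 10 kHz at -97 dBFS

def requantize(signal, bits):
    """Round to the nearest step of a +/-1.0 full-scale grid, no dither."""
    step = 2.0 / (2 ** bits)
    return np.round(signal / step) * step

print("peak after 16-bit rounding:", np.abs(requantize(tone, 16)).max())   # 0.0
print("peak after 24-bit rounding:", np.abs(requantize(tone, 24)).max())   # ~1.4e-5
# Dither applied before the 16-bit rounding would also preserve the tone,
# spread into the noise floor rather than lost.
------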

@AudioMasterclass replies to @shpater: I'm definitely a fan of dither for 16-bit. For 24-bit however there seems to be vastly more opinion that it's a waste of time and possibly makes things worse. I don't dither my 24-bit masters and I doubt if it's going to cost me any of my potential Spotify income.

@shpater replies to @shpater: @AudioMasterclass 
Nobody is going to find out you are not dithering your 24-bit. For capturing or archiving purposes there is no need at all. It's only needed for playback, and that can be done by the player's DSP. However, the best analogue amplifier will mask the lower bits with its input noise. Dither overcomes this. Is it audible? That's a question, not an answer.

@AudioMasterclass replies to @shpater: @@shpater That's funny. I pay my taxes so they're not going to catch me on that. Not dithering my 24-bit masters... I worry about that knock on the door.

@themetamorph:  Bona riah!

@phrtao:  I love your videos and you are usually spot on with everything you say, but I think you might be missing something with this one. As you correctly say, when you go from 16-bit samples to 24-bit samples the extra 8 bits multiply the number of possible output levels by 256. The extra levels subdivide what was the previous smallest increment (quantisation level) possible with 16 bits. What most people do not understand is that these extra levels effectively get added between EACH of the previous 65,536 levels. Obviously 2^24 (16,777,216) is a number 256 times bigger than 2^16 (65,536), but we are still talking about the same range of voltages on the output (about 2 volts). With 24 bits we now have 16,777,216 possible values rather than 65,536. The maximum value of the sample actually relates to the denominator of a fraction. There is a point at which this extra level of data will become inaudible, but it is not as cut and dried as you make out. Most modern D-to-A conversion actually uses Pulse Density Modulation rather than Pulse Code Modulation (this is what is sometimes known as a '1-bit DAC'). When using PDM the converter circuit is just a simple low-pass filter (yes, with PDM the analogue signal is actually present within the digital waveform, it's like magic!). When you add extra bits of resolution to PDM you dramatically increase the effective carrier frequency of the signal, so the filter used can be much more gentle (which some say is desirable).
I hope this helps to dispel some myths and confusion. After all, audiophiles are not that technical, which is why there is a healthy industry selling turntables costing tens of thousands of pounds (always room for a little audiophile shaming).

@drewwilson1477:  The real issue is that the ear is logarithmic and our digital universe is linear. If we had started with a playback DAC based on logarithmic steps rather than linear we would need far fewer bits and the bits would appear linear to the human ear. Now if only I could cure my tinnitus 12 bit music would sound fantastic. My noise floor is outrageously loud.

@aquaevitae:  24-bit is good for production purposes 'coz it gives more headroom to adjust the sound and forgives more than 16-bit if levels at the recording moment weren't optimal, but given the well-documented limitations of human hearing it's totally useless for consumption purposes. So for the listening experience 16-bit is all we need or ever will need.

@Gary_Hun:  What people need to understand is that there's a direct correlation between "da bits" depth and the sample count. If we increase one, the other must be increased accordingly to accomplish anything. Take a sound editing program, zoom in to see the signal sample by sample, and you will understand. If the samples are too many but the bits are insufficient, there will be samples of the exact same height one after another, rendering them unnecessary. So there's a need to add an extra bit. Or two. As many as it takes for all the samples to differ in height from each other.

@kobush18:  Nice 👍 💯

@melaniezette886:  The solution is to use floating-point 32 and 64 bits, already in use in DSP... No more overload, and variable resolution with very high resolution near the 0 dB reference level.
In use in field recorders, DAWs and media players. Once done, the output can be a perfectly good-enough 16 bits at its maximum potential.

@melaniezette886 replies to @melaniezette886: Floating-point numbers are the only way to vary resolution with signal level. I must say it's not easy for me to understand. Scientific-notation numbers are the way, so says Wikipedia.

@ClaytonMacleod:  Thank you for illustrating without a doubt that you do not understand digital sampling at all. Digital audio does not have a resolution. Bit depth determines the noise floor, not how detailed the audio is. “Digital Show & Tell” visually illustrates how you are incorrect, again. You do not know what you are talking about.

@AudioMasterclass replies to @ClaytonMacleod: Haha it's you again. You're funny and your comments make my day.

@ClaytonMacleod replies to @ClaytonMacleod: @@AudioMasterclass Read the other comments from other people saying exactly the same thing I’m saying. I wonder why that is…

@CardinaleAlessandro:  Hi. From what I've read, the rule of thumb that 1 bit represents 6 dB comes from the fact that every 6 dB the volume doubles, just as in binary every bit added doubles the maximum representable number. But if you encode in 16 bits a signal that has a maximum volume of... 80 dB, you are using all the combinations of 16 bits for just that max volume. So I believe, Mr David, that what you propose is already how things work in reality. Let me know if my thoughts have a fault. Thank you for your videos, I find them interesting and inspiring and I always learn something new. Also your speech is perfect and I understand almost everything, being from another country.

@Tyco072:  How could the signal arbitrarily decide to use only the highest 16 bits of the total 24 bits? All 24 bits are available for the signal. It depends only on the amplitude of the signal.
As the user "Douglas_Blake_579" wrote in his comment, 16-bit audio = 65,536 levels. In a 2-volt signal the steps are 0.00003 volts, or 30 microvolts. The same 2-volt signal at 24 bits would have 16,777,216 levels, where the steps are only about 0.12 microvolts. Therefore the waveform is reconstructed much more accurately in amplitude. Whether this difference is audible in comparison with the 16-bit steps, I can't tell at all.
At -1 dB it is very likely not audible, because 16 bits are good enough, but near the bottom, at -96 dB and lower, when the signal uses only the first few bits of the 16-bit format, the 24-bit signal still has plenty of resolution.
It is technically wrong to say that when increasing the bits the only advantage is in the noise floor.
If you sample music at only 8 bits, it's not only that the noise floor is higher. You clearly hear the effect of the coarser steps on the amplitude of the waveform. You clearly hear the missing resolution.
The question should be whether the finer step resolution at levels near -96 dB is worth having in the final distribution file. (And for which type of music.)

@Tyco072 replies to @Tyco072: @nicksterj Thank you. That comparison is also a very good point. I have only one CD so well mastered that at the end of the fadeout of a track the level drops so low that I can hear the quantization distortion just before the sound totally disappears into the almost-zero background noise. Probably it is a DDD CD without dithering, from 1994. But to hear that quantization distortion I have to use headphones and turn the volume knob of the amp up to maximum. Therefore I don't care about using dithering, because I never listen to any track at such a high volume level. I prefer not to use dithering, so as not to modify all the bits randomly. Furthermore, adding dithering multiple times when you edit the same track again later would add more noise (there is a video that shows it). So I prefer not to use dithering. But if the CD were 24-bit, that very slight distortion at the end of that track would not be audible. I use that CD to test the quality of sound cards.

@straymusictracksfromdavoro6510:  Yeah, all fine.......I've nothing further to add.

@jimhines5145:  I totally agree with your assessment. I think you don't believe that frequencies above 20 kHz affect harmonics in the audible range (or maybe it's the opposite). Back in the 70s, most turntable cartridges had a 30+ kHz top response. Today they are mostly all rated at 20-20k, but probably do better than that at the high frequencies. It's just become a standard, more or less. With 16-bit CDs we have a brick-wall filter at 20 kHz that cuts everything above that. This is why some CDs, maybe even many CDs, lose their warmth. Those frequencies we cannot hear interact with the ones we can, and the harmonics are where the magic happens.

@Tyco072 replies to @jimhines5145: No. This is still an open debate. If a CD sounds "cold" it has much more to do with the mastering, especially for the early CDs of the 1980's. And to judge whether a CD has lost the "warmth" you have to compare it with the original master sound, not with the vinyl. Vinyl adds warmth to the original sound, and it is a flaw, not a benefit. AM radio adds also warmth to the sound, but it is not more faithful to the original sound at all.

@jimhines5145 replies to @jimhines5145: @nicksterj There is nothing stopping you from testing this for yourself. It is a rather simple test and easy to do. You will notice a difference if your system is capable.

@Tyco072 replies to @jimhines5145: @@jimhines5145 I have already done many comparisons with professional headphones and very good equipment. I never heard more detail coming from a vinyl than from a CD. The myth of the contribution made by ultra-high frequencies above 20 kHz has not yet been scientifically and significantly demonstrated. I hear the heavy flaws of vinyl much more than the phantom contribution of those super-high frequencies. If you like vinyl more, it is a matter of taste, not quality.

@jimhines5145 replies to @jimhines5145: @@Tyco072 What I like about vinyl is that a completely different mastering process has to take place. Albums that are mixed really hot for CD would not survive vinyl mastering in their current form. In many cases it's pretty amazing how much better the vinyl release sounds compared with the CD release, due to having to reduce the dynamics, while making it much more dynamic in the end. Classical music would be an exception to this. Digital is very well suited to classical and some jazz as well. But with rock/pop, vinyl will usually be better for dynamics. Just my own opinion.

@Tyco072 replies to @jimhines5145: @@jimhines5145 I agree, but mastering is a completely different topic. It has nothing to do with the overall quality of the medium. The fact that vinyl and cassettes can't be pushed flat to 0 dB the way mastering engineers do for CDs and streaming doesn't make vinyl a better, or less obsolete, format than it is. The faults of vinyl are too bad and too macroscopic. People should concentrate on boycotting any music that is badly mastered, including on streaming; then the producers will change their mastering guidelines. The loudness war has destroyed music, not only the quality of CDs. Fortunately I like very little music made after the loudness war started, around 1994.

@ckturvey:  I think we should use those 8-bits and use a bit of Steganography to encode secret messages that only the highest resolving systems can discern. Only a true audiophile with the right audiophile gear can hear the message... :)

@hugueslecorre4893:  You would need to use an analogue expander before the ADC so that the 96 dB dynamic range fills the 24 bits, and compress back after the DAC to get back to 96 dB but with higher resolution.

@julianperry4856:  What I THINK you're proposing, in a roundabout way, is non-linear quantization, and I believe a lot of oversampling noise-shaper algorithms already do something like this: pushing the noise floor around into inaudible frequencies, then filtering it out. Increasing both bit depth and sample rate can move a goodly amount of the noise floor by this method.
From the excellent "Principles of Digital Audio" by Ken Pohlmann....
------
In a simple noise shaper, for example, 28-bit data words from the filter are rounded and dithered to create the most significant 16-bit words. The 12 least significant bits (the quantization error that is created) are delayed by one sampling period and subtracted from the next data word, as shown in Fig. 4.17A. The result is a shaped noise floor. The delayed quantization error, added to the next sample, reduces quantization error in the output signal; the quantization noise floor is decreased by about 7 dB in the audio band, as shown in Fig. 4.17B. As the input audio signal changes more rapidly, the effect of the error feedback decreases; thus quantization error increases with increasing audio frequency. For example, approaching 96 kHz, the error feedback comes back in phase with the input, and noise is maximized, as also shown in Fig. 4.17B. However, the out-of-band noise is high in frequency and thus less audible, and can be attenuated by the output filter. This trade-off of low in-band noise, at the expense of higher out-of-band noise, is inherent in noise shaping. Noise shaping is used, and must be used, in low-bit converters to yield a satisfactorily low in-band noise floor. Noise shaping is often used in conjunction with oversampling. When the audio signal is oversampled, the bandwidth is extended, creating more spectral space for the elevated high-frequency noise curve.
------
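
For anyone who wants to play with the structure that passage describes, a minimal first-order error-feedback loop looks roughly like this (a teaching sketch in Python with NumPy assumed, not any particular converter's implementation; real systems add dither before the rounding step):
------
import numpy as np

def noise_shape_to_16_bits(x):
    """First-order error feedback: requantize to 16-bit steps, carrying each
    sample's quantization error over to the next sample."""
    step = 2.0 / (2 ** 16)              # one 16-bit step on a +/-1.0 scale
    y = np.empty_like(x)
    err = 0.0
    for n, sample in enumerate(x):
        s = sample - err                # subtract the previous sample's error
        q = np.round(s / step) * step   # 16-bit rounding
        err = q - s                     # error carried to the next sample
        y[n] = q
    return y

# The requantization error ends up shaped as E(z) * (1 - z^-1): less noise in
# the audio band, more near the Nyquist frequency, as the quoted text describes.
------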

@GCKelloch:  I think the low-level resolution of a 24-bit file is also more linear than 16-bit. Try generating a 500 Hz sine wave at -65 dBFS in a 44.1k or 48k 16-bit and 24-bit file. Look at the resulting wave shapes. Not sure the difference is generally audible, but there might be situations where it could be. Possibly very low-level music passages, or the perceived "realism" of low-frequency acoustic instrument note harmonic relationships?
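
That experiment can be run without a DAW; a rough Python/NumPy sketch (plain rounding, no dither) that counts how many distinct code values each version of the -65 dBFS tone actually uses:
------
import numpy as np

fs = 48000
t = np.arange(fs) / fs
x = 10 ** (-65 / 20) * np.sin(2 * np.pi * 500 * t)     # 500 Hz at -65 dBFS

for bits in (16, 24):
    step = 2.0 / (2 ** bits)
    codes = np.round(x / step).astype(np.int64)
    print(f"{bits}-bit: {np.unique(codes).size} distinct levels, peak code {codes.max()}")

# The 16-bit version swings over only a few dozen levels (a visibly stepped shape);
# the 24-bit version over several thousand. With dither, the 16-bit error becomes
# benign noise rather than a change of shape.
------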

@bradwalker1259:  I think the conceptual issue is that, since our hearing is more-or-less logarithmic, we find measuring and discussing audio is best described by dB, which is logarithmic. Binary digital numbers are inherently linear. So trying to directly compare or force one to match the other results in "apples to oranges" logic. Once you get to the limit of human hearing (in dB), there's no point in more bits, if proper dithering was used to get to that bit-depth from the studio recording (proper dithering in this case means more-or-less triangular PDF dither).

@ziggystardust4627:  Maybe there’s a signal theory guy who could correct me, but what you’re describing sounds to me like you are encoding the same range (therefore the same noise floor) but simply doing it less efficiently. I don’t see where you pick up fidelity in doing this. 16 bits already encodes perfectly down to a -96 dB noise floor. It just seems to be a frivolous wasting of bits.

Whether you’re right or wrong, there’s going to be a signal theory person out there who’s going to respond with either, “you’re full of beans,” or, “You are an unmitigated genius!“

@NoName-sf3ew:  Even if you can’t hear the difference the volume is louder on higher quality recordings. That alone makes me want better quality.

@pablohrrg8677:  How do we differentiate between bigger range and bigger resolution?
When I hear a recording where a whispered voice is almost the same level as an opera singer at full voice, there are many bits wasted.
Something I don't remember hearing mentioned is dynamic range. Where do you put your "0dB"?

@donjohnstone3707:  Unfortunately there is too much incomprehensible gobbledigook in most of the comments, indicating that many commenters have little or no genuine expertise about the subject in question or the technical issues involved.

@Zickcermacity:  The only way to "use all the bits" is to use compressors and limiters to keep peak levels constantly at or within one half dB of 0dB Full Scale.

In fact, along with clients demanding their CD be the loudest yet, this "using all the bits" mentality helped to drive the loudness war.

That's why even some early CDs don't seem to have the depth and dynamics of the LP versions of those albums.

@teashea1:  nice video ---- quite interesting

@Roosville1:  I think the comment at 2:15 was the most significant: "and no dither". Noise shaping for me is what cemented 16-bit as a perfect-enough standard. Take a 16-bit recording, convert it to 24-bit, record a tone below -96 dB, then apply noise shaping in the conversion back to 16-bit. Remove the original recording, amplify the noise floor, and there in the residual white noise is that tone recorded below the 16-bit limit, extracted from the final 16-bit recording. Magic.

@merakrut replies to @Roosville1: A tone (sine wave), yes. Ambient sound from Music, no.

@jxtq27 replies to @Roosville1: @@merakrut Ambient sound/music absolutely! There are two tricks here that are concealing the magic. The first is that noise shaping implies oversampling, which allows us to reach below the bit depth. For example, we could embed a bandlimited signal on a CD with a bandwidth of 2kHz, which although it's not audiophile grade, is certainly sufficient to recognize music, understand speech, etc. At that bandwidth, a CD is oversampling at a rate of about 11x. Log2(11) is about 3.5, so we get better than 3 bits extra. At 6dB / bit, that means we get an "extra" 19.5 dB. Not too shabby. Of course, it's not free - we paid for it with noise at higher frequencies, so it doesn't violate Shannon. The other thing that's kind of glossed over here is the idea of "remove the original". Of course there's no way to do that in general without having the original.

@JoeDurnavich:  Are you maybe confusing "the lowest 8 bits" with "the values from 0 to 256" in the full scale of 0 to 16,777,215?

If you recorded audio within just the range of 0 to 256, it would be very low and inaudible. But I don't think that is the meaning of the lowest 8 bits in 24-bit audio. Those lowest 8 bits are used in the encoding of the entire range of the audio signal from the noise floor all the way up to full scale. They represent finer steps between the voltage levels.

Maybe think of it as (approximately) the rightmost three digits in a number that goes up to 16,777,215. The step from 16,777,211 to 16,777,212 is encoding up high near the peak of a sine wave, say, but the change in values is still in the least significant digits or bits.

@bradwalker1259 replies to @JoeDurnavich: The "bottom 8 bits" add 256 steps between each step of a 16-bit number, not just the bottom 256 codes.

@JoeDurnavich replies to @JoeDurnavich: @@bradwalker1259 Yes, that is a nice, succinct way of rephrasing my more rambling text (that others have brought up too, I see). The question I have is what part of the audio does he think is the inaudible part encoded by the bottom 8 bits? I'm thinking it's either the quietest portions of the music or it's the noise and distortion (or noise floor) from the quantization error.

In the past, I have tried to come up with the best way to think about what we can now hear when the bit depth is increased, but I have never been satisfied with anything. I have seen people liken increased bit depth to having more stories in a building (to illustrate a greater dynamic range), but that seems to not illustrate the concept of the finer resolution that results from the encoding steps being closer together.
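
That rephrasing can be made concrete with a little integer arithmetic: in an unsigned 0 to 16,777,215 view of 24-bit code values, the low byte is a fine offset between adjacent 16-bit levels wherever the sample sits in the range, including near the peak value mentioned above. A tiny hypothetical illustration in Python:
------
sample_24 = 16_777_211            # a 24-bit code near positive full scale

top_16 = sample_24 >> 8           # the nearest-below 16-bit level (65,535 here)
low_8  = sample_24 & 0xFF         # a 0-255 offset within that level (251 here)

print(top_16, low_8, top_16 * 256 + low_8 == sample_24)   # 65535 251 True
# The bottom byte adds 256 sub-steps between each pair of adjacent 16-bit levels,
# even for samples encoding the loudest part of the waveform.
------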

@ScottEvil:  What you've described sounds like the MQA CODEC.

@SteveWille replies to @ScottEvil: I was going to comment this, but you beat me to it. Cheers… 🍻

@paulstubbs7678 replies to @ScottEvil: MQA is lossy, so why go there.

@goldenears9748:  I like you. You come over like a Paul McCartney brother. Which is not a bad thing.

@dangerzone007:  24 bits gives higher resolution in audio processing. Keep 24-bits for the recording engineers and the mastering engineers. Consumers don't need anything more than 16 bits.

@AudioMasterclass replies to @dangerzone007: As you might guess, I disagree. If a master is made with 24 bits, then consumers should have the option to access all 24 bits. They might not need it, but they might want it.

@dangerzone007 replies to @dangerzone007: @@AudioMasterclass I'm a consumer and I don't want it.

@andrewbrazier9664 replies to @dangerzone007: ​@@dangerzone007🙃

@Roger_Gadd:  I believe that the assertion that the first 16 bits of 24-bit audio are identical to those of 16-bit audio is false. My understanding is that when converting from 24 to 16 bits, the correct process for getting the least significant bit of the 16-bit output is to round based on the 8 LSBs of the 24-bit data. If bit 17 is 1, then add 1 to bit 16 (which might also result in a change to one or more of the 15 more significant bits). If bit 17 is 0, then leave bit 16 as is. I might be mistaken because I haven't read this anywhere, but I am looking at it from a mathematical perspective.
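
That is how a careful conversion usually behaves: a straight truncation just drops the low byte, while rounding on the discarded byte can carry into the upper 16 bits, so the top 16 bits of a 24-bit file and a properly converted 16-bit file can differ by one code. A small sketch with unsigned codes (hypothetical values, Python):
------
def to_16_bit(code_24, mode="round"):
    """Convert an unsigned 24-bit code to 16 bits by truncation or rounding."""
    if mode == "truncate":
        return code_24 >> 8
    return min((code_24 + 128) >> 8, 0xFFFF)   # round half up, clamp at full scale

c = 0x0001FF   # low byte is 0xFF, so rounding carries into the upper bits
print(to_16_bit(c, "truncate"), to_16_bit(c, "round"))   # 1 2
# In practice dither is added before the rounding so the discarded detail
# becomes noise rather than truncation distortion.
------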

@johnwatrous3058:  I can't find those bits.

@rabit818:  Does that mean you can hear the tape hiss better?

@AudioMasterclass replies to @rabit818: Audiophiles might argue that the sound of tape is part of the sound of music.

@OfficerGaydept:  Noise isn’t necessarily bad. I can guarantee the gear used to produce and record the music audiophiles love to nitpick over has inherent noise that is considered a special quality, heard?

@AudioMasterclass replies to @OfficerGaydept: Some plugins that emulate analogue equipment also emulate the noise. Normally there's an option to turn it off.

@hugomottet1516:  Cool idea, but sadly the theory is flawed, as others mentioned... 24-bit is already doing what you proposed, capturing the signal with better precision, but due to the way digital signals work the only effect of adding resolution is to lower the noise floor, not "add" detail.

@SteveHuffer:  I think even audiophiles have given up on direct amplitude/resolution arguments and have settled on either 'resonance' (i.e. inaudible sounds affecting the audible range) or the idea that Hi-Res DACs can improve lower bit-rate files due to the ease with which they can process them (i.e. they are working less and therefore introducing less distortion).

@SteveHuffer:  An audio engineer on You Tube (can't remember which) said that 24Bits would be extremely good for the noise floor if you were listening to an amplifier putting out volume comparable to sticking your head next to a full-throttle jet engine.

@AudioMasterclass replies to @SteveHuffer: This is a good point. If the lowest possible level in 24-bit digital audio were set at 0 dB SPL then the highest level would be 144 dB SPL. You'd probably die.

@vietvooj:  The idea is not new. Search for A-law and μ-law. Both use a logarithmic scale for quantisation, but with 8 bits rather than 24.

@AudioMasterclass replies to @vietvooj: Also in samplers from the 1980s. Logarithmic compression had an interesting crunchy sound that some people liked.

@vietvooj replies to @vietvooj: @@AudioMasterclass I rethought it.
You want to take away 8 bits from the 24 bits and use them for more detail when needed.
But this means that you simply add the 8 bits to the 16 bits again, and then you have more detail. Everything stays the same.
If you only want to add these 8 bits when useful, you should not do it when the signal is loud, but when it is quiet.
Adding detail to the sound of an airplane flying by is wasted, but for the sound of someone whispering it makes sense.
Having said that, what you are looking for is a representation of the signal as a floating-point number. You take 16 bits for the signal and 3 bits to describe how far the 16 bits are shifted to the right (0-7 bit shifts possible).
With that you can encode your 24 bits into 19 bits.
Or you use 20 bits for the signal and 4 bits for the number of right shifts, if you want to stay with 24 bits in total.
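
That mantissa-plus-shift idea is essentially block floating point, and a toy version is easy to write down (a sketch only, not any standard format; note that covering the whole unsigned 24-bit range with a 16-bit mantissa actually needs shifts of 0 to 8, i.e. a 4-bit shift field):
------
def encode(code_24):
    """Pack a non-negative 24-bit value as a 16-bit mantissa plus a shift count."""
    shift = 0
    while code_24 >= (1 << 16):
        code_24 >>= 1                 # drop one low bit per shift step (lossy)
        shift += 1
    return code_24, shift

def decode(mantissa, shift):
    return mantissa << shift

m, s = encode(12_345_678)
print(m, s, decode(m, s))             # loud samples keep 16 significant bits;
                                      # quiet samples below 65,536 are kept exactly
------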

@SwinkelsPL:  96 dB of dynamic range is the level of quantization noise for 16-bit resolution. The sound above this level is already of perfect quality, so adding more bits will not improve it, but will only lower the noise floor, which we probably won’t notice during normal listening anyway.

@ClaytonMacleod replies to @SwinkelsPL: He doesn’t understand this idea that above the noise floor things are reproduced perfectly. He continually talks about digital audio having a resolution, which it does not. The signal is either in the range to be perfectly reproduced by the given bit depth and sample rate or it isn’t. He’s got more than one misunderstanding of how digital sampling works. If you’ve seen Monty’s “Digital Show & Tell” video you know how simple Monty makes it to understand by illustrating by example how things actually work. But this guy either hasn’t watched it or hasn’t grasped it for whatever reason. He misunderstands the concepts involved, but is thoroughly convinced that his understanding of the concepts is how things actually work. He positively refuses to entertain the thought that his understanding might be incorrect. And he has made more than one video illustrating his misunderstanding. I wish I could sit down with the guy and watch Monty’s video with him and have him tell me what he thinks differently about so we could go over it. But he’s convinced he is right and will not consider that he may be wrong. Frustrating.

@ClaytonMacleod replies to @SwinkelsPL: @nicksterj Yes! I think a lot of people have a misunderstanding of just what quantization noise/error is, exactly. And when you can see it in that demonstration laid out that apparently, hopefully it finally clicks. Digital sampling reproduces your signal “perfectly” except for that quantization noise, but the level that this noise presents itself is so low as to be meaningless in any formats we get our music in. I’d wager most people would have trouble hearing it if we were using 8-bit samples, as the dithering does a decent job of obscuring it. Problems in digital sampling aren’t what most people seem to think it is. They don’t seem to grasp that your output signal is virtually identical to your input signal for precisely the reason Monty says, it is in fact the only possible output. A perfect copy, except for the noise. The intended signal is reproduced perfectly, above that noise. Not good, not great, but perfectly. That noise is the only thing that differs. At least, in so far as the digital sampling reproduction is concerned. Other parts of the analog circuitry can of course play a role in unwanted alterations to the signal, but these are typically minor enough as to be ignored, one would hope. There is no “resolution” in the intended signal reproduction. Higher sampling rates or bit depths don’t give you a smoother result that’s closer to the original, which is how most people seem to think it works. Sampling rate simply moves the frequency ceiling. Bit depth simply moves the noise floor. And you visually see this happening in Monty’s demonstration in the oscilloscope output.

@TWEAKER01 replies to @SwinkelsPL: Lower noise floor, but also lower level of errors (truncation distortion, which compounds with almost every dsp stage)

@ClaytonMacleod replies to @SwinkelsPL: @@TWEAKER01 You’re saying the same thing, but for some reason think you’re saying something different.

@jxtq27 replies to @SwinkelsPL: @@ClaytonMacleod This is a common misconception. The signal is not reproduced perfectly when quantized and sampled. Take the example of a minimum-value sine wave, just wiggling the very last bit. Furthermore, let's say that it's at a relatively low frequency. The bit pattern we record for a sine wave under these conditions is identical to the bit pattern we would record for a square wave, but we know that a square wave is full of harmonics. Since the original tone was fairly low, several of those harmonics are going to survive the reconstruction filter. Yes, they'll be below -96dB, but if the tone we record is at -96dB and then we turn the gain up enough to hear our test tone, what we hear is going to be markedly different than the sine wave we recorded in the first place. Now multiply the input waveform by 1.99. The result doesn't quite make it into the second bit, and we record exactly the same bit pattern as before. Now the amplitude is wrong. By almost 6dB. I wouldn't call either of those results "perfect".
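
The bit-pattern part of that example is simple to reproduce (truncating quantizer, no dither, which is the crux of it); a sketch assuming NumPy:
------
import numpy as np

fs = 48000
t = np.arange(fs) / fs
step = 2.0 / (2 ** 16)                           # one 16-bit step on a +/-1 scale
x = 0.9 * step * np.sin(2 * np.pi * 200 * t)     # a sine wiggling only the last bit

codes = np.floor(x / step).astype(int)           # truncating quantizer, no dither
print(np.unique(codes))                          # [-1  0]: the same codes a square
                                                 # wave would produce

# The recorded pattern therefore carries the odd harmonics of a 200 Hz square wave
# (600 Hz, 1 kHz, ...) that the original sine never had; TPDF dither before
# quantization trades that distortion for a steady, signal-independent noise floor.
------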

@MowestChameleon:  Nah....I don't get this technospeak.....for me 24/96 has always added a subtle clarity I hear.......so you are saying I am imagining it?.....well whilst 16/44 sounds totally acceptable I have to be using Mastering headphones to hear the extra 24 bit detail.......I will keep my archived 60s dubs at both 24/96 or 16/48.......and will master down to 16/44 when I want to burn to CD.....for me more is MORE

@BrianHall-Oklahoma replies to @MowestChameleon: You know there is a different number and you think the different number means it is better. Your brain tells you it sounds better because you expect it to sound better. Placebo effect.

@powernattoh replies to @MowestChameleon: @@BrianHall-Oklahoma I'm with your argument, but the conclusion is perhaps less placebo and more confirmation bias :)

@MowestChameleon replies to @MowestChameleon: I was doing a tape research project of Masters from the 60s that were transferred and I didn't check the sampling rate .....but I knew that some tracks sounded exceptional....then I checked further and found some to be 24/96 and the clearest and best at 24/192.....so I know it wasn't me expecting to hear better clarity.....and i know the engineer from Mirasound was an exceptional guy. So maybe I am defying human science or I HEARD something more than the 16/44. 'I'm A Believer' haha

@frogandspanner:  When doing computer arithmetic with integers (such as a digital representation of audio amplitude) we lose resolution at each stage, especially with multiplication and its inverse. We cannot hear the extra bits, but we can subsequently process them without loss of audible resolution (<16 bits). (Floating-point arithmetic is slightly different, but still suffers from loss of resolution at each calculation step.)

An alternative explanation: rich people are thick; they like to show off; more bits is showing off; we can make more money from them by selling more bits to them; and become rich; and thick.

[My PhD involved calculating energy of molecular interactions, and all I had was a 16-bit PDP11/45. I developed what I called floating fixed point arithmetic to minimise the loss of resolution at each arithmetic step. As Thatcher screwed up university careers during my postdoc, I went into banking, and had to use a PDP11/70 to transfer clearing data to our Dublin office. Bit-limited arithmetic was not acceptable, as occasional lost pennies would have resulted in Post Office Horizon type problems, so I had to develop IBM-like arithmetic to prevent resolution reduction.]

@pantegministries:  What about 32 bits?

@AudioMasterclass replies to @pantegministries: 32-bit floating point doesn't add any more detail, just more potential dynamic range. Too dangerous in my opinion to be let out of the studio.

@DPSingh-px4xu:  I'd like to know how many bits this man is using to attract me to the rich soundstage of his voice at the precise frequencies that are so appealing....otherwise I have no idea what he's talking about

@Synthematix:  Screw 24bit, we now have 64bit audio, yup even more pointless.

Maybe in a million years, when our ears have evolved to hear ultra-subsonics and ultrasonics, 24-bit 96 kHz may be useful, that is, of course, if there are any musical instruments that can play such tones. To my understanding the lowest musical note of any instrument is that of a pipe organ at around 18 Hz, and the highest woodwind/brass notes are around 6.5 kHz; anything higher is created by electronic instruments.

Modern music doesnt have any dynamic range anyway making all this even more pointless.

Most people over the age of 30 have some form of tinnitus, this cancels out any extra headroom. But apparently "audiophiles" are immune to tinnitus and the aging process of the human body haha

In fact its the unnatural unwanted high frequencies that can cause tinnitus in the first place

@AudioMasterclass replies to @Synthematix: Well if male audiophiles are more attractive to women than ordinary listeners, perhaps we will evolve. Same the other way round just slower.

@gurratell7326:  You talk about "the bottom eight bits" not being used. That's not exactly how it works: all bits are always used, it's just that a 24-bit signal has smaller steps than a 16-bit signal, so when the signal is quantized there is less room for error, i.e. better sound. In theory, that is, both because the quantization error is so low that it's almost impossible to hear, and also because any audio engineer with a bit of common sense will ALWAYS dither, which turns that quantization error from correlated noise into just pure noise.
So what those extra eight bits give is headroom for DSP - DSP in either a DAW when working with music, or for room/speaker correction and/or subjective taste fiddling. This is the reason why a 24-bit DSP and DAC is useful: to have headroom. For consumption a 16-bit file or stream does EVERYTHING we need, since proper noise-shaped dither can give us 120 dB+ of SNR, which is enough to play music in a normal room at 145 dB without being able to hear the noise floor. Well, both because the noise is still below the background noise in your room, but also because you are now deaf.
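
The dither point is worth seeing in numbers: adding ±1 LSB triangular-PDF dither before the rounding replaces signal-correlated distortion with a constant, signal-independent noise floor only a few dB higher in level. A rough sketch (NumPy assumed; not a mastering-grade dither implementation):
------
import numpy as np

rng = np.random.default_rng(0)
fs = 48000
t = np.arange(fs) / fs
x = 10 ** (-60 / 20) * np.sin(2 * np.pi * 1000 * t)     # quiet 1 kHz tone
step = 2.0 / (2 ** 16)                                   # one 16-bit step

def q16(signal):
    return np.round(signal / step) * step

tpdf = (rng.uniform(-0.5, 0.5, fs) + rng.uniform(-0.5, 0.5, fs)) * step  # +/-1 LSB
for name, out in (("no dither", q16(x)), ("TPDF dither", q16(x + tpdf))):
    err = out - x
    print(name, round(20 * np.log10(np.sqrt(np.mean(err ** 2))), 1), "dB RMS error")

# The dithered error is a few dB higher in level but is plain, signal-independent
# noise; the undithered error is correlated with the tone (harmonic distortion).
------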

@thepuma2012:  I thought that what you propose is what was already being done. My mistake...?

@andrewmeates7633:  Very good, you hum it son and I'll play it. I've tried 24 bit at various rates and dsd. I am happy with Nyquist et al. 16/44 always sounds good enough to my nearly ancient pinnas after comparing the same pieces of music. Leave the lilly alone I say. But I enjoy your content. What about your system, it would be interesting to take a peek 😮

@Anybloke:  Didn't Neil Young's ludicrous Pono player operate at 24 bit ? The albums cost a fortune and it was an unmitigated disaster.

@arvidstorli2501:  Interesting and pleasant as usual :D There is one thing I wonder about. What is the source of the streaming companies' hi-res files? I highly doubt if the record companies make anything specifically just for streaming services. Something for another video maybe :D Best regards - curious norwegian

@AudioMasterclass replies to @arvidstorli2501: It depends how you define hi-res. One definition is 24-bit / 96 kHz, which is necessary to use the official logo. Music is commonly mixed and mastered to 44.1 or 48 kHz, 24-bit, not so much to 96 kHz. It's easy enough to convert but I wouldn't call it genuine 96. If you (heaven forbid) listen to my music on some outlets it will be CD-standard, on others 24-bit / 48 kHz. I doubt if anyone will notice the difference.

@arvidstorli2501 replies to @arvidstorli2501: My vintage ears will not. Even if I play it thru some excellent vintage British speakers. That I'm sure of :D

@patrikpopelar2056:  Yup, gaping hole in your reasoning. Those 'extra' 8 bits are used between every level of 16-bit resolution. Level 256 of 24-bit is level 1 of 16-bit, level 512 is level 2, 768 is 3, etc. Thus between every 16-bit level there are in fact 256 sub-levels in 24-bit resolution. Those extremely low noise floors achieved with 24-bit are only achieved when the 16-bit value is at 0. Otherwise it just improves resolution...

@stunksinatl replies to @patrikpopelar2056: Um, no

@patrikpopelar2056 replies to @patrikpopelar2056: Right, the last line should have read "extremely low signal levels" as noise is always present at all signal levels (no digital noise without signal).

@stephanherschel5785:  Interesting that it needs this reasoning. I thought it was always about - thinking of image processing - more "greys": with 8 bits we've got 256 brightness levels, 16-bit makes 65,536 and 24-bit gives us 16,777,216 levels, which of course results in better image quality (not going into how many grey levels one could possibly distinguish). So I would have thought that in audio, when encoding 24-bit, all 16,777,216 levels are used - thus finer resolution. It sounds absurd to me to encode 65,536 levels with 24 bits. Why would you?

@thepuma2012 replies to @stephanherschel5785: That's what he is proposing: using those levels within the same dynamic range, instead of adding levels below the quietest levels, which you can't hear - which, according to him, is the situation now.

@stephanherschel5785 replies to @stephanherschel5785: @@thepuma2012 I understand. Maybe I should have said "interesting, that it needs this reasoning" - because I would have thought that that's the way how it is done ...

@markbrookes5953:  My pet bats, Wonfor and Dufer, can hear the difference... I've asked them. 🐭

@bobbradley3866:  Yes, very funny. Maybe you could use those bits as an exponent and call it 32-bit float.

@cueboyd8666:  I mostly capture at 32bit float 44.1khz when producing music, then down convert to 16bit. Largely for compatibility for other systems and easy audio transportation.

@maidsandmuses:  No, that wouldn't work, just scratching my head as to how to explain that in layman's terms.
What you are suggesting effectively is encoding the 96 dB in 24 bits rather than 16 bits. That implies making the binary steps smaller (i.e. not log base 2, but rather log base 1.5, or whatever it needs to be), which is not possible in a binary system where each digit (bit) can only take two discrete 0/1 values (at least not without significant multiplicity of value encodings). But, for argument's sake, let's assume you could indeed encode 96 dB with full use of 24 bits: then you are right back to square one, as having smaller binary steps means they fall below the -96 dB audible level...
Bottom line is our hearing can't do better than 96dB, encoding in 24 or however many bits you want isn't going to change that.

Better use of those 24 bits would be to use 8 of them to encode the compressed version of the song so favoured by the record labels, and use the other 16 to encode the full dynamic version prior to compression. Then you could choose on the player whether you wanted to hear the 8 bit compressed version in noisy environments, or the 16 bit full dynamic version in a good listening environment.

@AudioMasterclass replies to @maidsandmuses: Thank you for that. Your second paragraph is inspiring me towards another video.

@fabiosantesarti4081 replies to @maidsandmuses: That sounds very interesting. I wonder if it's possible to implement.

@maidsandmuses replies to @maidsandmuses: @@fabiosantesarti4081 It would certainly be possible to implement, but a new decoder/filter would need to be developed for it.
The question is whether 8 bits is enough for even the most compressed material; I might have been a bit optimistic with that idea, although there is some really poor material out there...

@hansfijlstra5932 replies to @maidsandmuses: Excellent idea, using the 'lower' 8 bits for other purposes. But instead of storing the compressed version, maybe it could store the level of compression, so (with a suitable player) one could adjust the level of compression from zero to max. But note, I am not an expert…

@CORVUSMAXYMUS:  16 bits is enough for everyone

@CORVUSMAXYMUS:  24 bits is for the studio

@DeMorcan:  My wife can change the stream from 16-bit to 24-bit. I can hear the difference. I do not know why. Some things sound better to me at a 16-bit original source, but I think that is due to mastering. Of course then there is OS and NOS, which I am not sure is just changing the bit depth, and I usually prefer NOS. Blues and jazz I really listen to in 24-bit. Orchestras and classical music are where I notice the most difference with 24-bit. Also with some pipe organs the 24-bit captures more of the cathedral echoes and feeling. As soon as a pipe organ album starts I can tell if it is 16-bit or 24-bit. It also depends on how it was miked more than the bit depth. The bit depth is that last bit of fine tuning sometimes. Also speakers and amps can hide the bit depth. This is all experience. With most music I do not hear a difference, due to the recording and mixing.

@omenoid replies to @DeMorcan: The only difference between 16 and 24 bits is noise floor. Period.

@gurratell7326 replies to @DeMorcan: What you hear is probably some error in your chain that can't handle either 16-bit or 24-bit properly. Or just placebo.

@DeMorcan replies to @DeMorcan: @@gurratell7326 ifi Neo Stream
Weiss 204
McIntosh C49 and 7200
Focal Sporas
REL T/9xs
I think a midfi system like this should be able to handle any stream? Since this is my end game system, which one needs improving? This seems to be a very revealing system to me, and I can hear differences I could not hear with my previous systems.

@DeMorcan replies to @DeMorcan: @@omenoid This might be as the differences I hear are in imaging and decay in live recordings, which can both be affected by noise.

@gurratell7326 replies to @DeMorcan: @@DeMorcan Seeing that 16-bit gives 96dB of SNR, or even 120dB+ with noise-shaped dither, you'd have to play at INSANE levels to hear the noise in a 16-bit file above the noise floor in your room. So yeah, if you hear any difference it's not because of the bit depth; it's either because something in the chain is doing something wrong, because of placebo, or, as you said yourself, because of different masters (the 16-bit file could have the better master though).
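For reference, the textbook approximation behind those figures, assuming dithered linear PCM and a full-scale sine (SNR ≈ 6.02·N + 1.76 dB):

```python
# Theoretical SNR of dithered linear PCM for a full-scale sine wave.
def pcm_snr_db(bits):
    return 6.02 * bits + 1.76

print(pcm_snr_db(16))  # ~98 dB, usually rounded down and quoted as "96 dB"
print(pcm_snr_db(24))  # ~146 dB

# Noise-shaped dither can push the *perceived* 16-bit noise floor lower still
# in the most sensitive part of the hearing range, which is where figures
# like 120 dB come from.
```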

@montynorth3009:  Regarding the other digital consideration of sampling rate, could a difference be heard between 16/44.1 and either 16/96 or 16/192?

@Wizabeard replies to @montynorth3009: Subtle if any. The type of difference where you'd have to focus hard to notice, which I'd argue takes away more enjoyment of what you're listening to than the "audible difference" itself.

@omenoid replies to @montynorth3009: No.

@BrianHall-Oklahoma replies to @montynorth3009: The number on the right (44.1, 96, 192) is thousands of samples per second, and it needs to be at least twice the highest frequency you want to capture so as not to miss anything. It has nothing to do with "better resolution" of the frequencies we can hear. 44.1 kHz is all that is needed to capture slightly more than even the best human ears can detect. 96,000 samples per second would capture frequencies up to 48 kHz, which is more than double the highest frequency we can detect. There is no value to us in reproducing frequencies we can't even get close to hearing unless you were born with bat DNA. "HiRes" audio is just another snake oil scam.
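The arithmetic behind that, in sketch form: the sampling theorem says a rate of fs samples per second can capture audio frequencies up to fs/2.

```python
# Highest capturable frequency (the Nyquist limit) for common sample rates.
for fs_khz in (44.1, 48, 96, 192):
    print(f"{fs_khz} kHz sampling -> up to {fs_khz / 2} kHz of audio bandwidth")

# 44.1 kHz already covers the ~20 kHz upper limit of human hearing,
# with a little room left over for the reconstruction filter.
```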

@montynorth3009 replies to @montynorth3009: My first wife was a fruit bat!😊😊😊 @@BrianHall-Oklahoma

@fernandofonseca3354:  Turning solutions into problems... 🙄

@earthoid:  Good luck trying to explain to the masses how taking away 8 bits can ultimately make their music sound better. Interesting idea though!

@earthoid replies to @earthoid: On second thought, the MQA inventors manipulated and reduced bit count using some sort of magic that we weren't allowed to understand, and they were fairly successful.

@imqqmi:  The only benefit I could think of is for that music we all 'love', modern pop music where everything is limited, gated and compressed to death. To prevent the obvious quantization issues you can record in 24 bits, giving the artists even more room for limiting, gating and compression.

As digital audio works now, for every extra bit the smallest voltage step is divided by 2. It's theoretically possible to decrease that factor to, say, 1.5 so that the step ratio is smaller, but then you'd need to scale the analogue signal to a lower dynamic range, introducing more noise.
Another consequence of remapping 24 bits to the audible range is that you'll hear much, much more noise: not only digital or analogue noise but also the smacking of lips, the nails of pianists on the piano keys, the valves of hobos, clarinets etc, the sighing and breathing of the audience, the farts and the brilliantly covered-up cough, the heartbeats, the air conditioning of the auditorium, the Underground 100 km away, the meteorites hitting the Earth's atmosphere...
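To put rough numbers on that "divide by 1.5 instead of 2" idea (a sketch of the arithmetic only, not an endorsement of such a scheme): the span from full scale down to the smallest step would be N × 20·log10(r) dB for a per-bit ratio r.

```python
# Span from full scale down to the smallest representable step, for a given
# per-bit step ratio. Ratio 2 is ordinary linear PCM.
import math

def span_db(bits, ratio_per_bit):
    return bits * 20 * math.log10(ratio_per_bit)

print(span_db(16, 2.0))   # ~96 dB  - ordinary 16-bit PCM
print(span_db(24, 2.0))   # ~144 dB - ordinary 24-bit PCM
print(span_db(24, 1.5))   # ~84 dB  - 24 "bits" at ratio 1.5 cover *less*,
# which is why the comment above notes you'd have to squeeze the analogue
# signal into a narrower range, raising the relative noise.
```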

@GCKelloch replies to @imqqmi: Interesting stuff, but I assure you hobo valves are virtually silent. 😁

@imqqmi replies to @imqqmi: @@GCKelloch Not with 24-bit recording; with an equally resolving microphone it gives you superhuman hearing ;) Even the soft taps of your fingers will make a sound. If you place your index finger and thumb very close to your ear and tap as softly as you can, you can still hear it, unless you have hearing loss of course. A good mike at close range can pick up those kinds of sounds.

@EricIolo:  How old are you?

@AudioMasterclass replies to @EricIolo: Very old.

@montynorth3009 replies to @EricIolo: He should have used the Just for Men.

@AudioMasterclass replies to @EricIolo: @@montynorth3009 Gratuitous affiliate link for comment readers https://amzn.to/475OgS1

@biketech60:  Maybe it would be better if Sony relinquished control of DSD and allowed us all to hear music recorded and played back with a sample rate of 2.8 megahertz? Paul at PS Audio says it sounds better than any professional tape deck technology.

@andrewbrazier9664 replies to @biketech60: I wouldn't doubt his experience. 👍

@Weissman111:  Can't say I've noticed a difference - all my studio recordings are done at 24-bit (mainly so I can drop the overall volume without sacrificing any detail) but once it's converted to 16-bit, even through studio monitors it's nigh on impossible to tell the two apart.
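For anyone curious what that conversion involves, here is a minimal sketch, assuming floating-point samples in the range -1 to 1 and plain TPDF dither (real mastering tools usually add noise shaping as well):

```python
import numpy as np

def to_16_bit(samples: np.ndarray) -> np.ndarray:
    """Quantise float samples in [-1.0, 1.0] to 16-bit integers with TPDF dither."""
    # TPDF dither: sum of two independent uniform noises, +/-1 LSB peak.
    lsb = 1.0 / 32768.0
    dither = (np.random.uniform(-0.5, 0.5, samples.shape) +
              np.random.uniform(-0.5, 0.5, samples.shape)) * lsb
    quantised = np.round((samples + dither) * 32767.0)
    return np.clip(quantised, -32768, 32767).astype(np.int16)

# Example: a -60 dBFS sine still decodes cleanly, with only a raised noise floor.
t = np.arange(44100) / 44100.0
sine = 0.001 * np.sin(2 * np.pi * 1000 * t)
print(to_16_bit(sine)[:10])
```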

@ChrisTaylor-dz6nk:  🎉😂16.44.done

@TheHalfEmptyGlass:  Pah. 24-bit audio? 32-bit floating point or nothing. If you can't record the sound of the world exploding or an atom vibrating, then what sort of audiophile are you?
(I say "or nothing" - there's always 64-bit floating point or higher.)

@TheRealCykOne:  Hello, the higher bit depth makes sense when capturing audio with a higher sample rate like 88.2 or 96 kHz; the 24 bits give you more "space" to store the extra information even when the signal is dithered down to 44.1 kHz in the end. Imo bit depth and sample rate should be viewed as interdependent.

@andymouse:  Philips 3000 nose hair trimmer, a great choice.....cheers.

@AudioMasterclass replies to @andymouse: https://amzn.to/40BdLJo (affiliate link for comment readers)


Thursday February 8, 2024


David Mellor

David Mellor is CEO and Course Director of Audio Masterclass. David has designed courses in audio education and training since 1986 and is the publisher and principal writer of Adventures In Audio.

Audiophiles - You're wasting your money!

Watch on YouTube...

If you can't hear this then you're not an audiophile

Watch on YouTube...

CD vs. 24-bit streaming - Sound of the past vs. sound of the future

Watch on YouTube...

The Vinyl Revival - So wrong on so many levels

Watch on YouTube...

More from Adventures In Audio...

Get VU meters in your system and in your life [Fosi Audio LC30]

Is this the world's most diabolically expensive DAC? [iFi Diablo 2]

A tiny amplifier with a weird switch in a strange place

Will this DAC/headphone-amp dongle work with *your* phone? [Fosi Audio DS2]

When is a tube power amp not a tube power amp? - Aiyima T9 review

I test the Verum 1 Planar Magnetic headphones for listening and production

Your power amp is average - Here's why

Adding tube warmth with the Freqtube FT-1 - Audio demonstration

Adding tubes to a synth track with Freqport Freqtube

The tiny amp that does (nearly) everything

Can I unmix this track?

Why you need a mono amp in your system - Fosi Audio ZA3 review

Can you get great earbud bass with Soundpeats AIR4 Pro?

24 bits or 96 kHz? Which makes most difference?

16-bit vs. 24-bit - Less noise or more detail?

Are these earphones REALLY lossless? Questyle NHB12

Could this be your first oscilloscope? FNIRSI DSO-TC3

OneOdio Monitor 60 Hi-Res wired headphones full review

Watch me rebuild my studio with the FlexiSpot E7 Pro standing desk

Can a tiny box do all this? Testing the Fosi Audio SK01 headphone amp, preamp, EQ

Hi-Fi comfort OVER your ears? TRUEFREE O1 detailed review

Get the tube sound in your system with the Fosi Audio P3

Any studio you like, any listening room you like - For producers and audiophiles

Hidden Hi-Fi - The equipment you never knew you *didn't* need - Fosi Audio N3