You don't need massive dynamic range to perfectly record a Jew's harp from the end of the street and a maxed-out Plexi from 1 cm away with the same mic at the same input level.
If you did, and were able to record it perfectly, the equipment playing it back couldn't reproduce that dynamic range. And even if it could, it would sound rubbish, swinging from inaudible to painfully loud.
This "perfect" recording would need to be seriously compressed in the mix to be listenable. So you'd be better off using different capture methods for each source - which means you don't need that massive a dynamic range after all.
"With use of shaped dither, which moves quantization noise energy into frequencies where it's harder to hear, the effective dynamic range of 16 bit audio reaches 120dB in practice
... 120dB is greater than the difference between a mosquito somewhere in the same room and a jackhammer a foot away."
Which should illustrate perfectly why anyone who thinks they can hear the difference between 16-bit and higher bit depths is nuts.
The only point of recording at higher bit depths is to allow more headroom and avoid clipping distortion, which *is* audible - and even then 24-bit should be plenty.
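As a back-of-envelope check (not from the post above), the theoretical dynamic range figures being thrown around follow from the standard quantization-noise formula for an ideal converter, roughly 6.02 dB per bit plus 1.76 dB:

```python
def dynamic_range_db(bits: int) -> float:
    """Theoretical SNR of an ideal quantizer: ~6.02 dB per bit + 1.76 dB."""
    return 6.02 * bits + 1.76

for bits in (16, 24):
    print(f"{bits}-bit: {dynamic_range_db(bits):.1f} dB")
# 16-bit lands near the ~96-98 dB quoted for CD; 24-bit is ~146 dB,
# far beyond what any playback chain (or pair of ears) can use.
```

This is the undithered figure; shaped dither, as the quote notes, pushes the *effective* range of 16-bit higher still by moving the noise where it's hard to hear.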
"Take these three items, some WD-40, a vise grip, and a roll of duct tape. Any man worth his salt can fix almost any problem with this stuff alone." - Walt Kowalski
"Only two things are infinite - the universe, and human stupidity. And I'm not sure about the universe." - Albert Einstein
When recording, I use 24-bit just to give me greater dynamic range on the sound I'm recording (as others have said), but I use a 44.1 kHz sample rate, so there's no sample-rate conversion to be done when I send my mates a WAV file or cut a CD.
Classical recordings can be a slightly different case, but a different set of problems develops.
For example, a common complaint about the London Philharmonic Chorus and Orchestra / Vladimir Jurowski recording of Holst's 'The Planets' is that the dynamic range is excessive, even on CD (which has a theoretical dynamic range of 96 dB undithered and about 120 dB with noise shaping).
That recording has such a huge dynamic range (for a CD) that you find yourself having to choose between barely hearing the very quiet moments and having your head ripped off when the loud bits kick in.
A 24 bit recording would be, in theory, even worse.
I get a lot of tracks recorded by other people to mix.
My most common complaint is that they are recording too hot.
Quite often I get clipped drums, vocals, or guitars that I then have to spend hours fixing in iZotope RX.
I prefer recordings to average around -18 dBFS with peaks no louder than -6 dBFS.
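Those targets are easy to check programmatically. As a rough sketch (the function names are mine, not from the post), peak level is the loudest single sample relative to full scale, and the "average" is usually taken as RMS, both expressed in dBFS for samples normalized to ±1.0:

```python
import math

def peak_dbfs(samples):
    """Peak level relative to digital full scale (samples in ±1.0)."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

def rms_dbfs(samples):
    """Average (RMS) level in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

# A full-scale sine peaks at 0 dBFS and averages about -3 dBFS RMS -
# far hotter than the -6 dBFS peak / -18 dBFS average target above.
sine = [math.sin(2 * math.pi * 440 * n / 48000) for n in range(48000)]
print(f"peak: {peak_dbfs(sine):.2f} dBFS, RMS: {rms_dbfs(sine):.2f} dBFS")
```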
Studio: https://www.voltperoctave.com
Music: https://www.euclideancircuits.com
Me: https://www.jamesrichmond.com
The reality, though, is that if you're doing label work you simply cannot deliver 16-bit masters.
Something else that hasn't been mentioned (I don't think) is that not all audio interfaces are equal.
You'll find prosumer audio interfaces that capture at 24-bit/192 kHz yet sound obviously worse than 20-year-old Lavry/Prism converters that are only capable of 16-bit/48 kHz.
That said, today's entry-level interfaces sound much better than the entry-level interfaces of 20 years ago.
Some people might dislike discovering they can't hear certain things, so those people shouldn't do it, but for me it allowed me to stop worrying about minute details and focus on the important things.
The same applies to the sample-rate question - if you really want to know whether you can hear a difference, it has to be a blind ABX test; it's just not possible to judge reliably when you know what you're listening to.
There's a common story I've heard many big-time engineers confess, and have experienced myself: they sit tweaking a compressor or something for ages until it sounds perfect, only to find it's been in bypass the whole time.
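For what it's worth (this is an illustration of mine, not from the post), an ABX result is usually judged with a simple one-sided binomial test: how likely is it to score that well or better by pure guessing?

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided p-value: probability of getting >= `correct` right
    in `trials` ABX rounds by coin-flip guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 of 16 correct is a common informal threshold (p < 0.05):
print(f"12/16 correct: p = {abx_p_value(12, 16):.4f}")
# 10/16, by contrast, is entirely consistent with guessing.
print(f"10/16 correct: p = {abx_p_value(10, 16):.4f}")
```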
Bandcamp
Spotify, Apple et al
There can be some advantage to FX applied in the DAW at higher rates. So adding reverbs and other FX at 24/96 or 24/192 (or higher) can help, then bring it down to 24/48 when creating the final mixdown.
I agree it doesn't need to be high numbers, but I do think a lot of hardware produces better results at 48 kHz than 44.1 kHz - and that's not because of the extra samples.
High-def doesn't solve the ridiculous levels of compression used.
There are plenty of 44.1 and 48 kHz recordings out there that sound incredible, which proves the problem isn't inherent to those sample rates.
One thing to do: if you capture and edit at 96 kHz, don't make the final music available at 44.1 kHz - make it 48 kHz. And if you plan on releasing a CD, don't do all the work at 96 or 192 kHz; do it at 88.2 or 176.4 kHz.
Why? Well, it's very easy to accurately convert 176.4 or 88.2 kHz to 44.1 kHz with very little error. But converting from 192 or 96 kHz to 44.1 kHz requires the software to interpolate the data (i.e. make a best guess at where the wave would have been at each point).
So if you want to do a CD and HD release, use 88.2 or 176.4 kHz. If you're doing video work, use 48, 96, or 192 kHz.
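The integer-versus-fractional-ratio point can be made concrete. A rational sample-rate converter works with the reduced ratio between the two rates; a sketch (function name is mine) of what each conversion actually asks of the software:

```python
from fractions import Fraction

def resample_ratio(src_hz: int, dst_hz: int) -> Fraction:
    """Reduced output/input ratio a rational resampler must implement."""
    return Fraction(dst_hz, src_hz)

for src in (88200, 176400, 96000, 192000):
    print(f"{src:>6} Hz -> 44100 Hz: ratio {resample_ratio(src, 44100)}")
# 88.2k and 176.4k reduce to simple 1/2 and 1/4 decimation;
# 96k and 192k reduce to 147/320 and 147/640, which need a
# polyphase interpolation stage rather than plain sample-dropping.
```

Whether that extra interpolation is audible with modern converters is disputed later in the thread; the ratio arithmetic itself is not.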
So, in practice, it comes down to how well the converters were engineered. Personally, I have no problem with my signal being, say, 1 dB down and a few degrees out of phase by the highest frequency I could possibly hear. It pales in comparison to the things I'll do to the music by the time it's mixed down to stereo, and it pales in comparison to things like the quality of the clock and the op-amps on the way in and out...
There's also the issue of aliasing and extra intermodulation distortion if you record ultrasonic signals you don't actually need - record your project at 96k and you might capture ultrasonic content that eats headroom and interacts with signals in the audible range. At some point you downsample, and you hope that stuff is dealt with properly by the software.
These are all very minor points, and my feeling is that the downside of capturing unnecessary information in the first place pretty much balances out the minor improvement in the Nyquist-filter situation.
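The aliasing risk is simple to quantify. A minimal sketch (my own, under the textbook folding rule): any component above half the sample rate that survives into the stream folds back into the first Nyquist zone:

```python
def alias_frequency(f_hz: float, fs_hz: float) -> float:
    """Frequency a tone folds to after sampling at fs_hz (first Nyquist zone)."""
    f = f_hz % fs_hz
    return fs_hz - f if f > fs_hz / 2 else f

# A 30 kHz ultrasonic component slipping past the filters into a
# 44.1 kHz stream folds to 14.1 kHz - squarely in the audible band.
print(alias_frequency(30000, 44100))
# Content already below Nyquist passes through unchanged.
print(alias_frequency(14100, 44100))
```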
Re the sample-rate conversion: that seems intuitively right, but apparently it's not at all. Whether you're downsampling to half the rate or by some very complex ratio, the maths is sound. I've seen enough clever people say it that I'm inclined to believe them - including someone who designs A/D and D/A converters.
Things change, though, and all software is different, so mileage may vary. It seems a no-brainer to avoid a conversion stage that requires interpolation.
Like I say, on high-end audio gear in the past I have been able to tell which was interpolated and which wasn't. But my ears now, and software now, may have got to the stage where it's not really a big consideration.