• antimidas@sopuli.xyz · 13 days ago

And this is because audiophiles don’t understand why the audio master is 96 kHz, or more often 192 kHz. You can actually easily hear the difference between 48, 96 and 192 kHz signals, but not in the way people usually think, and not after the audio has been recorded – the main difference is latency while recording and editing. Digital signal processing works in terms of samples, and a certain number of them has to be buffered before you can transform the signal between the time and frequency domains. The higher the sample rate, the less time a fixed-size buffer spans, and if there’s one thing humans are good at hearing (relatively speaking), it’s latency.

Digital instruments start being usable at 96 kHz, as the latency with 256 samples buffered gets short enough that there’s no distracting delay from key press to sound. 192 kHz gives you more headroom to add effects and such, making the pipeline longer. A higher sample rate also makes frequency manipulation, like pitching a recording down, simpler, as there’s more data to work with.
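To put numbers on that, here’s a minimal sketch of the per-buffer latency at each sample rate, assuming the 256-sample buffer mentioned above (real pipelines add further buffers on top of this):

```python
# Per-buffer latency = buffer size in samples / sample rate.
# 256 samples is an assumption taken from the comment above.
BUFFER_SAMPLES = 256

for rate_khz in (48, 96, 192):
    latency_ms = BUFFER_SAMPLES / (rate_khz * 1000) * 1000
    print(f"{rate_khz} kHz: {latency_ms:.2f} ms per buffer")
```

At 48 kHz that one buffer alone is over 5 ms; doubling the sample rate halves the time the same 256 samples span, which is where the playability difference comes from.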

But after the editing is done, there’s absolutely no reason not to downsample the published recording to 48 or 44.1 kHz. Human ears can’t hear the difference, and whatever equipment you’re using probably won’t reproduce anything above ~25 kHz anyway, as e.g. the speaker coils aren’t designed to pass higher-frequency signals. It’s not like visual information, where equipment still can’t match the dynamic range of the eye, and we’re only just reaching pixel densities where we can no longer see the difference between DPIs.
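The sampling-theorem arithmetic behind that claim is simple: a signal sampled at rate f can represent frequencies up to f/2 (the Nyquist frequency), and human hearing tops out around 20 kHz. A quick sketch:

```python
# Nyquist frequency = sample rate / 2, the highest frequency
# a digital signal at that rate can represent.
for rate_hz in (44_100, 48_000, 96_000, 192_000):
    print(f"{rate_hz / 1000:g} kHz -> Nyquist {rate_hz / 2 / 1000:g} kHz")
```

So even 44.1 kHz already covers the full audible range with margin to spare; the extra bandwidth of 96 or 192 kHz only represents frequencies no listener (and likely no speaker) can use.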