
Are high sample rates making your music sound worse?

July 18th, 2012 by Ian Shepherd



Yes, you read that right. There's a real possibility that using high sample rates could actually be reducing the quality of your audio, not making it better.

How is this possible?

In a nutshell, many "affordable" soundcards have a non-linear response to high-frequency content.

This means that even though they're technically capable of recording at 96 kHz and above, the small benefits of the higher sample rate are completely outweighed by unwanted "intermodulation distortion" in the analogue stages.

Further down I link to some test files so you can test your own system for this problem, but let's just slow down for a minute and add a little background information.

What's the point of high sample rates anyway?

The sample rate determines how many samples per second a digital audio system uses to record the audio signal. The higher the sample rate, the higher the frequencies a system can record - in theory, up to half the sample rate, known as the Nyquist frequency. CDs, most mp3s and the AAC files sold by the iTunes store all use a sample rate of 44.1 kHz, which means they can reproduce frequencies up to roughly 20 kHz.

Testing shows that most adults can't hear much above 16 kHz, so on the face of it, this seems sensible enough. Some can, but not the majority, and examples of people who can hear above 20 kHz are few and far between. And to reproduce everything below 20 kHz accurately, a digital audio system has to remove everything above that limit before sampling - this is the job of the anti-aliasing filter. Without it, any ultrasonic content would "fold back" into the audible range as a spurious lower frequency.
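Here's a quick illustration of that folding effect - a minimal sketch in Python with NumPy (my example, not from the article), showing what happens if a 25 kHz tone reaches a 44.1 kHz system with no anti-aliasing filter:

```python
import numpy as np

fs = 44100                 # CD sample rate in Hz
t = np.arange(fs) / fs     # one second of sample times

# Sample a 25 kHz tone - above the 22.05 kHz Nyquist limit for 44.1 kHz
tone = np.sin(2 * np.pi * 25000 * t)

# Find the strongest frequency actually present in the sampled signal
spectrum = np.abs(np.fft.rfft(tone))
freqs = np.fft.rfftfreq(len(tone), 1 / fs)
print(f"Peak at {freqs[np.argmax(spectrum)]:.0f} Hz")  # ~19100 Hz, not 25000 Hz
```

The tone doesn't disappear - it reappears at 44.1 kHz minus 25 kHz, around 19.1 kHz, which is exactly why converters filter first and sample second.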

But a fair few musical instruments produce sound well above these frequencies - muted trumpet and percussion instruments like cymbals or chime bars are clear examples.

This leads to two potential objections to a 44.1 kHz sample rate - first, that in order to reproduce a sound accurately we should capture as much of it as possible, including frequencies we probably can't hear. There are various suggestions that we may be able to somehow perceive these sounds, even if we can't actually hear them. And second, that depending on the design, the anti-aliasing filter may have an effect at frequencies well below the 20 kHz cut-off point.

Whether these arguments stand up to scrutiny or not, the solution is obvious - record at higher sample rates. The filters can work higher up and be gentler, and all the high-frequency content can be recorded accurately. Simple, right?

Well, not entirely. In fact, these arguments don't really make sense - for an excellent and detailed discussion of why not, check out this article.

So why NOT use higher sample rates, then?

Back when CD was released, recording at 96 kHz or above simply wasn't viable at a reasonable price, especially not in consumer audio. Times have moved on though, and these days almost any off-the-peg digital audio chip is capable of at least 96 kHz processing, if not higher.

Now, 96 kHz recordings take up much more space than plain 44.1 kHz audio, but hard drive space is cheap, and getting cheaper all the time - so why not record at 96 kHz or higher, just in case either of those hotly debated arguments really does carry some weight?

The answer lies in the analogue circuitry of the equipment we use. Just because the digital hardware in an interface is capable of 96 kHz or higher processing doesn't mean the analogue stages will record or play the signal cleanly.

It's quite common for ultrasonic content to cause intermodulation distortion right down into the audible range. Or in simple English, the inaudible high-frequency content actually makes the audio you can hear sound worse.
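To see how inaudible content can create audible problems, here's a minimal sketch (Python with NumPy again - the squared term is just a hypothetical stand-in for a slightly non-linear analogue stage, not a model of any real interface):

```python
import numpy as np

fs = 96000
t = np.arange(fs) / fs  # one second at 96 kHz

# Two purely ultrasonic tones - completely inaudible on their own
x = 0.5 * np.sin(2 * np.pi * 30000 * t) + 0.5 * np.sin(2 * np.pi * 33000 * t)

# A crude stand-in for a non-linear analogue stage
y = x + 0.2 * x**2

# Look for the strongest component in the audible band (20 Hz - 20 kHz)
spectrum = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), 1 / fs)
band = (freqs > 20) & (freqs < 20000)
peak = freqs[band][np.argmax(spectrum[band])]
print(f"Strongest audible component: {peak:.0f} Hz")  # ~3000 Hz
```

The non-linearity produces a difference tone at 33 kHz minus 30 kHz = 3 kHz - right in a very sensitive part of our hearing, even though neither of the original tones is audible.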

You can read all the gory details in the same excellent article I linked to above, but here's the summary:

...it's not certain that intermodulation from ultrasonics will be audible on a given system. The added distortion could be insignificant or it could be noticeable. Either way, ultrasonic content is never a benefit, and on plenty of systems it will audibly hurt fidelity. On the systems it doesn't hurt, the cost and complexity of handling ultrasonics could have been saved, or spent on improved audible range performance instead.

Check your own system

If you want to test your own playback system, there are files included in the article - to download them, click here. The files contain various types of test material, but all of it is at frequencies that are completely inaudible to humans.

[Edit - Monty's original article and files seem to be offline at the moment, so I've added a temporary link to my own copy of the files until they come back]

So interpreting the results is simple:

Assuming your system is actually capable of full 96 kHz playback, the above files should be completely silent, with no audible noises, tones, whistles, clicks, or other sounds.

For what it's worth, the onboard sound of my Mac Pro fails all these tests - make sure you've set your system to 96 kHz output, if you want to try them yourself.
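If you can't get hold of the original files, you can generate a rough stand-in yourself - here's a minimal sketch using NumPy and SciPy that writes a 96 kHz WAV containing nothing but two ultrasonic tones (the filename and tone frequencies are my choices, loosely modelled on the idea behind Monty's test material):

```python
import numpy as np
from scipy.io import wavfile

fs = 96000                     # needs a genuine 96 kHz output path to be meaningful
t = np.arange(10 * fs) / fs    # ten seconds

# Nothing but ultrasonic content: two tones at 30 kHz and 33 kHz
x = 0.4 * np.sin(2 * np.pi * 30000 * t) + 0.4 * np.sin(2 * np.pi * 33000 * t)

# Short fades so the file can't click audibly at the start and end
fade = np.linspace(0.0, 1.0, fs // 10)
x[:len(fade)] *= fade
x[-len(fade):] *= fade[::-1]

wavfile.write("ultrasonic_test.wav", fs, (x * 32767).astype(np.int16))
```

Start playback at a low volume - on a clean system the file is silent, so any whistle, tone or noise you do hear is distortion being added somewhere in the chain.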

So what does this mean?

If you can hear audible sound when you play these test files, then you may be making your audio quality worse by choosing to use 96 kHz or higher sample rates!

If that's the case, then you face a difficult choice - do you spend time and money upgrading your equipment to handle these very high frequencies, even though they probably aren't audible? Or just optimise for 44.1 kHz, which is still the most common playback frequency?

Notice I said you may be making things worse - even if your system fails these tests, the music you record may not have ultrasonic content that causes audible problems.

Another test would be to apply a linear-phase high-pass filter to your music at 25 kHz (say) and listen to the result - again, you shouldn't be able to hear anything. If you can't, then the high sample rates probably aren't recording any information which will cause you a problem - but in that case, are you actually getting any benefit from them?
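If you'd like to try this, here's a minimal sketch using SciPy, assuming a 96 kHz WAV with a hypothetical filename - it keeps only the content above 25 kHz, so the output file is what you'd be listening for:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import firwin, lfilter

fs, x = wavfile.read("my_mix_96k.wav")  # hypothetical 96 kHz source file
x = x.astype(np.float64)

# A symmetric FIR filter is linear phase; an odd tap count allows a high-pass response
taps = firwin(1001, 25000, pass_zero=False, fs=fs)
ultrasonics = lfilter(taps, 1.0, x, axis=0)

# Played back at 96 kHz, this file should in theory be inaudible
wavfile.write("ultrasonics_only.wav", fs, ultrasonics.astype(np.int16))
```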

A controversial question

Finally, the fact that ultrasonic content can potentially cause intermodulation distortion, and so make things sound different when in theory they shouldn't, raises a tough question.

Are all the people who claim to be hearing improved quality at 96 kHz and above really hearing what they think they are? Or are they just hearing intermodulation distortion?

My experience

It's been over six years since I first tested myself with 96 kHz audio. I compared the SACD of Pink Floyd's "Dark Side Of The Moon" and a pure DSD live jazz recording - I down-sampled both to 48 kHz and 44.1 kHz, and blind-tested myself switching between the three versions.

In each case, I could reliably hear a difference between 44.1 kHz and 48 kHz, but not between 48 kHz and 96 kHz. And I was left with the distinct feeling that I could compensate for the difference I did hear with a very small EQ tweak on the 44.1 kHz version...

Now, that was just on those two recordings, at that time, using that system - maybe those recordings don't have anything in them that gets the "benefit" of 96 kHz playback.

Or maybe 48 kHz is actually good enough?
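If you want to run a similar blind test on your own material, here's a minimal downsampling sketch using SciPy (the filenames are hypothetical, and a dedicated resampler such as SoX may do a better job than this quick approach) - you can then compare the versions with an ABX tool:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import resample_poly

fs, x = wavfile.read("source_96k.wav")  # hypothetical 96 kHz source
x = x.astype(np.float64)

# 96000 -> 48000 is a simple 1:2 ratio; 96000 -> 44100 reduces to 147:320
down_48k = resample_poly(x, 1, 2, axis=0)
down_441 = resample_poly(x, 147, 320, axis=0)

# Clip before converting back to 16-bit, in case of filter overshoot
wavfile.write("test_48k.wav", 48000,
              np.clip(down_48k, -32768, 32767).astype(np.int16))
wavfile.write("test_441.wav", 44100,
              np.clip(down_441, -32768, 32767).astype(np.int16))
```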

Leave a comment and let me know what your experiences are with high sample rate recordings.



If you found this post useful, you might like to check out my recent eBook,
"The Best Of Production Advice" - it has all the best content from this site to help you start improving your recording, mixing and production skills today. For more information, click here.



ABOUT IAN SHEPHERD

My name is Ian Shepherd - I've worked as a professional mastering engineer for over 20 years, and I run the Production Advice website, with over 50,000 readers each month.
