Are high sample rates making your music sound worse?

Yes, you read that right. There’s a real possibility that using high sample rates could actually be reducing the quality of your audio, not making it better.

How is this possible?

In a nutshell, many “affordable” soundcards have a non-linear response to high frequency content.

This means that even though they are technically capable of recording at 96 kHz and above, the small benefits of the higher sample rate are completely outweighed by unwanted “intermodulation distortion” in the analogue stages.

Further down I link to some test files so you can test your own system for this problem, but let’s just slow down for a minute and add a little background information.

What’s the point of high sample rates anyway?

The sample rate determines how many samples per second a digital audio system uses to record the audio signal. The higher the sample rate, the higher the frequencies a system can record. CDs, most mp3s and the AAC files sold by the iTunes Store all use a sample rate of 44.1 kHz, which means they can reproduce frequencies up to roughly 20 kHz.

Testing shows that most adults can’t hear much above 16 kHz, so on the face of it this seems sensible enough. Some people can hear higher, but not the majority, and examples of people who can hear above 20 kHz are few and far between. And to accurately reproduce everything below 20 kHz, a digital audio system removes everything above 20 kHz – this is the job of the anti-aliasing filter.

But a fair few musical instruments produce sound well above these frequencies – muted trumpet and percussion instruments like cymbals or chime bars are clear examples.

This leads to two potential objections to a 44.1 kHz sample rate – first, that in order to reproduce a sound accurately we should capture as much of it as possible, including frequencies we probably can’t hear. There are various suggestions that we may be able to somehow perceive these sounds, even if we can’t actually hear them. And second, that depending on the design, the anti-aliasing filter may have an effect at frequencies well below the 20 kHz cut-off point.

Whether these arguments stand up to scrutiny or not, the solution is obvious – record at higher sample rates. The filters can work higher up and be more gentle, and all the high-frequency content can be recorded accurately. Simple, right?

Well, not entirely. In fact, these arguments don’t really make sense – for an excellent and detailed discussion of why not, check out this article.

So why NOT use higher sample rates, then?

Back when CD was released, recording at 96 kHz or above simply wasn’t viable at a reasonable price, especially not in consumer audio. Times have moved on though, and these days almost any off-the-peg digital audio chip is capable of at least 96 kHz processing, if not higher.

Now these files take up much more space than simple 44.1 kHz audio, but hard drive space is cheap, and getting cheaper all the time – why not record at 96 kHz or higher, just in case either of those hotly debated arguments really does carry some weight?

The answer lies in the analogue circuitry of the equipment we use. Just because the digital hardware in an interface is capable of 96 kHz or higher processing doesn’t mean the analogue stages will record or play the signal cleanly.

It’s quite common for ultrasonic content to cause intermodulation distortion right down into the audible range. Or in simple English, the inaudible high-frequency content actually makes the audio you can hear sound worse.
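To see how a nonlinearity turns purely ultrasonic content into audible sound, here’s a minimal numerical sketch (assuming Python with NumPy; the 30/31 kHz tones and the small squared term are illustrative choices of mine, not a model of any particular interface):

```python
import numpy as np

fs = 96_000                     # sample rate in Hz
t = np.arange(fs) / fs          # one second of audio
# Two ultrasonic tones, both well above the range of human hearing
x = 0.5 * np.sin(2 * np.pi * 30_000 * t) + 0.5 * np.sin(2 * np.pi * 31_000 * t)

# A mildly nonlinear "analogue stage": the signal plus a small squared term
y = x + 0.1 * x ** 2

def level_at(signal, freq):
    # With exactly one second of audio, rfft bin index == frequency in Hz
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    return spectrum[freq]

print(level_at(x, 1_000))   # linear signal: essentially zero at 1 kHz
print(level_at(y, 1_000))   # nonlinear signal: a 1 kHz difference tone appears
```

The squared term multiplies the 30 kHz and 31 kHz tones together, and the product contains their 1 kHz difference – exactly the “inaudible content making audible sound worse” effect described above.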

You can read all the gory details in the same excellent article I linked to above, but here’s the summary:

…it’s not certain that intermodulation from ultrasonics will be audible on a given system. The added distortion could be insignificant or it could be noticeable. Either way, ultrasonic content is never a benefit, and on plenty of systems it will audibly hurt fidelity. On the systems it doesn’t hurt, the cost and complexity of handling ultrasonics could have been saved, or spent on improved audible range performance instead.

Check your own system

If you want to test your own playback system, there are files included in the article – to download them, click here. The files contain various types of test material, but all of it is at frequencies that are completely inaudible to humans.

So interpreting the results is simple:

Assuming your system is actually capable of full 96 kHz playback, the above files should be completely silent, with no audible noises, tones, whistles, clicks or other sounds.

For what it’s worth, the onboard sound of my Mac Pro fails all these tests – make sure you’ve set your system to 96 kHz output, if you want to try them yourself.
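If you’d rather roll your own test signal than download the linked files, something similar can be generated in a few lines of Python – a rough sketch using NumPy and the standard library; the 28 kHz frequency and the filename are my own choices, not taken from the article’s files:

```python
import numpy as np
import wave

fs = 96_000
t = np.arange(5 * fs) / fs      # five seconds of audio
# A 28 kHz tone: comfortably above human hearing, well below the 48 kHz Nyquist limit
tone = 0.5 * np.sin(2 * np.pi * 28_000 * t)
# Short fades avoid clicks at the start and end, which *would* be audible
fade = np.minimum(1.0, np.minimum(t, t[-1] - t) / 0.05)
tone *= fade

pcm = (tone * 32767).astype('<i2')   # 16-bit little-endian PCM
with wave.open('ultrasonic_test.wav', 'wb') as f:   # hypothetical filename
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(fs)
    f.writeframes(pcm.tobytes())
```

Play the result back at 96 kHz: on a clean system you should hear nothing at all.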

So what does this mean?

If you can hear audible sound when you play these test files, then you may be making your audio quality worse by choosing to use 96 kHz or higher sample rates!

If that’s the case, then you face a difficult choice – do you spend time and money upgrading to handle these very high frequencies, even though they probably aren’t audible? Or just optimise for 44.1 kHz, which is still the most common playback rate?

Notice I said you may be making things worse – even if your system fails these tests, the music you record may not have ultrasonic content that causes audible problems.

Another test would be to apply a phase-linear high-pass filter to your music at 25 kHz (say) and listen to the result – again, you shouldn’t be able to hear anything. If you can’t, then the high sample rate probably isn’t recording any information that will cause you a problem – but in that case, are you actually getting any benefit from it?
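Here’s one way to sketch that filter test in code (assuming Python with NumPy; the windowed-sinc design and the 5 kHz / 30 kHz demo tones are mine – in practice you’d run your actual mix through the filter and listen to what comes out):

```python
import numpy as np

fs = 96_000
fc = 25_000                       # high-pass cutoff in Hz
N = 511                           # odd tap count -> symmetric taps, exactly linear phase
n = np.arange(N) - (N - 1) / 2

# Windowed-sinc low-pass at fc, then spectral inversion to get the high-pass
lp = (2 * fc / fs) * np.sinc(2 * fc / fs * n) * np.hamming(N)
lp /= lp.sum()                    # unity gain at DC
hp = -lp
hp[(N - 1) // 2] += 1.0           # delta minus low-pass = high-pass

# Demo signal: an audible 5 kHz tone plus an ultrasonic 30 kHz tone
t = np.arange(fs) / fs
audio = np.sin(2 * np.pi * 5_000 * t) + 0.3 * np.sin(2 * np.pi * 30_000 * t)
residue = np.convolve(audio, hp, mode='same')

# Everything left is ultrasonic; played back, it should be silence
spectrum = np.abs(np.fft.rfft(residue)) / len(residue)
print(spectrum[5_000], spectrum[30_000])  # audible tone removed, ultrasonic kept
```

Because the filter taps are symmetric, the phase response is exactly linear, so nothing audible is smeared – anything you hear in the residue had to come from ultrasonic content.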

A controversial question

Finally, the fact that ultrasonic content can potentially cause intermodulation distortion, making things sound different even when they shouldn’t, raises a tough question.

Are all the people who claim to be hearing improved quality at 96 kHz and above really hearing what they think they are? Or are they just hearing intermodulation distortion?

My experience

It’s been over six years since I first tested myself with 96 kHz audio. I compared the SACD of Pink Floyd’s “Dark Side Of The Moon” and a pure DSD live jazz recording – I down-sampled both to 48 kHz and 44.1 kHz and blind-tested myself, switching between the three versions.

In each case, I could reliably hear a difference between 44.1 kHz and 48 kHz, but not between 48 kHz and 96 kHz. And, I was left with the distinct feeling that I could compensate for the difference I did hear with a very small EQ tweak on the 44.1 kHz version…

Now, that was just on those two recordings, at that time, using that system – maybe those recordings don’t have anything in them that gets the “benefit” of 96 kHz playback.

Or maybe 48 kHz is actually good enough?

Leave a comment and let me know what your experiences are with high sample rate recordings.

If you found this post useful, you might like to check out my recent eBook,
“The Best Of Production Advice” – it has all the best content from this site to help you start improving your recording, mixing and production skills today. For more information, click here.

Comments:


  1. Dustbunnies says


    First, thanks for focusing our attention on this extremely interesting and thought-provoking topic. I do have a thought or two on the matter.

    While I agree with the data presented in the article you’re using as the basis of this post, I do not necessarily agree with one of its seemingly-tacit key premises: that ultrasonic modulation is synonymous with distortion and unwanted aberrations.

    Ultrasonic modulation happens all the time in the real (analog) world. Any time you mix two signals containing ultrasonic content, this process will result in additive and subtractive frequencies which can then color frequencies down into the audible range. The mere inclusion of ultrasonic modulation has nothing to do with a value judgment that this is distortion or unwanted. It can just as well present as a character or added dimension of the source audio.

    The exception here, of course, is where low-quality electronic components themselves are generating noise at an ultrasonic level, which then cross-modulates and produces extra frequencies in the audible spectra that would not naturally occur. In that case, I would usually agree with the subjective judgment of distortion and noise.

    However, at one point the article states, “If the same transducer *reproduces* ultrasonics along with audible content…” (note: emphasis is mine). Observe that the author chose the word ‘reproduces’ rather than ‘generates’, leading me to believe that he is also considering naturally-occurring ultrasonic modulation to be as unwanted as additional component noise.

    That nit-pick aside, it’s an interesting article and a good summary analysis on your part. Thanks for bringing that one to our attention, Ian. :)


  2. says

    Thanks for the comment. I read the article to relate only to ultrasonic content generated by the analogue gear.

    The bottom line is, there are no intermodulation distortion signals in the test files. If you hear them when you play back your signal, they are being “generated” by the playback system…

  3. says

    @ Dustbunnies.

    When you mix “ultrasonic” signals they just mix together. It’s only in the presence of non-linearity that the sum and difference frequencies are created.

    If you generate, say, 19 kHz and 20 kHz, then provided the system is linear you won’t hear any 1 kHz.

    It’s called “linear superposition”. For the air itself to create sum and difference frequencies you need *very* high SPLs.

    Competently designed analogue electronics will have these distortions below -100 dB.


  4. says

    The real culprit here is not higher sample rates, but bad analog. Bad analog always sounds, well… bad. Higher sample rates simply reveal it more fully.

    The article is laughable: “None of that is relevant to playback; here 24 bit audio is as useless as 192kHz sampling. The good news is that at least 24 bit depth doesn’t harm fidelity. It just doesn’t help, and also wastes space.”

    24-bit audio is a significant improvement, and if author Monty can’t hear the difference he probably lacks a reasonable monitor setup. I wonder if he advocates cars being engineered to go no faster than the speed limit so as not to waste capacity.

  5. says

    Do we have any idea how much kit produces these errors?
    I tried the samples out on my Prism Orpheus and a TC Electronic StudioKonnect 48 I am reviewing, with no issues. If it’s not widespread, isn’t this similar to saying:
    “Some HP laptop speakers fart when bass notes are played, maybe we shouldn’t use 30 Hz in recordings as few systems can reproduce it”?

    OK, obviously it’s not that extreme, but do you see what I’m getting at?

  6. says

    Honest answer – no. But there are so many systems advertising “support” which will certainly fail (every Mac’s built-in audio, for example) that I think it’s important to make people aware the numbers don’t tell the whole story – just as not all HD TVs give a better picture.

  7. says

    I totally agree. I think 44.1 kHz reproduces a total range of 22.05 kHz in stereo, as the frequency number is split between left and right.

    I don’t think there is any more to it than that. I think a lot of audiophiles get caught up in the bigger-is-better idea. I guess they believe it makes them sound more intellectual than they actually are.

    In the digital age of being able to record a professional sounding album in your basement with not much money, they like to pretend they can hear something the average person can’t.

    Maintaining the idea they have control over something mystical and mysterious that noob engineers can’t hear or perceive because they just don’t have the ear for it.

  8. Lerxst2112 says

    Thanks Ian, very insightful article, really got me reconsidering things as a musician and home studio engineer. Very glad I found this website. Cheers!

    Sidenote: @ Heavy Metal, the reason why the minimum sample rate is required to be at least double the maximum audible frequency is explained by the Nyquist–Shannon sampling theorem. In short, no, it’s not because it’s stereo. Also, some people certainly can hear things others cannot, just as some people do not require eyeglasses while others do. A producer/technician friend of mine has actually been able to hear signals beyond 20 kHz when tested for them, though this is indeed extremely rare. As for me, I can hear things that are much lower in volume than the average person can, again confirmed by audiologists. The problem is people’s attitudes toward such abilities, either for or against, and how this is exploited primarily by the consumer market.

  9. M says

    Very interesting article. I’ve never found sampling rates nearly as important (to my ears, anyway) as bitrate and rarely record above 48kHz, but I never would have thought the higher rates could potentially make it sound worse.

  10. Jerry Korten says

    The basic premise you state for justifying high sample rates is mostly incorrect. While it is desirable to extend the frequency response of digital audio by increasing the sample rate, the real benefit is that the digital filtering is moved to an ultrasonic (>20 kHz) region and therefore cannot affect the shape of the waveform. In case you don’t know, a 10 kHz square wave digitised at 44.1 kHz plays back as a sine wave. In addition, the brickwall filters required during data acquisition (ADC Nyquist filtering) and during playback (reconstruction filtering, to keep the sample rate from heterodyning with high frequencies) cause ringing around 20 kHz (and around 19 kHz for the reconstruction filter). This ringing itself causes intermodulation distortion, and is the reason audiophiles reacted so strongly against the sound of massed strings on CD playback. But the proof is in the pudding – if you invest in a $1K DAC and have a USB to S/PDIF converter that removes jitter from the digital stream, the difference in the sound is monumental. If you have tried this and you do not hear a difference, consider yourself blessed – you will never need to invest in anything other than a $500 stereo system.

  11. says

    Thanks for the comment, but did you read the post? It’s basically saying high sample rates aren’t needed – I’m not trying to justify them.

    But also, not all filters cause ringing. Do you have research to support your claim about the intermodulation distortion?

    Also not all clocks exhibit jitter…

  12. says


    I’m not sure what the point is of saying that a 10 kHz square wave sampled at 44.1 kHz makes a sine. It can do nothing but that! The next harmonic (30 kHz) is outside the passband for 44.1 kHz. This is more an argument that Fourier was correct than anything.

    Also, I’m not sure if there is any real-world evidence that ringing at the Nyquist frequency (always >20kHz) is audible by itself _or_ as an intermodulation product with other nonlinearities in the system.

    As an aside, I have heard massed strings that sounded pretty good even at 44.1/16, so I’m not sure if your claim is in regard to a particular recording, or to PCM in general.

    Also, I’m not aware of a USB to S/PDIF converter that doesn’t have substantial jitter attenuation – the interface is asynchronous (there is no embedded clock to follow), so there is always buffering/reclocking of the stream. This doesn’t mean that there isn’t some incompetent audiophile implementation somewhere, but I haven’t seen it.
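    The square-wave point is easy to verify numerically – a quick sketch, assuming Python with NumPy:

```python
import numpy as np

fs = 44_100
f0 = 10_000
t = np.arange(fs) / fs

# Fourier series of a 10 kHz square wave, keeping only harmonics below Nyquist
square = np.zeros_like(t)
k = 1
while k * f0 < fs / 2:
    square += (4 / np.pi) * np.sin(2 * np.pi * k * f0 * t) / k
    k += 2                        # square waves contain only odd harmonics

# Only k = 1 fits below 22.05 kHz, so the band-limited "square" is a pure sine
sine = (4 / np.pi) * np.sin(2 * np.pi * f0 * t)
print(np.max(np.abs(square - sine)))  # no difference: only the fundamental fits
```

    The loop exits before the 30 kHz harmonic, so the band-limited “square wave” and a plain sine are one and the same signal.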


  13. Prashanth says

    Higher sampling rates are definitely desirable and will record a fuller, more detailed sound – it’s just that the analogue part has to be clean.

  14. tony says

    I respect all of your opinions. But I listened carefully to 192 kHz / 24-bit tracks from Chet Baker and The Eagles from a well-known hi-res music online store, burned directly to DVD-Audio from FLACs and played on a well-known UK hi-fi manufacturer’s DVD-Audio player, with a passive line stage and class AB power amps heavy-wired to 4-ohm soft-dome passive speakers. The difference from 16-bit / 44.1 kHz is AMAZING. But I respect all your opinions.

  15. Dave C. says

    I came across this discussion during a surfing session, and found it quite interesting. I am in no way an audio expert and some of the things being discussed are out of my league, but I was a little curious about the statement close to the beginning of the article: “…a sample rate of 44.1 kHz, which means they can reproduce frequencies up to roughly 20 kHz”. This doesn’t make sense to me. That is only about 2 samples per cycle. If you were to try to rebuild a complex signal wave cycle with only 2 samples, I think you’re going to be sad.

    Am I out of line here?


  16. says

    Two samples per cycle are all you need to fulfil the Shannon/Nyquist sampling requirement in the real world. That’s because, since you band-limited the signal prior to sampling, there is only one waveform (the correct one) that will intersect those two points. Nothing can fall “between the cracks” or be missed when you weren’t looking.

    Sounds crazy, I know, but it works.
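    A quick sketch of that in code (assuming Python with NumPy; the sinc sum is truncated to a finite window, so only the middle of the signal is compared):

```python
import numpy as np

fs = 44_100
f = 20_000                        # barely more than 2 samples per cycle
n = np.arange(512)
samples = np.sin(2 * np.pi * f * n / fs)

# Ideal (Shannon) reconstruction: each sample contributes a shifted sinc
up = 8                            # evaluate between the original sample instants
t_fine = np.arange(512 * up) / (fs * up)
recon = np.array([np.dot(samples, np.sinc(fs * ti - n)) for ti in t_fine])
truth = np.sin(2 * np.pi * f * t_fine)

# The sinc series is truncated, so compare away from the edges
mid = slice(len(t_fine) // 4, 3 * len(t_fine) // 4)
print(np.max(np.abs(recon[mid] - truth[mid])))  # small: the wave is recovered
```

    Even at barely over two samples per cycle, the reconstructed waveform between the sample points closely matches the original sine – only the truncation of the sinc window limits the accuracy here.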

  17. Robert Harvey says

    48 kHz is actually good enough.

    16 bits, however, isn’t – if, and this is a big if, you are doing multitracking. Recording your tracks at 24 bits ensures that you have a low enough noise floor so that, when you mix them all together, there’s still plenty of room left at the bottom for 16 bits.
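    The arithmetic behind that headroom argument can be sketched quickly (these are theoretical figures for ideal converters and uncorrelated track noise – real hardware falls somewhat short of them):

```python
import math

def snr_db(bits):
    # Theoretical quantisation SNR of an ideal converter: 6.02 * bits + 1.76 dB
    return 6.02 * bits + 1.76

# Summing N tracks of uncorrelated noise raises the noise floor by 10*log10(N)
tracks = 32
penalty = 10 * math.log10(tracks)

print(snr_db(16) - penalty)  # roughly 83 dB left after a 32-track mix
print(snr_db(24) - penalty)  # roughly 131 dB: still far below a 16-bit master's floor
```

    A 32-track mix of 16-bit recordings eats into the ~96 dB a 16-bit master can deliver, while 24-bit tracking leaves the summed noise floor comfortably below it.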

  18. musica phenomena says

    I’m a real-world working professional with many Billboard top 10s to my credit. So while I’m not an acoustician or an academic on the subject, I have some nuts-and-bolts experience in this area. I’ve heard this argument over audio sample rates, both pro and con, for years. My experience has been that if you record and play back using high-quality analogue equipment, DACs and monitors, it is always better to capture as much of the source as possible and play back at the highest quality available, even if it’s beyond the ability of the human ear. I can tell the difference, and I’ve blind-tested many of my friends who are not audio professionals or musicians – most can hear the difference as well. If you need hard data you can read plenty of academic arguments in favor of HD audio; google it.

    The real problem is (as the article writer stated) cheap consumer DACs and amplifiers combined with bad source material. But this is nothing new: when I was growing up in the 70s and 80s my dad had a $10,000 stereo system which sounded amazing, but many people had cheap stereos, turntables and speakers that didn’t have nearly the fidelity of my dad’s system. So I’m not buying the argument that bad consumer electronics is a reason to give up on recording audio in HD.

    However, while most consumer products have improved over the past decade, consumer audio quality has actually gotten worse. In simple non-technical terms, compressed audio files sound flat compared to uncompressed audio, and in my opinion it’s a big part of why recorded music has been devalued and marginalized over the past decade. At least two generations have grown up hearing most of their music via lossy compressed files on YouTube or, at best, iTunes – often on laptop speakers, ear buds or cheap computer monitors through bad DACs.

    I can only hope that, even if only for the sake of greed, someone will get the idea that they can package HD audio and sell it along with a line of acceptable consumer playback equipment, similar to how HD video was marketed. Until then I’ll still continue to record in HD and listen to music from

  19. Andy says

    While I don’t think the higher frequencies matter a great deal, if the original source uses sample rates as high as 96 kHz (or even higher), I think we should use those for playback so we get as close to the source as possible.

    I have a Sony STR-DA2400ES amplifier paired with 4x Sony SS-B4ED (main/surrounds-70 kHz) and 1x Sony SS-CNB2ED (centre-80 kHz) speakers running from a good quality Creative 7.1 card on my PC (it’s capable of 192 kHz but of course limited to 96 kHz via optical) and the difference between 48 kHz audio and 96 kHz audio is definitely noticeable to my ears and it also seems to change the sound stage of the whole room.

    I also have a Sony STR-DB895D-QS in the bedroom hooked up to the ED Pascals (also 70 kHz) from Sony and they too sound phenomenal compared to normal range speakers.

    The problem with anyone doing blind tests is that not everyone will be able to perceive the differences. My neighbour and I both have great hearing (we both own Sony QS/ES equipment), but none of my friends can tell the difference between their cheap all-in-one home cinema kits and the higher-end Sony ES equipment.

  20. says

    I’m not sure about something here. This is a helpful dialogue, but I was never under the impression that a high sample rate was to allow reproduction of frequencies above 20 kHz (…so people with super-sensitive hearing could hear notes above 20 kHz).
    I rather thought that the more samples one takes of a high-frequency sound wave (i.e. a 12 kHz upper harmonic of a violin or piccolo), the more nuance one can capture of the source instrument. If there are only four samples taken of that 12 kHz waveform (i.e. a sampling rate of 48 kHz), it is very hard for anyone (good high-frequency hearing or not) to tell the difference, at the highest notes, between a violin and a piccolo. The waveform goes up and it goes down for sure, but with only 4 samples during that interval there will be no upper harmonics detected to tell the two instruments apart. We just know that the note is there.
    However, if we quadruple the sampling rate to 192 kHz, we sample that same 12 kHz waveform *16 times* during its period… letting those who can hear/feel (and have analogue systems capable of reproducing) those upper-frequency harmonics (which I know, yes, are created by even higher frequency waveforms) probably detect the difference between a piccolo and a violin at high pitches (or perhaps the nuances of bowing style, embouchure, etc), right? Isn’t THAT the real reason we want to use higher sample rate recording (if we have the tools to do it well)?

  21. Peter Kennerley says

    I heard Rupert Neve on YouTube talking of an engineer at Abbey Road or similar rejecting a new console because he could hear a fault which turned out to be at 56 kHz. Mr Neve believed this to be evidence of human perception of ultrasonics. Last year I visited Audio Maintenance Ltd in Manchester; the owner used to work for AMS, and his explanation was that intermodulation products can be heard in the 20 Hz to 20 kHz band. While looking at an FFT screen in his lab it occurred to me that 40 kHz is only one octave above 20 kHz. As intermodulation is associated with even-order harmonics, which many if not all instruments produce, it’s not unreasonable that filtering out everything above 20 kHz could be truncating the natural sound. A relative of mine, who was also a musician and electronics design consultant, confirmed that harmonics extend well beyond 20 kHz. This was the reason I invested in some analogue equipment.
    Very good article, Ian. However, intermodulation may not be a bad thing – it may just be part of the normal sound our brains expect to hear: products within the audible spectrum originating in the ultrasonic spectrum. Regards, PK.
