Production Advice – make your music sound great
http://productionadvice.co.uk

Queen in the recording studio – videos

Everybody’s talking about Queen at the moment, for obvious reasons – the new film “Bohemian Rhapsody” is a huge hit. I have mixed feelings about whether I want to see it or not – with Freddie at the heart of the story, but not here to help tell it, tragically.

But what I have been doing recently is digging into the treasure-trove of YouTube footage available of the band, and in this post I thought I’d share a few of my favourites from what I’ve found.

The great thing about this is that it’s reminded me what a huge part of my life Queen were as a young teenager. Of course I watched their incredible Live Aid performance like everyone else – in fact, we watched it so many times during the school lunch-hours that the VHS tape started to wear out ! But we were all huge Queen fans long before then – “Sheer Heart Attack” was the third album I ever taped to listen to on my treasured Aiwa personal cassette player, and is still probably my favourite.

(What were the others ? I’d rather not say.

Oh, all right then: “Oxygene” by Jean Michel Jarre and… “Cats”. By Andrew Lloyd Webber. At least you can’t say I didn’t have eclectic taste ! And the fourth was “Script For A Jester’s Tear” by Marillion.)

What’s interesting with hindsight was that I didn’t think much about how the songs were recorded at the time, even though I was already deeply interested in recording and audio technology. So I just took a studio masterpiece like “Bohemian Rhapsody” for granted. Not any more, though ! And to help fill in the blanks, the first piece of YouTube footage I found was the footage above of Brian May listening to the original takes of “Bohemian Rhapsody” in 2002. And perhaps the most amazing thing about it is simply to realise that Freddie had the whole thing in his head, from the outset – choir section and all.

Watching that video reminded me of an excellent documentary about the recording of the song that I posted on my site way back in the very early days of my blog. A quick check revealed that several of the original links were dead, but happily I was able to track them all down again and update it – you can watch and read here:

Recording and mixing Bohemian Rhapsody

Still highly recommended ! (Make sure you check out the links to the Sound On Sound articles in that post, too – essential reading.)

And there’s footage of Queen actually recording in the studio together in this clip – specifically the song “One Vision” from “A Kind Of Magic”, another album we listened to constantly at the time – and, yes, played air-guitar to with tennis rackets, if you must know. Watch out especially for the “alternative” lyrics to the song towards the end…

Lots more interviews and some live footage from the same time (immediately after Live Aid) in this video:

And finally a “behind the scenes” documentary from the tour here. (If you haven’t already heard the amazing “Live Killers” album from a few years earlier, that should probably be your next step.)

So, there you go – several hours of high-quality Queen-in-the-studio-related footage – I hope you find them as fascinating and inspiring as I do !

Mixerman says maybe you should use an automated mastering service. Here’s why he’s wrong.

OK, let’s get this out of the way right up front. I’m a Mixerman fan. I’ve written before about the extraordinarily amusing and interesting Daily Adventures Of Mixerman audiobook, and I regularly recommend his book Zen and the Art of Mixing to people.

He’s a controversial figure who stirs up a lot of debate online, and that’s fine.

Because he has a habit of saying things like this:

“You do need to get your record to level, or no one will be able to turn it up loud enough to hear it in their car. So, at this point–and I can’t believe I’m about to write this–it would seem to make more sense to use an automated mastering service.”

Or this:

“Unless you’re paying to have your record mixed, you shouldn’t pay to have it mastered either.”

[Record scratch]

This statement comes right at the end of the section on mastering in his new book, and is bound to be the biggest take-away people get from the topic.

But it’s completely wrong.

I’ll explain why I say that in a minute, but first let’s get some context – because as Eric himself says in the same blog post I took those quotes from:

“…I’m sometimes paraphrased poorly on the Internet”

So first, all the quotes here are taken from a blog post which you can read here, and are parts of an excerpt from Mixerman’s newest book, the Musician’s Survival Guide to a Killer Record.

(Full disclosure – I haven’t read the whole book, apart from the excerpt in Eric’s blog post. I’m sure it’s very good – except for the section we’re talking about here.)

So to make sure I’m not accused of poor paraphrasing, let’s get back to another quote from the book for a moment:

“Let me just be perfectly clear… My records are mastered by a professional mastering engineer. I’m a professional producer and a mixer, and I intimately understand the process. I hire people who hear like I do, and whose consultation I trust. I know what the mastering process does and, as a mixer, I automatically compensate for what will happen in that process. While the difference between what I deliver and what I get back from an ME is nothing short of subtle, it often feels like the biggest difference in the world. So, a great ME can bring a great mix up another level.”

So far so good.

The trouble is, Eric undermines almost everything he just said in that paragraph with everything else he says about mastering in the excerpt ! For example:

“Whereas a mixer employs balance to cause a reaction. The ME merely shapes the EQ curve of the stereo mix and brings it to the appropriate level, as determined by you. The mixer deals with emotion. The ME touches up the sound… All great mixes were great before the record ever went to an ME.”

I agree with the last sentence, but not the rest of it ! As a mastering engineer I’m absolutely focused on making sure the emotion of the song, performance and mix are conveyed to the listener with the maximum possible impact. The tools are more limited in mastering than in mixing, but the goal is the same, and the impact can sometimes be fundamental.

Here’s another example:

“A good mix starts with your arrangement, and with a little practice on that front, your mixes will come together without the help of someone who believes music is about sound.”

Talk about poor paraphrasing – Eric seems to be saying that because mastering engineers care about Sound, they somehow don’t know or care about music. Last time I checked, music was conveyed by sound, and the way we hear it is utterly influenced by the way it sounds ! “Abbey Road” is a great album on any format, no matter how lo-fi, but if you really want to feel the pulse of “Come Together” or get chills from the end of “The End”, you need to be listening in full-frequency stereo – it needs to sound great – and mastering has a crucial role to play in that. Music is about emotion, yes – but it’s about emotion communicated via sound, and to suggest that the two aren’t intimately connected makes no sense to me – it’s a false distinction.

So at the very least Eric is being inconsistent about his message – if he truly understands and appreciates the impact of mastering, why does he spend so much time minimising its value ?

I could forgive him all that though, if it weren’t for the two quotes I put at the top of this post. Here’s the point he’s making in more detail:

“Look, if you’re putting out records, and you’re hiring professionals such as myself to produce and mix them, it only makes sense to have your record mastered. You’re going to spend good money on a mixer only to skimp out at the end? But really, if you’re just starting out, or if you merely want to focus-group a new song, I don’t think it makes much sense to pay to have it mastered.”

And

“Until you have a fanbase, and until you’re putting out records on a regular basis–until you’re making money from your music–I wouldn’t bother mastering your records. Just run your production through an online automated mastering service and be done with it. Or get yourself a good brickwall limiter and bring it to level yourself. That suggestion alone will cause people to pull their hair out. You have to hire a mastering engineer! No, you really don’t. If you’re going to hire anyone, hire a bona fide mixer.”

Now actually there are two points being made here, and one of them I don’t entirely disagree with. Eric’s whole argument is that you need a great mix before you can make a great master, and I agree with that. If you’re not able to get a great mix yourself, or to pay someone to make one for you, paying for mastering really doesn’t make much sense.

But the solution is NOT to use an automated mastering service – the idea of using a decent limiter is actually probably better !

Don’t get me wrong, I don’t have a problem with automated mastering – I know people who use these services and love them, especially when deadlines are tight. (I do have a problem when people say it’s as good as hiring an experienced professional, but I’ve already talked about that elsewhere…)

The real problem is that whereas a good limiter will simply lift the level without dramatically changing the mix, automated mastering services do far more than that. They use “AI” and sophisticated processing to try and emulate what a real engineer might do. Which is fine – sometimes it works really well, and sometimes it doesn’t.

But just like a real mastering engineer, you’re much more likely to get a great result if the mix sounded really good going in. If it doesn’t, with a mastering engineer (or limiter !) you usually just get a not-so-great master back. But with an automated service, you typically get back a train wreck, in my experience.

So Eric’s logic is completely backward ! If you don’t have a great mix to begin with, you really shouldn’t use automated mastering, because it’s far less likely to work well – a simple limiter, used carefully, is probably a much better option. Especially because automated mastering services are “black boxes”. We have no idea what happens in between sending the file and getting the “master” back – and often the default settings are much too aggressive, especially in terms of loudness.

(And while we’re talking about level – who says the music needs a big increase in level anyway, these days ? Most independent musicians submit their music directly to online streaming platforms, where the loudest music is turned down by normalization anyway. You probably only need a few dBs of limiting to get the music into the sweet spot – back to that limiter again…)

So let’s get back to that second quote, the one Eric uses to sum up his section on mastering.

“Unless you’re paying to have your record mixed, you shouldn’t pay to have it mastered either.”

I said he’s wrong about that, but how do I know ?

Because I have over 20 years of experience mastering countless projects that people have mixed themselves.

Some of them have been stunning, some of them have been less so – but all of them have been improved by the mastering work I’ve done for them. Otherwise I wouldn’t feel comfortable charging people for them. All of them have sounded dramatically better than they would have by simply lifting the level a few dBs into a limiter – and yes, they have more emotional impact as a result.

Would they have sounded even better if they’d also been mixed by a professional mixer ? Maybe. In some cases definitely yes, but in some cases certainly no ! I’ve heard some outstanding amateur mixes over the years, and some truly dreadful professional mixes, too.

There’s no question that there’s far more potential to make or break a song at the mixing stage than during the mastering – but if your mix is already good, there’s no guarantee that a pro mixer will be able to realise that potential better than you. And it’ll certainly be much more expensive – mixing typically costs 5 to 10x more than mastering at a similar level of expertise.

Eric himself makes this same point but about mastering engineers – how are you supposed to find a good one ? Well exactly the same challenge applies to finding a great mixer – and the same solutions. Listen to people’s work, ask for recommendations and start a conversation with an engineer you’re interested in working with.

Or don’t ! Make the best mix you can, apply some gentle limiting and check how it sounds at www.loudnesspenalty.com.

But whatever you do, DON’T just send it to a machine and hope for the best.

Mixerman, you should know better.

Update

I mention in the post that I haven’t read the whole book, and Eric has raised this in our conversations on Facebook. A major theme of the book is that with proper arrangement and recording, mixing and mastering become far less important for musicians just starting out – and I agree. If you’re not making money from your music yet and have limited resources, it probably doesn’t make sense to pay for mastering (or mixing).

But that includes auto-mastering ! No mastering is better than bad mastering every time, and in my experience auto-mastering is often bad.

Spotify upload specs recommend -2dB True Peak (for the loudest songs)

 
You’ve probably heard by now that Spotify recently announced it will soon be possible for anyone to upload directly to their streaming service, without going through an aggregator like TuneCore or CD Baby.

What you may not have heard yet is that along with that, they’ve also published recommendations for the best format and specifications for your music when you do.

These include some interesting details (like the fact that they support 24-bit files) and confirm several things we already knew – that they’re using ReplayGain for loudness normalization, and that the default playback reference level is approximately -14 LUFS, for example.

There’s one suggestion that may raise a few eyebrows though, and that’s the recommendation that files should peak no higher than -2 dBTP (True Peak) – thanks to Christopher Carvalo for the heads-up.

[Update – Since I posted this last week, Spotify have updated their FAQ to clarify that the -2 dBTP recommendation only applies to material mastered louder than -14 LUFS. If your material measures -14 LUFS or lower, the True Peak recommendation is -1 dBTP]
 

So why is this important ?

Mainly because it’s a much more conservative number than many people would expect. I’ve been mastering with peak levels no higher than -1 dBTP for years now, and recommending people do the same, but I still see people saying that True Peaks aren’t an issue “in the real world”. And Spotify’s guideline is even more conservative than mine.

The reason for the recommendation is simple – Spotify doesn’t stream lossless audio. They encode using Ogg/Vorbis and AAC data-compression methods to reduce bandwidth – like more sophisticated versions of mp3 encoding. These encoded streams sound pretty good, but reduce the file size by as much as ten times, to reduce the amount of data needed to get the audio from Spotify’s servers to our mobile phones and other playback devices.

There’s no such thing as a free lunch, though – to achieve this reduction in data-rate, the audio has to be heavily processed when it’s encoded.
 

What happens during encoding

VERY roughly speaking, the audio is split up into many different frequency bands. The encoder analyses these and prioritises the ones that contribute most to the way we perceive the sound, and throws away the ones we’re least likely to hear.

When the audio is decoded for playback later, the signal is rebuilt, and usually sounds remarkably similar to the original, despite all the discarded data. However even though it sounds pretty close to the original, the audio waveform has typically changed dramatically – and one of the most noticeable differences is that the peak level will have increased.

And this is where the problem arises. If the audio was already peaking near 0 dBFS, the reconstructed waveform will almost certainly contain peaks that are above zero. And that means that the encoded file could cause clipping distortion when it’s reduced to a fixed bit-depth for playback, which wasn’t present in the original.

In fact, it’s even worse than that, sometimes. Encoded files store the data with “scale factor information” built in (kind of like a coarse floating point), but many players reduce the decoded files to fixed-point immediately after decoding. So whereas extra decoding peaks aren’t an issue if the signal is turned down before it gets played back, clipping during the decoding process will be “baked in” to the decoded audio in this case, regardless of normalization or the final playback level.

(If you’re asking why the encoder doesn’t detect when this might happen and reduce the level automatically – great question ! And actually some do. But the answer for Spotify is almost certainly that users would complain. The simplest way to test an encoded file is to compare it directly to the original, and if the result is quieter than the super-loud result people have worked so hard to achieve, many users would be unhappy, even if the encode is cleaner as a result.)
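
You can see this effect on your own material with a quick round-trip test. Here’s a minimal sketch – assuming ffmpeg is installed on your PATH, with “master.wav” as a placeholder for your own file – that encodes to Ogg Vorbis, decodes back to 32-bit float WAV (so any overs aren’t truncated away to fixed-point), and compares the peaks:

```python
import subprocess
import numpy as np
import soundfile as sf  # pip install soundfile

# Encode to Ogg Vorbis (quality 5), then decode back to float WAV so that
# peaks above 0 dBFS survive instead of being clipped to fixed-point
subprocess.run(["ffmpeg", "-y", "-i", "master.wav",
                "-c:a", "libvorbis", "-q:a", "5", "encoded.ogg"], check=True)
subprocess.run(["ffmpeg", "-y", "-i", "encoded.ogg",
                "-c:a", "pcm_f32le", "decoded.wav"], check=True)

original, _ = sf.read("master.wav")
decoded, _ = sf.read("decoded.wav")

def peak_db(x):
    return 20 * np.log10(np.max(np.abs(x)))

print(f"Original peak: {peak_db(original):+.2f} dBFS")
print(f"Decoded peak:  {peak_db(decoded):+.2f} dBFS")  # typically higher
```

The louder the master going in, the bigger the difference you’ll typically see between those two numbers.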
 

What does all this have to do with True Peaks ?

There’s no way to know for sure if encoding will cause clipping, or how much – it depends heavily on the codec, the material and the data-rate. Lower data rates require heavier processing, which causes bigger changes in peak level and potentially more encoder clipping.

The True Peak level gives a useful warning, though. It was introduced as part of the R128 Loudness Unit specification, and gives a reasonable indication of when encoder clipping is likely to occur. Really loud modern masters can easily register True Peak levels of +1 or +2 dBTP, and often as much as +3 or +4 !

Those files are virtually guaranteed to cause encoder clipping if they’re processed as-is, so it’s sensible to reduce their level before you supply them, to get the best quality encodes.
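
You don’t need a dedicated meter to get a rough True Peak reading, either. A minimal sketch – oversampling by 4x in the spirit of the ITU-R BS.1770 measurement, not a certified meter, and with “master.wav” again as a placeholder – looks like this:

```python
import numpy as np
import soundfile as sf
from scipy.signal import resample_poly  # pip install scipy

data, rate = sf.read("master.wav")               # floats in -1..+1
oversampled = resample_poly(data, 4, 1, axis=0)  # 4x oversampling

sample_peak = 20 * np.log10(np.max(np.abs(data)))
true_peak = 20 * np.log10(np.max(np.abs(oversampled)))

print(f"Sample peak:        {sample_peak:+.2f} dBFS")
print(f"True peak (approx): {true_peak:+.2f} dBTP")  # can exceed 0 dBTP
```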
 

The question is, how much should they be reduced ?

It’s impossible to say exactly without trying it. The harder the audio is hitting the limiter, and the lower the data rate, the bigger the changes in peak level during encoding and decoding will be, and the more likelihood of problems as a result, so there’s no one-size-fits-all solution.

Personally I don’t make super-loud masters, and have found that my suggestion of -1 dBTP typically produces very clean encodes, but we have to assume that Spotify’s recommendation is based on analysis of the files they encode. I’ve double checked some of my own recent masters, and found that using my own loudness guidelines I’m getting clean encodes, so I won’t be changing how I work because of this recommendation.

[Update – As I mentioned above, Spotify have updated their FAQ to confirm this – the -2 dBTP recommendation only applies to material mastered louder than -14 LUFS]

But certainly if you’re making mixes or masters that are hitting close to 0 dBFS, you should at the very least start measuring True Peaks, and reduce your levels to tame them.
 

But the music is MEANT to be loud, why should we turn it down ?!

Well firstly because the encodes could sound better if you do. But also because it’s going to be turned down eventually, anyway ! Spotify uses loudness normalization by default, just like YouTube, TIDAL and Pandora. This means they measure the loudness of all the material they stream, and turn the loudest stuff down. This is done to stop users being “blasted” by unexpected changes in level, which is a major source of complaints. And even if users turn normalization off, they’re unlikely to run the software with the volume at maximum !

So even if you’re in love with the super-dense sound of your music, reducing the overall level when you submit it won’t have any practical consequences for the final playback level – it can only sound better because of a cleaner encode.
 

What about -14 LUFS ?

I’ve had a few people asking about the fact that Spotify’s normalization reference level is approximately -14 LUFS, and if this -2 dB True Peak recommendation overrules or replaces it.

The answer is No – these are two separate issues. The -14 LUFS figure simply gives us an idea of how loud Spotify will try and play songs in shuffle mode – it’s never been a “target” or a recommendation. This is a common source of confusion, and I wrote about it in more detail here.

The -2 dBTP recommendation is to try and ensure better encoding quality for material that was mastered very loud originally – peak levels aren’t a good way to judge loudness. So to get the best results you should keep both numbers in mind.
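
If you’d like to check both numbers at once before uploading, here’s a rough sketch using the pyloudnorm library – the file name is a placeholder, and the two-tier ceiling simply mirrors the Spotify figures quoted above, so treat it as an illustration rather than an official tool:

```python
import numpy as np
import soundfile as sf
import pyloudnorm as pyln  # pip install pyloudnorm
from scipy.signal import resample_poly

data, rate = sf.read("master.wav")  # placeholder path
loudness = pyln.Meter(rate).integrated_loudness(data)  # BS.1770 LUFS
true_peak = 20 * np.log10(np.max(np.abs(resample_poly(data, 4, 1, axis=0))))

# Spotify's advice: -2 dBTP for material louder than -14 LUFS, else -1 dBTP
ceiling = -2.0 if loudness > -14.0 else -1.0
print(f"{loudness:.1f} LUFS, {true_peak:+.1f} dBTP "
      f"(suggested ceiling {ceiling} dBTP)")
```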

 

Summary

I’ve said it before, and I’ll say it again – the loudness normalization reference levels aren’t meant to be targets. Instead, master your music so it sounds great to you, and preview it using the free Loudness Penalty site to see how it will sound when normalized.

But you should also be aware that very high peak levels can cause sub-standard encodes when the files are converted for streaming. And if you’re like me, you’ll want to do everything you can to get the best possible results – including keeping an eye on the True Peaks.

 

Update – and a warning

I’m seeing a lot of different reactions to the information in this post. They vary from “yes I’ve been saying this for ages”, through annoyance that there’s yet another number to think about, all the way to “ah I don’t care, I’ll just turn the limiter output down a little”.

Be very careful about this last option.

The harder you push the loudness into a limiter, the higher the True Peak level will go. And the higher the True Peak levels are, the greater the risk of encoder clipping. So you’re fighting a losing battle. Remember True Peak doesn’t necessarily predict how much clipping will take place, so if you try to upload at the same loudness and just reduce the True Peaks, you could end up with just as many issues with the encode.
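
You can demonstrate this for yourself in a few lines of code. The sketch below hard-clips a test tone – a crude stand-in for very aggressive limiting – and shows that although no individual sample ever exceeds the ceiling, the reconstructed (true) peak does:

```python
import numpy as np
from scipy.signal import resample_poly

rate = 44100
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 997 * t)  # one second of a 997 Hz test tone

ceiling = 10 ** (-0.1 / 20)         # clip ceiling at -0.1 dBFS
clipped = np.clip(tone * 4.0, -ceiling, ceiling)  # ~12 dB into the clipper

# No sample exceeds -0.1 dBFS, but the waveform between samples does:
true_peak = 20 * np.log10(np.max(np.abs(resample_poly(clipped, 4, 1))))
print(f"Sample peak: -0.10 dBFS, true peak: {true_peak:+.2f} dBTP")
```

Push the gain harder and the true peak climbs further, even though the sample peak never moves.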

Wavelab, Ozone, Sonnox and others offer “codec preview” features which allow you to assess the results of encoding – if you’re chasing extreme loudness then you need to use methods like these to check the results you’re getting.

And as always personally I think the best answer is a perfect balance between the different factors – between loudness and dynamics, and now between loudness and True Peak values.

If you want to know the method I use myself to find the perfect loudness when I’m mastering, and why it works – click here.

[Edit – the original version of this post stated that some encoders can “bake in” clipping, which was misleading. A correctly-implemented encoder won’t do this, and I’ve updated the post to reflect that. However not all encoders are guaranteed to be well-written (!) and many decoders end up reducing the decoded file to fixed bit-depth anyway which does cause this problem. So avoiding high peak levels before encoding is definitely a good idea !]

Loudness Penalty – Live Loudness Preview

 
People are loving the Loudness Penalty website – but some of them have been saying

Who cares what the numbers say ? The important thing is – how does it sound ?

And of course we agree !

Which is why we’ve added a new feature to the site – Live Loudness Preview

To see it in action in less than 30 seconds, check out the video above. But in this post I want to take a little more time to explain why we’re so excited about this new function. (Which, by the way, works on mobile as well – and the whole site is significantly faster this time around)

Firstly though:

DON’T compare the different services

I mean – sure, you can, but what’s the point ? We all know ‘louder is better’ so the chances are YouTube will sound a little better than the others, but that’s not a very valuable conclusion. (The site doesn’t emulate the streaming codecs, just the loudness differences.)

What is really valuable is to use the new Loudness Preview function to compare the song you’re working on with reference tracks on YouTube itself.

Provided you have the volume slider all the way up, you’ll be making a real-world comparison between the reference and your song, almost exactly as it will sound if you actually uploaded it.

And now we get to the really good part…

DO compare alternative versions of your songs

This is where the real power of the site comes into play. Say you’re under pressure from a client to master something louder than you think it needs to be. If you make two versions of the master – one at the louder level and one at your preferred level, you can send both versions to the client and ask them to Preview them using Loudness Penalty. You can even open both versions at the same time in different tabs of your browser.

If you do, chances are you’ll hear one of two things:

  1. There isn’t a big difference – because loudness normalisation ! Once the loudness is matched, there’s no real benefit to making things loud in the first place. If there are genuinely no down-sides, then you can go for the louder version, but keep an ear out in case:
  2. The louder version sounds worse. Sometimes this is subtle, sometimes it really isn’t. Sometimes the less heavily processed version actually sounds louder ! And even if it doesn’t, chances are it will sound more 3D, more open, more spacious – clearer, wider and sweeter.
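
If you’d rather run that comparison offline, you can loudness-match the two versions yourself before listening. Here’s a minimal sketch using the pyloudnorm library – the file names are placeholders, and -14 LUFS is just a convenient common level, not a target:

```python
import soundfile as sf
import pyloudnorm as pyln  # pip install pyloudnorm

def loudness_match(path, target=-14.0):
    data, rate = sf.read(path)
    measured = pyln.Meter(rate).integrated_loudness(data)
    return pyln.normalize.loudness(data, measured, target), rate

# Placeholder names for your preferred master and the louder client version
dynamic, rate = loudness_match("master_dynamic.wav")
loud, _ = loudness_match("master_loud.wav")

sf.write("master_dynamic_matched.wav", dynamic, rate)
sf.write("master_loud_matched.wav", loud, rate)
```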

Spread the word

And that’s why we’re so excited – because this idea is so easy to share. When it was all just numbers, people had to really take the time to understand what was happening, and why it mattered for their music. Now they can hear it for themselves ! And make informed choices about loudness.

(It’s also super-quick & easy to use – like a kind of Lite version of my Perception plugin, almost)

My hope is that eventually the Loudness Penalty site becomes the new standard way to compare new mixes and masters – and if it does, perhaps people will start to hear that louder actually doesn’t sound better, online – and start choosing more balanced dynamics for their music as a result.

If you like the idea too, please share it and help spread the word !
 
 

Don’t push your music over the Loudness Cliff !

I’ve been talking about this image for years. Literally, I’d describe it in almost every conversation, interview or lecture when I talked about loudness. And it always got a great reaction.

But it didn’t exist !

Except in my head.

…until now.

I recently did an interview with Chris Selim over at mixdown.online, and my analogy of the ‘Loudness Cliff’ came up again, with me waving my hands around while describing it as usual.

But this time something was different – because a few days later Kredenz emailed me an idea for the image above, asking “is this the kind of thing you had in mind?”.

And it absolutely was ! So after a few tweaks and additions, here it is – The Loudness Cliff illustration.

Hopefully it speaks for itself, but just in case, the idea is pretty simple:

  • Louder sounds seem better to us, at least to begin with. So everyone wants to sound loud – so far so good.
  • But achieving loudness can be difficult – sometimes it feels like you’re trying to push a rock up a hill. Everyone else is at the top of their own mountain though, so you want to be, too.
  • The trouble is, the closer you get to the top, the harder it gets, and the less improvement in sound you get. And if you go too far – past the danger point – it can actually sound worse.
  • And if you push it even further – you’re over the edge and smashed on the rocks.

Instead, you want to look for the ‘loudness sweet spot’ – the perfect balance of loudness and dynamics, where you get all the benefits of cohesion, consistency and translation – without pushing things too far.

The goal is to be loud enough, but not too loud.

So, enough of the analogies – how do you actually find the loudness ‘sweet spot’ ?

My best advice for that is in this post:

How loud ? The simple solution to optimizing playback volume – online, and everywhere else

And if you want to know whether you’ve got it right or not (for free) try this:

www.loudnesspenalty.com

If your music scores between 0 and -2 for YouTube, you’re probably in good shape!

And if not, there’s plenty of free information here on Production Advice to help you – a great place to start is here.

Don’t push your music over the Loudness Cliff – find the loudness Sweet Spot instead !
 
 
Thanks again to Kredenz for making my hand-waving idea a reality ! You can check out his site here.
 
 

Mastering for Spotify ? NO ! (or: Streaming playback levels are NOT targets)

 
So, most streaming services normalize their audio to around -14 LUFS.

YouTube are slightly louder, iTunes is a couple of dB quieter, but overall -14 is the loudness you should aim for, right ?

WRONG
 

Wait, what ?!

Haven’t I been posting relentlessly about this issue for months (and years), providing blow-by-blow updates on the latest developments and banging on and on about how important it is ?

Well, yes. But that still doesn’t mean the playback levels we’re measuring are targets.
 

Dude. Stop talking crazy and explain yourself !

To start with, TIDAL is the only service actually using LUFS for its loudness normalisation. So even if you did want to optimise your audio’s loudness for a particular streaming service, TIDAL is the only place you’ll get completely reliable results. Spotify use ReplayGain, Apple use their own mysterious Sound Check algorithm, and the others aren’t telling.
 

But – but – why do you keep quoting LUFS figures, then ?

Because we have to measure things somehow, and LUFS is the internationally recognised method of measuring loudness – plus it’s the best, in our experience.

And the numbers are accurate – if you run a loudness meter on Spotify for 30 minutes or more, you will find the overall playback loudness is very close to -14 LUFS, especially for loud material.

But that’s an average value – individual songs may vary up or down by several dB, because ReplayGain gives different results to LUFS. The same applies to YouTube, iTunes and Pandora. So using LUFS as a target just won’t work reliably – as well as being a bad idea.
 

What do you mean, a bad idea ? Why NOT target loudness at specific services ?

Because we don’t need to.

Streaming services measure the loudness and make it more consistent for us – so we don’t have to. Loudness normalization is an opportunity to do what’s best for the music, without having to worry about the need to “fit in” with loudness.

Having said that, there can be an advantage to keeping the streaming services’ playback levels in mind while you’re optimizing the loudness of your music – which is why we created the Loudness Penalty website. Let me explain.
 

Why streaming playback levels DO matter

Imagine you master a song, and test it using the Loudness Penalty site, which tells you it’ll be turned down by 6 dB or more on all the streaming services.

That means you could potentially apply 6 dB less dynamic processing and still have it play back just as loud.

I don’t know about you, but that feels like an opportunity to me ! At the very least I’d want to experiment and see how a less heavily processed version sounded, using the LP scores to hear how it will sound online.

In the most aggressive genres, it might be that you decide to stick with the original version, but in my experience this rarely gives the best results. For me, the sweet spot for loud material is about LP -2 on YouTube – but you may feel differently.

Either way, don’t we owe it to the music to at least try the experiment ?
 

One master to rule them all

So, what am I actually saying ? On the one hand, there’s no point in trying to optimise loudness for streaming services, but on the other there might be an opportunity. I’m contradicting myself, surely ?

No.

It’s true that there’s no real benefit to supplying separate loudness-optimized masters for each streaming service – partly for the reasons explained above. But also in a practical sense, because most aggregators will only accept one file per song anyway, so there’s no easy way to get individual masters uploaded to each service.

But there is a benefit to optimising your music for online streaming in general.
 

Seize the opportunity to create a master that sounds great everywhere

Measure your files using the Loudness Penalty site, and find out how much they’re going to be turned down. Experiment with less aggressive loudness processing, and preview the different versions against each other – and your favourite reference material – using the LP scores to adjust the playback level and see how they’ll sound online.

Knowledge is power – and making real-world comparisons like this will let you find the “sweet spot” – the perfect balance of loudness and dynamics, that best serves the music.

Not the streaming normalisation algorithms, or the wild ‘Loudness War’ goose – the music.

And in the process, even if you think your genre needs that loudness war sound, you might find yourself surprised.

If J Cole can break streaming records and debut at Number 1 in the Billboard chart with a more dynamic master – maybe you can, too.
 
 

Update

I’ve been getting quite a few frustrated comments about this, saying “well how loud should we master things, then ?!”. If that includes you, click here for my best advice.
 
 

Introducing Loudness Penalty

 
The number one question I get asked these days is

How loud will my music be played back online ?

And the answer is always – “it depends”.

Until now.

I’m proud and excited to be able to announce a new website, developed with MeterPlugs, which we’ve designed to answer exactly that question.

Quickly, accurately, and for free.

It’s called Loudness Penalty, and in the video above I show you how to use it, why you would want to, and what the results mean.

Or you can head straight over and check it out yourself, right now – just click here.

I hope you find it useful – and if you like it, please share !
 
 

J Cole WINS – with dynamics !

 
J Cole’s new album KOD just won the Dynamic Range Day Award 2018.

(The award is given every year to a great-sounding, successful album that also has great dynamics)

And it’s the most streamed album in its first week ever, AND it went straight in at Number 1 in the Billboard album charts !

So – remind me – why exactly is a super-loud master supposed to be ‘required’ for success and sales again ?

…right.

And here’s the thing – this is just the latest example in a building trend. More and more rap, R&B and hip-hop artists are taking advantage of the benefits of dynamics in their sound, and people love it.

Let’s start with a low-key example like – oh, say: Drake.

Right, the Drake – the one who regularly holds multiple Top 10 positions in the global streaming charts simultaneously. To be that successful, surely your music has to be ridiculously loud, right ?

Well… no.

Drake’s recent single God’s Plan has 469 million views as I’m writing this, and the integrated loudness measures… -11.7 LUFS. Hardly the -8, -6 or even -4 numbers some people like to tell you are ‘needed’.

Or how about “Process”, by Sampha, also nominated for the DRD Award, and which won the Mercury Prize here in the UK last year ? The album overall measures -10.4 LUFS.

Now don’t get me wrong, both these numbers and albums are still loud – but they’re not “loudness war loud”, in the way so many are.

And that’s the point.

Users don’t care about loudness – they care about music.

Some of the biggest artists in the world are mastering their music with more dynamics – let’s hope everyone else follows suit.

Soon.
 
 

STOP PRESS

I’ll be interviewing Glenn Schick, who mastered KOD for J Cole, on the next episode of The Mastering Show podcast. Subscribe now to make sure you catch it – and, listen to my interviews with previous DRD Award-winners Matt Colton and Bob Ludwig, while you wait !

Humans versus Robot Mastering: Updated

IMPORTANT UPDATE:

I messed up.

In the original post and graphic below, I said that the results were almost certainly influenced by the comments on Facebook. But I had no idea by how much.

Since then, Kenny has run another poll, using a different voting system that allows us to see the way votes are cast over time, and we can see a clear and very strong bias introduced into the results by people’s comments on the Facebook thread.

That means the same thing will have happened in the original poll, and I commented there myself, too – meaning I have to accept that I was part of the problem.

In a nutshell, every time someone posts their preference in a Facebook comment, there’s a corresponding boost in votes for their favourite master. In the second poll, the votes for one of the masters trebled in just a few hours after one of the comments, taking it from second place to a commanding lead.

This new information means we really can’t take the results too seriously. My original title for this post was “Humans versus robots: Humans WIN – by a huge margin”. I should have known better. That conclusion isn’t valid, and I let my passion for dynamics (and humans!) get the better of me.

It doesn’t invalidate the poll completely – after all, not everyone will have been influenced by the comments, and if a particular master prompts people to make comments in the first place and people then agree with it, that also tells us something.

But my headline and conclusion in the original version of this post were over the top, and I’ve decided to edit it. I considered removing it completely, but I think it’s a good example of how confirmation bias can influence us all, even when we think we’re immune to it !

So, with all that being said, here’s the original post, with the strikethrough edits shown as ~~crossed-out text~~ and my comments in [square brackets]:
 
 
People are always asking me what I think of “automated mastering” – services like LANDR and Aria, for example – or “intelligent” mastering plugins like the mastering assistant in Ozone 8.

So I tested them – or at least, LANDR. Once by myself, and once someone else did it – without my knowledge !

And each time, I concluded that while the results weren’t nearly as bad as you might fear, provided you use the conservative settings, they still weren’t as good as what I could do.

However

Both these tests were non-blind. I didn’t cheat and listen to a LANDR master before doing my own masters, but when listening and comparing I always knew which master was which.

And that means I was open to expectation bias – and so are you, when you’re reading or listening to me talk about them.

So maybe our opinions are influenced by that, and if we didn’t know which was which, we would have made different choices. In fact Graham’s test wasn’t loudness-matched, which could also influence the results. The different versions were close, and in fact mine was a little quieter than the others, which should have been a disadvantage in theory – but you know me: it’s not a valid test if it’s not loudness-matched, as far as I’m concerned.

Just recently though, Kenny Gioia did something different.

Kenny’s Test

Kenny created six different masters of the same song – three by humans, three by machines. And not just any humans – one of the masters was by an unknown engineer at Sterling Sound, one was by John Longley, and the third was by none other than Steven Slate. Steven doesn’t claim to be a mastering engineer, but he certainly knows his way around a studio !

For the machines, Kenny asked members of his Facebook group which services to use, resulting in the choices of LANDR, Aria and Ozone 8.

Kenny then set up a poll, and asked people to listen and vote for the masters they liked best.

And here’s where it gets interesting

First, Kenny made the files anonymous, so that no-one could tell which was which – and second, he loudness-matched them, so that listeners wouldn’t be fooled by the ‘loudness deception’.

Which means that provided people didn’t look at the waveforms, there was no way to tell which was which, except by listening.

As far as I know, this is the first time a blind, loudness-matched poll like this has been done.

[Edit – we now know it wasn’t nearly blind enough – see above !]

And the results were ~~fascinating~~ interesting

You can see a summary of how they came out in this infographic, illustrated with analysis of the dynamics of each master using my Dynameter plugin, but I wanted to take a little more time to make some extra comments here. First though, the disclaimer:

We need to remember this wasn’t a scientific test, even though it was loudness-matched and [kind of] blind. People could see how other people were voting, which results in a subtle kind of peer pressure. You can download the files and look at the waveforms, or measure them in other ways, so people might have made decisions based on that, rather than the sound alone. And perhaps most importantly of all, people were commenting and discussing what they heard all the while the poll was running – which results in a distinctly un-subtle form of peer pressure bias !

[I now think this effect was the most important factor for the surprisingly big difference in overall votes]

And, this is just one test, with one song. Kenny’s mix already sounded pretty good, and was very dynamically controlled, so different songs might have given very different results.

BUT

The results are still ~~compelling~~ suggestive. We can’t rule out the possibility that they would have been different if the votes and comments had been hidden [they would !] but I suspect these actually just caused the final scores to be more exaggerated than they would otherwise have been, rather than completely changed.

Here are the highlights:

Humans ~~WIN~~ got the most votes

Even though the results were blind, John’s master got 42% of the overall votes. Not only that, but humans scored a massive 83% of the total votes, securing all three top slots. That’s a pretty convincing victory, even if it’s not entirely unexpected.

[True, but not as impressive as it might seem. And perhaps without the effect of the comments on Facebook, the differences between the different human masters would have been much less obvious.]

Dynamics ~~WIN~~ played an important role

John’s winning master was also the most dynamic. Not only that, but the ~~winning~~ robot master with the most votes was also the most dynamic of the automated masters, although the final result was very tight.

And in fact, the only master to break the trend of “dynamic ~~sounds better~~ got more votes” was the Sterling Sound master. This was made back in 2009, when the loudness wars were in full effect, so it’s not all that surprising it was pushed pretty hard. But again the result is quite dramatic – this Sterling master got seven times more votes than the Aria machine master of similar loudness, which is suggestive of an interesting conclusion: if high loudness is your goal, you’re better off getting it done by a human !

[I now think the results are so biased by the comments that this isn’t a fair conclusion from this poll, although it’s still my opinion.]

Default settings suck

LANDR was the only robot master with decent dynamics, for which I applaud them – but unfortunately the heavy bass EQ of the master came in for a lot of criticism in the comments, which presumably explains why it didn’t score higher.

But elsewhere the results weren’t so positive. Kenny deliberately chose the default settings for all the automated masters, and both Aria and Ozone 8 pushed the loudness to extreme levels by default, which is not only a Bad Thing (in my opinion) but also didn’t achieve a result people liked, either.

Which means I can’t help asking – shouldn’t automated services like LANDR and Aria be offering loudness-matched previews of their output ? Otherwise, isn’t the before & after comparison they offer deeply flawed, and maybe even deliberately misleading ? Hmmm…

ANYWAY, back on topic !

EQ matters

It’s fascinating that dynamics seem to have played such ~~an important~~ [a] part in people’s preferences, given that Kenny’s mix was pretty dense and controlled already – but the other factor is the EQ. Broadly speaking, all the human masters were somewhat brighter than the automated versions. This EQ choice suits the song better, and I suspect this is an important factor in the results – especially since the LUFS loudness matching takes EQ differences into account, as far as possible.

Aria lost

That might seem an unnecessarily blunt conclusion, but I think it’s worth saying because in many other comparisons and conversations I’ve seen, Aria has received great feedback. This may be partly because it’s the only system that uses actual analogue hardware to achieve its results, but I suspect it’s more likely that it simply returns louder masters by default, which sound superficially more impressive.

[Again, I think the comment bias in the results means we can’t draw any conclusions from the details of this poll. Maybe not even for the order of the human masters.

I also want to say that personally I thought the Aria master was the best-sounding of the automated masters overall, even though it was too heavily compressed and limited for my taste.]

That’s why the loudness-matching is so crucial, because that’s not how most people hear songs for the first time. The files in this test were balanced exactly as they would be if they were uploaded as single songs to TIDAL or Pandora, and in my experience you’d get very similar results on YouTube, Spotify and Apple Radio.

So this is a great real-world representation of how most people will hear songs for the first time. CD sales are falling day on day, and the vast majority of music discovery takes place online. If you want your music to stand out and make a great impression, you need it to work well when it’s loudness-matched. And that means mixing and mastering in the “loudness sweet spot” – with balanced dynamics. To find out the strategy I recommend to achieve this, click here.

Update

Several people have strongly criticised Kenny’s decision to use default settings for the automated mastering services, saying that the humans were told not to master for loudness, so the robots should have been “told” the same thing.

That’s reasonable, and Kenny says he’ll run a new test to address this factor, but I disagree. In my opinion it wouldn’t have significantly changed the outcome of this poll. Here’s why:

  • Two of the human masters were “loud” anyway – in Sterling’s case because it was done years ago, and in Steven’s because he felt it sounded best that way, presumably. Despite this, people preferred them to the similarly loud automated masters, even though they were less dynamic.
  • LANDR ended up pretty dynamic anyway, but the EQ wasn’t right.
  • The settings Kenny “should” have apparently used for Aria are labelled “classical and light acoustic” (E) and “for very dynamic mixes” (A) in the Help text on the site. This song wasn’t either of those – it’s a heavily compressed rock mix, so Kenny’s choice was reasonable, in my opinion.
  • Finally “B” is Aria’s default setting – it includes two other presets that are even louder.

So once again – no, this wasn’t a perfect test – but in my opinion the possibility for people to be influenced by other people’s votes and comments is a much more significant criticism than the presets used for the online services.

[And now I know this was this case to a much greater extent than I expected]

Conclusion

At the end of the day, tests like this are just a bit of fun, really. To get a truly definitive answer to the question of which masters people prefer, we would need a truly blind poll, without comments, and multiple tests using many different songs in many different genres, with many more people listening.

But for now, this is the best we have and I’m calling it:

Humans ~~WIN~~ did really well in this poll. Just as ~~they should~~ I want them to !

More info

I deliberately haven’t revealed which master is which in the poll here, in case you want to try the test for yourself. To download the files, click here. To see the poll and join the discussion, click here. (You’ll need to join Kenny’s Facebook group first, to get access.)

And to hear Kenny and me discuss the whole project in even more detail, you might like to listen to the latest episode of my Mastering Show podcast. We also reveal exactly which master is which, and I give my blind comments on the different masters, plus predictions about which is which.

If you’d like to take a listen, click here.

Humans versus Robot Mastering: Updated is a post from Ian Shepherd's: Production Advice Subscribe to the newsletter for great content from the archives, special offers and a free interview - for more information, click here

TIDAL upgrade their loudness normalization – and enable it by default http://productionadvice.co.uk/tidal-normalization-upgrade/ Tue, 14 Nov 2017 15:13:50 +0000 http://productionadvice.co.uk/?p=9295


Developments in loudness normalization are coming thick and fast these days – and TIDAL just raised the bar.

Quality has always been one of the major selling-points of TIDAL’s streaming service – it’s still one of the few places where lossless streaming is available. In keeping with that, they’ve wanted to enable normalization by default in their players for some time, so that we aren’t blasted by sudden changes in level – a major source of user complaints.

But there’s a problem…

…and it also relates to quality.

Most normalization right now is done on a track-by-track basis, meaning all songs are played back with similar loudness. This seems to make sense for shuffle or playlist listening, but it doesn’t work for albums, where it changes the artistic intent.

You spend days, weeks, months crafting the perfect balance for your music, including the level from song to song – why would you want a computer changing that ? Research shows that only 2% of albums in TIDAL’s catalogue have songs that are all the same loudness, even in the current ‘loudness war’ era. So messing with that balance is something TIDAL really want to avoid.

The alternative

The solution to this challenge seems straightforward, and it’s called Album Normalization. Instead of making all songs play with the same loudness, you measure the overall loudness of a whole album, and adjust all the songs by the same amount. The overall level is managed, to prevent “blasting” and improve the user experience, but the artistic intent is preserved.
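To make the difference concrete, here’s a toy sketch of the two approaches, reduced to just the gain calculation. The LUFS values are invented for illustration, and in practice an album’s loudness would be measured as one integrated value across all the songs, not per track:

TARGET = -14.0  # assumed playback reference level

album = {"Song 1": -9.0, "Song 2": -13.5, "Song 3": -16.0}  # made-up loudness, in LUFS

# Track mode: every song gets its own gain, so they all land at the target -
# and the song-to-song balance the mastering engineer chose is gone
track_gains = {title: TARGET - lufs for title, lufs in album.items()}
# -> Song 1: -5.0 dB, Song 2: -0.5 dB, Song 3: +2.0 dB

# Album mode: one gain for the whole album, so the relative levels
# between the songs stay exactly as they were mastered
album_loudness = -11.0                 # measured across the whole album, say
album_gain = TARGET - album_loudness   # -3.0 dB, applied to every song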

Simple, right ?

Well… not necessarily. As usual, the devil is in the details. How does the playback software detect the difference between Album and Shuffle mode ? What should happen in Playlist mode ? And what happens when you switch between them ? If the user starts listening to a song in Shuffle mode at “Track” loudness, but then chooses to listen to the rest of the album, the next track would have to be played at “Album” loudness, which breaks the loudness sequence… Apple have had album-style normalization for some time, but it still has some rough edges and bugs, especially on mobile.

And even at a more basic level, users want things to be simple. The more options, the more potential for confusion. Spotify’s normalization has been in place for years, but many people still aren’t clear on exactly how it works.

TIDAL’s research

TIDAL’s approach to this challenge was refreshingly simple – they asked an expert to research the best solution. That expert was Eelco Grimm, one of the original architects of the loudness unit measurement system, and a fellow member of the Music Loudness Alliance.

Eelco’s research was exhaustive and fascinating – you can hear all about it in my interview with him on the latest episode of The Mastering Show podcast in the player above, or read his findings in full on his website.

But here are the highlights:

Users prefer Album Normalization – EVEN in shuffle mode

This is the big one. Eelco analysed TIDAL’s database of over 4.2 million albums (!) and found examples with the biggest difference in loudness between the loudest and quietest songs. These are the albums whose dynamic structure will be changed most significantly by Track normalization, but would also presumably sound the most uneven when listened to in Shuffle mode.

Eelco built two randomly shuffled playlists containing examples of these loud and soft songs, taken from 12 albums, with 7-10 dB of difference between the loud and soft examples. He sent the playlists to 38 test subjects, who listened blind and reported back on which one they preferred.

I was one of those test subjects, and what I heard surprised me. The difference between the playlists was easy to hear. Album mode worked pretty well, but with Track Normalization the songs didn’t sound equally loud ! Most would be OK, but then you’d suddenly hit a song that was supposed to sound “loud” but felt too quiet, or a “quiet” song that sounded too loud. Album Normalization sounded better to me – more natural, more effective, more satisfying – even in shuffle mode.

And it wasn’t just me – 71% of the test subjects voted blind for Album Normalization, with a further 10% saying they would prefer this method by default. That’s over 80% of people preferring Album Normalization, all the time. Even when listening to Playlists, or with Shuffle enabled.

And with the benefit of hindsight, it’s not hard to see why. These albums were all mastered with care, meaning the relative levels of the songs worked on a musical and emotional level. If they worked in the context of the original album, why wouldn’t they work in shuffle as well, once all the albums were playing at a similar loudness ?

That leads us to another interesting finding, though.

Normalizing to the loudest song on an album sounds better than using the average loudness

Apple and Spotify both use the average loudness of each album for their Album Normalization, but Eelco recommended that TIDAL normalize to the loudest song of each album instead. Again, the reasoning behind this is straightforward.

Imagine an album with many soft songs and just one loud song, in comparison to one where all the songs are loud. If the overall loudness of these albums is matched, the loudest song on the album with “mostly quiet” songs will end up playing louder than the songs on the “all loud” album ! This doesn’t work artistically, and also opens the door for people to “game” the system and try to get some songs louder than most others. In contrast, matching the loudest songs on each album and scaling everything else by the same amount plugs this loophole, and keeps the listening experience consistent for the user.

(In fact, it’s exactly the strategy I use myself, when making loudness decisions in mastering, too.)
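Here’s the same reasoning as a toy calculation – again the LUFS values are invented, and taking max() over per-song measurements is just a stand-in for whatever TIDAL actually do internally:

TARGET = -14.0

mostly_quiet = [-9.0, -20.0, -21.0, -22.0]   # one loud song, the rest quiet
all_loud     = [-9.0, -9.5, -9.0, -9.5]      # a consistently loud album

def gain_from_average(tracks):
    # Apple/Spotify-style: match the albums' average loudness
    return TARGET - sum(tracks) / len(tracks)

def gain_from_loudest(tracks):
    # Eelco's recommendation: match the albums' loudest songs
    return TARGET - max(tracks)

# Average method: the "mostly quiet" album gets +4.0 dB, so its one loud
# song plays at -5.0 LUFS - louder than anything on the "all loud" album,
# which only gets -4.75 dB.
# Loudest-song method: both albums get -5.0 dB, both loudest songs land
# at -14 LUFS, and there's nothing left to "game".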

There were plenty of other interesting findings in Eelco’s research, too – we go into them in the podcast and I recommend you take a listen, even if you’re not that interested in normalization. But right now I want to move on to some…

Great news

It happens all the time: The Company has a problem. The Company commissions research. The research comes back, and tells The Company something unexpected, or unwelcome. The Company ignores the research.

But not TIDAL. Not only did they accept the findings of Eelco’s research in full, they implemented his recommendations. And we learned yesterday that their new loudness normalization method is live now – by default, in every new install of their player application on iOS and Android devices. All the time – even in Shuffle mode – and they’re working on bringing the same system to their desktop application, too.

And that’s huge. It means that among the major streaming services, Apple is now the only one that offers normalization but doesn’t enable it by default – while SoundCloud and Google Play don’t offer normalization at all yet.

And not only that, but it’s a significant upgrade compared to the normalization used everywhere else. By using the “loudest song” method of Album Normalization to balance albums against each other, TIDAL have ensured not only that their normalization can’t be “gamed” and that the artists’ intentions are preserved, but also that their overall loudness level will comply with the AES streaming loudness recommendations.

So what ?

The momentum is building all the time. We saw the most recent sign that streaming services are taking normalization seriously when Spotify reduced their playback reference level earlier this year, to be more in line with Apple, Pandora and YouTube – and I’m confident the other services will follow TIDAL’s lead on these improvements, too.

After all, it’s a win-win. Using Album Normalization to the loudest song (@ -14 LUFS) gives a better user experience, is simpler and easier to understand, and is preferred by over 80% of users ! What’s not to like ?

These changes are simple, but profound. Most importantly, they overcome a major (and real) objection to normalization in general – that it disturbs the delicate balance between songs. I’ve often heard people say “I don’t want the loudness of my songs changed”, and now it won’t be – except to keep a consistent maximum playback level from album to album.

All the streaming services care deeply about music, and high quality – despite the cynicism I sometimes see – and I’m confident they will all adopt Eelco’s recommendations in the near future.

And personally, I can’t wait.

TIDAL upgrade their loudness normalization – and enable it by default is a post from Ian Shepherd's: Production Advice Subscribe to the newsletter for great content from the archives, special offers and a free interview - for more information, click here
