Production Advice – http://productionadvice.co.uk – make your music sound great
Fri, 22 Jun 2018 10:43:55 +0000

Mastering for Spotify ? NO ! (or: Streaming playback levels are NOT targets)
http://productionadvice.co.uk/no-lufs-targets/
Mon, 04 Jun 2018 13:38:37 +0000

Mastering for Spotify ? NO ! (or: Streaming playback levels are NOT targets) is a post from Ian Shepherd's Production Advice. Subscribe to the newsletter for great content from the archives, special offers and a free interview - for more information, click here.

 
So, most streaming services normalize their audio to around -14 LUFS.

YouTube are slightly louder, iTunes is a couple of dB quieter, but overall -14 is the loudness you should aim for, right ?

WRONG
 

Wait, what ?!

Haven’t I been posting relentlessly about this issue for months (and years), providing blow-by-blow updates on the latest developments and banging on and on about how important it is ?

Well, yes. But that still doesn’t mean the playback levels we’re measuring are targets.
 

Dude. Stop talking crazy and explain yourself !

To start with, TIDAL is the only service actually using LUFS for its loudness normalisation. So even if you did want to optimise your audio’s loudness for a particular streaming service, TIDAL is the only place you’ll get completely reliable results. Spotify use ReplayGain, Apple use their own mysterious Sound Check algorithm, and the others aren’t telling.
 

But – but – why do you keep quoting LUFS figures, then ?

Because we have to measure things somehow, and LUFS is the internationally recognised method of measuring loudness – plus it’s the best, in our experience.

And the numbers are accurate – if you run a loudness meter on Spotify for 30 minutes or more, you will find the overall playback loudness is very close to -14 LUFS, especially for loud material.

But that’s an average value – individual songs may vary up or down by several dB, because ReplayGain gives different results to LUFS. The same applies to YouTube, iTunes and Pandora. So using LUFS as a target just won’t work reliably – as well as being a bad idea.
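To make that concrete, here’s a minimal sketch (in Python, with hypothetical numbers) of why a -14 LUFS *average* doesn’t make -14 a per-track target: the gain a service applies depends on its own measurement, which can disagree with LUFS by a few dB for any individual song.

```python
# Hypothetical illustration: streaming playback gain relative to a
# -14 LUFS reference. The offset models the per-track disagreement
# between a service's own algorithm (ReplayGain, Sound Check...) and LUFS.
REFERENCE_LUFS = -14.0

def playback_gain_db(track_lufs, service_offset_db=0.0):
    """Gain (dB) a service might apply to reach its reference level."""
    return (REFERENCE_LUFS - track_lufs) + service_offset_db

# A loud master measuring -8 LUFS is turned down about 6 dB on average...
print(playback_gain_db(-8.0))        # -6.0
# ...but an individual track can land a couple of dB either side:
print(playback_gain_db(-8.0, 1.5))   # -4.5
print(playback_gain_db(-8.0, -2.0))  # -8.0
```

So the average holds over many songs, but any single track can be turned down more or less than the LUFS figure predicts – which is exactly why chasing a specific number is unreliable.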
 

What do you mean, a bad idea ? Why NOT target loudness at specific services ?

Because we don’t need to.

Streaming services measure the loudness and make it more consistent for us – so we don’t have to. Loudness normalization is an opportunity to do what’s best for the music, without having to worry about the need to “fit in” with loudness.

Having said that, there can be an advantage to keeping the streaming services’ playback levels in mind while you’re optimizing the loudness of your music – which is why we created the Loudness Penalty website. Let me explain.
 

Why streaming playback levels DO matter

Imagine you master a song, and test it using the Loudness Penalty site, which tells you it’ll be turned down by 6 dB or more on all the streaming services.

That means you could potentially apply 6 dB less dynamic processing and still have it play back just as loud.
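In decibel terms, the trade-off works out like this (a sketch with made-up figures, not the Loudness Penalty site’s actual code):

```python
def db_to_linear(db):
    """Convert a dB gain change to a linear amplitude factor."""
    return 10 ** (db / 20.0)

# A Loudness Penalty of -6 dB means playback amplitude is roughly halved:
print(round(db_to_linear(-6.0), 3))  # 0.501

# So a master limited 6 dB less hot still plays back at the same level -
# the service simply turns it down by 6 dB less:
hot_master, gentle_master = -8.0, -14.0     # hypothetical LUFS values
penalty_hot, penalty_gentle = -6.0, 0.0     # how much each gets turned down
assert hot_master + penalty_hot == gentle_master + penalty_gentle
```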

I don’t know about you, but that feels like an opportunity to me ! At the very least I’d want to experiment and see how a less heavily processed version sounded, using the LP scores to hear how it will sound online.

In the most aggressive genres, it might be that you decide to stick with the original version, but in my experience this rarely gives the best results. For me, the sweet spot for loud material is about LP -2 on YouTube – but you may feel differently.

Either way, don’t we owe it to the music to at least try the experiment ?
 

One master to rule them all

So, what am I actually saying ? On the one hand, there’s no point in trying to optimise loudness for streaming services, but on the other there might be an opportunity. I’m contradicting myself, surely ?

No.

It’s true that there’s no real benefit to supplying separate loudness-optimized masters for each streaming service – partly for the reasons explained above, but also in a practical sense, because most aggregators will only accept one file per song anyway, so there’s no easy way to get individual masters uploaded to each service.

But there is a benefit to optimising your music for online streaming in general.
 

Seize the opportunity to create a master that sounds great everywhere

Measure your files using the Loudness Penalty site, and find out how much they’re going to be turned down. Experiment with less aggressive loudness processing, and preview the different versions against each other – and your favourite reference material – using the LP scores to adjust the playback level and see how they’ll sound online.

Knowledge is power – and making real-world comparisons like this will let you find the “sweet spot” – the perfect balance of loudness and dynamics, that best serves the music.

Not the streaming normalisation algorithms, or the wild ‘Loudness War’ goose – the music.

And in the process, even if you think your genre needs that loudness war sound, you might find yourself surprised.

If J Cole can break streaming records and debut at Number 1 in the Billboard chart with a more dynamic master – maybe you can, too.
 
 

Update

I’ve been getting quite a few frustrated comments about this, saying “well how loud should we master things, then ?!”. If that includes you, click here for my best advice.
 
 

Introducing Loudness Penalty
http://productionadvice.co.uk/loudness-penalty/
Fri, 18 May 2018 11:50:45 +0000


 
The number one question I get asked these days is

How loud will my music be played back online ?

And the answer is always – “it depends”.

Until now.

I’m proud and excited to be able to announce a new website, developed with MeterPlugs, which we’ve designed to answer exactly that question.

Quickly, accurately, and for free.

It’s called Loudness Penalty, and in the video above I show you how to use it, why you would want to, and what the results mean.

Or you can head straight over and check it out yourself, right now – just click here.

I hope you find it useful – and if you like it, please share !
 
 

J Cole WINS – with dynamics !
http://productionadvice.co.uk/j-cole-wins-with-dynamics/
Tue, 01 May 2018 14:02:27 +0000


 
J Cole’s new album KOD just won the Dynamic Range Day Award 2018.

(The award is given every year to a great-sounding, successful album that also has great dynamics)

And it’s the most streamed album in its first week ever, AND it went straight in at Number 1 in the Billboard album charts !

So – remind me – why exactly is a super-loud master supposed to be ‘required’ for success and sales again ?

…right.

And here’s the thing – this is just the latest example in a building trend. More and more rap, R&B and hip-hop artists are taking advantage of the benefits of dynamics in their sound, and people love it.

Let’s start with a low-key example like – oh, say: Drake.

Right, the Drake – the one who regularly holds multiple Top 10 positions in the global streaming charts simultaneously. To be that successful, surely your music has to be ridiculously loud, right ?

Well… no.

Drake’s recent single God’s Plan has 469 million views as I’m writing this, and the integrated loudness measures… -11.7 LUFS. Hardly the -8, -6 or even -4 numbers some people like to tell you are ‘needed’.

Or how about “Process”, by Sampha, also nominated for the DRD Award, and which won the Mercury Prize here in the UK last year ? The album overall measures -10.4 LUFS.

Now don’t get me wrong, both these numbers and albums are still loud – but they’re not “loudness war loud”, in the way so many are.

And that’s the point.

Users don’t care about loudness – they care about music.

Some of the biggest artists in the world are mastering their music with more dynamics – let’s hope everyone else follows suit.

Soon.
 
 

STOP PRESS

I’ll be interviewing Glenn Schick, who mastered KOD for J Cole, on the next episode of The Mastering Show podcast. Subscribe now to make sure you catch it – and listen to my interviews with previous DRD Award-winners Matt Colton and Bob Ludwig while you wait !

Humans versus Robot Mastering: Updated
http://productionadvice.co.uk/humans-versus-robots/
Fri, 01 Dec 2017 14:54:08 +0000

IMPORTANT UPDATE:

I messed up.

In the original post and graphic below, I said that the results were almost certainly influenced by the comments on Facebook. But I had no idea by how much.

Since then, Kenny has run another poll, using a different voting system that allows us to see the way votes are cast over time, and we can see a clear and very strong bias introduced into the results by people’s comments on the Facebook thread.

That means the same thing will have happened in the original poll, and I commented there myself, too – meaning I have to accept that I was part of the problem.

In a nutshell, every time someone posts their preference in a Facebook comment, there’s a corresponding boost in votes for their favourite master. In the second poll, the votes for one of the masters trebled in just a few hours after one of the comments, taking it from second place to a commanding lead.

This new information means we really can’t take the results too seriously. My original title for this post was “Humans versus robots: Humans WIN – by a huge margin”. I should have known better. That conclusion isn’t valid, and I let my passion for dynamics (and humans!) get the better of me.

It doesn’t invalidate the poll completely – after all, not everyone will have been influenced by the comments, and if a particular master prompts people to make comments in the first place and people then agree with it, that also tells us something.

But my headline and conclusion in the original version of this post were over the top, and I’ve decided to edit it. I considered removing it completely, but I think it’s a good example of how confirmation bias can influence us all, even when we think we’re immune to it !

So, with all that being said, here’s the original post, with visible strikethrough and comments in italics:
 
 
People are always asking me what I think of “automated mastering” – services like LANDR and Aria, for example – or “intelligent” mastering plugins like the mastering assistant in Ozone 8.

So I tested them – or at least, LANDR. Once by myself, and once someone else did it – without my knowledge !

And each time, I concluded that while the results weren’t nearly as bad as you might fear, provided you use the conservative settings, they still weren’t as good as what I could do.

However

Both these tests were non-blind. I didn’t cheat and listen to a LANDR master before doing my own masters, but when listening and comparing I always knew which master was which.

And that means I was open to expectation bias – and so are you, when you’re reading or listening to me talk about them.

So maybe our opinions are influenced by that, and if we didn’t know which was which, we would have made different choices. In fact Graham’s test wasn’t loudness-matched, which could also influence the results. The different versions were close, and in fact mine was a little quieter than the others, which should have been a disadvantage in theory – but you know me: it’s not a valid test if it’s not loudness-matched, as far as I’m concerned.

Just recently though, Kenny Gioia did something different.

Kenny’s Test

Kenny created six different masters of the same song – three by humans, three by machines. And not just any humans – one of the masters was by an unknown engineer at Sterling Sound, one was by John Longley, and the third was by none other than Steven Slate. Steven doesn’t claim to be a mastering engineer, but he certainly knows his way around a studio !

For the machines, Kenny asked members of his Facebook group which services to use, resulting in the choices of LANDR, Aria and Ozone 8.

Kenny then set up a poll, and asked people to listen and vote for the masters they liked best.

And here’s where it gets interesting

First, Kenny made the files anonymous, so that no-one could tell which was which – and second, he loudness-matched them, so that listeners wouldn’t be fooled by the ‘loudness deception‘.

Which means that provided people didn’t look at the waveforms, there was no way to tell which was which, except by listening.
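Loudness-matching like this is conceptually simple – scale every file so it measures the same integrated loudness. A rough sketch of the idea, with hypothetical LUFS readings (a real workflow would measure them with an ITU-R BS.1770 meter):

```python
# Sketch: compute the linear gain that brings each master to a common
# loudness target before a blind comparison. LUFS values are hypothetical.
TARGET_LUFS = -16.0

def match_gain(measured_lufs, target_lufs=TARGET_LUFS):
    """Linear amplitude factor moving a file from its measured loudness to the target."""
    return 10 ** ((target_lufs - measured_lufs) / 20.0)

masters = {"A": -8.2, "B": -11.7, "C": -16.0}
gains = {name: match_gain(lufs) for name, lufs in masters.items()}
# The loudest masters get turned down the most, so nobody can "win" on volume.
```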

As far as I know, this is the first time a blind, loudness-matched poll like this has been done.

[Edit – we now know it wasn’t nearly blind enough – see above !]

And the results were ~~fascinating~~ interesting

You can see a summary of how they came out in this infographic, illustrated with analysis of the dynamics of each master using my Dynameter plugin, but I wanted to take a little more time to make some extra comments here. First though, the disclaimer:

We need to remember this wasn’t a scientific test, even though it was loudness-matched and [kind of] blind. People could see how other people were voting, which results in a subtle kind of peer pressure. You can download the files and look at the waveforms, or measure them in other ways, so people might have made decisions based on that, rather than the sound alone. And perhaps most importantly of all, people were commenting and discussing what they heard all the while the poll was running – which results in a distinctly un-subtle form of peer pressure bias !

[I now think this effect was the most important factor for the surprisingly big difference in overall votes]

And, this is just one test, with one song. Kenny’s mix already sounded pretty good, and was very dynamically controlled, so different songs might have given very different results.

BUT

The results are still ~~compelling~~ suggestive. We can’t rule out the possibility that they would have been different if the votes and comments had been hidden [they would !] but I suspect these actually just caused the final scores to be more exaggerated than they would otherwise have been, rather than completely changed.

Here are the highlights:

Humans ~~WIN~~ got the most votes

Even though the results were blind, John’s master got 42% of the overall votes. Not only that, but humans scored a massive 83% of the total votes, securing all three top slots. That’s a pretty convincing victory, even if it’s not entirely unexpected.

[True, but not as impressive as it might seem. And perhaps without the effect of the comments on Facebook, the differences between the different human masters would have been much less obvious.]

Dynamics ~~WIN~~ played an important role

John’s winning master was also the most dynamic. Not only that, but the ~~winning~~ robot master with the most votes was also the most dynamic of the automated masters, although the final result was very tight.

And in fact, the only master to break the trend of “dynamic ~~sounds better~~ got more votes” was the Sterling Sound master. This was made back in 2009, when the loudness wars were in full effect, so it’s not all that surprising it was pushed pretty hard. But again, the result is quite dramatic – this Sterling master got seven times more votes than the Aria machine master of similar loudness, which is suggestive of an interesting conclusion: if high loudness is your goal, you’re better off getting it done by a human !

[I now think the results are so biased by the comments that this isn’t a fair conclusion from this poll, although it’s still my opinion.]

Default settings suck

LANDR was the only robot master with decent dynamics, for which I applaud them – but unfortunately the heavy bass EQ of the master came in for a lot of criticism in the comments, which presumably explains why it didn’t score higher.

But elsewhere the results weren’t so positive. Kenny deliberately chose the default settings for all the automated masters, and both Aria and Ozone 8 pushed the loudness to extreme levels by default, which is not only a Bad Thing (in my opinion) but also didn’t achieve a result people liked, either.

Which means I can’t help asking – shouldn’t automated services like LANDR and Aria be offering loudness-matched previews of their output ? Otherwise, isn’t the before & after comparison they offer deeply flawed, and maybe even deliberately misleading ? Hmmm…

ANYWAY, back on topic !

EQ matters

It’s fascinating that dynamics seem to have played such ~~an important~~ [a] part in people’s preferences, given that Kenny’s mix was pretty dense and controlled already – but the other factor is the EQ. Broadly speaking, all the human masters were somewhat brighter than the automated versions. This EQ choice suits the song better, and I suspect this is an important factor in the results – especially since the LUFS loudness matching takes EQ differences into account, as far as possible.

Aria lost

That might seem an unnecessarily blunt conclusion, but I think it’s worth saying because in many other comparisons and conversations I’ve seen, Aria has received great feedback. This may be partly because it’s the only system that uses actual analogue hardware to achieve its results, but I suspect it’s more likely that it simply returns louder masters by default, which sound superficially more impressive.

[Again, I think the comment bias in the results means we can’t draw any conclusions from the details of this poll. Maybe not even for the order of the human masters.

I also want to say that personally I thought the Aria master was the best-sounding of the automated masters overall, even though it was too heavily compressed and limited for my taste.]

That’s why the loudness-matching is so crucial – because un-normalized, louder-sounds-better playback is not how most people hear songs for the first time. The files in this test were balanced exactly as they would be if they were uploaded as single songs to TIDAL or Pandora, and in my experience you’d get very similar results on YouTube, Spotify and Apple Radio.

So this is a great real-world representation of how most people will hear songs for the first time. CD sales are falling day on day, and the vast majority of music discovery takes place online. If you want your music to stand out and make a great impression, you need it to work well when it’s loudness-matched. And that means mixing and mastering in the “loudness sweet spot” – with balanced dynamics. To find out the strategy I recommend to achieve this, click here.

Update

Several people have strongly criticised Kenny’s decision to use default settings for the automated mastering services, saying that the humans were told not to master for loudness, so the robots should have been “told” the same thing.

That’s reasonable, and Kenny says he’ll run a new test to address this factor, but I disagree. In my opinion it wouldn’t have significantly changed the outcome of this poll. Here’s why:

  • Two of the human masters were “loud” anyway – in Sterling’s case because it was done years ago, and in Steven’s presumably because he felt it sounded best that way. Even so, people preferred them to the similarly loud automated masters, despite the reduced dynamics.
  • LANDR ended up pretty dynamic anyway, but the EQ wasn’t right.
  • The settings Kenny “should” have apparently used for Aria are labelled “classical and light acoustic” (E) and “for very dynamic mixes” (A) in the Help text on the site. This song wasn’t either of those – it’s a heavily compressed rock mix, so Kenny’s choice was reasonable, in my opinion.
  • Finally “B” is Aria’s default setting – it includes two other presets that are even louder.

So once again – no, this wasn’t a perfect test – but in my opinion the possibility for people to be influenced by other people’s votes and comments is a much more significant criticism than the presets used for the online services.

[And now I know this was the case to a much greater extent than I expected]

Conclusion

At the end of the day, tests like this are just a bit of fun, really. To get a truly definitive answer to the question of which masters people prefer, we would need a truly blind poll, without comments, and multiple tests using many different songs in many different genres, with many more people listening.

But for now, this is the best we have and I’m calling it:

Humans ~~WIN~~ did really well in this poll. Just as ~~they should~~ I want them to !

More info

I deliberately haven’t revealed which master is which in the poll here, in case you want to try the test for yourself. To download the files, click here. To see the poll and join the discussion, click here. (You’ll need to join Kenny’s Facebook group first, to get access.)

And to hear Kenny and me discuss the whole project in even more detail, you might like to listen to the latest episode of my Mastering Show podcast. We also reveal exactly which master is which, and I give my blind comments on the different masters, plus predictions about which is which.

If you’d like to take a listen, click here.

TIDAL upgrade their loudness normalization – and enable it by default
http://productionadvice.co.uk/tidal-normalization-upgrade/
Tue, 14 Nov 2017 15:13:50 +0000


Developments in loudness normalization are coming thick and fast, these days – and TIDAL just raised the bar.

Quality has always been one of the major selling-points of TIDAL’s streaming service – it’s still one of the few places where lossless streaming is available. And they’ve been wanting to enable normalization by default in their players for some time, so that listeners won’t be blasted by sudden changes in level – a major source of user complaints.
 

But there’s a problem…

…and it also relates to quality.

Most normalization right now is done on a track-by-track basis, meaning all songs are played back with similar loudness. This seems to make sense for shuffle or playlist listening, but it doesn’t work for albums, where it changes the artistic intent.

You spend days, weeks, months crafting the perfect balance for your music, including from song to song – why would you want a computer changing that ? Research shows that only 2% of albums in TIDAL’s catalogue have songs that are all the same loudness, even in the current ‘loudness war’ era. So messing with that balance is something TIDAL really want to avoid.
 

The alternative

The solution to this challenge seems straightforward, and it’s called Album Normalization. Instead of making all songs play with the same loudness, you measure the overall loudness of a whole album, and adjust all the songs by the same amount. The overall level is managed, to prevent “blasting” and improve the user experience, but the artistic intent is preserved.
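The contrast between the two modes can be sketched in a few lines of Python. The per-track LUFS figures are invented, and a real implementation would measure album loudness with a BS.1770-style integrated measurement rather than the crude average used here:

```python
REFERENCE = -14.0
album = {"intro": -18.0, "single": -9.0, "ballad": -16.0}  # hypothetical LUFS

# Track normalization: each song gets its own gain, so every song plays
# at the same loudness - the album's intended contrasts are flattened.
track_gains = {title: REFERENCE - lufs for title, lufs in album.items()}

# Album normalization: one shared gain for the whole record, so the
# song-to-song balance the mastering engineer chose is preserved.
album_loudness = sum(album.values()) / len(album)  # stand-in for a true integrated measurement
album_gain = REFERENCE - album_loudness

# In album mode the 9 dB gap between "intro" and "single" survives;
# in track mode it disappears completely.
```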

Simple, right ?

Well… not necessarily. As usual, the devil is in the details. How does the playback software detect the difference between Album or Shuffle mode ? What should happen in Playlist mode ? And what happens when you switch between them ? If the user starts listening to a song in Shuffle mode with “Track” loudness, but then chooses to listen to the rest of the album, the next track would have to be played at “Album” loudness, which breaks the loudness sequence… Apple have had album-style normalization for some time, but it still has some rough edges and bugs, especially on mobile.

And even at a more basic level, users want things to be simple. The more options, the more potential for confusion. Spotify’s normalization has been in place for years, but many people still aren’t clear on exactly how it works.
 

TIDAL’s research

TIDAL’s approach to this challenge was refreshingly simple – they asked an expert to research the best solution. That expert was Eelco Grimm, one of the original architects of the loudness unit measurement system, and a fellow member of the Music Loudness Alliance.

Eelco’s research was exhaustive and fascinating – you can hear all about it in my interview with him on the latest episode of The Mastering Show podcast in the player above, or read his findings in full on his website.

But here are the highlights:
 

Users prefer Album Normalization – EVEN in shuffle mode

This is the big one. Eelco analysed TIDAL’s database of over 4.2 million albums (!) and found examples with the biggest difference in loudness between the loudest and quietest songs. These are the albums whose dynamic structure will be changed most significantly by Track normalization, but would also presumably sound the most uneven when listened to in Shuffle mode.

Eelco built two random shuffled playlists, containing examples of these loud & soft songs, from 12 albums, with 7-10 dB of difference between the loud and soft examples. And he sent the playlists to 38 test subjects, who listened to them blind, and reported back on which ones they preferred.

I was one of those test subjects, and what I heard surprised me. The difference between the playlists was easy to hear. Album mode worked pretty well, but with Track Normalization, the songs didn’t sound equally loud ! Most would be OK, but then you’d suddenly hear a song that was supposed to sound “loud” but felt too quiet, or a “quiet” song that sounded too loud. Album Normalization sounded better to me – more natural, more effective, more satisfying – even in shuffle mode.

And it wasn’t just me – 71% of the test subjects voted blind for Album Normalization, with a further 10% saying they would prefer this method by default. That’s over 80% of people preferring Album Normalization, all the time. Even when listening to Playlists, or with Shuffle enabled.

And with the benefit of hindsight, it’s not hard to see why. These albums were all mastered with care, meaning the relative levels of the songs worked on a musical and emotional level. If they worked in the context of the original album, why wouldn’t they work in shuffle as well, once all the albums were playing at a similar loudness ?

That leads us to another interesting finding, though.
 

Normalizing to the loudest song on an album sounds better than using the average loudness

Apple and Spotify both use the average loudness of each album for their Album Normalization, but Eelco recommended that TIDAL normalize to the loudest song of each album instead. Again, the reasoning behind this is straightforward.

Imagine an album with many soft songs and just one loud song, in comparison to one where all the songs are loud. If the overall loudness of these albums is matched, the loudest song on the album with “mostly quiet” songs will end up playing louder than the songs on the “all loud” album ! This doesn’t work artistically, and also opens the door for people to “game” the system and try to get some songs louder than most others. In contrast, matching the loudest songs on each album and scaling everything else by the same amount plugs this loophole, and keeps the listening experience consistent for the user.

(In fact, it’s exactly the strategy I use myself, when making loudness decisions in mastering, too.)
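The “loudest song” rule above can be sketched directly (Python, with invented loudness figures):

```python
# Sketch of the "loudest song" rule: derive ONE gain per album from its
# loudest track, then apply that gain to every track. LUFS values invented.
REFERENCE = -14.0

def album_gain(track_lufs_values):
    """Gain applied to all tracks, anchored to the album's loudest song."""
    return REFERENCE - max(track_lufs_values)

mostly_quiet = [-20.0, -19.0, -9.0]   # many soft songs, one loud single
all_loud     = [-9.5, -9.0, -9.2]

# Both albums' loudest songs end up at the same playback level, so the
# lone loud single can't play louder than anything on the all-loud album.
print(album_gain(mostly_quiet))  # -5.0
print(album_gain(all_loud))      # -5.0
```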

There were plenty of other interesting findings in Eelco’s research, too – we go into them in the podcast and I recommend you take a listen, even if you’re not that interested in normalization. But right now I want to move on to some…
 

Great news

It happens all the time: The Company has a problem. The Company commissions research. The research comes back, and tells The Company something unexpected, or unwelcome. The Company ignores the research.

But not TIDAL. Not only did they accept the findings of Eelco’s research in full, they paid attention and implemented his recommendations. And we learned yesterday that their new loudness normalisation method is live now – by default, in every new install of their player application on iOS or Android devices. All the time – even in Shuffle mode – and they’re working on the same system in their desktop application, too.

And that’s huge. It means Apple is now the only major streaming service not to have normalization enabled by default – apart from SoundCloud and Google Play, neither of which offer normalization yet.

And not only that, but it’s a significant upgrade in comparison to the normalization used everywhere else. By using the “loudest song method” of Album Normalization to balance albums against each other, TIDAL have ensured not only that their normalization can’t be “gamed”, and the artistic intentions of the artists are preserved, but also that their overall loudness level will comply with the AES streaming loudness recommendations.
 

So what ?

The momentum is building all the time. We saw the most recent signs that streaming services are really taking normalization issues seriously when Spotify reduced their playback reference level to be more in line with Apple, Pandora and YouTube earlier this year, and I’m confident the same thing will happen with these improvements by TIDAL.

After all, it’s a win-win. Using Album Normalization to the loudest song (@ -14 LUFS) gives a better user experience, is simpler and easier to understand, and is preferred by over 80% of users ! What’s not to like ?

These changes are simple, but profound. Most importantly, they overcome a major (and real) objection to normalization in general – that it shouldn’t disturb the delicate balance between songs. I’ve often heard people say “I don’t want the loudness of my songs changed”, and now it won’t be – except to keep a consistent maximum playback level from album to album.

All the streaming services care deeply about music, and high quality – despite the cynicism I sometimes see – and I’m confident they will all adopt Eelco’s recommendations in the near future.

And personally, I can’t wait.
 
 

TIDAL upgrade their loudness normalization – and enable it by default is a post from Ian Shepherd's: Production Advice Subscribe to the newsletter for great content from the archives, special offers and a free interview - for more information, click here

YouTube Stats For Nerds – EXACT volume normalization values revealed, and how to find them http://productionadvice.co.uk/stats-for-nerds/ http://productionadvice.co.uk/stats-for-nerds/#respond Fri, 29 Sep 2017 09:50:34 +0000 http://productionadvice.co.uk/?p=9223   Are you confused about exactly what YouTube’s playback volume normalization is doing to your music ? Maybe you understand the basic idea but struggle to predict exactly what will happen when videos are uploaded ? Well, that’s understandable – the procedure is still inconsistent and unpredictable. Some songs are measured and normalized right away, […]


 
Are you confused about exactly what YouTube’s playback volume normalization is doing to your music ?

Maybe you understand the basic idea but struggle to predict exactly what will happen when videos are uploaded ?

Well, that’s understandable – the procedure is still inconsistent and unpredictable. Some songs are measured and normalized right away, others take weeks, some never seem to be normalized at all.

But it is happening – and YouTube just added an important new feature which can really help you get a grip on the process.

You can now see exactly what effect the system is having on your audio, because YouTube have exposed the normalization data in their interface. You just need to know where to find it – and what it means.

(Thanks to Paul Maunder for the heads-up !)

To see it for yourself, right-click on any YouTube video and select the “Stats for nerds” option.

 (Yes, this means that you are now a nerd 🙂 )

The fourth item down in the list will say something like:

Volume / Normalized:  100% / 54% (content loudness 5.3 dB)



The first percentage describes the Volume slider setting in the YouTube player window, and can be adjusted by clicking on the “speaker” icon and dragging the slider up or down.

The second percentage reflects the normalization adjustment being used. This is the amount by which the playback volume of the clip has been turned down to prevent users being blasted by sudden changes in volume in comparison to everything else. The value scales in proportion with the Volume slider setting.

So for example, if the normalization percentage reads 60% when the Volume slider is at 100%, it will scale down to 30% if you move the Volume slider to 50%. This means that if you want to use these stats to compare songs with each other, you should always set the Volume slider to 100% first.

The final value is the “content loudness” value, and indicates the difference between YouTube’s estimate of the loudness and their reference playback level. This value is fixed for each clip, and isn’t affected by the Volume slider.

So for example a reading of 6dB means your video is 6dB louder than YouTube’s reference level, and a 50% normalization adjustment (-6dB) will be applied to compensate. Whereas a negative reading of -3dB, say, means it’s 3 dB lower in level than YouTube’s reference, and no normalization will be applied, so the normalization percentage will always be 100% of the Volume slider’s value – YouTube doesn’t turn up quieter videos.
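The relationship between the "content loudness" value and the normalization percentage appears to be a simple dB-to-linear conversion, scaled by the Volume slider. Here's a sketch – this is my inference from observing the stats, not a formula YouTube have published:

```python
# Rough model of the arithmetic behind YouTube's "Stats for nerds" display.
# Inferred from observation - not an official YouTube formula.

def normalization_factor(content_loudness_db):
    """Fraction of full volume after normalization.
    Positive content loudness -> turned down; zero or negative -> left
    alone, because YouTube doesn't turn quieter videos up."""
    if content_loudness_db <= 0:
        return 1.0
    return 10 ** (-content_loudness_db / 20)

def displayed_percentages(content_loudness_db, slider_pct):
    """The two values shown as 'Volume / Normalized'."""
    normalized = slider_pct * normalization_factor(content_loudness_db)
    return slider_pct, round(normalized)

print(displayed_percentages(5.3, 100))   # (100, 54) - the example above
print(displayed_percentages(6.0, 100))   # (100, 50) - i.e. -6 dB
print(displayed_percentages(-3.0, 100))  # (100, 100) - no reduction
print(displayed_percentages(6.0, 50))    # (50, 25) - scales with the slider
```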

(Important note – I’ve seen the way these values are reported change several times over the last couple of weeks. YouTube are obviously still working on this feature, so it may change again, and I’ll try to keep this post updated if they do.)

So what ?

Firstly, these “Stats for nerds” give you a quick and easy way to check whether your video has been normalized yet. If there’s no “content loudness” value listed, the video hasn’t yet been normalized, and the second value will always be the same as the Volume slider percentage – the song will be played as loud as the Volume slider allows.

(This happens more often than you might expect – for example normalization seems to have been “on hold” in August and early September 2017 – but more recent uploads have already been measured. It also answers a very common question – yes, adverts are being normalized – or at least, they are right now.)

Secondly, if there is a “content loudness” value listed, then your video is being normalized, and you can see exactly how much by setting the Volume slider to its 100% maximum, and checking the normalization percentage value.

So in the image above, for example, the Metallica song is being turned down to only 54% of its original volume (-5.3 dB) and Taylor Swift’s “Shake It Off” is also being turned down by a substantial 4.6 dB.

Whereas the final video in the image is a song that I mastered myself recently – a trance/techno track called “Vi er GodsetGutta” by B Killax – and because YouTube measure it as being 0.7 dB quieter than their desired reference level, it always gets played as loud as the Volume slider setting allows.

Thirdly, it means that if you want your music to stand out in comparison to everything else, you want to avoid large positive or negative “content loudness” values – you need to optimise loudness, not maximize it.

The great news is that when you do this, your music will actually “pop” more than other songs, in my experience. For example the song I mastered actually has more punch and impact than the other two, in my opinion, especially in the low end – despite having been mastered at a lower level. Which of course is exactly what you would expect, because it has better micro-dynamics. To see if you agree with me, take a listen to the playlist here.

How do we use this ?

Apart from being interesting, the fact that YouTube have made this information visible means that you can test the effects of normalisation yourself. Simply upload a song, wait for it to be normalized and check the stats.

And then you can tweak, re-upload and test again, if you like – to try and get an even better result.

But here’s the thing. My advice is:

Don’t bother.

The best way to optimize loudness on YouTube

By all means check out the Stats For Nerds for your songs, and see how they compare with other similar tracks – and of course, how they sound.

But getting drawn into a cycle of uploading, testing and re-uploading over and over isn’t an efficient way to work, in my opinion. For one thing, it’s really tedious !

And more importantly, at the rate YouTube are releasing updates to their normalization system, there’s no guarantee that what works today will still work tomorrow – or next month, or next year.

It’s far better to aim for a result in mastering that you can be confident will result in minimal normalization changes to your audio, and therefore maximize both the playback volume, and the punch and impact of the music.

That’s the method I used to master “Vi er GodsetGutta” – and every other song I work on, for that matter.

All the examples of my masters that I’ve found on YouTube are being played with no volume reduction from normalization, and are assessed as being within 1 dB of YouTube’s reference level. And the method works on all the online platforms, not just YouTube.

It’s a simple method, and straightforward to implement – and I explained it in a blog post a few days ago. (Hint: it’s not about aiming for -14 LUFS !) To find out how it works, click here.

And meanwhile, try not to spend too long worrying about the Stats For Nerds, and focus on making great-sounding music instead 🙂

 
 

YouTube Stats For Nerds – EXACT volume normalization values revealed, and how to find them is a post from Ian Shepherd's: Production Advice Subscribe to the newsletter for great content from the archives, special offers and a free interview - for more information, click here

How loud ? The simple solution to optimizing playback volume online – and everywhere else http://productionadvice.co.uk/how-loud/ http://productionadvice.co.uk/how-loud/#respond Tue, 26 Sep 2017 11:02:16 +0000 http://productionadvice.co.uk/?p=9206   I get asked this question literally every day, now. And I see people asking it, everywhere: “What’s the ideal loudness for my music to get the best playback volume online ?” Because people have realized that loudness normalization is a reality. They know that loud songs are turned down to stop users being blasted […]


 
I get asked this question literally every day, now.

And I see people asking it, everywhere:

“What’s the ideal loudness for my music to get the best playback volume online ?”

Because people have realized that loudness normalization is a reality. They know that loud songs are turned down to stop users being blasted by sudden changes in volume – and they’ve probably heard some numbers: -13 LUFS for YouTube, -16 for iTunes and Pandora, -14 for Spotify and TIDAL… but which one should you choose ? Is there a perfect number, or do you have to submit different masters for every platform ?

In this post I’ll answer that question, simply and clearly.

(If you’re impatient, feel free to skip to the end – but please come back and read this explanation afterwards, too !)

Before that though, it’s important to realise – asking this question misses three key points.

The first is:

1 – There are no ideal loudness values – just guidelines you can follow

Because although all the streaming services are measuring loudness and turning loud songs down, they all do it in different ways. They don’t all use LUFS loudness units, and they’ve all chosen slightly different reference levels.

So you can’t choose an ideal loudness that suits all platforms, because there isn’t one.

But the good news is – you don’t need to.

The whole point about loudness normalization is that each streaming service will measure the loudness, and adjust the playback volume according to their rules.

So you can make your music as loud as you like, if you want to – it just might get turned down. And that’s OK, because so does everything else.

Which means targeting a specific integrated loudness is a red herring. Lots of people are asking if they should aim for an integrated loudness of -14 LUFS, for example – because that’s the volume TIDAL uses, and Spotify recently reduced their level to something similar (although they don’t use LUFS to make their measurements, so this is only an approximate value). Plus -14 is only a dB quieter than YouTube’s approximate level of -13 LUFS, and 2 dB louder than Apple Sound Check… so all in all it seems like a pretty good value to have in mind.

But that brings us to the second key point I mentioned:

2 – Integrated loudness isn’t the best way to make loudness choices

Here’s what I mean.

Integrated loudness is an overall value for a song, album or any section of audio.

Just one number.

It does take account of the loudest moments, and the quietest – but you can’t tell what they were, just by looking at the number.

Imagine two songs, balanced by ear. One of them could be straight-ahead rock, with almost the same short-term loudness all the way through, hovering around -14 LUFS – so that’s what the integrated loudness reading across the whole song will read. And now imagine a more varied song – still heavy, but with a quiet introduction and more mellow verses. These quieter sections will reduce the overall integrated loudness reading – down to -16 LUFS, perhaps.

So far so good – you can’t tell by looking at the integrated loudness if you have two “loud all the way through” songs, or one loud and one with more varied dynamics – but so what ? You matched them by ear, and when you play them back one after the other, they sound great. The loud sections of both are at similar levels, and the quieter sections work for the more varied song – who cares if they measure slightly differently ?

The problems start when you turn this process the other way around.

Rather than measuring the songs, you want to choose how loud they should be.

If you use your ears again, you’ll be fine – but that’s not what people are asking me about. If you just follow the numbers and make both songs match an integrated loudness value – -14 LUFS, for example – the more varied song will sound 2 LU too loud in comparison to what you would have chosen by ear. The integrated LUFS value tells you nothing about the dynamic variety in the song. In other words, the integrated loudness that feels musically right changes depending on the song – and the genre, and the arrangement… and everything.
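You can see this in numbers with the two imaginary songs above. The sketch below approximates integrated loudness as a power average of hypothetical short-term readings – real BS.1770 metering also applies gating, which is ignored here:

```python
# Two songs whose LOUDEST sections match can still measure quite
# differently in integrated loudness. Short-term values are invented;
# integrated loudness is approximated as a power average (no gating).

import math

def approx_integrated(short_term_lufs):
    return 10 * math.log10(
        sum(10 ** (l / 10) for l in short_term_lufs) / len(short_term_lufs)
    )

# "Loud all the way through" rock song: hovers around -14 LUFS short-term
steady = [-14.0] * 8

# More varied song: quiet intro and mellow verses, but its loud sections
# are just as loud as the steady song's
varied = [-24.0, -20.0, -14.0, -14.0, -20.0, -14.0, -14.0, -14.0]

print(round(approx_integrated(steady), 1))  # -14.0
print(round(approx_integrated(varied), 1))  # noticeably lower

# Forcing the varied song up to -14 integrated would push its loud
# sections ABOVE the steady song's - louder than you'd choose by ear.
boost = -14.0 - approx_integrated(varied)
print(round(max(varied) + boost, 1))  # loud sections now above -14
```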

Don’t worry, there is a solution to this – but before I get to it I just want to highlight the third, simplest and probably most important point in all of this:

3 – Loudness is an artistic decision

You probably already guessed this one – loudness shouldn’t be about the numbers.

And neither should any other property of music, of course. Numbers are helpful as a sanity-check, and for training our ears. But that doesn’t mean you should choose the EQ balance or how loud to master a song based purely on measurements – in an ideal world you just choose what sounds best.

And the great news is that we’re headed in that direction ! Since loudness levels are being adjusted on playback, you’re free to make that choice based on what’s right for the music, and not have to worry that someone else will “cheat” and try to make theirs sound better just by making it louder – that won’t work.

(Up to a point – see the very end of this post…)

Just tell us the numbers !

OK, I said I’d answer the “how loud” question simply and clearly – and I will.

But from what’s written above you’ll have gathered by now that I’m not going to be recommending any of the LUFS numbers suggested above – or any integrated loudness.

Instead, my recommendation uses short-term loudness values, and it’s this:

Master no louder than -9 LUFS short-term at the loudest moments
(with True Peaks no higher than -1)

That’s it.

If you follow this suggestion, you’ll be in great shape, in almost any genre. Your songs will be loud enough to sound “competitive”, whilst still retaining plenty of punch and dynamic contrast. They’ll stand shoulder to shoulder with anything else, on all the streaming platforms, and they won’t get turned down.(*)

(*) Actually they might get turned down a little, but it’s not the end of the world – because so will almost everything else.

OK, now explain how the numbers work !

This suggestion is based on over 20 years of my experience as a professional mastering engineer, on conversations with other mastering engineers, on analysis of my favourite-sounding albums, and on teaching an online course to over 1000 students who’ve also had great results.

The theory is simple – make all the loudest moments similar in loudness, and not too loud – and then balance everything else with them musically.

It just works ! It avoids the problem of using integrated loudness as a target, where you get lower values for music with more dynamic variety, even if the loudest moments are just as loud. But it still gives you a useful benchmark – something to aim for. There can be occasional louder moments, if they work musically, and of course you can go quieter if you want to – always make decisions based on musical considerations, not just the numbers – but this is the simplest and best guideline I can give you.

And in fact when I follow this rule, in most popular genres the integrated loudness often comes out in the -12 to -14 LUFS range – bang in the sweet spot for all the online streaming platforms…
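To make the guideline concrete, here's a minimal sketch. The short-term readings are hypothetical, and in practice you'd make this judgement with a loudness meter and your ears rather than a script:

```python
# Sketch of applying the guideline above: find the gain change that puts
# the loudest short-term moment at -9 LUFS, without true peaks exceeding
# -1 dBTP. The example readings are hypothetical.

MAX_SHORT_TERM = -9.0     # loudest moments no louder than this (LUFS)
TRUE_PEAK_CEILING = -1.0  # dBTP

def mastering_gain(short_term_lufs, true_peak_dbtp):
    """Gain (dB) so the loudest short-term moment sits at the guideline,
    capped so true peaks never exceed the ceiling."""
    gain = MAX_SHORT_TERM - max(short_term_lufs)
    return min(gain, TRUE_PEAK_CEILING - true_peak_dbtp)

# A master whose loudest section measures -7 LUFS short-term, peaking at
# -0.3 dBTP: it should come down by 2 dB.
print(mastering_gain([-15.0, -10.0, -7.0, -8.0], -0.3))  # -2.0
```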

Optimize, don’t maximize – seize the opportunity of dynamics

Maximising loudness doesn’t work any more. Aiming for a specific integrated loudness doesn’t work reliably.

But deciding how loud to master the loudest sections of music, keeping them consistent and balancing everything else to feel right musically does work – and it helps you optimize the loudness of your music, making the most of the peak headroom the online streaming services make available.

This is a fantastic opportunity – a true win-win ! You can make the best decisions for your music based on the music itself – and feel confident that it will sound great online, and everywhere else.

(Because these guidelines not only work online, they’re how I’ve been optimizing loudness and dynamics for years, even on CD. Guess what – listeners adjust playback levels, too !)

Make your loudness decisions based on the way the music sounds, rather than arbitrary numbers – but keep an eye on the guidelines, even so.

Coda – The devilish details

The method described above works, but there are a couple of extra details to be aware of.

Firstly, all the streaming services turn louder music down, but not all of them turn quieter music up – YouTube and TIDAL, for example, don’t. And the ones that do turn quieter songs up will try to avoid causing peak clipping as a result, either by restricting the extent to which levels can be lifted (iTunes) or by using a peak limiter (Spotify).

What does that mean ? If you master your music very quietly, it may not sound as loud as other similar songs. That might not bother you, but if it does, it’s worth keeping an eye on. It’s one of the reasons I developed my Dynameter plugin, which visualizes the dynamics of your music in realtime, to help you optimise it for maximum dynamic impact and compatibility online. I use it on every master I do, these days. For more information, click here.
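The three behaviours described above can be modelled in a few lines. This is a simplified sketch based on those descriptions, not any service's published algorithm:

```python
# Simplified models of the three normalization behaviours described above.
# Playback gain in dB for a song with a given integrated loudness and
# true peak level. NOT any service's published algorithm.

def gain_down_only(loudness, ref):
    """YouTube/TIDAL-style: loud songs come down, quiet ones stay put."""
    return min(0.0, ref - loudness)

def gain_up_capped(loudness, ref, true_peak):
    """iTunes-style: quiet songs come up, but only as far as the
    available peak headroom allows."""
    return min(ref - loudness, 0.0 - true_peak)

def gain_up_limited(loudness, ref):
    """Spotify-style: full gain either way; a peak limiter would catch
    any resulting overs."""
    return ref - loudness

REF = -14.0  # approximate reference level in LUFS

# A quietly-mastered song: -20 LUFS integrated, peaking at -1 dBTP
print(gain_down_only(-20.0, REF))        # 0.0 - plays 6 dB quieter
print(gain_up_capped(-20.0, REF, -1.0))  # 1.0 - only 1 dB of headroom
print(gain_up_limited(-20.0, REF))       # 6.0 - full boost, limiter engaged
```

Which is exactly why a very quiet master can still end up sounding quieter than everything else on some platforms.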

And secondly, it may sound obvious, but loudness isn’t everything ! Not by a long shot.

To sound great, you still need a great song, great performance, great arrangement, great mix, balanced EQ and dynamics… but that’s what keeps all of this interesting, right ?!?
 
 

How loud ? The simple solution to optimizing playback volume online – and everywhere else is a post from Ian Shepherd's: Production Advice Subscribe to the newsletter for great content from the archives, special offers and a free interview - for more information, click here

The surprisingly simple hack to make your music POP online – and everywhere else ! http://productionadvice.co.uk/make-music-pop/ http://productionadvice.co.uk/make-music-pop/#respond Tue, 25 Jul 2017 01:35:31 +0000 http://productionadvice.co.uk/?p=9181   This video shows a surprisingly simple technique to make your music stand out online – even in an aggressive genre like EDM. The trick is easy, the video includes a real-world example to prove that it works, and best of all – it’s free ! Actually that’s not the best of all – the […]


 
This video shows a surprisingly simple technique to make your music stand out online – even in an aggressive genre like EDM.

The trick is easy, the video includes a real-world example to prove that it works, and best of all – it’s free !

Actually that’s not the best of all – the best of all is that this tip works in any genre, and it doesn’t only work online.

And along the way, it proves once and for all that people who tell you there’s only one way to get “The Sound” in EDM… are wrong.

So, what are you waiting for ? Take a look, and if you like it – please share !

[Updated video – remix matches CD master more closely for a better comparison]

How to persuade your clients

If you like the idea of this technique but don’t think you’ll be able to persuade the artists, labels and engineers you work for – try this.

(And to find out more about my Dynameter plugin, click here)

More details (warning, spoilers)

I deliberately didn’t say what the “hack” is above – so if you haven’t watched the video, do that first – there are some clues below.

Several people have commented that there’s too much pumping in the remix, which is fair enough. But bear in mind that the remix is made from stems, and the pumping is part of the stems.

In other words the CD master squashed the dynamics of the original so much it even reduced the pumping effect that the artists chose in the studio !

And I’m sure there’s all kinds of other extra subtlety in the real mix, too. Maybe the remix would benefit from a little more dynamic control in mastering, but it doesn’t need to be crushed by an extra 6 dB.

Bottom line – if you prefer the CD version that’s absolutely fine, but the reason is the mix – not the crushed dynamics.
 
 

The surprisingly simple hack to make your music POP online – and everywhere else ! is a post from Ian Shepherd's: Production Advice Subscribe to the newsletter for great content from the archives, special offers and a free interview - for more information, click here

The Foo Fighters just proved me right about loudness – and dynamics http://productionadvice.co.uk/foo-fighters-dynamics/ http://productionadvice.co.uk/foo-fighters-dynamics/#respond Fri, 16 Jun 2017 12:00:22 +0000 http://productionadvice.co.uk/?p=9143   Foo Fighters just released a surprise new single, “Run” – and the biggest surprise to me is that it has great dynamics. All their recent releases have been pushed really hard, in the loudness department – not disastrously, but I’ve always thought they would have sounded better with more room to breathe. This single […]


 
Foo Fighters just released a surprise new single, “Run” – and the biggest surprise to me is that it has great dynamics.

All their recent releases have been pushed really hard, in the loudness department – not disastrously, but I’ve always thought they would have sounded better with more room to breathe.

This single proves me right.

But then, I would say that ! I’m always saying that balanced dynamics beat loudness.

So in this post, I’m not going to offer any personal opinions at all, I’m just going to let the facts speak for themselves – and the reviews.

Reviews like this one, in Billboard:

Foo Fighters Crank Up the Heavy… play[ing] with a soft-loud-soft dynamic on the new single “Run,” which opens as a dreamy, slow burner then, as you’d expect with the Foos, quickly turns heavy as thunder. How heavy? So heavy your mom will hate it and your neighbors will tell you to turn it down. So heavy it might just feature some of the most hulking moments in the Foos’ canon

Or this one, in Blabbermouth:

a monolithic song of the summer shoo-in as melodic as it is monstrously heavy

– and these comments are about a song that is 4dB quieter than their 2011 single “Rope” !

So how does this compute ?

How can it be a “a full-bore riff-rocker with a huge, triumphant chorus” (Stereogum) with “the speakers going to 11” (SPIN) when it’s mixed and mastered at a lower level than their earlier releases ?

How can it be quieter but sound louder ?

Because dynamics.

And because loudness management.

This song sounds just as loud as “Rope” on YouTube, TIDAL and Spotify. But “Run” has 4 dB more peak-to-loudness impact than “Rope”, as my Dynameter plugin clearly shows – and the Foos have made it count:

QED

Don’t trust the reviews, though – listen for yourself. Listen to the way the guitars pile in during the chorus, the pounding drums – this song still sounds exactly like a Foo Fighters record should, proving yet again that “loudness” isn’t a requirement of “the sound”, it’s just an increasingly irrelevant technicality.

The Foo Fighters have seized the opportunity of using more dynamics in their music, and it’s worked.

Maybe you should, too.

 
 
[Update: I was really looking forward to hearing the full album, based on this song. Sadly, it didn’t live up to my expectations. Some songs sound great, with excellent dynamics – but some are still very badly crushed, sadly.

Even more puzzling, the sound is inconsistent – some songs are too loud in comparison to others, in my opinion, others aren’t loud enough. Some even have this issue between different sections of the same song.

Added to some strange EQ choices, the whole thing feels piecemeal and disjointed, to me. Which is a shame – how great would it have been to give the Dynamic Range Day Award to a Foo Fighters album ?!

Maybe next time…]
 
 

The Foo Fighters just proved me right about loudness – and dynamics is a post from Ian Shepherd's: Production Advice Subscribe to the newsletter for great content from the archives, special offers and a free interview - for more information, click here

Spotify just reduced its loudness playback level ! http://productionadvice.co.uk/spotify-reduced-loudness/ http://productionadvice.co.uk/spotify-reduced-loudness/#respond Mon, 22 May 2017 16:03:14 +0000 http://productionadvice.co.uk/?p=9114   The post title says it all – in the last few days it’s become clear that Spotify have chosen to reduce their playback loudness reference level from approximately -11 LUFS down to approximately -14, broadly in line with YouTube and TIDAL. This is a big deal, and in a minute I’ll discuss why, but […]


 
The post title says it all – in the last few days it’s become clear that Spotify have chosen to reduce their playback loudness reference level from approximately -11 LUFS down to approximately -14, broadly in line with YouTube and TIDAL.

This is a big deal, and in a minute I’ll discuss why, but before that – what does it mean, in simple terms ?

[This change is very recent, and you may need to update to the latest release of Spotify before you see it – the build number we are testing is 1.0.54.1079.g3809528e. It’s also possible this change hasn’t rolled out in all territories at the time of writing – 22nd May 2017]

In a nutshell, it means it doesn’t matter how high you push the level of your mixes and masters. Once the raw loudness of the files gets past a certain point, online streaming services will turn them down – keeping them all at the same reference level, to stop users being annoyed by sudden changes in volume.

Exactly where the “point of no return” is varies slightly between different streaming services, but Spotify always used to be the loudest, by a whopping 2-3dB.

And this was a real shame, because it put pressure on musicians, labels and engineers to make the raw loudness levels higher to try and “compete” – even if it didn’t suit the style of the music.

But now, all that has changed.

Why this matters

YouTube, Spotify and TIDAL all now use playback reference levels within a dB of each other, and Apple Sound Check and Pandora are another 2 dB lower than that, matching the recommendations of the Audio Engineering Society for streaming loudness.

So there’s no pressure any more to master louder in order to “compete” on Spotify – you can use the same guidelines for all the major streaming services, and be confident of a great-sounding result.

You can have great dynamics and sound loud – that’s a win-win !

How Loud ?

In a nutshell, the new magic number for Spotify is a reading of -14 LUFS integrated, meaning an overall value measured across the whole song, while keeping peak levels no higher than -1. Bear in mind that you shouldn’t regard this as a target. Spotify will adjust your music’s playback loudness to this kind of level, so it’s better to regard it as an opportunity to choose the loudness that you think sounds best for the material, without having to worry about “competing”.

YouTube’s reference level is actually 1 dB louder since the Spotify change, so you might choose to push things a little harder if maximum loudness on YouTube is important to you. If your music has varied dynamics though, it probably isn’t necessary.

And of course you do still need to keep an eye on the “crest factor” – the difference between the peak level and the short-term loudness. If this drops too low, your music may be turned down more than you expect. This value is labelled PSR in my Dynameter plugin, which was designed specifically to help you monitor and optimise it for maximum dynamic impact.
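In code, this crest-factor value is just the gap between the true peak level and the short-term loudness. The "healthy" threshold in the comment below is illustrative, not an official Dynameter specification:

```python
# PSR (peak-to-short-term-loudness ratio) as described above: the gap
# between true peak level (dBTP) and short-term loudness (LUFS).
# The interpretation comments are illustrative guidelines, not official
# Dynameter values.

def psr(true_peak_dbtp, short_term_lufs):
    return true_peak_dbtp - short_term_lufs

# A master peaking at -1 dBTP with loudest sections at -9 LUFS short-term:
print(psr(-1.0, -9.0))  # 8.0 - plenty of punch left

# Same peak ceiling, but crushed to -4 LUFS short-term:
print(psr(-1.0, -4.0))  # 3.0 - heavily limited, and likely to be
                        # turned down more than you expect
```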

A huge improvement

This change is fantastic news. The -14 LUFS figure may not comply with the AES recommendations, but the reality is that it allows enough peak-to-loudness headroom for most mainstream music to retain plenty of dynamics and sound great – which is a win-win for everyone.

I’ve been campaigning for a change like this for some time now, both on the Spotify forums and via the Streaming Loudness Petition. There’s no way to know whether either of these initiatives actually influenced Spotify’s decision, but it really doesn’t matter.

The great news is that all the online streaming services now cater for music with decent dynamics – and they’re close enough to each other that there’s no need to create specially optimised masters for each platform – although this is still an option for people who want it.

Over 6 years ago now, I predicted that Spotify would end the loudness wars. Today is another important step towards that prediction coming true.

Hats off to Spotify, and long may the trend continue !

Coda

You may be reading this thinking – “What’s the big deal ? It’s just normalisation”.

And you’re right – this kind of processing won’t fix the damage that’s already been done in the process of making those loud-songs-that-are-being-turned-down loud in the first place.

But over the longer term, it removes the incentive to do it again. Sooner or later, the questions change:

Old question: “Why does Song X stand out on the CD changer ?”

Old answer: “Because it’s louder”

New question: “Why does Song Y stand out online ?”

New answer: “Because it has great dynamics”

And that is the start of a really interesting conversation.
 
 
Thanks to Home Mastering Masterclass members Norbert Tomczak and Sigurdór Guðmundsson for the heads-up on Spotify’s decision !

Update – Thanks also to Jean-Michel Kovacs, who actually told me about this in a YouTube comment even earlier than Sigurdor or Norbert !
 
 

Spotify just reduced its loudness playback level ! is a post from Ian Shepherd's: Production Advice Subscribe to the newsletter for great content from the archives, special offers and a free interview - for more information, click here
