Production Advice – make your music sound great

Spotify upload specs recommend -2dB True Peak (for the loudest songs)
Fri, 28 Sep 2018

You’ve probably heard by now that Spotify recently announced it will soon be possible for anyone to upload directly to their streaming service, without going through an aggregator like TuneCore or CD Baby.

What you may not have heard yet is that along with that, they’ve also published recommendations for the best format and specifications for your music when you do.

These include some interesting details (like the fact that they support 24-bit files) and confirm several things we already knew – that they’re using ReplayGain for loudness normalization, and that the default playback reference level is approximately -14 LUFS, for example.

There’s one suggestion that may raise a few eyebrows though, and that’s the recommendation that files should peak no higher than -2 dBTP (True Peak) – thanks to Christopher Carvalo for the heads-up.

[Update – Since I posted this last week, Spotify have updated their FAQ to clarify that the -2 dBTP recommendation only applies to material mastered louder than -14 LUFS. If your material measures -14 LUFS or lower, the True Peak recommendation is -1 dBTP]
 

So why is this important ?

Mainly because it’s a much more conservative number than many people would expect. I’ve been mastering with peak levels no higher than -1 dBTP for years now, and recommending people do the same, but I still see people saying that True Peaks aren’t an issue “in the real world”. And Spotify’s guideline is even more conservative than mine.

The reason for the recommendation is simple – Spotify doesn’t stream lossless audio. They encode using Ogg/Vorbis and AAC data-compression methods to reduce bandwidth – like more sophisticated versions of mp3 encoding. These encoded streams sound pretty good, but shrink the file size by as much as ten times, reducing the amount of data needed to get the audio from Spotify’s servers to our mobile phones and other playback devices.

There’s no such thing as a free lunch, though – to achieve this reduction in data-rate, the audio has to be heavily processed when it’s encoded.
 

What happens during encoding

VERY roughly speaking, the audio is split up into many different frequency bands. The encoder analyses these and prioritises the ones that contribute most to the way we perceive the sound, and throws away the ones we’re least likely to hear.

When the audio is decoded for playback later, the signal is rebuilt, and usually sounds remarkably similar to the original, despite all the discarded data. But while it sounds close, the waveform itself has typically changed dramatically – and one of the most noticeable differences is that the peak level will have increased.

And this is where the problem arises. If the audio was already peaking near 0 dBFS, the reconstructed waveform will almost certainly contain peaks that are above zero. And that means that the encoded file could cause clipping distortion when it’s reduced to a fixed bit-depth for playback, which wasn’t present in the original.

In fact, it’s even worse than that, sometimes. Encoded files store the data with “scale factor information” built in (kind of like a coarse floating point), but many players reduce the decoded files to fixed-point immediately after decoding. So whereas peaks above full scale do no harm if the signal is turned down before it gets played back, any clipping at this immediate fixed-point conversion is “baked in” to the decoded audio, regardless of normalization or the final playback level.
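If you’re curious, it’s easy to see this effect on your own material. Here’s a minimal sketch in Python – assuming ffmpeg is installed, plus the numpy and soundfile packages; the filenames are placeholders – that encodes a WAV to Ogg Vorbis, decodes it back to floating point, and compares the sample peaks before and after:

```python
# Sketch: see how much Ogg Vorbis encoding changes the peak level of a file.
# Assumes ffmpeg is on the PATH; "master.wav" is a placeholder filename.
import subprocess

import numpy as np
import soundfile as sf

def peak_db(path):
    data, _ = sf.read(path)  # floating-point samples, full scale = 1.0
    return 20 * np.log10(np.max(np.abs(data)))

# Encode to Ogg Vorbis, then decode back to 32-bit float WAV so that
# peaks above full scale survive instead of being clipped at 0 dBFS.
subprocess.run(["ffmpeg", "-y", "-i", "master.wav",
                "-c:a", "libvorbis", "-q:a", "5", "encoded.ogg"], check=True)
subprocess.run(["ffmpeg", "-y", "-i", "encoded.ogg",
                "-c:a", "pcm_f32le", "decoded.wav"], check=True)

print(f"Original peak: {peak_db('master.wav'):+.2f} dBFS")
print(f"Decoded peak:  {peak_db('decoded.wav'):+.2f} dBFS")
```

Run that on a loud master and you’ll typically see the decoded peak come back higher than the original.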

(If you’re asking why the encoder doesn’t detect when this might happen and reduce the level automatically – great question ! And actually some do. But the answer for Spotify is almost certainly that users would complain. The simplest way to test an encoded file is to compare it directly to the original, and if the result is quieter than the super-loud result people have worked so hard to achieve, many users would be unhappy, even if the encode is cleaner as a result.)
 

What does all this have to do with True Peaks ?

There’s no way to know for sure if encoding will cause clipping, or how much – it depends heavily on the codec, the material and the data-rate, to begin with. Lower data rates require heavier processing, which means bigger changes in peak level – and potentially more encoder clipping.

The True Peak level gives a useful warning, though. It was introduced as part of the EBU R128 loudness specification, and gives a reasonable indication of when encoder clipping is likely to occur. Really loud modern masters can easily register True Peak levels of +1 or +2 dBTP, and often as much as +3 or +4 !
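True Peak meters estimate those inter-sample peaks by oversampling the signal before measuring it. As a rough illustration of the idea – not a full BS.1770 implementation, which specifies a particular interpolation filter – here’s a sketch using scipy’s polyphase resampler (the filename is again a placeholder):

```python
# Rough true-peak estimate via 4x oversampling. Illustrative only:
# ITU-R BS.1770 defines the exact oversampling filter a real meter uses.
import numpy as np
import soundfile as sf
from scipy.signal import resample_poly

data, rate = sf.read("master.wav")
oversampled = resample_poly(data, up=4, down=1, axis=0)  # 4x oversampling

sample_peak = 20 * np.log10(np.max(np.abs(data)))
true_peak = 20 * np.log10(np.max(np.abs(oversampled)))
print(f"Sample peak: {sample_peak:+.2f} dBFS")
print(f"True peak:   {true_peak:+.2f} dBTP (approx)")
```

On a heavily limited master the second number is often noticeably higher than the first – that’s the inter-sample overshoot the meter is warning you about.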

Those files are virtually guaranteed to suffer encoder clipping if they’re processed as-is, so it’s sensible to reduce their level before you supply them, to get the best-quality encodes.
 

The question is, how much should they be reduced ?

It’s impossible to say exactly without trying it. The harder the audio is hitting the limiter, and the lower the data rate, the bigger the changes in peak level during encoding and decoding will be – and the more likely problems become. So there’s no one-size-fits-all solution.

Personally I don’t make super-loud masters, and have found that my suggestion of -1 dBTP typically produces very clean encodes – but we have to assume that Spotify’s recommendation is based on analysis of the files they encode. I’ve double-checked some of my own recent masters and found that, using my own loudness guidelines, I’m getting clean encodes, so I won’t be changing how I work because of this recommendation.

[Update – As I mentioned above, Spotify have updated their FAQ to confirm this – the -2 dBTP recommendation only applies to material mastered louder than -14 LUFS]

But certainly if you’re making mixes or masters that hit close to 0 dBFS, at the very least you should start measuring True Peaks, and reducing your levels to keep them in check.
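The fix itself is just a static gain change before you deliver the file. A minimal sketch, reusing the rough true-peak estimate from above (the -2 dBTP ceiling is Spotify’s figure for loud masters; the filenames are placeholders):

```python
# Turn a file down just enough to meet a true-peak ceiling before encoding.
import numpy as np
import soundfile as sf
from scipy.signal import resample_poly

TARGET_DBTP = -2.0  # Spotify's recommendation for masters louder than -14 LUFS

data, rate = sf.read("master.wav")
true_peak = 20 * np.log10(np.max(np.abs(resample_poly(data, 4, 1, axis=0))))

if true_peak > TARGET_DBTP:
    gain_db = TARGET_DBTP - true_peak         # negative: how far to turn down
    data = data * 10 ** (gain_db / 20)
    sf.write("master_for_streaming.wav", data, rate, subtype="FLOAT")
    print(f"Turned down {-gain_db:.2f} dB to hit {TARGET_DBTP} dBTP")
else:
    print("Already within the recommended ceiling")
```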
 

But the music is MEANT to be loud, why should we turn it down ?!

Well firstly because the encodes could sound better if you do. But also because it’s going to be turned down eventually, anyway ! Spotify uses loudness normalization by default, just like YouTube, TIDAL and Pandora. This means they measure the loudness of all the material they stream, and turn the loudest stuff down. This is done to stop users being “blasted” by unexpected changes in level, which is a major source of complaints. And even if users turn normalization off, they’re unlikely to run the software with the volume at maximum !

So even if you’re in love with the super-dense sound of your music, reducing the overall level when you submit it won’t have any practical consequences for the final playback level – it can only sound better because of a cleaner encode.
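The logic behind that “turn the loudest stuff down” behaviour is very simple. Here’s a sketch of the idea, using the pyloudnorm library to measure integrated loudness – note that the -14 LUFS reference and the never-turn-up rule are simplifications, and the real services differ in the details:

```python
# Sketch of "turn the loudest stuff down" loudness normalization.
import soundfile as sf
import pyloudnorm as pyln

REFERENCE_LUFS = -14.0  # approximate default playback reference

data, rate = sf.read("song.wav")  # placeholder filename
loudness = pyln.Meter(rate).integrated_loudness(data)
gain_db = min(0.0, REFERENCE_LUFS - loudness)  # loud songs get turned down

print(f"Measured {loudness:.1f} LUFS -> playback gain {gain_db:+.1f} dB")
```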
 

What about -14 LUFS ?

I’ve had a few people asking about the fact that Spotify’s normalization reference level is approximately -14 LUFS, and whether this -2 dB True Peak recommendation overrides or replaces it.

The answer is No – these are two separate issues. The -14 LUFS figure simply gives us an idea of how loud Spotify will try and play songs in shuffle mode – it’s never been a “target” or a recommendation. This is a common source of confusion, and I wrote about it in more detail here.

The -2 dBTP recommendation is to try and ensure better encoding quality for material that was mastered very loud originally – peak levels aren’t a good way to judge loudness. So to get the best results you should keep both numbers in mind.
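Put the two numbers together and the published guidance boils down to a very small rule. A sketch – the thresholds come straight from Spotify’s FAQ as described above, but the function itself is mine:

```python
# Spotify's upload guidance as a rule of thumb:
#   louder than -14 LUFS  -> peak no higher than -2 dBTP
#   -14 LUFS or quieter   -> peak no higher than -1 dBTP
def spotify_peak_ok(integrated_lufs: float, true_peak_dbtp: float) -> bool:
    ceiling = -2.0 if integrated_lufs > -14.0 else -1.0
    return true_peak_dbtp <= ceiling

print(spotify_peak_ok(-9.5, -0.2))   # loud master peaking near 0 dBFS: False
print(spotify_peak_ok(-14.5, -1.0))  # more dynamic master at -1 dBTP: True
```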

 

Summary

I’ve said it before, and I’ll say it again – the loudness normalization reference levels aren’t meant to be targets. Instead, master your music so it sounds great to you, and preview it using the free Loudness Penalty site to see how it will sound when normalized.

But you should also be aware that very high peak levels can cause sub-standard encodes when the files are converted for streaming. And if you’re like me, you’ll want to do everything you can to get the best possible results – including keeping an eye on the True Peaks.

 

Update – and a warning

I’m seeing a lot of different reactions to the information in this post. They vary from “yes I’ve been saying this for ages”, through annoyance that there’s yet another number to think about, all the way to “ah I don’t care, I’ll just turn the limiter output down a little”.

Be very careful about this last option.

The harder you push the loudness into a limiter, the higher the True Peak level will go. And the higher the True Peak levels are, the greater the risk of encoder clipping. So you’re fighting a losing battle. Remember True Peak doesn’t necessarily predict how much clipping will take place, so if you try to upload at the same loudness and just reduce the True Peaks, you could end up with just as many issues with the encode.

WaveLab, Ozone, Sonnox and others offer “codec preview” features which allow you to assess the results of encoding – if you’re chasing extreme loudness then you need to use methods like these to check the results you’re getting.

And as always, personally I think the best answer is a careful balance between the different factors – between loudness and dynamics, and now between loudness and True Peak values.

If you want to know the method I use myself to find the perfect loudness when I’m mastering, and why it works – click here.

[Edit – the original version of this post stated that some encoders can “bake in” clipping, which was misleading. A correctly-implemented encoder won’t do this, and I’ve updated the post to reflect that. However not all encoders are guaranteed to be well-written (!) and many decoders end up reducing the decoded file to fixed bit-depth anyway which does cause this problem. So avoiding high peak levels before encoding is definitely a good idea !]

 

 

Loudness Penalty – Live Loudness Preview
Mon, 23 Jul 2018

People are loving the Loudness Penalty website – but some of them have been saying

Who cares what the numbers say ? The important thing is – how does it sound ?

And of course we agree !

Which is why we’ve added a new feature to the site – Live Loudness Preview

To see it in action in less than 30 seconds, check out the video above. But in this post I want to take a little more time to explain why we’re so excited about this new function. (Which, by the way, works on mobile as well – and the whole site is significantly faster this time around.)

Firstly though:

DON’T compare the different services

I mean – sure, you can, but what’s the point ? We all know “louder is better”, so the chances are YouTube will sound a little better than the others – but that’s not a very valuable conclusion. (The site doesn’t emulate the streaming codecs, just the loudness differences.)

What is really valuable is to use the new Loudness Preview function to compare the song you’re working on with reference tracks on YouTube itself.

Provided you have the volume slider all the way up, you’ll be making a real-world comparison between the reference and your song, almost exactly as it will sound if you actually uploaded it.

And now we get to the really good part…

DO compare alternative versions of your songs

This is where the real power of the site comes into play. Say you’re under pressure from a client to master something louder than you think it needs to be. If you make two versions of the master – one at the louder level and one at your preferred level, you can send both versions to the client and ask them to Preview them using Loudness Penalty. You can even open both versions at the same time in different tabs of your browser.

If you do, chances are you’ll hear one of two things:

  1. There isn’t a big difference – because loudness normalisation ! Once the loudness is matched, there’s no real benefit to making things loud in the first place. If there are genuinely no down-sides, then you can go for the louder version, but keep an ear out in case:
  2. The louder version sounds worse. Sometimes this is subtle, sometimes it really isn’t. Sometimes the less heavily processed version actually sounds louder ! And even if it doesn’t, chances are it will sound more 3D, more open, more spacious – clearer, wider and sweeter.

Spread the word

And that’s why we’re so excited – because this idea is so easy to share. When it was all just numbers, people had to really take the time to understand what was happening, and why it mattered for their music. Now they can hear it for themselves ! And make informed choices about loudness.

(It’s also super-quick & easy to use – like a kind of Lite version of my Perception plugin, almost)

My hope is that eventually the Loudness Penalty site becomes the new standard way to compare new mixes and masters – and if it does, perhaps people will start to hear that louder actually doesn’t sound better online, and start choosing more balanced dynamics for their music as a result.

If you like the idea too, please share it and help spread the word !
 
 

Don’t push your music over the Loudness Cliff !
Thu, 05 Jul 2018


I’ve been talking about this image for years. Literally, I’d describe it in almost every conversation, interview or lecture when I talked about loudness. And it always got a great reaction.

But it didn’t exist !

Except in my head.

…until now.

I recently did an interview with Chris Selim over at mixdown.online, and my analogy of the ‘Loudness Cliff’ came up again, with me waving my hands around while describing it as usual.

But this time something was different – because a few days later Kredenz emailed me an idea for the image above, asking “is this the kind of thing you had in mind?”.

And it absolutely was ! So after a few tweaks and additions, here it is – The Loudness Cliff illustration.

Hopefully it speaks for itself, but just in case, the idea is pretty simple:

  • We perceive louder sounds as better, at least to begin with. So, everyone wants to sound loud – so far so good.
  • But achieving loudness can be difficult – sometimes it feels like you’re trying to push a rock up a hill. Everyone else is at the top of their own mountain though, so you want to be, too.
  • The trouble is, the closer you get to the top, the harder it gets, and the less improvement in sound you get. And if you go too far – past the danger point – it can actually sound worse.
  • And if you push it even further – you’re over the edge and smashed on the rocks.

Instead, you want to look for the ‘loudness sweet spot’ – the perfect balance of loudness and dynamics, where you get all the benefits of cohesion, consistency and translation – without pushing things too far.

The goal is to be loud enough, but not too loud.

So, enough of the analogies – how do you actually find the loudness ‘sweet spot’ ?

My best advice for that is in this post:

How loud ? The simple solution to optimizing playback volume – online, and everywhere else

And if you want to know whether you’ve got it right or not (for free) try this:

www.loudnesspenalty.com

If your music scores between 0 and -2 for YouTube, you’re probably in good shape!
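If you just want a ballpark figure before you upload anything, the YouTube score is roughly the gap between your integrated loudness and the playback reference, remembering that quieter material isn’t turned up. A very rough sketch – the actual site models each service’s real behaviour, and the -14 LUFS reference here is an observed approximation, not a published spec:

```python
# Very rough approximation of a YouTube Loudness Penalty score.
def approx_youtube_penalty(integrated_lufs: float, reference: float = -14.0) -> float:
    return min(0.0, reference - integrated_lufs)  # 0 means "not turned down"

print(approx_youtube_penalty(-12.5))  # -1.5: inside the 0 to -2 "good shape" zone
print(approx_youtube_penalty(-15.0))  #  0.0: quieter than reference, left alone
```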

And if not, there’s plenty of free information here on Production Advice to help you – a great place to start is here.

Don’t push your music over the Loudness Cliff – find the loudness Sweet Spot instead !
 
 
Thanks again to Kredenz for making my hand-waving idea a reality ! You can check out his site here.
 
 

Mastering for Spotify ? NO ! (or: Streaming playback levels are NOT targets)
Mon, 04 Jun 2018

So, most streaming services normalize their audio to around -14 LUFS.

YouTube are slightly louder, iTunes is a couple of dB quieter, but overall -14 is the loudness you should aim for, right ?

WRONG
 

Wait, what ?!

Haven’t I been posting relentlessly about this issue for months (and years), providing blow-by-blow updates on the latest developments and banging on and on about how important it is ?

Well, yes. But that still doesn’t mean the playback levels we’re measuring are targets.
 

Dude. Stop talking crazy and explain yourself !

To start with, TIDAL is the only service actually using LUFS for its loudness normalisation. So even if you did want to optimise your audio’s loudness for a particular streaming service, TIDAL is the only place you’ll get completely reliable results. Spotify use ReplayGain, Apple use their own mysterious Sound Check algorithm, and the others aren’t telling.
 

But – but – why do you keep quoting LUFS figures, then ?

Because we have to measure things somehow, and LUFS is the internationally recognised method of measuring loudness – plus it’s the best, in our experience.

And the numbers are accurate – if you run a loudness meter on Spotify for 30 minutes or more, you will find the overall playback loudness is very close to -14 LUFS, especially for loud material.

But that’s an average value – individual songs may vary up or down by several dB, because ReplayGain gives different results to LUFS. The same applies to YouTube, iTunes and Pandora. So using LUFS as a target just won’t work reliably – as well as being a bad idea.
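Incidentally, you don’t need a commercial meter to check LUFS figures like these – ffmpeg’s ebur128 filter reports integrated loudness for any file it can read. A minimal sketch (assumes ffmpeg is installed; the filename is a placeholder):

```python
# Measure a file's integrated loudness with ffmpeg's ebur128 filter.
import subprocess

result = subprocess.run(
    ["ffmpeg", "-hide_banner", "-i", "song.wav",
     "-af", "ebur128", "-f", "null", "-"],
    capture_output=True, text=True,
)
# ffmpeg writes its loudness summary to stderr; the integrated value
# appears near the end as a line like "I: -14.2 LUFS".
print("\n".join(result.stderr.splitlines()[-12:]))
```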
 

What do you mean, a bad idea ? Why NOT target loudness at specific services ?

Because we don’t need to.

Streaming services measure the loudness and make it more consistent for us – so we don’t have to. Loudness normalization is an opportunity to do what’s best for the music, without having to worry about the need to “fit in” with loudness.

Having said that, there can be an advantage to keeping the streaming services’ playback levels in mind while you’re optimizing the loudness of your music – which is why we created the Loudness Penalty website. Let me explain.
 

Why streaming playback levels DO matter

Imagine you master a song, and test it using the Loudness Penalty site, which tells you it’ll be turned down by 6 dB or more on all the streaming services.

That means you could potentially apply 6 dB less dynamic processing and still have it play back just as loud.

I don’t know about you, but that feels like an opportunity to me ! At the very least I’d want to experiment and see how a less heavily processed version sounded, using the LP scores to hear how it will sound online.

In the most aggressive genres, it might be that you decide to stick with the original version, but in my experience this rarely gives the best results. For me, the sweet spot for loud material is about LP -2 on YouTube – but you may feel differently.

Either way, don’t we owe it to the music to at least try the experiment ?
 

One master to rule them all

So, what am I actually saying ? On the one hand, there’s no point in trying to optimise loudness for streaming services, but on the other there might be an opportunity. I’m contradicting myself, surely ?

No.

It’s true that there’s no real benefit to supplying separate loudness-optimized masters for each streaming service – partly for the reasons explained above. But also in a practical sense, because most aggregators will only accept one file per song anyway, so there’s no easy way to get individual masters uploaded to each service.

But there is a benefit to optimising your music for online streaming in general.
 

Seize the opportunity to create a master that sounds great everywhere

Measure your files using the Loudness Penalty site, and find out how much they’re going to be turned down. Experiment with less aggressive loudness processing, and preview the different versions against each other – and your favourite reference material – using the LP scores to adjust the playback level and see how they’ll sound online.

Knowledge is power – and making real-world comparisons like this will let you find the “sweet spot” – the perfect balance of loudness and dynamics, that best serves the music.

Not the streaming normalisation algorithms, or the wild ‘Loudness War’ goose – the music.

And in the process, even if you think your genre needs that loudness war sound, you might find yourself surprised.

If J Cole can break streaming records and debut at Number 1 in the Billboard chart with a more dynamic master – maybe you can, too.
 
 

Update

I’ve been getting quite a few frustrated comments about this, saying “well how loud should we master things, then ?!”. If that includes you, click here for my best advice.
 
 

Introducing Loudness Penalty
Fri, 18 May 2018

The number one question I get asked these days is

How loud will my music be played back online ?

And the answer is always – “it depends”.

Until now.

I’m proud and excited to be able to announce a new website, developed with MeterPlugs, which we’ve designed to answer exactly that question.

Quickly, accurately, and for free.

It’s called Loudness Penalty, and in the video above I show you how to use it, why you would want to, and what the results mean.

Or you can head straight over and check it out yourself, right now – just click here.

I hope you find it useful – and if you like it, please share !
 
 

J Cole WINS – with dynamics !
Tue, 01 May 2018

J Cole’s new album KOD just won the Dynamic Range Day Award 2018.

(The award is given every year to a great-sounding, successful album that also has great dynamics)

And it’s the most-streamed album ever in its first week, AND it went straight in at Number 1 in the Billboard album charts !

So – remind me – why exactly is a super-loud master supposed to be ‘required’ for success and sales again ?

…right.

And here’s the thing – this is just the latest example in a building trend. More and more rap, R&B and hip-hop artists are taking advantage of the benefits of dynamics in their sound, and people love it.

Let’s start with a low-key example like – oh, say: Drake.

Right, the Drake – the one who regularly holds multiple Top 10 positions in the global streaming charts simultaneously. To be that successful, surely your music has to be ridiculously loud, right ?

Well… no.

Drake’s recent single God’s Plan has 469 million views as I’m writing this, and the integrated loudness measures… -11.7 LUFS. Hardly the -8, -6 or even -4 numbers some people like to tell you are ‘needed’.

Or how about “Process”, by Sampha, which was also nominated for the DRD Award, and won the Mercury Prize here in the UK last year ? The album overall measures -10.4 LUFS.

Now don’t get me wrong, both these numbers and albums are still loud – but they’re not “loudness war loud”, in the way so many are.

And that’s the point.

Users don’t care about loudness – they care about music.

Some of the biggest artists in the world are mastering their music with more dynamics – let’s hope everyone else follows suit.

Soon.
 
 

STOP PRESS

I’ll be interviewing Glenn Schick, who mastered KOD for J Cole, on the next episode of The Mastering Show podcast. Subscribe now to make sure you catch it – and listen to my interviews with previous DRD Award winners Matt Colton and Bob Ludwig while you wait !

Humans versus Robot Mastering: Updated
Fri, 01 Dec 2017

IMPORTANT UPDATE:

I messed up.

In the original post and graphic below, I said that the results were almost certainly influenced by the comments on Facebook. But I had no idea by how much.

Since then, Kenny has run another poll, using a different voting system that allows us to see the way votes are cast over time, and we can see a clear and very strong bias introduced into the results by people’s comments on the Facebook thread.

That means the same thing will have happened in the original poll, and I commented there myself, too – meaning I have to accept that I was part of the problem.

In a nutshell, every time someone posts their preference in a Facebook comment, there’s a corresponding boost in votes for their favourite master. In the second poll, the votes for one of the masters trebled in just a few hours after one of the comments, taking it from second place to a commanding lead.

This new information means we really can’t take the results too seriously. My original title for this post was “Humans versus robots: Humans WIN – by a huge margin”. I should have known better. That conclusion isn’t valid, and I let my passion for dynamics (and humans!) get the better of me.

It doesn’t invalidate the poll completely – after all, not everyone will have been influenced by the comments, and if a particular master prompts people to make comments in the first place and people then agree with it, that also tells us something.

But my headline and conclusion in the original version of this post were over the top, and I’ve decided to edit it. I considered removing it completely, but I think it’s a good example of how confirmation bias can influence us all, even when we think we’re immune to it !

So, with all that being said, here’s the original post, with visible strikethrough and comments in italics:
 
 
People are always asking me what I think of “automated mastering” – services like LANDR and Aria, for example – or “intelligent” mastering plugins like the mastering assistant in Ozone 8.

So I tested them – or at least, LANDR. Once by myself, and once someone else did it – without my knowledge !

And each time, I concluded that while the results weren’t nearly as bad as you might fear, provided you use the conservative settings, they still weren’t as good as what I could do.

However

Both these tests were non-blind. I didn’t cheat and listen to a LANDR master before doing my own masters, but when listening and comparing I always knew which master was which.

And that means I was open to expectation bias – and so are you, when you’re reading or listening to me talk about them.

So maybe our opinions are influenced by that, and if we didn’t know which was which, we would have made different choices. In fact Graham’s test wasn’t loudness-matched, which could also influence the results. The different versions were close, and in fact mine was a little quieter than the others, which should have been a disadvantage in theory – but you know me: it’s not a valid test if it’s not loudness-matched, as far as I’m concerned.

Just recently though, Kenny Gioia did something different.

Kenny’s Test

Kenny created six different masters of the same song – three by humans, three by machines. And not just any humans – one of the masters was by an unknown engineer at Sterling Sound, one was by John Longley, and the third was by none other than Steven Slate. Steven doesn’t claim to be a mastering engineer, but he certainly knows his way around a studio !

For the machines, Kenny asked members of his Facebook group which services to use, resulting in the choices of LANDR, Aria and Ozone 8.

Kenny then set up a poll, and asked people to listen and vote for the masters they liked best.

And here’s where it gets interesting

First, Kenny made the files anonymous, so that no-one could tell which was which – and second, he loudness-matched them, so that listeners wouldn’t be fooled by the ‘loudness deception‘.

Which means that provided people didn’t look at the waveforms, there was no way to tell which was which, except by listening.
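If you want to run a comparison like this yourself, loudness-matching is straightforward to do offline. A sketch using pyloudnorm – the filenames and the -16 LUFS common level are my placeholder assumptions; any shared level works, as long as it’s low enough that nothing clips once matched:

```python
# Loudness-match several masters to a common level for blind comparison.
import soundfile as sf
import pyloudnorm as pyln

TARGET_LUFS = -16.0  # arbitrary shared level, low enough to avoid clipping

for name in ["master_a.wav", "master_b.wav"]:  # placeholder filenames
    data, rate = sf.read(name)
    loudness = pyln.Meter(rate).integrated_loudness(data)
    matched = pyln.normalize.loudness(data, loudness, TARGET_LUFS)
    sf.write(name.replace(".wav", "_matched.wav"), matched, rate, subtype="FLOAT")
```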

As far as I know, this is the first time a blind, loudness-matched poll like this has been done.

[Edit – we now know it wasn’t nearly blind enough – see above !]

And the results were ~~fascinating~~ interesting

You can see a summary of how they came out in this infographic, illustrated with analysis of the dynamics of each master using my Dynameter plugin, but I wanted to take a little more time to make some extra comments here. First though, the disclaimer:

We need to remember this wasn’t a scientific test, even though it was loudness-matched and [kind of] blind. People could see how other people were voting, which results in a subtle kind of peer pressure. You can download the files and look at the waveforms, or measure them in other ways, so people might have made decisions based on that, rather than the sound alone. And perhaps most importantly of all, people were commenting and discussing what they heard all the while the poll was running – which results in a distinctly un-subtle form of peer pressure bias !

[I now think this effect was the most important factor for the surprisingly big difference in overall votes]

And, this is just one test, with one song. Kenny’s mix already sounded pretty good, and was very dynamically controlled, so different songs might have given very different results.

BUT

The results are still ~~compelling~~ suggestive. We can’t rule out the possibility that they would have been different if the votes and comments had been hidden [they would !] but I suspect these actually just caused the final scores to be more exaggerated than they would otherwise have been, rather than completely changed.

Here are the highlights:

Humans ~~WIN~~ got the most votes

Even though the results were blind, John’s master got 42% of the overall votes. Not only that, but humans scored a massive 83% of the total votes, securing all three top slots. That’s a pretty convincing victory, even if it’s not entirely unexpected.

[True, but not as impressive as it might seem. And perhaps without the effect of the comments on Facebook, the differences between the different human masters would have been much less obvious.]

Dynamics ~~WIN~~ played an important role

John’s winning master was also the most dynamic. Not only that, but the ~~winning~~ robot master with the most votes was also the most dynamic of the automated masters, although the final result was very tight.

And in fact, the only master to break the trend of “dynamic ~~sounds better~~ got more votes” was the Sterling Sound master. This was made back in 2009, when the loudness wars were in full effect, so it’s not all that surprising it was pushed pretty hard – but again the result is quite dramatic: this Sterling master got seven times more votes than the Aria machine master of similar loudness. Which suggests an interesting conclusion: if high loudness is your goal, you’re better off getting it done by a human !

[I now think the results are so biased by the comments that this isn’t a fair conclusion from this poll, although it’s still my opinion.]

Default settings suck

LANDR was the only robot master with decent dynamics, for which I applaud them – but unfortunately the heavy bass EQ of the master came in for a lot of criticism in the comments, which presumably explains why it didn’t score higher.

But elsewhere the results weren’t so positive. Kenny deliberately chose the default settings for all the automated masters, and both Aria and Ozone 8 pushed the loudness to extreme levels by default, which is not only a Bad Thing (in my opinion) but also didn’t achieve a result people liked, either.

Which means I can’t help asking – shouldn’t automated services like LANDR and Aria be offering loudness-matched previews of their output ? Otherwise, isn’t the before & after comparison they offer deeply flawed, and maybe even deliberately misleading ? Hmmm…

ANYWAY, back on topic !

EQ matters

It’s fascinating that dynamics seem to have played such ~~an important~~ [a] part in people’s preferences, given that Kenny’s mix was pretty dense and controlled already – but the other factor is the EQ. Broadly speaking, all the human masters were somewhat brighter than the automated versions. This EQ choice suits the song better, and I suspect this is an important factor in the results – especially since the LUFS loudness matching takes EQ differences into account, as far as possible.

Aria lost

That might seem an unnecessarily blunt conclusion, but I think it’s worth saying because in many other comparisons and conversations I’ve seen, Aria has received great feedback. This may be partly because it’s the only system that uses actual analogue hardware to achieve its results, but I suspect it’s more likely that it simply returns louder masters by default, which sound superficially more impressive.

[Again, I think the comment bias in the results means we can’t draw any conclusions from the details of this poll. Maybe not even for the order of the human masters.

I also want to say that personally I thought the Aria master was the best-sounding of the automated masters overall, even though it was too heavily compressed and limited for my taste.]

That’s why the loudness-matching is so crucial – unmatched comparisons flatter louder masters, but that’s not how most people hear songs for the first time. The files in this test were balanced exactly as they would be if they were uploaded as single songs to TIDAL or Pandora, and in my experience you’d get very similar results on YouTube, Spotify and Apple Radio.

So this is a great real-world representation of how most people will hear songs for the first time. CD sales are falling day on day, and the vast majority of music discovery takes place online. If you want your music to stand out and make a great impression, you need it to work well when it’s loudness-matched. And that means mixing and mastering in the “loudness sweet spot” – with balanced dynamics. To find out the strategy I recommend to achieve this, click here.

Update

Several people have strongly criticised Kenny’s decision to use default settings for the automated mastering services, saying that the humans were told not to master for loudness, so the robots should have been “told” the same thing.

That’s reasonable, and Kenny says he’ll run a new test to address this factor, but I disagree. In my opinion it wouldn’t have significantly changed the outcome of this poll. Here’s why:

  • Two of the human masters were “loud” anyway – in Sterling’s case because it was done years ago, and in Steven’s presumably because he felt it sounded best that way. Even so, people preferred them to the similarly loud automated masters, despite them being less dynamic.
  • LANDR ended up pretty dynamic anyway, but the EQ wasn’t right.
  • The settings Kenny “should” have apparently used for Aria are labelled “classical and light acoustic” (E) and “for very dynamic mixes” (A) in the Help text on the site. This song wasn’t either of those – it’s a heavily compressed rock mix, so Kenny’s choice was reasonable, in my opinion.
  • Finally, “B” is Aria’s default setting – and Aria includes two other presets that are even louder.

So once again – no, this wasn’t a perfect test – but in my opinion the possibility for people to be influenced by other people’s votes and comments is a much more significant criticism than the presets used for the online services.

[And now I know this was this case to a much greater extent than I expected]

Conclusion

At the end of the day, tests like this are just a bit of fun, really. To get a truly definitive answer to the question of which masters people prefer, we would need a truly blind poll, without comments, and multiple tests using many different songs in many different genres, with many more people listening.

But for now, this is the best we have and I’m calling it:

Humans ~~WIN~~ did really well in this poll. Just as ~~they should~~ I want them to !

More info

I deliberately haven’t revealed which master is which in the poll here, in case you want to try the test for yourself. To download the files, click here. To see the poll and join the discussion, click here. (You’ll need to join Kenny’s Facebook group first, to get access.)

And to hear Kenny and me discuss the whole project in even more detail, you might like to listen to the latest episode of my Mastering Show podcast. We also reveal exactly which master is which, and I give my blind comments on the different masters, plus predictions about which is which.

If you’d like to take a listen, click here.

TIDAL upgrade their loudness normalization – and enable it by default
Tue, 14 Nov 2017


Developments in loudness normalization are coming thick and fast, these days – and TIDAL just raised the bar.

Quality has always been one of the major selling-points of TIDAL’s streaming service – it’s still one of the few places lossless streaming is available. And that means they’ve been wanting to enable normalization by default in their players for some time, so that we won’t be blasted by sudden changes in level – a major source of user complaints.
 

But there’s a problem…

…and it also relates to quality.

Most normalization right now is done on a track-by-track basis, meaning all songs are played back with similar loudness. This seems to make sense for shuffle or playlist listening, but it doesn’t work for albums, where it changes the artistic intent.

You spend days, weeks, months crafting the perfect balance for your music, including from song to song – why would you want a computer changing that ? Research shows that only 2% of albums in TIDAL’s catalogue have songs that are all the same loudness, even in the current ‘loudness war’ era. So messing with that balance is something TIDAL really want to avoid.
 

The alternative

The solution to this challenge seems straightforward, and it’s called Album Normalization. Instead of making all songs play with the same loudness, you measure the overall loudness of a whole album, and adjust all the songs by the same amount. The overall level is managed, to prevent “blasting” and improve the user experience, but the artistic intent is preserved.

Simple, right ?

Well… not necessarily. As usual, the devil is in the details. How does the playback software know whether it’s in Album or Shuffle mode ? What should happen in Playlist mode ? And what happens when you switch between them ? If the user starts listening to a song in Shuffle mode with “Track” loudness, but then chooses to listen to the rest of the album, the next track would have to be played at “Album” loudness, which breaks the loudness sequence… Apple have had album-style normalization for some time, but it still has some rough edges and bugs, especially on mobile.

And even at a more basic level, users want things to be simple. The more options, the more potential for confusion. Spotify’s normalization has been in place for years, but many people still aren’t clear on exactly how it works.
 

TIDAL’s research

TIDAL’s approach to this challenge was refreshingly simple – they asked an expert to research the best solution. That expert was Eelco Grimm, one of the original architects of the loudness unit measurement system, and a fellow member of the Music Loudness Alliance.

Eelco’s research was exhaustive and fascinating – you can hear all about it in my interview with him on the latest episode of The Mastering Show podcast in the player above, or read his findings in full on his website.

But here are the highlights:
 

Users prefer Album Normalization – EVEN in shuffle mode

This is the big one. Eelco analysed TIDAL’s database of over 4.2 million albums (!) and found examples with the biggest difference in loudness between the loudest and quietest songs. These are the albums whose dynamic structure will be changed most significantly by Track normalization, but would also presumably sound the most uneven when listened to in Shuffle mode.

Eelco built two randomly shuffled playlists containing examples of these loud & soft songs from 12 albums, with 7-10 dB of difference between the loud and soft examples. And he sent the playlists to 38 test subjects, who listened to them blind, and reported back on which ones they preferred.

I was one of those test subjects, and what I heard surprised me. The difference between the playlists was easy to hear. Album mode worked pretty well, but with Track Normalization, the songs didn’t sound equally loud ! Most would be OK, but then you’d suddenly have a song that is supposed to sound “loud” which felt too quiet, or a “quiet” song that sounded too loud. Album Normalization sounded better to me – more natural, more effective, more satisfying – even in shuffle mode.

And it wasn’t just me – 71% of the test subjects voted blind for Album Normalization, with a further 10% saying they would prefer this method by default. That’s over 80% of people preferring Album Normalization, all the time. Even when listening to Playlists, or with Shuffle enabled.

And with the benefit of hindsight, it’s not hard to see why. These albums were all mastered with care, meaning the relative levels of the songs worked on a musical and emotional level. If they worked in the context of the original album, why wouldn’t they work in shuffle as well, once all the albums were playing at a similar loudness ?

That leads us to another interesting finding, though.
 

Normalizing to the loudest song on an album sounds better than using the average loudness

Apple and Spotify both use the average loudness of each album for their Album Normalization, but Eelco recommended that TIDAL normalize to the loudest song of each album instead. Again, the reasoning behind this is straightforward.

Imagine an album with many soft songs and just one loud song, in comparison to one where all the songs are loud. If the overall loudness of these albums is matched, the loudest song on the album with “mostly quiet” songs will end up playing louder than the songs on the “all loud” album ! This doesn’t work artistically, and also opens the door for people to “game” the system and try to get some songs louder than most others. In contrast, matching the loudest songs on each album and scaling everything else by the same amount plugs this loophole, and keeps the listening experience consistent for the user.

(In fact, it’s exactly the strategy I use myself, when making loudness decisions in mastering, too.)
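In code, the difference between the two album modes is just a max() in place of an average. A sketch of the “loudest track” logic Eelco recommended – the track loudness values are assumed to be measured already, and the -14 LUFS reference plus the never-turn-up clamp are my simplifications:

```python
# Album normalization anchored to the loudest track: one gain for the
# whole album, so the internal song-to-song balance is preserved.
REFERENCE_LUFS = -14.0

def album_gain_db(track_loudnesses: list[float]) -> float:
    loudest = max(track_loudnesses)            # anchor on the loudest song
    return min(0.0, REFERENCE_LUFS - loudest)  # simplification: never turn up

# A mostly-quiet album with one loud song gets the same gain everywhere,
# so that one song can't be used to "game" the system.
print(album_gain_db([-18.0, -17.5, -9.0]))  # -5.0 dB, set by the -9 LUFS track
```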

There were plenty of other interesting findings in Eelco’s research, too – we go into them in the podcast and I recommend you take a listen, even if you’re not that interested in normalization. But right now I want to move on to some…
 

Great news

It happens all the time: The Company has a problem. The Company commissions research. The research comes back, and tells The Company something unexpected, or unwelcome. The Company ignores the research.

But not TIDAL. Not only did they accept the findings of Eelco’s research in full, they paid attention and implemented his recommendations. And we learned yesterday that their new loudness normalisation method is live now – by default, in every new install of their player application on iOS or Android devices. All the time – even in Shuffle mode – and they’re working on the same system in their desktop application, too.

And that’s huge. It means Apple is now the only major streaming service not to have normalization enabled by default – apart from SoundCloud and Google Play, neither of which offer normalization yet.

And not only that, but it’s a significant upgrade in comparison to the normalization used everywhere else. By using the “loudest song method” of Album Normalization to balance albums against each other, TIDAL have ensured not only that their normalization can’t be “gamed”, and the artistic intentions of the artists are preserved, but also that their overall loudness level will comply with the AES streaming loudness recommendations.
 

So what ?

The momentum is building all the time. We saw the most recent signs that streaming services are really taking normalization issues seriously when Spotify reduced their playback reference level to be more in line with Apple, Pandora and YouTube earlier this year, and I’m confident the same thing will happen with these improvements by TIDAL.

After all, it’s a win-win. Using Album Normalization to the loudest song (@ -14 LUFS) gives a better user experience, is simpler and easier to understand, and is preferred by over 80% of users ! What’s not to like ?

These changes are simple, but profound. Most importantly, they overcome a major (and real) objection to normalization in general – that it shouldn’t disturb the delicate balance between songs. I’ve often heard people say “I don’t want the loudness of my songs changed”, and now it won’t be – except to keep a consistent maximum playback level from album to album.

All the streaming services care deeply about music, and high quality – despite the cynicism I sometimes see – and I’m confident they will all adopt Eelco’s recommendations in the near future.

And personally, I can’t wait.
 
 

YouTube Stats For Nerds – EXACT volume normalization values revealed, and how to find them
Fri, 29 Sep 2017

Are you confused about exactly what YouTube’s playback volume normalization is doing to your music ?

Maybe you understand the basic idea but struggle to predict exactly what will happen when videos are uploaded ?

Well, that’s understandable – the procedure is still inconsistent and unpredictable. Some songs are measured and normalized right away, others take weeks, some never seem to be normalized at all.

But it is happening – and YouTube just added an important new feature which can really help you get a grip on the process.

You can now see exactly what effect the system is having on your audio, because YouTube have exposed the normalization data in their interface. You just need to know where to find it – and what it means.

(Thanks to Paul Maunder for the heads-up !)

To see it for yourself, right-click on any YouTube video and select the “Stats for nerds” option.

 (Yes, this means that you are now a nerd 🙂 )

The fourth item down in the list will say something like:

Volume / Normalized:  100% / 54% (content loudness 5.3 dB)



The first percentage describes the Volume slider setting in the YouTube player window, and can be adjusted by clicking on the “speaker” icon and dragging the slider up or down.

The second percentage reflects the normalization adjustment being used. This is the amount by which the playback volume of the clip has been turned down to prevent users being blasted by sudden changes in volume in comparison to everything else. The value scales in proportion with the Volume slider setting.

So for example, if the normalization percentage reads 60% when the Volume slider is at 100%, it will scale down to 30% if you move the Volume slider to 50%. This means that if you want to use these stats to compare songs with each other, you should always set the Volume slider to 100% first.

The final value is the “content loudness” value, and indicates the difference between YouTube’s estimate of the loudness and their reference playback level. This value is fixed for each clip, and isn’t affected by the Volume slider.

So for example a reading of 6dB means your video is 6dB louder than YouTube’s reference level, and a 50% normalization adjustment (-6dB) will be applied to compensate. Whereas a negative reading of -3dB, say, means it’s 3 dB lower in level than YouTube’s reference, and no normalization will be applied, so the normalization percentage will always be 100% of the Volume slider’s value – YouTube doesn’t turn up quieter videos.

(Important note – I’ve seen the way these values are reported change several times over the last couple of weeks. YouTube are obviously still working on this feature, so it may change again, and I’ll try to keep this post updated if they do.)
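The percentage and the dB figure are the same number in different clothes – the percentage is just the linear equivalent of the content loudness reading, when it’s positive. A quick sketch of the conversion, checked against the example above:

```python
def content_loudness_to_pct(db: float) -> float:
    """Normalization percentage shown when the Volume slider is at 100%."""
    return 100 * 10 ** (-max(0.0, db) / 20)  # quieter than reference: stays 100%

print(round(content_loudness_to_pct(5.3)))   # 54 -> matches "100% / 54%" above
print(round(content_loudness_to_pct(-3.0)))  # 100 -> no turn-down applied
```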

So what ?

Firstly, these “Stats for nerds” give you a quick and easy way to check whether your video has been normalized yet. If there’s no “content loudness” value listed, the video hasn’t yet been normalized, and the second value will always be the same as the Volume slider percentage – the song will be played as loud as the Volume slider allows.

(This happens more often than you might expect – for example normalization seems to have been “on hold” in August and early September 2017 – but more recent uploads have already been measured. It also answers a very common question – yes, adverts are being normalized – or at least, they are right now.)

Secondly, if there is a “content loudness” value listed, then your video is being normalized, and you can see exactly how much by setting the Volume slider to its 100% maximum, and checking the normalization percentage value.

So in the image above, for example, the Metallica song is being turned down to only 54% of its original volume (-5.3 dB), and Taylor Swift's "Shake It Off" is also being turned down by a substantial 4.6 dB.

Whereas the final video in the image is a song that I mastered myself recently – a trance/techno track called “Vi er GodsetGutta” by B Killax – and because YouTube measure it as being 0.7 dB quieter than their desired reference level, it always gets played as loud as the Volume slider setting allows.

Thirdly, it means that if you want your music to stand out in comparison to everything else, you want to avoid large positive or negative “content loudness” values – you need to optimise loudness, not maximize it.

The great news is that when you do this, your music will actually "pop" more than other songs, in my experience. For example the song I mastered has more punch and impact than the other two, in my opinion, especially in the low end – despite having been mastered at a lower level. Which of course is exactly what you would expect, because it has better micro-dynamics. To see if you agree with me, take a listen to the playlist here.

How do we use this ?

Apart from being interesting, the fact that YouTube have made this information visible means that you can test the effects of normalization yourself. Simply upload a song, wait for it to be normalized, and check the stats.

And then you can tweak, re-upload and test again, if you like – to try and get an even better result.

But here’s the thing. My advice is:

Don’t bother.

The best way to optimize loudness on YouTube

By all means check out the Stats For Nerds for your songs, and see how they compare with other similar tracks – and of course, how they sound.

But getting drawn into a cycle of uploading, testing and re-uploading over and over isn’t an efficient way to work, in my opinion. For one thing, it’s really tedious !

And more importantly, at the rate YouTube are releasing updates to their normalization system, there’s no guarantee that what works today will still work tomorrow – or next month, or next year.

It’s far better to aim for a result in mastering that you can be confident will result in minimal normalization changes to your audio, and therefore maximize both the playback volume, and the punch and impact of the music.

That’s the method I used to master “Vi er GodsetGutta” – and every other song I work on, for that matter.

All the examples of my masters that I've found on YouTube are being played with no volume reduction from normalization, and are assessed as being within 1 dB of YouTube's reference level. And the method works on all the online platforms, not just YouTube.

It’s a simple method, and straightforward to implement – and I explained it in a blog post a few days ago. (Hint: it’s not about aiming for -14 LUFS !) To find out how it works, click here.

And meanwhile, try not to spend too long worrying about the Stats For Nerds, and focus on making great-sounding music instead 🙂

 
 

How loud ? The simple solution to optimizing playback volume online – and everywhere else

I get asked this question literally every day, now.

And I see people asking it, everywhere:

“What’s the ideal loudness for my music to get the best playback volume online ?”

Because people have realized that loudness normalization is a reality. They know that loud songs are turned down to stop users being blasted by sudden changes in volume – and they’ve probably heard some numbers: -13 LUFS for YouTube, -16 for iTunes and Pandora, -14 for Spotify and TIDAL… but which one should you choose ? Is there a perfect number, or do you have to submit different masters for every platform ?

In this post I’ll answer that question, simply and clearly.

(If you’re impatient, feel free to skip to the end – but please come back and read this explanation afterwards, too !)

Before that though, it’s important to realise – asking this question misses three key points.

The first is:

1 – There are no ideal loudness values – just guidelines you can follow

Because although all the streaming services are measuring loudness and turning loud songs down, they all do it in different ways. They don't all use LUFS loudness units, and they've all chosen slightly different reference levels.

So you can’t choose an ideal loudness that suits all platforms, because there isn’t one.

But the good news is – you don’t need to.

The whole point about loudness normalization is that each streaming service will measure the loudness, and adjust the playback volume according to their rules.

So you can make your music as loud as you like, if you want to – it just might get turned down. And that’s OK, because so does everything else.

Which means targeting a specific integrated loudness is a red herring. Lots of people are asking if they should aim for an integrated loudness of -14 LUFS, for example – because that’s the volume TIDAL uses, and Spotify recently reduced their level to something similar (although they don’t use LUFS to make their measurements, so this is only an approximate value). Plus -14 is only a dB quieter than YouTube’s approximate level of -13 LUFS, and 2 dB louder than Apple Sound Check… so all in all it seems like a pretty good value to have in mind.

But that brings us to the second key point I mentioned:

2 – Integrated loudness isn't the best way to make loudness choices

Here’s what I mean.

Integrated loudness is an overall value for a song, album or any section of audio.

Just one number.

It does take account of the loudest moments, and the quietest – but you can’t tell what they were, just by looking at the number.

Imagine two songs, balanced by ear. One of them could be straight-ahead rock, with almost the same short-term loudness all the way through, hovering around -14 LUFS – so that's what the integrated loudness for the whole song will read. And now imagine a more varied song – still heavy, but with a quiet introduction and more mellow verses. These quieter sections will reduce the overall integrated loudness reading – down to -16 LUFS, perhaps.

So far so good – you can't tell by looking at the integrated loudness whether you have two "loud all the way through" songs, or one loud song and one with more varied dynamics – but so what ? You matched them by ear, and when you play them back one after the other, they sound great. The loud sections of both are at similar levels, and the quieter sections work for the more varied song – who cares if they measure slightly differently ?

The problems start when you turn this process the other way around.

Rather than measuring the songs, you want to choose how loud they should be.

If you use your ears again, you'll be fine – but that's not what people are asking me about. If you just follow the numbers and make both songs measure -14 LUFS, for example, the more varied song will sound 2 LU too loud compared to what you would have chosen by ear. The integrated LUFS value tells you nothing about the dynamic variety in the song. In other words, the integrated loudness that feels musically right changes depending on the song – and genre, and arrangement… and everything else.
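If you want to see this effect in actual numbers, here's a toy experiment using the third-party pyloudnorm library (a BS.1770 loudness meter for Python). Sticking a quieter section in front of the same loud material pulls the integrated reading down by a couple of LU, even though the loud part hasn't changed at all – the signals are plain sine waves, purely for illustration:

    import numpy as np
    import pyloudnorm as pyln

    rate = 48000
    t = np.arange(rate * 20) / rate
    loud = 0.5 * np.sin(2 * np.pi * 440 * t)   # 20 s of "loud all the way through"
    quiet = 0.2 * np.sin(2 * np.pi * 440 * t)  # 20 s roughly 8 dB quieter

    meter = pyln.Meter(rate)  # ITU-R BS.1770 integrated loudness meter
    print(meter.integrated_loudness(loud))                           # just one number...
    print(meter.integrated_loudness(np.concatenate([quiet, loud])))  # ...roughly 2 LU lower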

Don’t worry, there is a solution to this – but before I get to it I just want to highlight the third, simplest and probably most important point in all of this:

3 – Loudness is an artistic decision

You probably already guessed this one – loudness shouldn’t be about the numbers.

And neither should any other property of music, of course. Numbers are helpful as a sanity-check, and for training our ears. But that doesn’t mean you should choose the EQ balance or how loud to master a song based purely on measurements – in an ideal world you just choose what sounds best.

And the great news is that we’re headed in that direction ! Since loudness levels are being adjusted on playback, you’re free to make that choice based on what’s right for the music, and not have to worry that someone else will “cheat” and try to make theirs sound better just by making it louder – that won’t work.

(Up to a point – see the very end of this post…)

Just tell us the numbers !

OK, I said I’d answer the “how loud” question simply and clearly – and I will.

But from what’s written above you’ll have gathered by now that I’m not going to be recommending any of the LUFS numbers suggested above – or any integrated loudness.

Instead, my recommendation uses short-term loudness values, and it’s this:

Master no louder than -9 LUFS short-term at the loudest moments
(with True Peaks no higher than -1)

That’s it.

If you follow this suggestion, you'll be in great shape, in almost any genre. Your songs will be loud enough to sound "competitive", whilst still retaining plenty of punch and dynamic contrast. They'll stand shoulder to shoulder with anything else, on all the streaming platforms, and they won't get turned down. (*)

(*) Actually they might get turned down a little, but it’s not the end of the world – because so will almost everything else.

OK, now explain how the numbers work !

This suggestion is based on over 20 years of my experience as a professional mastering engineer, on conversations with other mastering engineers, on analysis of my favourite-sounding albums, and on teaching an online course to over 1000 students who’ve also had great results.

The theory is simple – make all the loudest moments similar in loudness, and not too loud – and then balance everything else with them musically.

It just works ! It avoids the problem of using integrated loudness as a target, where you get lower values for music with more dynamic variety, even if the loudest moments are just as loud. But it still gives you a useful benchmark – something to aim for. There can be occasional louder moments, if they work musically, and of course you can go quieter if you want to – always make decisions based on musical considerations, not just the numbers – but this is the simplest and best guideline I can give you.

And in fact when I follow this rule, in most popular genres the integrated loudness often comes out in the -12 to -14 LUFS range – bang in the sweet spot for all the online streaming platforms…
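And if you'd like to sanity-check a master against this guideline without uploading anything, here's a rough sketch – again leaning on the third-party pyloudnorm library, plus numpy and scipy. It approximates short-term loudness by running a BS.1770 meter over sliding 3-second windows, and true peak by 4x oversampling. Treat both as ballpark figures, not a substitute for a proper metering plugin:

    import numpy as np
    import pyloudnorm as pyln
    from scipy.signal import resample_poly

    def max_short_term_lufs(audio, rate, window_s=3.0, hop_s=1.0):
        # Approximate short-term loudness: meter each sliding 3-second window.
        meter = pyln.Meter(rate)
        win, hop = int(window_s * rate), int(hop_s * rate)
        return max(meter.integrated_loudness(audio[i:i + win])
                   for i in range(0, len(audio) - win + 1, hop))

    def true_peak_dbtp(audio, oversample=4):
        # Approximate true peak: oversample 4x, then take the sample peak.
        upsampled = resample_poly(audio, oversample, 1)
        return 20 * np.log10(np.max(np.abs(upsampled)))

    # The guideline: max_short_term_lufs(audio, rate) <= -9
    #                and true_peak_dbtp(audio) <= -1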

Optimize, don’t maximize – seize the opportunity of dynamics

Maximising loudness doesn't work any more. Aiming for a specific integrated loudness doesn't work reliably.

But deciding how loud to master the loudest sections of music, keeping them consistent and balancing everything else to feel right musically does work – and it helps you optimize the loudness of your music, making the most of the peak headroom the online streaming services make available.

This is a fantastic opportunity – a true win-win ! You can make the best decisions for your music based on the music itself – and feel confident that it will sound great online, and everywhere else.

(Because these guidelines not only work online, they’re how I’ve been optimizing loudness and dynamics for years, even on CD. Guess what – listeners adjust playback levels, too !)

Make your loudness decisions based on the way the music sounds, rather than arbitrary numbers – but keep an eye on the guidelines, even so.

Coda – The devilish details

The method described above works, but there are a couple of extra details to be aware of.

Firstly, all the streaming services turn louder music down, but not all of them turn quieter music up – YouTube and TIDAL, for example, don't. And the ones that do turn quieter songs up will try to avoid causing peak clipping as a result, either by restricting how far levels can be lifted (iTunes) or by using a peak limiter (Spotify).
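As a purely illustrative toy model – the policy names, reference level and logic below are my assumptions for the sake of the example, not any platform's actual code – the three behaviours might look something like this:

    def playback_gain_db(loudness_lufs, true_peak_dbtp,
                         ref_lufs=-14.0, policy="no_turn_up"):
        gain = ref_lufs - loudness_lufs     # positive = track is quieter than ref
        if gain <= 0:
            return gain                      # every service turns loud tracks down
        if policy == "no_turn_up":           # e.g. YouTube, TIDAL
            return 0.0
        if policy == "limited_turn_up":      # e.g. iTunes: lift only into peak headroom
            return min(gain, -true_peak_dbtp)
        if policy == "limiter_turn_up":      # e.g. Spotify: lift fully, limit any overs
            return gain
        raise ValueError(policy)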

What does that mean ? If you master your music very quietly, it may not sound as loud as other similar songs. That might not bother you, but if it does, it’s worth keeping an eye on. It’s one of the reasons I developed my Dynameter plugin, which visualizes the dynamics of your music in realtime, to help you optimise it for maximum dynamic impact and compatibility online. I use it on every master I do, these days. For more information, click here.

And secondly, it may sound obvious, but loudness isn’t everything ! Not by a long shot.

To sound great, you still need a great song, great performance, great arrangement, great mix, balanced EQ and dynamics… but that’s what keeps all of this interesting, right ?!?
 
 
