Production Advice – http://productionadvice.co.uk – make your music sound great

Humans versus Robot Mastering: Updated
http://productionadvice.co.uk/humans-versus-robots/ – Fri, 01 Dec 2017

IMPORTANT UPDATE:

I messed up.

In the original post and graphic below, I said that the results were almost certainly influenced by the comments on Facebook. But I had no idea by how much.

Since then, Kenny has run another poll, using a different voting system that allows us to see the way votes are cast over time, and we can see a clear and very strong bias introduced into the results by people’s comments on the Facebook thread.

That means the same thing will have happened in the original poll, and I commented there myself, too – meaning I have to accept that I was part of the problem.

In a nutshell, every time someone posts their preference in a Facebook comment, there’s a corresponding boost in votes for their favourite master. In the second poll, the votes for one of the masters trebled in just a few hours after one of the comments, taking it from second place to a commanding lead.

This new information means we really can’t take the results too seriously. My original title for this post was “Humans versus robots: Humans WIN – by a huge margin”. I should have known better. That conclusion isn’t valid, and I let my passion for dynamics (and humans!) get the better of me.

It doesn’t invalidate the poll completely – after all, not everyone will have been influenced by the comments, and if a particular master prompts people to make comments in the first place and people then agree with it, that also tells us something.

But my headline and conclusion in the original version of this post were over the top, and I’ve decided to edit it. I considered removing it completely, but I think it’s a good example of how confirmation bias can influence us all, even when we think we’re immune to it !

So, with all that being said, here’s the original post, with visible strikethrough and comments in italics:
 
 
People are always asking me what I think of “automated mastering” – services like LANDR and Aria, for example – or “intelligent” mastering plugins like the mastering assistant in Ozone 8.

So I tested them – or at least, LANDR. Once by myself, and once someone else did it – without my knowledge !

And each time, I concluded that while the results weren’t nearly as bad as you might fear, provided you use the conservative settings, they still weren’t as good as what I could do.

However

Both these tests were non-blind. I didn’t cheat and listen to a LANDR master before doing my own masters, but when listening and comparing I always knew which master was which.

And that means I was open to expectation bias – and so are you, when you’re reading or listening to me talk about them.

So maybe our opinions are influenced by that, and if we didn’t know which was which, we would have made different choices. In fact Graham’s test wasn’t loudness-matched, which could also influence the results. The different versions were close, and in fact mine was a little quieter than the others, which should have been a disadvantage in theory – but you know me: it’s not a valid test if it’s not loudness-matched, as far as I’m concerned.

Just recently though, Kenny Gioia did something different.

Kenny’s Test

Kenny created six different masters of the same song – three by humans, three by machines. And not just any humans – one of the masters was by an unknown engineer at Sterling Sound, one was by John Longley, and the third was by none other than Steven Slate. Steven doesn’t claim to be a mastering engineer, but he certainly knows his way around a studio !

For the machines, Kenny asked members of his Facebook group which services to use, resulting in the choices of LANDR, Aria and Ozone 8.

Kenny then set up a poll, and asked people to listen and vote for the masters they liked best.

And here’s where it gets interesting

First, Kenny made the files anonymous, so that no-one could tell which was which – and second, he loudness-matched them, so that listeners wouldn’t be fooled by the ‘loudness deception‘.

Which means that provided people didn’t look at the waveforms, there was no way to tell which was which, except by listening.

As far as I know, this is the first time a blind, loudness-matched poll like this has been done.

[Edit – we now know it wasn’t nearly blind enough – see above !]

And the results were ~~fascinating~~ interesting

You can see a summary of how they came out in this infographic, illustrated with analysis of the dynamics of each master using my Dynameter plugin, but I wanted to take a little more time to make some extra comments here. First though, the disclaimer:

We need to remember this wasn’t a scientific test, even though it was loudness-matched and [kind of] blind. People could see how other people were voting, which results in a subtle kind of peer pressure. You can download the files and look at the waveforms, or measure them in other ways, so people might have made decisions based on that, rather than the sound alone. And perhaps most importantly of all, people were commenting and discussing what they heard all the while the poll was running – which results in a distinctly un-subtle form of peer pressure bias !

[I now think this effect was the most important factor for the surprisingly big difference in overall votes]

And, this is just one test, with one song. Kenny’s mix already sounded pretty good, and was very dynamically controlled, so different songs might have given very different results.

BUT

The results are still ~~compelling~~ suggestive. We can’t rule out the possibility that they would have been different if the votes and comments had been hidden [they would !] but I suspect these actually just caused the final scores to be more exaggerated than they would otherwise have been, rather than completely changed.

Here are the highlights:

Humans ~~WIN~~ got the most votes

Even though the results were blind, John’s master got 42% of the overall votes. Not only that, but humans scored a massive 83% of the total votes, securing all three top slots. That’s a pretty convincing victory, even if it’s not entirely unexpected.

[True, but not as impressive as it might seem. And perhaps without the effect of the comments on Facebook, the differences between the different human masters would have been much less obvious.]

Dynamics ~~WIN~~ played an important role

John’s winning master was also the most dynamic. Not only that, but the ~~winning~~ robot master with the most votes was also the most dynamic of the automated masters, although the final result was very tight.

And in fact, the only master to break the trend of “dynamic ~~sounds better~~ got more votes” was the Sterling Sound master. This was made back in 2009, when the loudness wars were in full effect, so it’s not all that surprising it was pushed pretty hard. But again the result is quite dramatic – this Sterling master got seven times more votes than the Aria machine master of similar loudness, which suggests an interesting conclusion: if high loudness is your goal, you’re better off getting it done by a human !

[I now think the results are so biased by the comments that this isn’t a fair conclusion from this poll, although it’s still my opinion.]

Default settings suck

LANDR was the only robot master with decent dynamics, for which I applaud them – but unfortunately the heavy bass EQ of the master came in for a lot of criticism in the comments, which presumably explains why it didn’t score higher.

But elsewhere the results weren’t so positive. Kenny deliberately chose the default settings for all the automated masters, and both Aria and Ozone 8 pushed the loudness to extreme levels by default, which is not only a Bad Thing (in my opinion) but also didn’t achieve a result people liked, either.

Which means I can’t help asking – shouldn’t automated services like LANDR and Aria be offering loudness-matched previews of their output ? Otherwise, isn’t the before & after comparison they offer deeply flawed, and maybe even deliberately misleading ? Hmmm…

ANYWAY, back on topic !

EQ matters

It’s fascinating that dynamics seem to have played ~~such an important~~ [a] part in people’s preferences, given that Kenny’s mix was pretty dense and controlled already – but the other factor is the EQ. Broadly speaking, all the human masters were somewhat brighter than the automated versions. This EQ choice suits the song better, and I suspect this is an important factor in the results – especially since the LUFS loudness matching takes EQ differences into account, as far as possible.

Aria lost

That might seem an unnecessarily blunt conclusion, but I think it’s worth saying because in many other comparisons and conversations I’ve seen, Aria has received great feedback. This may be partly because it’s the only system that uses actual analogue hardware to achieve its results, but I suspect it’s more likely that it simply returns louder masters by default, which sound superficially more impressive.

[Again, I think the comment bias in the results means we can’t draw any conclusions from the details of this poll. Maybe not even for the order of the human masters.

I also want to say that personally I thought the Aria master was the best-sounding of the automated masters overall, even though it was too heavily compressed and limited for my taste.]

That’s why the loudness-matching is so crucial – that superficial loudness advantage isn’t how most people hear songs for the first time any more. The files in this test were balanced exactly as they would be if they were uploaded as single songs to TIDAL or Pandora, and in my experience you’d get very similar results on YouTube, Spotify and Apple Radio.

So this is a great real-world representation of how most people will hear songs for the first time. CD sales are falling day on day, and the vast majority of music discovery takes place online. If you want your music to stand out and make a great impression, you need it to work well when it’s loudness-matched. And that means mixing and mastering in the “loudness sweet spot” – with balanced dynamics. To find out the strategy I recommend to achieve this, click here.

Update

Several people have strongly criticised Kenny’s decision to use default settings for the automated mastering services, saying that the humans were told not to master for loudness, so the robots should have been “told” the same thing.

That’s a reasonable criticism, and Kenny says he’ll run a new test to address this factor – but I don’t believe it would have significantly changed the outcome of this poll. Here’s why:

  • Two of the human masters were “loud” anyway – in Sterling’s case because it was done years ago, and in Steven’s presumably because he felt it sounded best that way. Even so, people preferred them to the similarly loud automated masters, despite their reduced dynamics.
  • LANDR ended up pretty dynamic anyway, but the EQ wasn’t right.
  • The settings Kenny “should” have apparently used for Aria are labelled “classical and light acoustic” (E) and “for very dynamic mixes” (A) in the Help text on the site. This song wasn’t either of those – it’s a heavily compressed rock mix, so Kenny’s choice was reasonable, in my opinion.
  • Finally, “B” is Aria’s default setting – and the service includes two other presets that are even louder.

So once again – no, this wasn’t a perfect test – but in my opinion the possibility for people to be influenced by other people’s votes and comments is a much more significant criticism than the presets used for the online services.

[And now I know this was the case to a much greater extent than I expected]

Conclusion

At the end of the day, tests like this are just a bit of fun, really. To get a truly definitive answer to the question of which masters people prefer, we would need a truly blind poll, without comments, and multiple tests using many different songs in many different genres, with many more people listening.

But for now, this is the best we have and I’m calling it:

Humans ~~WIN~~ did really well in this poll. Just as ~~they should~~ I want them to !

More info

I deliberately haven’t revealed which master is which in the poll here, in case you want to try the test for yourself. To download the files, click here. To see the poll and join the discussion, click here. (You’ll need to join Kenny’s Facebook group first, to get access.)

And to hear Kenny and me discuss the whole project in even more detail, you might like to listen to the latest episode of my Mastering Show podcast. We also reveal exactly which master is which, and I give my blind comments on the different masters, plus predictions about which is which.

If you’d like to take a listen, click here.

TIDAL upgrade their loudness normalization – and enable it by default
http://productionadvice.co.uk/tidal-normalization-upgrade/ – Tue, 14 Nov 2017


Developments in loudness normalization are coming thick and fast, these days – and TIDAL just raised the bar.

Quality has always been one of the major selling-points of TIDAL’s streaming service – it’s still one of the few places where lossless streaming is available. And that means they’ve been wanting to enable normalization by default in their players for some time, so that we won’t be blasted by sudden changes in level – a major source of user complaints.
 

But there’s a problem…

…and it also relates to quality.

Most normalization right now is done on a track-by-track basis, meaning all songs are played back with similar loudness. This seems to make sense for shuffle or playlist listening, but it doesn’t work for albums, where it changes the artistic intent.

You spend days, weeks, months crafting the perfect balance for your music, including from song to song – why would you want a computer changing that ? Research shows that only 2% of albums in TIDAL’s catalogue have songs that are all the same loudness, even in the current ‘loudness war’ era. So messing with that balance is something TIDAL really want to avoid.
 

The alternative

The solution to this challenge seems straightforward, and it’s called Album Normalization. Instead of making all songs play with the same loudness, you measure the overall loudness of a whole album, and adjust all the songs by the same amount. The overall level is managed, to prevent “blasting” and improve the user experience, but the artistic intent is preserved.
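To make the idea concrete, here’s a minimal sketch in Python. The -14 LUFS target and the equal-length-track simplification are mine for illustration – this is not TIDAL’s actual implementation:

```python
import math

TARGET_LUFS = -14.0  # hypothetical playback reference level, for illustration

def track_gains(track_lufs):
    """Track normalization: every song gets its own gain, so all songs play
    equally loud -- and the album's internal balance is lost."""
    return [TARGET_LUFS - l for l in track_lufs]

def album_gain(track_lufs):
    """Album normalization (simplified): ONE gain for the whole album.
    Approximates the album's overall loudness by power-averaging the track
    values -- assumes equal-length tracks; a real meter measures the whole
    programme with gating."""
    mean_power = sum(10 ** (l / 10) for l in track_lufs) / len(track_lufs)
    return TARGET_LUFS - 10 * math.log10(mean_power)

# A dynamic album: a quiet ballad at -18 LUFS and a loud closer at -9 LUFS
album = [-18.0, -9.0]
print(track_gains(album))  # [4.0, -5.0] -> the 9 dB of contrast is erased
print(album_gain(album))   # ~ -2.5 -> both tracks shifted equally, contrast kept
```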

Simple, right ?

Well… not necessarily. As usual, the devil is in the details. How does the playback software detect the difference between Album and Shuffle modes ? What should happen in Playlist mode ? And what happens when you switch between them ? If the user starts listening to a song in Shuffle mode with “Track” loudness, but then chooses to listen to the rest of the album, the next track would have to be played at “Album” loudness, which breaks the loudness sequence… Apple have had album-style normalization for some time, but it still has some rough edges and bugs, especially on mobile.

And even at a more basic level, users want things to be simple. The more options, the more potential for confusion. Spotify’s normalization has been in place for years, but many people still aren’t clear on exactly how it works.
 

TIDAL’s research

TIDAL’s approach to this challenge was refreshingly simple – they asked an expert to research the best solution. That expert was Eelco Grimm, one of the original architects of the loudness unit measurement system, and a fellow member of the Music Loudness Alliance.

Eelco’s research was exhaustive and fascinating – you can hear all about it in my interview with him on the latest episode of The Mastering Show podcast in the player above, or read his findings in full on his website.

But here are the highlights:
 

Users prefer Album Normalization – EVEN in shuffle mode

This is the big one. Eelco analysed TIDAL’s database of over 4.2 million albums (!) and found examples with the biggest difference in loudness between the loudest and quietest songs. These are the albums whose dynamic structure will be changed most significantly by Track normalization, but would also presumably sound the most uneven when listened to in Shuffle mode.

Eelco built two random shuffled playlists, containing examples of these loud & soft songs, from 12 albums, with 7-10 dB of difference between the loud and soft examples. And he sent the playlists to 38 test subjects, who listened to them blind, and reported back on which ones they preferred.

I was one of those test subjects, and what I heard surprised me. The difference between the playlists was easy to hear. Album mode worked pretty well, but with Track Normalization, the songs didn’t sound equally loud ! Most would be OK, but then you’d suddenly have a song that is supposed to sound “loud” which felt too quiet, or a “quiet” song that sounded too loud. Album Normalization sounded better to me – more natural, more effective, more satisfying – even in shuffle mode.

And it wasn’t just me – 71% of the test subjects voted blind for Album Normalization, with a further 10% saying they would prefer this method by default. That’s over 80% of people preferring Album Normalization, all the time. Even when listening to Playlists, or with Shuffle enabled.

And with the benefit of hindsight, it’s not hard to see why. These albums were all mastered with care, meaning the relative levels of the songs worked on a musical and emotional level. If they worked in the context of the original album, why wouldn’t they work in shuffle as well, once all the albums were playing at a similar loudness ?

That leads us to another interesting finding, though.
 

Normalizing to the loudest song on an album sounds better than using the average loudness

Apple and Spotify both use the average loudness of each album for their Album Normalization, but Eelco recommended that TIDAL normalize to the loudest song of each album instead. Again, the reasoning behind this is straightforward.

Imagine an album with many soft songs and just one loud song, in comparison to one where all the songs are loud. If the overall loudness of these albums is matched, the loudest song on the album with “mostly quiet” songs will end up playing louder than the songs on the “all loud” album ! This doesn’t work artistically, and also opens the door for people to “game” the system and try to get some songs louder than most others. In contrast, matching the loudest songs on each album and scaling everything else by the same amount plugs this loophole, and keeps the listening experience consistent for the user.
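Continuing the earlier sketch (again with made-up numbers and the same hypothetical -14 LUFS target), the difference between the two anchoring strategies is essentially a one-line change:

```python
TARGET_LUFS = -14.0  # same hypothetical reference level as before

def album_gain_loudest(track_lufs):
    """Loudest-song anchoring (the approach recommended to TIDAL, sketched):
    shift every track by the gain that brings the album's LOUDEST song to
    the target, instead of using the album's average loudness."""
    return TARGET_LUFS - max(track_lufs)

mostly_quiet = [-20.0, -19.0, -18.0, -9.0]  # one loud song among quiet ones
all_loud     = [-9.0, -9.5, -9.0]           # every song loud

# With average-loudness anchoring, the lone loud song would end up playing
# roughly 4-5 dB louder than anything on the "all loud" album. Anchoring to
# the loudest song plays the loud songs of both albums at the same level:
print(album_gain_loudest(mostly_quiet))  # -5.0 -> its loud song lands at -14
print(album_gain_loudest(all_loud))      # -5.0 -> these also land at ~ -14
```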

(In fact, it’s exactly the strategy I use myself, when making loudness decisions in mastering, too.)

There were plenty of other interesting findings in Eelco’s research, too – we go into them in the podcast and I recommend you take a listen, even if you’re not that interested in normalization. But right now I want to move on to some…
 

Great news

It happens all the time: The Company has a problem. The Company commissions research. The research comes back, and tells The Company something unexpected, or unwelcome. The Company ignores the research.

But not TIDAL. Not only did they accept the findings of Eelco’s research in full, they paid attention and implemented his recommendations. And we learned yesterday that their new loudness normalisation method is live now – by default, in every new install of their player application on iOS or Android devices. All the time – even in Shuffle mode – and they’re working on the same system in their desktop application, too.

And that’s huge. It means Apple is now the only major streaming service not to enable normalization by default – leaving aside SoundCloud and Google Play, neither of which offer normalization at all yet.

And not only that, but it’s a significant upgrade in comparison to the normalization used everywhere else. By using the “loudest song method” of Album Normalization to balance albums against each other, TIDAL have ensured not only that their normalization can’t be “gamed”, and the artistic intentions of the artists are preserved, but also that their overall loudness level will comply with the AES streaming loudness recommendations.
 

So what ?

The momentum is building all the time. We saw the most recent signs that streaming services are really taking normalization issues seriously when Spotify reduced their playback reference level to be more in line with Apple, Pandora and YouTube earlier this year, and I’m confident the same thing will happen with these improvements by TIDAL.

After all, it’s a win-win. Using Album Normalization to the loudest song (@ -14 LUFS) gives a better user experience, is simpler and easier to understand, and is preferred by over 80% of users ! What’s not to like ?

These changes are simple, but profound. Most importantly, they overcome a major (and real) objection to normalization in general – that it shouldn’t disturb the delicate balance between songs. I’ve often heard people say “I don’t want the loudness of my songs changed”, and now it won’t be – except to keep a consistent maximum playback level from album to album.

All the streaming services care deeply about music, and high quality – despite the cynicism I sometimes see – and I’m confident they will all adopt Eelco’s recommendations in the near future.

And personally, I can’t wait.
 
 

YouTube reveals EXACT volume normalization values – find out how to see them
http://productionadvice.co.uk/stats-for-nerds/ – Fri, 29 Sep 2017


 
Are you confused about exactly what YouTube’s playback volume normalization is doing to your music ?

Maybe you understand the basic idea but struggle to predict exactly what will happen when videos are uploaded ?

Well, that’s understandable – the procedure is still inconsistent and unpredictable. Some songs are measured and normalized right away, others take weeks, some never seem to be normalized at all.

But it is happening – and YouTube just added an important new feature which can really help you get a grip on the process.

You can now see exactly what effect the system is having on your audio, because YouTube have exposed the normalization data in their interface. You just need to know where to find it – and what it means.

(Thanks to Paul Maunder for the heads-up !)

To see it for yourself, right-click on any YouTube video and select the “Stats for nerds” option.

 (Yes, this means that you are now a nerd 🙂 )

The fourth item down in the list will say something like:

Volume / Normalized:  100% / 54% (content loudness 5.3 dB)



The first percentage describes the Volume slider setting in the YouTube player window, and can be adjusted by clicking on the “speaker” icon and dragging the slider up or down.

The second percentage reflects the normalization adjustment being used. This is the amount by which the playback volume of the clip has been turned down to prevent users being blasted by sudden changes in volume in comparison to everything else. The value scales in proportion with the Volume slider setting.

So for example, if the normalization percentage reads 60% when the Volume slider is at 100 %, it will scale down to 30% if you move the Volume slider to 50%. This means that if you want to use these stats to compare songs with each other, you should always set the Volume slider to 100% first.

The final value is the “content loudness” value, and indicates the difference between YouTube’s estimate of the loudness and their reference playback level. This value is fixed for each clip, and isn’t affected by the Volume slider.

So for example a reading of 6dB means your video is 6dB louder than YouTube’s reference level, and a 50% normalization adjustment (-6dB) will be applied to compensate. Whereas a negative reading of -3dB, say, means it’s 3 dB lower in level than YouTube’s reference, and no normalization will be applied, so the normalization percentage will always be 100% of the Volume slider’s value – YouTube doesn’t turn up quieter videos.
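The two readouts are linked by ordinary dB-to-amplitude arithmetic. Here’s a short sketch that reproduces the numbers above – the formula is my inference from the reported values, not anything YouTube documents:

```python
def normalized_percent(content_loudness_db, volume_slider=1.0):
    """Inferred relationship between YouTube's 'content loudness' and its
    'Normalized' percentage: positive readings are turned down by the
    matching amplitude ratio, negative readings are left alone."""
    attenuation = 10 ** (-max(content_loudness_db, 0.0) / 20)
    return 100 * attenuation * volume_slider

print(normalized_percent(5.3))       # ~54% - the example reading above
print(normalized_percent(6.0))       # ~50% - 6 dB louder means halved
print(normalized_percent(-3.0))      # 100% - quieter videos aren't turned up
print(normalized_percent(6.0, 0.5))  # ~25% - scales with the Volume slider
```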

(Important note – I’ve seen the way these values are reported change several times over the last couple of weeks. YouTube are obviously still working on this feature, so it may change again, and I’ll try to keep this post updated if they do.)

So what ?

Firstly, these “Stats for nerds” give you a quick and easy way to check whether your video has been normalized yet. If there’s no “content loudness” value listed, the video hasn’t yet been normalized, and the second value will always be the same as the Volume slider percentage – the song will be played as loud as the Volume slider allows.

(This happens more often than you might expect – for example normalization seems to have been “on hold” in August and early September 2017 – but more recent uploads have already been measured. It also answers a very common question – yes, adverts are being normalized – or at least, they are right now.)

Secondly, if there is a “content loudness” value listed, then your video is being normalized, and you can see exactly how much by setting the Volume slider to its 100% maximum, and checking the normalization percentage value.

So in the image above, for example, the Metallica song is being turned down to only 54% of its original volume (-5.3 dB) and Taylor Swift’s “Shake It Off” is also being turned down by a substantial 4.6 dB.

Whereas the final video in the image is a song that I mastered myself recently – a trance/techno track called “Vi er GodsetGutta” by B Killax – and because YouTube measure it as being 0.7 dB quieter than their desired reference level, it always gets played as loud as the Volume slider setting allows.

Thirdly, it means that if you want your music to stand out in comparison to everything else, you want to avoid large positive or negative “content loudness” values – you need to optimise loudness, not maximize it.

The great news is that when you do this, your music will actually “pop” more than other songs, in my experience. For example the song I mastered actually has more punch and impact than the other two, in my opinion, especially in the low end – despite having been mastered at a lower level. Which of course is exactly what you would expect, because it has better micro-dynamics. To see if you agree with me, take a listen to the playlist here.

How do we use this ?

Apart from being interesting, the fact that YouTube have made this information visible means that you can test the effects of normalisation yourself. Simply upload a song, wait for it to be normalized and check the stats.

And then you can tweak, re-upload and test again, if you like – to try and get an even better result.

But here’s the thing. My advice is:

Don’t bother.

The best way to optimize loudness on YouTube

By all means check out the Stats For Nerds for your songs, and see how they compare with other similar tracks – and of course, how they sound.

But getting drawn into a cycle of uploading, testing and re-uploading over and over isn’t an efficient way to work, in my opinion. For one thing, it’s really tedious !

And more importantly, at the rate YouTube are releasing updates to their normalization system, there’s no guarantee that what works today will still work tomorrow – or next month, or next year.

It’s far better to aim for a result in mastering that you can be confident will result in minimal normalization changes to your audio, and therefore maximize both the playback volume, and the punch and impact of the music.

That’s the method I used to master “Vi er GodsetGutta” – and every other song I work on, for that matter.

 All the examples I’ve found on YouTube are being played with no volume reduction from normalization, and are assessed as being within 1 dB of YouTube’s reference level. And it works on all the online platforms, not just YouTube.

It’s a simple method, and straightforward to implement – and I explained it in a blog post a few days ago. (Hint: it’s not about aiming for -14 LUFS !) To find out how it works, click here.

And meanwhile, try not to spend too long worrying about the Stats For Nerds, and focus on making great-sounding music instead 🙂

 
 

How loud ? The simple solution to optimizing playback volume online – and everywhere else
http://productionadvice.co.uk/how-loud/ – Tue, 26 Sep 2017


 
I get asked this question literally every day, now.

And I see people asking it, everywhere:

“What’s the ideal loudness for my music to get the best playback volume online ?”

Because people have realized that loudness normalization is a reality. They know that loud songs are turned down to stop users being blasted by sudden changes in volume – and they’ve probably heard some numbers: -13 LUFS for YouTube, -16 for iTunes and Pandora, -14 for Spotify and TIDAL… but which one should you choose ? Is there a perfect number, or do you have to submit different masters for every platform ?

In this post I’ll answer that question, simply and clearly.

(If you’re impatient, feel free to skip to the end – but please come back and read this explanation afterwards, too !)

Before that though, it’s important to realise – asking this question misses three key points.

The first is:

1 – There are no ideal loudness values – just guidelines you can follow

Because although all the streaming services are measuring loudness and turning loud songs down, they all do it in different ways. They don’t all use LUFS loudness units, and they’ve all chosen slightly different reference levels.

So you can’t choose an ideal loudness that suits all platforms, because there isn’t one.

But the good news is – you don’t need to.

The whole point about loudness normalization is that each streaming service will measure the loudness, and adjust the playback volume according to their rules.

So you can make your music as loud as you like, if you want to – it just might get turned down. And that’s OK, because so does everything else.

Which means targeting a specific integrated loudness is a red herring. Lots of people are asking if they should aim for an integrated loudness of -14 LUFS, for example – because that’s the volume TIDAL uses, and Spotify recently reduced their level to something similar (although they don’t use LUFS to make their measurements, so this is only an approximate value). Plus -14 is only a dB quieter than YouTube’s approximate level of -13 LUFS, and 2 dB louder than Apple Sound Check… so all in all it seems like a pretty good value to have in mind.

But that brings us to the second key point I mentioned:

2 – Integrated loudness isn’t the best way to make loudness choices

Here’s what I mean.

Integrated loudness is an overall value for a song, album or any section of audio.

Just one number.

It does take account of the loudest moments, and the quietest – but you can’t tell what they were, just by looking at the number.

Imagine two songs, balanced by ear. One of them could be straight-ahead rock, with almost the same short-term loudness all the way through, hovering around -14 LUFS – so that’s what the integrated reading across the whole song will show. And now imagine a more varied song – still heavy, but with a quiet introduction and more mellow verses. These quieter sections will reduce the overall integrated loudness reading – down to -16 LUFS, perhaps.

So far so good – you can’t tell by looking at the integrated loudness if you have two “loud all the way through” songs, or one loud and one with more varied dynamics – but so what ? You matched them by ear, and when you play them back one after the other, they sound great. The loud sections of both are at similar levels, and the quieter sections work for the more varied song – who cares if they measure slightly differently ?

The problems start when you turn this process the other way around.

Rather than measuring the songs, you want to choose how loud they should be.

If you use your ears again, you’ll be fine – but that’s not what people are asking me about. If you just follow the numbers and make things match an integrated loudness value – making both songs measure -14 LUFS for example – the more varied song will sound 2 LU too loud in comparison to what you would have chosen by ear. The integrated LUFS value tells you nothing about the dynamic variety in the song. In other words, the integrated loudness that feels musically right changes depending on the song – and genre, and arrangement… and everything.
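To put rough numbers on this, here’s a simplified sketch – it power-averages short-term loudness by duration, and ignores the K-weighting and gating a real BS.1770 meter applies, but the effect it demonstrates is the same:

```python
import math

def integrated_lufs(sections):
    """Crude integrated loudness from (short-term LUFS, seconds) sections:
    a duration-weighted power average. Real metering adds gating, but the
    trend shown here is the same."""
    total = sum(secs for _, secs in sections)
    mean_power = sum(secs * 10 ** (lufs / 10) for lufs, secs in sections) / total
    return 10 * math.log10(mean_power)

# Straight-ahead rock: -14 LUFS short-term all the way through
print(integrated_lufs([(-14.0, 180)]))              # -14.0 LUFS

# The varied song: quieter intro and verses pull the overall number down,
# even though its loud sections match the rock song exactly
print(integrated_lufs([(-20.0, 90), (-14.0, 90)]))  # ~ -16 LUFS
```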

Don’t worry, there is a solution to this – but before I get to it I just want to highlight the third, simplest and probably most important point in all of this:

3 – Loudness is an artistic decision

You probably already guessed this one – loudness shouldn’t be about the numbers.

And neither should any other property of music, of course. Numbers are helpful as a sanity-check, and for training our ears. But that doesn’t mean you should choose the EQ balance or how loud to master a song based purely on measurements – in an ideal world you just choose what sounds best.

And the great news is that we’re headed in that direction ! Since loudness levels are being adjusted on playback, you’re free to make that choice based on what’s right for the music, and not have to worry that someone else will “cheat” and try to make theirs sound better just by making it louder – that won’t work.

(Up to a point – see the very end of this post…)

Just tell us the numbers !

OK, I said I’d answer the “how loud” question simply and clearly – and I will.

But from what’s written above you’ll have gathered by now that I’m not going to be recommending any of the LUFS numbers suggested above – or any integrated loudness.

Instead, my recommendation uses short-term loudness values, and it’s this:

Master no louder than -9 LUFS short-term at the loudest moments
(with True Peaks no higher than -1)

That’s it.

If you follow this suggestion, you’ll be in great shape, in almost any genre. Your songs will be loud enough to sound “competitive”, whilst still retaining plenty of punch and dynamic contrast. They’ll stand shoulder to shoulder with anything else, on all the streaming platforms, and they won’t get turned down.(*)

(*) Actually they might get turned down a little, but it’s not the end of the world – because so will almost everything else.
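If you want to check your own masters against this guideline, here’s a rough sketch using the third-party pyloudnorm and soundfile Python packages. pyloudnorm has no built-in short-term mode, so sliding 3-second slices through its integrated meter is used here as an approximation, and the sample peak stands in for a proper oversampled true-peak reading – the file name is hypothetical:

```python
import numpy as np
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("master.wav")  # hypothetical file name
meter = pyln.Meter(rate)            # BS.1770 loudness meter

# Approximate max short-term loudness: measure sliding 3-second windows
window, hop = 3 * rate, rate
short_term = [meter.integrated_loudness(data[i:i + window])
              for i in range(0, len(data) - window, hop)]
print("Max short-term: %.1f LUFS (guideline: no louder than -9)" % max(short_term))

# Sample peak as a stand-in for true peak (a real check oversamples ~4x)
peak_db = 20 * np.log10(np.max(np.abs(data)))
print("Sample peak: %.1f dBFS (guideline: true peaks no higher than -1)" % peak_db)
```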

OK, now explain how the numbers work !

This suggestion is based on over 20 years of my experience as a professional mastering engineer, on conversations with other mastering engineers, on analysis of my favourite-sounding albums, and on teaching an online course to over 1000 students who’ve also had great results.

The theory is simple – make all the loudest moments similar in loudness, and not too loud – and then balance everything else with them musically.

It just works ! It avoids the problem of using integrated loudness as a target, where you get lower values for music with more dynamic variety, even if the loudest moments are just as loud. But it still gives you a useful benchmark – something to aim for. There can be occasional louder moments, if they work musically, and of course you can go quieter if you want to – always make decisions based on musical considerations, not just the numbers – but this is the simplest and best guideline I can give you.

And in fact when I follow this rule, in most popular genres the integrated loudness often comes out in the -12 to -14 LUFS range – bang in the sweet spot for all the online streaming platforms…

Optimize, don’t maximize – seize the opportunity of dynamics

Maximising loudness doesn’t work any more. Aiming for a specific integrated loudness doesn’t work reliably.

But deciding how loud to master the loudest sections of music, keeping them consistent and balancing everything else to feel right musically does work – and it helps you optimize the loudness of your music, making the most of the peak headroom the online streaming services make available.

This is a fantastic opportunity – a true win-win ! You can make the best decisions for your music based on the music itself – and feel confident that it will sound great online, and everywhere else.

(Because these guidelines not only work online, they’re how I’ve been optimizing loudness and dynamics for years, even on CD. Guess what – listeners adjust playback levels, too !)

Make your loudness decisions based on the way the music sounds, rather than arbitrary numbers – but keep an eye on the guidelines, even so.

Coda – The devilish details

The method described above works, but there are a couple of extra details to be aware of.

Firstly, all the streaming services turn louder music down, but not all of them turn quieter music up – for example YouTube & TIDAL. And the ones that do turn quieter songs up will try to avoid causing peak clipping as a result, either by restricting the extent to which levels can be lifted (iTunes) or by using a peak limiter (Spotify).

What does that mean ? If you master your music very quietly, it may not sound as loud as other similar songs. That might not bother you, but if it does, it’s worth keeping an eye on. It’s one of the reasons I developed my Dynameter plugin, which visualizes the dynamics of your music in realtime, to help you optimise it for maximum dynamic impact and compatibility online. I use it on every master I do, these days. For more information, click here.

And secondly, it may sound obvious, but loudness isn’t everything ! Not by a long shot.

To sound great, you still need a great song, great performance, great arrangement, great mix, balanced EQ and dynamics… but that’s what keeps all of this interesting, right ?!?
 
 

The surprisingly simple hack to make your music POP online – and everywhere else !
http://productionadvice.co.uk/make-music-pop/ – Tue, 25 Jul 2017


 
This video shows a surprisingly simple technique to make your music stand out online – even in an aggressive genre like EDM.

The trick is easy, the video includes a real-world example to prove that it works, and best of all – it’s free !

Actually that’s not the best of all – the best of all is that this tip works in any genre, and it doesn’t only work online.

And along the way, it proves once and for all that people who tell you there’s only one way to get “The Sound” in EDM… are wrong.

So, what are you waiting for ? Take a look, and if you like it – please share !

[Updated video – remix matches CD master more closely for a better comparison]

How to persuade your clients

If you like the idea of this technique but don’t think you’ll be able to persuade the artists, labels and engineers you work for – try this.

(And to find out more about my Dynameter plugin, click here)

More details (warning, spoilers)

I deliberately didn’t say what the “hack” is above – so if you haven’t watched the video, do that first – there are some clues below.

Several people have commented that there’s too much pumping in the remix, which is fair enough. But bear in mind that the remix is made from stems, and the pumping is part of the stems.

In other words the CD master squashed the dynamics of the original so much it even reduced the pumping effect that the artists chose in the studio !

And I’m sure there’s all kinds of other extra subtlety in the real mix, too. Maybe the remix would benefit from a little more dynamic control in mastering, but it doesn’t need to be crushed by an extra 6 dB.

Bottom line – if you prefer the CD version that’s absolutely fine, but the reason is the mix – not the crushed dynamics.
 
 

The Foo Fighters just proved me right about loudness – and dynamics
http://productionadvice.co.uk/foo-fighters-dynamics/ – Fri, 16 Jun 2017


 
Foo Fighters just released a surprise new single, “Run” – and the biggest surprise to me is that it has great dynamics.

All their recent releases have been pushed really hard, in the loudness department – not disastrously, but I’ve always thought they would have sounded better with more room to breathe.

This single proves me right.

But then, I would say that ! I’m always saying that balanced dynamics beat loudness.

So in this post, I’m not going to offer any personal opinions at all, I’m just going to let the facts speak for themselves – and the reviews.

Reviews like this one, in Billboard:

Foo Fighters Crank Up the Heavy… play[ing] with a soft-loud-soft dynamic on the new single “Run,” which opens as a dreamy, slow burner then, as you’d expect with the Foos, quickly turns heavy as thunder. How heavy? So heavy your mom will hate it and your neighbors will tell you to turn it down. So heavy it might just feature some of the most hulking moments in the Foos’ canon

Or this one, in Blabbermouth:

a monolithic song of the summer shoo-in as melodic as it is monstrously heavy

– and these comments are about a song that is 4dB quieter than their 2011 single “Rope” !

So how does this compute ?

How can it be a “a full-bore riff-rocker with a huge, triumphant chorus” (Stereogum) with “the speakers going to 11” (SPIN) when it’s mixed and mastered at a lower level than their earlier releases ?

How can it be quieter but sound louder ?

Because dynamics.

And because loudness management.

This song sounds just as loud as “Rope” on YouTube, TIDAL and Spotify. But “Run” has 4 dB more peak-to-loudness impact than “Rope”, as my Dynameter plugin clearly shows – and the Foos have made it count:

QED

Don’t trust the reviews, though – listen for yourself. Listen to the way the guitars pile in during the chorus, the pounding drums – this song still sounds exactly like a Foo Fighters record should, proving yet again that “loudness” isn’t a requirement of “the sound”, it’s just an increasingly irrelevant technicality.

The Foo Fighters have seized the opportunity of using more dynamics in their music, and it’s worked.

Maybe you should, too.

 
 

Spotify just reduced its loudness playback level !
http://productionadvice.co.uk/spotify-reduced-loudness/ – Mon, 22 May 2017


 
The post title says it all – in the last few days it’s become clear that Spotify have chosen to reduce their playback loudness reference level from approximately -11 LUFS down to approximately -14, broadly in line with YouTube and TIDAL.

This is a big deal, and in a minute I’ll discuss why, but before that – what does it mean, in simple terms ?

[This change is very recent, and you may need to update to the latest release of Spotify before you see it – the build number we are testing is 1.0.54.1079.g3809528e. It’s also possible this change hasn’t rolled out in all territories at the time of writing – 22nd May 2017]

In a nutshell, it means it doesn’t matter how high you push the level of your mixes and masters. Once the raw loudness of the files gets past a certain point, online streaming services will turn them down – keeping them all at the same reference level, to stop users being annoyed by sudden changes in volume.

Exactly where the “point of no return” is varies slightly between different streaming services, but Spotify always used to be the loudest, by a whopping 2-3dB.

And this was a real shame, because it put pressure on musicians, labels and engineers to make the raw loudness levels higher to try and “compete” – even if it didn’t suit the style of the music.

But now, all that has changed.

Why this matters

YouTube, Spotify and TIDAL all now use playback reference levels within a dB of each other, and Apple Sound Check and Pandora are another 2 dB lower than that, matching the recommendations of the Audio Engineering Society for streaming loudness.

So there’s no pressure any more to master louder in order to “compete” on Spotify – you can use the same guidelines for all the major streaming services, and be confident of a great-sounding result.

You can have great dynamics and sound loud – that’s a win-win !

How Loud ?

In a nutshell, the new magic number is a reading of -14 LUFS integrated, meaning an overall value measured across the whole song, while keeping peak levels no higher than -1.

YouTube’s reference level is actually 1 dB louder since the Spotify change, so you might choose to push things a little harder if maximum loudness on YouTube is important to you. If your music has varied dynamics though, it probably isn’t necessary.

And of course you do still need to keep an eye on the “crest factor” – the difference between the peak level and the short-term loudness. If this drops too low, your music may be turned down more than you expect. This value is labelled PSR in my Dynameter plugin, which was designed specifically to help you optimise this property for the best audio dynamics.
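As an illustration of the arithmetic: PSR is simply the true peak level minus the short-term loudness, so a master peaking at -1 dBTP whose loudest sections sit at -9 LUFS short-term measures PSR = -1 - (-9) = 8 – the “-9 LUFS short-term, -1 True Peak” guideline from the “How loud ?” post above, expressed as a single number.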

A huge improvement

This change is fantastic news. The -14 LUFS figure may not comply with the AES recommendations, but in reality it leaves enough peak-to-loudness headroom for most mainstream music these days to keep plenty of dynamics and sound great – which is a win-win for everyone.

I’ve been campaigning for a change like this for some time now, both on the Spotify forums and via the Streaming Loudness Petition. There’s no way to know whether either of these initiatives actually influenced Spotify’s decision, but it really doesn’t matter.

The great news is that all the online streaming services now cater for music with decent dynamics – and they’re close enough to each other that there’s no need to create specially optimised masters for each platform – although this is still an option for people who want it.

Over 6 years ago now, I predicted that Spotify would end the loudness wars. Today is another important step towards that prediction coming true.

Hats off to Spotify, and long may the trend continue !

Coda

You may be reading this thinking – “What’s the big deal ? it’s just normalisation”.

And you’re right – this kind of processing won’t fix the damage that’s already been done in the process of making those loud-songs-that-are-being-turned-down loud in the first place.

But over the longer term, it removes the incentive to do it again. Sooner or later, the questions change:

Old question: “Why does Song X stand out on the CD changer ?”

Old answer: “Because it’s louder”

New question: “Why does Song Y stand out online ?”

New answer: “Because it has great dynamics”

And that is the start of a really interesting conversation.
 
 
Thanks to Home Mastering Masterclass members Norbert Tomczak and Sigurdór Guðmundsson for the heads-up on Spotify’s decision !

Update – Thanks also to Jean-Michel Kovacs, who actually told me about this in a YouTube comment even earlier than Sigurdor or Norbert !
 
 

Which mastering EQ plugin sounds best ? Hear for yourself !
http://productionadvice.co.uk/which-is-best/ – Sat, 15 Apr 2017

People ask me this kind of thing all the time.

What’s your favourite mastering limiter ?

Or compressor ?

Or EQ ?

And I’m always reminded of a saying I’ve heard, which goes something like:

“Ask an audio engineer what the best ______ is, and he’ll just tell you whatever he’s using right now”

There’s a lot of truth in that, and it’s one of the reasons I try to avoid getting into detailed recommendations myself, although I’m happy enough to tell people particular processors I’ve used and liked.

But now, there’s another option – you can listen and decide for yourself, thanks to a cool new website called Gearshoot.

Decide for yourself

Gearshoot is a fantastic resource put together by the guys at Kog Mastering in New Zealand, and allows you to set up your own A/B comparisons between a massive (and ever-increasing) range of hardware and software processors, with a variety of musical examples in several different genres.

(And crucially, they’re all loudness-matched using the LUFS standard, so you’ll have the most objective comparison of how things really sound, without being fooled by the Loudness Deception.)

So for example, you can design your own shootout between a hardware 1176 and various plugin emulations, using drums, bass or a whole mix. Or you could browse some of the many interesting examples the site owners have already put together as presets – for example, this one on mastering EQ plugins:

Digital EQs for Mastering Review – Part 1

The results can be fascinating. Sometimes there’s almost nothing to choose between the various examples, when people have told you to expect night-and-day – and sometimes there are very clear differences where you might not have expected them, especially with the more extreme processing examples.

And despite the fact that I’ve said on several occasions that my own choices of digital EQ are driven far more by the features and interface, there are some clear and interesting differences between some of the examples in the above test.

But here’s the thing.

It ain’t what you use…

Those differences have a completely different effect, based on the material that’s being tested. So what sounds right to you for one music clip, might sound completely wrong for another one.

And when I listen to ANY of these examples, there are still tweaks I want to make, even to the ones I like best. And in my experience, after making those tweaks, the overall differences between the different processors sound even less significant.

Now that’s not true of all the examples, of course – in this EQ shootout for example, the slightly fuzzy, saturated quality of the vintage emulations can’t be achieved with the cleaner digital varieties. But I bet I could achieve something similar (or better) with some of the other tools in my collection – and probably with more control over the final result.

Of course if the EQ I'm using just happens to have exactly the flavour I'm looking for, then great – but I'm a control freak ! More often than not I still want to tweak and refine further – that's part of what being a mastering engineer is.

And that’s why the unofficial motto of my Home Mastering Masterclass course is “it ain’t what you use, it’s the way that you use it”.

Everyone loves sexy analogue hardware, me included ! It’s just a pleasure to use, and if you’re lucky enough to have a room full of it, go for it.

But don’t agonise about it if you don’t. Nine times out of ten you can achieve a very similar result with a little ingenuity and experience using the gear and software you already have – and sometimes you can get something even better.

It's more important to know the gear you have inside out than to have a room full of alternatives – analysis paralysis is a very real problem…

Check out Gearshoot

Having said all that though, don’t take my word for it.

Head over to Gearshoot and try it for yourself ! There’s so much to listen to there, I’ve barely scratched the surface, and it’s great fun. You can spend hours checking out all that high-end gear you’ve been dreaming about for so long, and try to decide if you really need it or not. And who knows, there may be some magic, unique sounds in there that simply can’t be achieved in any other way.

If you find some, please let me know !

PS. You may be thinking that the special analogue magic of the hardware units in these shootouts is being lost because we’re listening to digital recordings. If so, please read this. And this.

Do YOU know someone who needs this infographic ?
http://productionadvice.co.uk/compression/
Tue, 07 Feb 2017


It’s probably THE single most common source of confusion I see in discussions of audio.

People say things like:

“The compression on YouTube really kills the dynamics”

or

“To get a good encode the music needs to be compressed really hard”

or

“I hate the sound of compression. mp3s sound really squashed”

Now, all of those statements are based on real opinions about music and sound quality, but they’re all also horribly confused.

Which is understandable, because they’re all talking about compression – but in audio, we commonly talk about two completely different types of compression !

They do different things, they have different purposes, and they have different effects on the sound. But people still refer to them both as “compression”, without saying which one they’re talking about. Sometimes it’s obvious from the context – but often, it’s not.

So, this infographic is my latest attempt to help people sort out the difference – if you know someone who might find it helpful, please share !

(Click on the image above to see a higher-resolution version, or to download a PDF copy, click here)

And if you want to dig into this topic in more depth, here's something I wrote a few years ago which explains the difference using sponges.

The Gory Details

OK, so you already get it – data compression affects file size, but not dynamics. Dynamic compression affects dynamics, but not file size. And they both affect the sound, but in different ways.

High-quality data-compression can sound almost identical to the original source, while using far less space and bandwidth. But some encoders, codecs and data-rates can suck the soul out of the music, rendering it subtly cold, lifeless, edgy and two-dimensional – or even blatantly distorted, with added ultra-sonic birdies for good measure.

Whereas great dynamic compression can enhance almost every aspect of a recording, adding punch, power, impact, consistency, density and warmth. Just for starters. But inappropriate or clumsy over-compression can also suck the life out of the music, robbing it of almost all the same attributes, or even blatantly distorting the sound.
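To make "dynamic compression" concrete, here's a minimal sketch of the textbook static gain curve – not any particular plugin's algorithm, just the basic idea that levels above the threshold only rise by a fraction (1/ratio) of the overshoot:

```python
# A minimal sketch of what a hard-knee, static dynamic compressor does
# to levels - the textbook gain curve, not any particular plugin.
def compressed_level_db(input_db, threshold_db=-20.0, ratio=4.0):
    if input_db <= threshold_db:
        return input_db  # below threshold: untouched
    # above threshold: only 1/ratio of the overshoot comes through
    return threshold_db + (input_db - threshold_db) / ratio

for level in [-30, -20, -10, 0]:
    print(f"{level:4} dB in -> {compressed_level_db(level):6.1f} dB out")
# -30 dB in ->  -30.0 dB out
# -20 dB in ->  -20.0 dB out
# -10 dB in ->  -17.5 dB out
#    0 dB in ->  -15.0 dB out
```

Notice how the loud and quiet moments end up closer together in level – that's the reduced dynamic range. Data compression, by contrast, leaves these levels essentially alone and shrinks the file instead.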
And even then we’re not done, because the two types do interact in some subtle ways.

Compression plus compression

Contrary to popular belief, excessive dynamic compression actually makes it harder to get a great-sounding data-compressed encode, because the encoder struggles to decide what's musically important when everything is at full tilt the whole time.

And data-compression can seem to affect the micro-dynamics of the music, by changing the peak level of the reconstructed waveform as a side-effect of the encoding and decoding process. The more heavily dynamically compressed and limited the source, the more noticeable this effect is. In itself it has no audible effect, though – except perhaps adding extra clipping distortion on playback systems that don't have enough headroom to deal with the higher peaks.

And because almost all online streaming services use data compression plus loudness management, it's easy to be fooled into thinking they're somehow affecting dynamics, too – since really "loud" music seldom sounds anywhere near as impressive when it's reduced to the same playback loudness as everything else.
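That peak-level change is easy to measure for yourself. Here's a rough sketch using ffmpeg – the file names are hypothetical, and it measures simple sample peaks rather than true peaks, so a dedicated true-peak meter will typically read slightly higher still:

```python
# A rough sketch of measuring the peak-level change from an mp3 round
# trip. Assumes ffmpeg is installed; "master.wav" is a hypothetical file.
import subprocess
import numpy as np
import soundfile as sf

def peak_dbfs(path):
    data, _ = sf.read(path)
    return 20 * np.log10(np.max(np.abs(data)))

# Encode to mp3, then decode back to wav
subprocess.run(["ffmpeg", "-y", "-i", "master.wav",
                "-codec:a", "libmp3lame", "-b:a", "320k", "encoded.mp3"],
               check=True)
subprocess.run(["ffmpeg", "-y", "-i", "encoded.mp3", "decoded.wav"],
               check=True)

print(f"original: {peak_dbfs('master.wav'):.2f} dBFS")
print(f"decoded:  {peak_dbfs('decoded.wav'):.2f} dBFS")  # usually higher
```

Try it on a heavily limited master and then on a more dynamic one – the difference in peak overshoot is usually quite striking.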

Find the sweet spot

Luckily though, the solution to both these complications is straightforward.

Always leave at least 1 dB of peak headroom, and then find the loudness sweet spot for your music – where you have the perfect balance of loudness and dynamics.

It won't get turned down online, it'll encode cleanly to mp3, AAC and other lossy data-compressed formats, and it'll sound great – maximising the potential for punch, power and impact.
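If you want to sanity-check the headroom half of that advice, a few lines of code will do it. A minimal sketch – sample peak only (a true-peak meter may read a little higher), and "mix.wav" is a hypothetical file name:

```python
# A minimal sanity check for the "at least 1 dB of headroom" advice.
import numpy as np
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("mix.wav")
peak_db = 20 * np.log10(np.max(np.abs(data)))     # sample peak in dBFS
lufs = pyln.Meter(rate).integrated_loudness(data)  # integrated loudness

print(f"peak: {peak_db:.2f} dBFS, loudness: {lufs:.1f} LUFS")
if peak_db > -1.0:
    print("Less than 1 dB of headroom - consider backing the limiter off")
```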

Job done.

Compression is your friend (both kinds!) provided you understand how it works, and how to get the best out of it.

Mastering for Native Instruments' Stems format
http://productionadvice.co.uk/mastering-native-instruments-stems/
Thu, 22 Dec 2016


Native Instruments' Stems format is a different way to distribute music, especially EDM/dance/electronica – each file bundles together a stereo master plus four stereo "stems":

  • Drums
  • Bass
  • Melody
  • Voice

Having these elements stored separately gives far more flexibility when playing the file using compatible software – for example DJs can choose to layer different elements from different songs in a mix.

Ever since the format was released, though, people have been asking me for ideas on how to master for it – how do you process stems so they sound good individually, but also combine correctly to create a satisfying mix ? The format includes the ability to add compression and limiting when playing the files back, but these won't necessarily sound the same as your favourite mix bus or mastering processors, and the metering options are very limited. How can you deal with this ?
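One useful first step, before reaching for any clever processing: check that your raw stems actually sum to your stereo mix, with a simple null test. Here's a minimal sketch of the idea – the file names are hypothetical, and it assumes you've exported everything as wav files with identical lengths and sample rates:

```python
# A quick null test: do the four stems sum back to the stereo master ?
# Hypothetical file names; assumes matching lengths and sample rates.
import numpy as np
import soundfile as sf

master, rate = sf.read("master.wav")
stems = [sf.read(name)[0] for name in
         ["drums.wav", "bass.wav", "melody.wav", "voice.wav"]]

residual = master - np.sum(stems, axis=0)  # what's left after cancelling
residual_db = 20 * np.log10(np.max(np.abs(residual)) + 1e-12)
print(f"residual peak: {residual_db:.1f} dBFS")  # near -inf means a null
```

If the residual doesn't null to near-silence, something in your export chain is already changing the balance – and any processing applied to the individual stems afterwards will change it further, which is exactly the challenge here.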

I still haven’t dug into the format myself, but in this video, mastering engineer and mixer Ian Stewart shows how he solves the challenges of the format. Ian took my Home Mastering Masterclass course a few years ago, and has been active in the Facebook group ever since, helping other members. He’s one of several members I particularly appreciate there, because he almost always answers questions in exactly the same way that I would, and I even invited him to be a guest on my podcast recently to help explain the topic of mid-side processing. (And if you haven’t read his blog post on EDM dynamics yet, you should !)

In the video above Ian builds on the methods I recommend in the masterclass course, and walks you through his entire process, showing how he has adapted them to master for the stems format, including:

  • How to set up your DAW to master for Stems
  • How he uses EQ and stereo processing when mastering for Stems
  • How to stop the final limiter working too hard
  • How to get consistent results with compression on both separate stems and the final mix
  • How to store lossless audio in Stems format
  • How to get better metering options within the stems creator

So, if you're getting started mastering for the Stems format, or you're interested in giving it a try, I think you'll find it really helpful – take a look !