Read our post on linear phase filters and frequency response with the FabFilter Pro-Q 2
Filters - minimum phase or linear phase? One of these can seriously mess with your frequency response. Today you're going to learn which, and why. In the analogue world, all we have, and have ever had, are minimum phase filters. As the level changes according to frequency, so does the phase. That's the way things are due to the eternal laws of the universe, which are very difficult to argue with.
To create a linear phase filter in the analogue domain, you'd have to go back in time. In the immortal words of Dick Dastardly, "Drat and double-drat".
But in digital audio, we can in effect go back in time. Simply delay the signal and use that delay time to do the processing. Everything comes out a little later than it should, but latency compensation (also called delay compensation) in the digital audio workstation can fix that.
With a linear phase filter, the level changes according to frequency, but the phase stays exactly as it was.
This seems better, and it can be. There's a price to pay in terms of pre-ringing, but that's another topic for another video.
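For the technically curious, this delay trick can be sketched in a few lines of Python (assuming NumPy is available; the tap values here are arbitrary, purely for illustration). Any FIR filter with symmetric coefficients works this way: its phase response is a pure delay of half the filter length, which is exactly what "linear phase" means.

```python
import numpy as np

# A symmetric FIR filter: the output is centred half the filter
# length "in the past", which is the delay trick described above.
taps = np.array([1.0, 3.0, 5.0, 3.0, 1.0])   # symmetric -> linear phase
taps /= taps.sum()                            # unity gain at DC

# Frequency response of the filter
H = np.fft.rfft(taps, 512)
w = np.linspace(0, np.pi, len(H))             # frequency axis, 0..Nyquist

# Remove the pure delay of (length - 1) / 2 samples; what remains
# should be zero phase at every frequency.
delay = (len(taps) - 1) / 2
residual = np.angle(H * np.exp(1j * w * delay))
print(f"max deviation from linear phase: {np.max(np.abs(residual)):.1e} rad")
```

The residual comes out at numerical-noise level: the level changes with frequency, but the phase, once the fixed delay is compensated, doesn't budge.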
So what's the problem with minimum phase?
Usually nothing of any consequence, as the history of the last 70 years of analogue audio has shown us.
But there are situations where there can be an issue. Typically this would be where you're splitting a signal in two, processing the two parts differently, then mixing them back together again.
An example would be multi band compression. You could for instance filter out the low frequencies, compress them, then mix them back in again. That's something you might do in mastering.
The potential issue is that the filters might cause frequency response problems when you mix the signals back together.
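Here's a numerical sketch of that potential issue, assuming NumPy and SciPy and my own choice of filters: a matching pair of 2nd-order analogue-style Butterworth filters at 400 Hz, not the exact filters used in the demonstration that follows.

```python
import numpy as np
from scipy import signal

# Split with matching 2nd-order Butterworth low-pass and high-pass
# filters at 400 Hz, then sum the two bands back together.
fc = 2 * np.pi * 400                      # crossover frequency in rad/s
b_lo, a_lo = signal.butter(2, fc, 'low', analog=True)
b_hi, a_hi = signal.butter(2, fc, 'high', analog=True)

w = np.logspace(2.5, 4.5, 500)            # sweep through the crossover region
_, H_lo = signal.freqs(b_lo, a_lo, w)
_, H_hi = signal.freqs(b_hi, a_hi, w)

# Response of the recombined signal, in dB
summed_db = 20 * np.log10(np.abs(H_lo + H_hi) + 1e-9)
print(f"deepest dip in the summed response: {summed_db.min():.1f} dB")
```

With this particular pair the two bands arrive 180 degrees apart at the crossover, so instead of summing back to 0 dB there's a deep notch around 400 Hz, while the response well away from the crossover stays at the original level.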
Let's move on to a demonstration.
What I have here is a frequency sweep from 100 Hz to 1600 Hz, and in the following demonstrations I'm going to use a centre frequency of 400 Hz.
Here's the sweep with no filtering...
I'm going to make two copies of the sweep and apply a low-pass filter to one, and a high-pass filter to the other.
I'm going to set the cutoff frequencies so that the minus 6 dB point of both is 400 Hz. This is so that when I mix them back together, because the two parts are correlated, they add back up to the original level.
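The arithmetic behind choosing the -6 dB point can be sketched in Python (NumPy assumed): two fully correlated copies of a signal, each attenuated by 6 dB, add back to the original level, because correlated amplitudes add directly.

```python
import numpy as np

# A 400 Hz test tone, matching the centre frequency used here.
t = np.arange(48000) / 48000
x = np.sin(2 * np.pi * 400 * t)

# Two copies, each at -6 dB, mixed back together.
gain_minus_6db = 10 ** (-6.0 / 20)       # ~0.501
y = gain_minus_6db * x + gain_minus_6db * x

level_db = 20 * np.log10(np.max(np.abs(y)) / np.max(np.abs(x)))
print(round(level_db, 2))                # 0.02 -- effectively the original level
```

(The residual 0.02 dB is because -6 dB is a convenient round number; the exact halving point is -6.02 dB.)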
Here's the low-pass filtered version...
And here's the high-pass version...
So when I mix the two filtered versions together, we would hope that the result sounds the same as the original, with no frequency response problems.
So let's try this with linear phase. I'm using the FabFilter Pro-Q 2 set to linear phase.
You can hear that the level hardly changes at all, and you can see that on the meter. Let's play it again...
That's all well and good. But is it so much better than minimum phase?
To find out, I'm going to set the filters to minimum phase. FabFilter calls it 'natural phase'.
Before I play the sweep, take a moment to imagine what's going to happen. Is it going to be pretty much the same? Or will you notice a difference?
Here we go...
Well, I bet that was more of a difference than you thought. It was the effect I expected, but much stronger than I expected. Let's play it again...
So going back to my mastering example, if you split the low frequencies from a signal to compress them using minimum phase filters, then mix them back in again, you are going to have a frequency response issue.
It isn't going to be a disaster, but you should consider correcting the problem with EQ. Just make it so it sounds good.
So this is what happens with linear phase and minimum phase filters.
The further question is why does it happen?
The clue is there already. Phase.
I'm going to demonstrate this with a vectorscope. I'll pan the low-pass filter left and the high-pass filter right.
In the vectorscope, signal on the left channel causes the trace to tilt to the left, and signal on the right channel causes it to tilt to the right. With a mono signal the trace goes vertically up and down.
If the channels are in phase, the trace will be a straight line at whatever angle.
Let's play the linear phase sweep...
That is exactly as we should expect. The signal pans from left to right, and the trace is always a straight line, showing that the low-pass and high-pass filters are exactly in phase all the way through.
I'm going to play the minimum phase version now. Like before, take a moment to imagine what you're going to see. It is going to be different, but in what way?
Here we go...
That was an experience. What it shows is that there is a huge variation in phase as the frequency rises, and a huge difference between the low-pass and high-pass filters. When identical signals that are exactly in phase are mixed together, the result is 6 decibels higher in level. If they are not in phase, the resulting level will be lower.
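You can put a number on this. Two equal-level copies of a signal mixed with a phase difference of phi sum to an amplitude of 2·cos(phi/2), a standard result, sketched here in Python (NumPy assumed):

```python
import numpy as np

def summed_gain_db(phase_deg):
    """Level change when two equal, unit-level copies of a signal
    are mixed with the given phase difference between them."""
    phi = np.radians(phase_deg)
    return 20 * np.log10(abs(2 * np.cos(phi / 2)))

print(f"{summed_gain_db(0):.2f} dB")     # 6.02 dB -- fully in phase
print(f"{summed_gain_db(90):.2f} dB")    # 3.01 dB
print(f"{summed_gain_db(150):.2f} dB")   # -5.72 dB -- quieter than one copy alone
```

At 180 degrees the cosine hits zero and the two signals cancel completely.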
So this explains the frequency response issue I demonstrated earlier.
In summary, analogue audio has been minimum phase for decades, and it works. But there are occasions where linear phase is better, and you should probably use it.
Note on the Bessel filter: This is a filter with a highly linear phase response that can be constructed in the analogue domain. It is commonly used in loudspeaker crossovers. Its frequency response, however, is less useful for other purposes, and filter designs such as the Butterworth are more often used despite their comparatively poor phase response.
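As a rough check on that note, here's a Python sketch (SciPy assumed, 4th order chosen arbitrarily, SciPy's default cutoff normalisation) comparing group delay, which is the practical measure of phase linearity: the Bessel design holds it nearly constant across the passband, while a Butterworth of the same order does not.

```python
import numpy as np
from scipy import signal

def group_delay(b, a, w):
    """Group delay of an analog filter: the negative slope of phase."""
    _, H = signal.freqs(b, a, w)
    return -np.gradient(np.unwrap(np.angle(H)), w)

w = np.linspace(0.01, 1.0, 500)          # passband, cutoff normalised to 1
for name, design in [("bessel", signal.bessel), ("butterworth", signal.butter)]:
    b, a = design(4, 1.0, 'low', analog=True)
    gd = group_delay(b, a, w)
    print(f"{name}: group delay variation in passband = {gd.max() - gd.min():.3f}")
```

The Bessel's variation comes out a small fraction of the Butterworth's, which is why it is the analogue filter of choice when phase matters more than a sharp cutoff.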
Comments on this video
You can comment on this video at YouTube
Does anyone understand why stereo panning is introduced in the final step, when (presumably) all that's needed to demonstrate the phase interactions is a dB meter?
With linear phase filters the signal pans Left - Center - Right. With minimum phase filters the signal pans Left - Stereo - Right (L and R are phase shifted and the sound never appears to come from dead center).
This depends on the ORDER of the traditional (minimum phase) high pass or low pass filters.
Each order (6 dB/octave) adds a 90 degree phase difference between MATCHING low-pass and high-pass filters at the -3 dB point.
Thus a 2nd order (12 dB/octave) filter introduces a 180 degree phase shift. Reversing the phase of the low-pass signal negates the cancellation problem.
2nd order 12 dB/octave filters are common in two-way passive speaker crossovers, and to negate the cancellation problem the tweeter is always wired backwards inside the speaker cabinet.
A 4th order (24 dB/octave) Butterworth filter (commonly used both electronically and digitally in active speaker crossovers) introduces a 360 degree phase shift, which is effectively zero. Every other order of filter will have significant phase mismatch errors at the crossover point.
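That order-times-90-degrees rule is easy to check numerically. Here's a sketch using SciPy's analogue Butterworth designs (my construction, with the cutoff normalised to 1 rad/s, not from the video):

```python
import numpy as np
from scipy import signal

# Phase difference between matching Butterworth low-pass and
# high-pass filters, measured at the cutoff frequency.
for order in (1, 2, 4):
    b_lo, a_lo = signal.butter(order, 1.0, 'low', analog=True)
    b_hi, a_hi = signal.butter(order, 1.0, 'high', analog=True)
    _, H_lo = signal.freqs(b_lo, a_lo, [1.0])    # evaluate at the cutoff
    _, H_hi = signal.freqs(b_hi, a_hi, [1.0])
    diff = np.degrees(np.angle(H_hi[0]) - np.angle(H_lo[0]))
    diff = round(diff) % 360                     # wrap into 0..360
    print(f"order {order}: phase difference at cutoff = {diff} deg")
# order 1 -> 90, order 2 -> 180, order 4 -> 0 (i.e. 360, "effectively zero")
```

The 4th-order pair comes back in phase at the crossover, matching the comment's point about 360 degrees wrapping back to zero.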
I'm having trouble understanding this, do you have any resources that can explain this more?
You make three big mistakes in the first 30 seconds. First, you can certainly make a non-minimum-phase all-pass filter with electrical components. Second, you can approximate a linear phase filter with a high-order analog Bessel all-pass filter to any degree you want to. Thirdly, linear phase does not mean looking back in time unless the phase slopes upwards. A general linear phase filter adds positive latency, so there are no causality issues. You have been told these things without having the theoretical knowledge to check them.
Very useful and clear!
I have a Crown XLS for my sub. I need a phase adjustment of 60 degrees. How can I achieve that? Thank you.
amazing video! 🎧💜💜💜
You're welcome. DM
Hi. Based on this video: https://youtu.be/efKabAQQsPQ, did I understand correctly: when doing high-cut and low-cut filters, FabFilter not only reduces the frequencies intended to be cut out, but also raises all the rest of the frequencies in a shelf-like manner? Isn't that dangerous to the ears in headphones, because you can easily exceed the safe threshold at certain frequencies, especially after a few hours, when your ears adjust? Say you are listening to the solo track at 76 dB, and then apply a low cut, and all the high frequencies are boosted to 80 dB, but you look at the graph and think that you only cut the low frequencies, when you actually boosted the high frequencies, unknowingly putting your high-frequency hearing in danger?
A filter reduces the level in the stop band. The level in the pass band stays the same, except there may be a peak at the cut-off frequency. DM
@Audio Masterclass That peak at the cutoff frequency also does not increase the volume of that frequency, right?
Can't wrap my head around how to mix/blend real + sampled kick tracks when EQing 2 tracks independently before summing to a kick bus (and then adding more processing). I made sure both tracks were visually time and phase aligned. Then I was forced to downmix the two tracks to save some CPU, and started going crazy. In the resulting waveform, both kicks had moved slightly in time, creating a strange and longer waveform. I had made sure that the recorded kick was time/phase aligned with the overheads (did the same with the rest of the drum tracks. In fact, I've become obsessed with this lately), and now everything was all over the place because of individual EQ processing. I am 100% sure it is not a latency compensation issue. When mixing down/freezing tracks, my DAW compensates for plugin latency automatically. So I decided to mix down each track individually to see what was going on. Zero phase EQ processing created a 3x longer kick on each track (because of different frequency phase misalignment, I guess), and linear phase created lots of pre-ringing and missing transients. The least destructive setting was linear phase in "minimum" setting, but it still had issues. All of this considering I was using a sampled kick sound that was nothing like the recorded one, except for adjusting the pitch to match the fundamental note.
I got rid of the HPFs and LSFs I was using to polish the low end, and things started to improve a little. Then I applied those filters back on the kick bus afterwards, and things kind of started working again. But now I'm not confident anymore about the kick staying phase aligned with the rest of the drum tracks (especially OHs) unless I bounce the bus output to audio and compare the timing to the overheads. And that's a bummer.
If I've understood correctly how this works, despite visually aligning the recorded audio tracks and their transients, heavy EQ processing can misalign the timing of frequencies separately, and the only way to make sure everything's OK is to use your ears, or bounce the individual/bus tracks to audio after processing to re-check the alignment before summing into a drum bus. My final thoughts: the better you record things, and the later you process drum tracks after summing, the better. There's only so much correction and tweaking you can do before things start to get all over the place and out of control. I really struggle to get things right when I receive badly recorded tracks to work on for a mix. I feel like I spend 90% of the time fixing the impossible before I can even enjoy the process.
Okay, so as a basic generalisation, is it fair to say that DAWs decide gain boost/cut depending on the concept of constructive and destructive interference of sound waves (as phase matters in the case of interference)?
My preference for understanding EQ and filters is to turn first to analogue electronics and then consider that their behaviour can be emulated digitally. Following that there is further digital theory to examine. Or it's a lot simpler just to consider the principal controls of a real-world EQ - gain, frequency, Q etc - and listen to what they do. DM
Hi. Great tutorial. Those 2 cut filters, do they result in a 360° phase rotation? Cheers
My maths isn't up to the level that would give you a reliable answer to your excellent question. I do remember however an expert on phase pedals telling me how the phase shift went through several full rotations so I'd guess it's possible. Perhaps the answer is somewhere in here... https://en.wikipedia.org/wiki/Butterworth_filter Let me know when you've worked it out! DM
Very interesting! Thank you for this video. Would you say that the phase distortions that happen with non-linear-phase filters would be a reason against using standard filters for multiband crossovers? I've always wondered what types of filters I could use to build crossover filters that would let me process signals more flexibly in a multiband fashion, but could never figure it out. I know that equal power or equal amplitude across the filtered bands would be desirable, but I could never figure out how to obtain such parameters.
Linear phase filters are most suitable any time a signal is mixed with a processed version of the same signal, or two processed versions of the same signal are combined.
Great video. Would be nice if you tried to compensate the minimum phase crossover's level dip with a bell band EQ boost on the summed signal, just to see how close one can get. Is that possible?
That's what you would try. In the mastering example given, it's important to remember that you're changing the sound anyway with the compression, so you wouldn't want to get back to the original exactly. In the video I said, "Make it sound good", which is what you would do. If it didn't sound better than the original then you would scrap the idea. Good audio is often about spending time and effort trying something, then having the courage to scrap what you've done because it didn't work out. DM
6:38 "When they're not in phase, they'll add up lower than 6dB"
In addition to level fluctuation, does phase cancellation also cause the mixed signal to be coloured? My thought is that different frequencies will have different degrees of phase cancellation, so the mixed signal's frequency response is going to be changed.
As always, thank you for this precious masterclass.
If it's sine waves, then I wouldn't call that colouration. But if you mix a music or speech signal with a second version of itself, then yes there will be colouration, in the sense that various frequencies will be cut or boosted. A classic example is a microphone placed close to a reflective surface. The reflection is delayed and causes comb filtering. DM
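For anyone who wants to see that comb filtering in numbers, here's a Python sketch (NumPy assumed, with an arbitrary 1 ms reflection delay): the mix cancels completely at 500 Hz and 1500 Hz, the odd multiples of 1/(2 x delay), while 1000 Hz sums fully in phase and doubles.

```python
import numpy as np

# Mix a tone with a copy of itself delayed by 1 ms, as if reflected
# off a nearby surface, and look at the resulting level.
fs = 48000
d = int(round(0.001 * fs))            # 1 ms delay in whole samples (48)
t = np.arange(fs) / fs

for f in (500, 1000, 1500):
    x = np.sin(2 * np.pi * f * t)
    y = x[d:] + x[:-d]                # direct sound + delayed reflection
    print(f"{f} Hz: peak after mixing = {np.max(np.abs(y)):.2f}")
# prints 0.00, 2.00, 0.00 -- deep nulls at 500 and 1500 Hz, a 6 dB boost at 1000 Hz
```

The nulls repeat every 1 kHz up the spectrum, which is the "comb" in comb filtering.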
@Audio Masterclass Yes I'm thinking delays and comb-filtering as well. Thank you (again) for making things clearer to me.
I experienced this (but couldn't explain it) while using Cubase 5 stock EQ when I had both the HPF & LPF set at 1kHz.
Thanks for the insight.
Thank you for your comment. It's easy not to notice this effect in real-world recording, so well done for hearing it. DM
I teach audio engineering professionally and I am very impressed with this demonstration 🔊🎤
Thank you. It depends what you mean by audio engineering, but I've taught sound engineering since 1986. I'll get the hang of it one day... DM
Thanks for the great info, David. So should we therefore always have linear phase switched on for mixing and mastering, if the computer can manage?
It's just my opinion, but I would use minimum phase for everything except when I'm mixing a signal with a version of itself. But it's all down to what sounds good. In the mastering example mentioned in the video, then in theory linear phase would be better. But I would still try minimum phase to check that linear is actually an improvement. Pre-ringing can be an issue with linear phase, so it isn't guaranteed to sound better in all cases. DM