Sound Design Live

Build Your Career As A Sound Engineer


Should audience depth influence crossover frequency between main and sub?

By Nathan Lively

Hypothesis: By choosing a lower crossover frequency I can expand the coupling zone between main and sub.

Conclusion: While lowering the crossover frequency does expand the coupling zone between main and sub, and this fact may influence the system design, its advantages are secondary to the efficient functioning and cooperation of both drivers.

Coupling zone: The summation zone where the combination of signals is additive only. Phase offset must be <120º to prevent subtraction.

Bob McCarthy, Sound Systems: Design and Optimization

While working on a recent article about crossover slopes I started thinking about main+sub alignment and its expiration. If we know that ⅔ of the phase wheel gives us summation and ⅓ of it gives us cancellation and we know the point in space where the two sources are aligned, then we should be able to predict the expiration date of the alignment, compare it to the audience plane, and consider whether lowering the area of interaction will benefit coverage.

If two sources are aligned at 100Hz and the wavelength of 100Hz is 11.3ft, then a 3.8ft distance offset will create a ⅓ λ (wavelength) phase shift (120º). If we have two sources at opposite ends of a room and they are aligned in the center, then we have a 7.6ft coupling zone. From one edge of the coupling zone to the other is ⅔ λ (240º).

80Hz has a λ of 14.13ft and would give us a coupling zone of 9.4ft, an expansion of 1.8ft.
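The arithmetic above is simple enough to script. Here's a minimal sketch (the function names are mine) that reproduces the wavelength and coupling-zone numbers, assuming the ~1130 ft/s speed of sound implied by the 11.3ft wavelength at 100Hz:

```python
SPEED_OF_SOUND_FT = 1130.0  # ft/s (assumed; implied by 11.3 ft at 100 Hz)

def wavelength_ft(freq_hz):
    """Wavelength in feet at a given frequency."""
    return SPEED_OF_SOUND_FT / freq_hz

def coupling_zone_ft(freq_hz, max_phase_deg=120.0):
    """Width of the coupling zone around the alignment point.

    The zone extends ±(max_phase_deg/360)·λ from the point where the
    two sources are aligned, so its total width is 2·(max_phase_deg/360)·λ.
    """
    return 2.0 * (max_phase_deg / 360.0) * wavelength_ft(freq_hz)

print(round(wavelength_ft(100), 1))     # → 11.3
print(round(coupling_zone_ft(100), 1))  # → 7.5 (7.6 ft above comes from rounding ⅓ λ up to 3.8 ft first)
print(round(coupling_zone_ft(80), 1))   # → 9.4
```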

Lowering the crossover frequency to expand the coupling zone

Here’s a section view of a main+sub alignment where you can clearly see a cancellation at 24ft. The coupling zone is 29ft, which is 65% of the audience plane.

I can lower the crossover frequency and expand the coupling zone by 4ft, bringing it to 71% of the audience plane.

This process can be sped up using Merlijn van Veen’s Sub Align calculator. Here’s the same system design observing the relative level difference at 100Hz.

And here it is at 80Hz. Notice that the checkered pattern indicating the coupling zone has expanded.

Instead of putting every design through every potential crossover frequency, I made a new calculator that shows the percentage of audience within the coupling zone by frequency.

I am now able to quickly compare the potential benefit of selecting one crossover frequency over another by how much the coupling zone will expand or contract. Using the example from above we can see that changing the crossover frequency from 100Hz to 80Hz only provides a 7% improvement. This doesn’t seem significant enough to drive a system design decision on its own, but it could be weighed alongside other factors in the decision-making process.

Let’s look at another example. In this case the vertical distance offset is reduced and the audience depth is increased.

The calculator reveals that a 120Hz crossover would include 58% of the audience in the coupling zone, but a 75Hz crossover gives us a 13% improvement.
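For the curious, the core idea behind such a calculator can be sketched in a few lines. This is not Merlijn's Sub Align or my actual tool; the geometry and all names are simplifying assumptions: a main flown directly above a ground-stacked sub, a flat audience plane, and the two sources time-aligned at a chosen fraction of the audience depth.

```python
import math

SPEED_OF_SOUND_FT = 1130.0  # ft/s (assumed)

def audience_in_coupling_zone(freq_hz, main_height_ft, depth_ft,
                              align_at=0.5, n_points=200):
    """Percent of audience positions where main and sub stay within ±120º."""
    def arrival_offset_ms(x):
        d_main = math.hypot(main_height_ft, x)  # path from the flown main
        d_sub = x                               # path from the ground sub
        return (d_main - d_sub) / SPEED_OF_SOUND_FT * 1000.0

    ref = arrival_offset_ms(align_at * depth_ft)  # delay is dialed in here
    in_zone = 0
    for i in range(n_points):
        x = depth_ft * (i + 0.5) / n_points
        phase_deg = 360.0 * freq_hz * (arrival_offset_ms(x) - ref) / 1000.0
        if abs(phase_deg) < 120.0:
            in_zone += 1
    return 100.0 * in_zone / n_points

# Compare candidate crossover frequencies for one hypothetical design:
for f in (75, 80, 100, 120):
    print(f, round(audience_in_coupling_zone(f, 18, 60), 1))
```

In this model a lower frequency can only widen the zone, since the phase offset at every seat scales linearly with frequency.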

Should I use this calculator to pick my crossover frequency?

No. When it comes to choosing a crossover frequency there are other more important factors to consider like mechanical and electrical limitations. If your design only puts a small portion of the audience in the coupling zone, changing the crossover frequency is not going to save you.

Instead, start by observing the manufacturer’s recommendations, then the native response of each speaker, and the content of the show and its power requirements over frequency.

All that being said, knowing more about the expected performance of a sound system is powerful. I might make design changes based on the calculator’s predictions. I might do nothing. Either way, I walk into the room with fewer surprises during the listening and optimization steps.

If lowering the crossover frequency increases the coupling zone, why not just always make it as low as possible?

I don’t have a great answer for this question. As I mentioned already, there are limitations to how low you can go. One major tradeoff is that your main speaker will need to handle more and more power as the crossover frequency lowers, making it less efficient.

One clear benefit I can see is estimating the viability of an overlap crossover. If you are planning a system with an overlap crossover that goes all the way up to 120Hz, and the calculator shows that 120Hz will only be coupling through 50% of the audience, you might decide on a unity crossover instead to limit the main+sub interaction at those higher frequencies, making it more stable over distance.

What about aligning at 3/4 depth?

Right! I included a phase offset option to test this and it makes a big difference. In the most recent example, if I use a ⅓ λ offset (120º), the portion of the audience in the coupling zone goes up to 88%.

Do all-pass and FIR filters cause delay?

By Nathan Lively

For a long time I was afraid of all-pass and FIR filters because they seemed exotic and supposedly cause lots of delay, making them unusable for live sound. Turns out this was just an excuse I was using to avoid some mental hurdles.

Do all-pass filters cause delay?

Here’s a measurement of my BLU-160. It’s an output processor from BSS.

Pretty boring.

Here’s that same measurement with 5ms of delay inserted. Let me draw your attention to the Live IR. It’s the exact same shape as in the previous measurement, just pushed 5ms down the time axis.

Let’s take out the delay and insert a second-order 180º APF (all-pass filter) at 100Hz.

Cool.

Wait a second. What’s going on with the Live IR?

Isn’t an APF a frequency-specific delay?

If the half-period of 100Hz is 5ms, shouldn’t we see a 5ms delay in Live IR?

Maybe the Live IR is overrepresented by high-frequency content. Let’s start over and switch to band-limited pink noise (50-200Hz) for the signal generator.

I moved the Live IR window over a bit since the peak shifted when I switched to the band-limited pink noise, but I didn’t adjust the delay finder.

Now I’ll insert the APF again.

I see phase shift, but I don’t see delay.

Maybe we just can’t see with enough resolution. Let’s record the wavelet and look at it as an IR (impulse response).

Ah, ha! Now we see some delay. But is it 5ms of delay?

Rats. It’s only 0.042ms.

I have one more idea. What if I record them and look at them in a wave editor?

Here are the waveforms superimposed. It looks like some delay, or maybe a polarity inversion?

But if I invert the polarity…

Instead of the same IR pushed 5ms down the time axis we see something different: a new waveform.

This is an important distinction. Delay causes delay. It returns a copy of the original, just farther down the time axis. It does not alter the wave shape and there is no frequency dependence.

On the other hand, an APF does not cause delay. Instead, it causes phase shift, which is frequency dependent and returns a new waveform. Phase shift causes the waveform to rotate around the time axis, which can make it difficult to distinguish from delay.
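This distinction can be verified numerically. The sketch below (my own choice of sample rate and Q; the coefficients are the standard second-order all-pass from the RBJ Audio EQ Cookbook, not any particular processor's implementation) shows that an APF passes every frequency at unity gain while the phase shift, and with it the apparent delay, changes with frequency, reaching 180º, a 5ms phase delay, at the 100Hz center frequency.

```python
import numpy as np

fs, f0, Q = 48000.0, 100.0, 0.707  # assumed sample rate and Q
w0 = 2 * np.pi * f0 / fs
alpha = np.sin(w0) / (2 * Q)

# For an all-pass biquad the numerator is the reversed denominator,
# which is what forces the magnitude to 1 at every frequency.
a = np.array([1 + alpha, -2 * np.cos(w0), 1 - alpha])
b = a[::-1]

def response(freq_hz):
    """Complex frequency response of the biquad at freq_hz."""
    z = np.exp(-1j * 2 * np.pi * freq_hz / fs * np.arange(3))
    return (b @ z) / (a @ z)

for f in (50.0, 100.0, 200.0, 1000.0):
    h = response(f)
    phase_deg = np.degrees(np.angle(h))
    phase_delay_ms = -phase_deg / 360.0 / f * 1000.0
    print(f"{f:>6} Hz  |H|={abs(h):.3f}  phase={phase_deg:7.1f}º  "
          f"phase delay={phase_delay_ms:6.2f}ms")
```

Since the magnitude never changes and the phase shift is frequency dependent, no single delay value can describe or undo what this filter does.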

It’s important that we do, though, because if I sent you this new waveform and you wanted to reverse the process you wouldn’t use delay. You would use a complementary APF.

Problem solving

If we observe summation that looks like this:

We’ll want to fix it with delay so that it looks like this:

If we observe summation that looks like this:

We’ll fix it with a matching APF in the other channel:

Do FIR filters cause delay?

FIR (finite impulse response) filters do not cause delay for the same reasons that APFs do not cause delay. Their implementation, though, may introduce latency any time it includes excess phase or linear phase.

  • Excess phase is any additional phase beyond minimum phase.
  • Minimum phase defines the predictable relationship between magnitude and phase.
  • Linear phase breaks all the rules and removes any relationship between magnitude and phase. 😨

This leads to a very simple rule for a minimum phase network: at a local maximum or minimum in the transfer function, a frequency of maximum curvature of amplitude corresponds to a point of inflection of phase.

Richard Heyser

As long as your FIR filter only includes minimum phase filters, there will be no processing delay. Where there is a change in magnitude, there will be a corresponding and predictable change in phase. Where there is a change in phase there will be a predictable and corresponding change in magnitude.

Here’s a measurement of a JBL PRX615M.

Here’s the same speaker, but with an FIR filter inserted (800 samples!). Notice the Live IR. The FIR filter is entirely minimum phase so there is no processing delay.

If I were to introduce linear phase filters and make changes to the phase and magnitude independently, a filter delay would be necessary and we’d incur some amount of processor latency.

Why should I care?

To know the right tool for the job, you have to know what it’s called. Imagine a surgeon calling out for the thingy-thing.

  • If you have a time alignment problem, fix it with delay.
  • If you have a phase alignment problem, fix it with an APF.
  • If you want to adjust magnitude with phase, use a minimum phase FIR filter and incur no processing delay.
  • If you need to adjust magnitude and phase independently, use a linear phase FIR filter and incur predictable processing delay.

Trivia: Is Ø the symbol for phase or polarity? Comment below.

Fighting Microphone Feedback WITHOUT a Graphic EQ While Mixing Monitors from FOH in a Reverberant Room

By Nathan Lively

Subscribe on iTunes, SoundCloud, Google Play or Stitcher.

Support Sound Design Live on Patreon.

In this episode of Sound Design Live I talk with Michael Lawrence who is Document Jockey at Rational Acoustics, Technical Editor at Live Sound International, and a freelance audio engineer. We discuss lots of tips for fighting microphone feedback without a graphic EQ (and without Smaart!) while mixing stage monitors from FOH in a reverberant room.

I ask:

  • How did you get your first job in audio?
  • Looking back on your career so far, what’s one of the best decisions you made to get more of the work that you really love?
Running monitors from FOH? Here are some tips.
    • Like me and many other people, you often work by yourself. You are the system designer, system tech, A1, etc. You’ve developed a lot of processes to efficiently get it all done. Can we go over some of your best tips for running stage monitors from FOH?
    • Why don’t you use Smaart with your stage monitors?
Walk me through your process for ringing out the monitors.
  • And from Facebook
    • Manuel Elias Costa: What does he know about automix and machine learning algorithms research? It would be interesting to hear more about the developments in mixing and automation.
    • Andrey Andreev: What does he think of the way the music industry is developing production-wise, and is audio quality still a thing? How are decisions made when speccing a certain sound system? Why is point source so neglected when it can often sound so much better than the usual line array systems? (Of course line array has its place, but it’s hardly always the right solution.)
    • Dave Gammon: Does he see sound being more immersive in the future? Less about left and right, and more about a total experience that encapsulates the audience?
    • Garrick Quentin: With the new advancements in line source technology, where do old line arrays go to die? What’s the next game changer in line array technology that we don’t yet know about?
    • Lou Kohley: How do you stay relevant to opposite edges of your market, from the novice just starting at a bar gig to the working professional to the industry veteran?

I hate graphic EQs. I don’t use them unless I don’t have a better choice. You’re talking about 1/3 of an octave. That’s like a C to an F on a piano.

Michael Lawrence

Notes

  1. All music in this podcast by Bionik.
  2. Running Monitors from FOH? Here are some tips.
    1. Verify all outputs with pink noise. If they are all the same model, they should all sound the same. Check settings (line/mic switch, gain).
    2. Practice identifying feedback frequencies. Sing it to yourself.
  3. Hardware: X32, Midas Pro1, LS9
  4. My Results from 30 Days of Ear Training, Download the Aiming Triangles Business Card
  5. Quotes
    1. I treat everything like I’m on tour even when I’m not on tour; doing the same things in the same order every time.
    2. Before you do any test, have an idea of what you expect it to look like.
    3. I always have a cue wedge at FOH. If you don’t have one, you’re guessing.
    4. I hate graphic EQs. I don’t use them unless I don’t have a better choice. You’re talking about 1/3 of an octave. That’s like a C to an F on a piano.
    5. We ignore the polar pattern of the mic. That’s super important. Buy yourself every dB you can get.
    6. I always double patch my money channel.

3 EQ Snapshots That Will Make Your Corporate Event Mics More Transparent Using Smaart®: SM58, 185, MX412

By Nathan Lively

The most common microphones I use on corporate events are the Shure wireless SM58, WL185 lavalier, and wired MX412. I created three EQ snapshots using a mic compare measurement in Smaart for a more transparent starting point.

What is a mic compare measurement?

In the same way that we use the transfer function measurement in Smaart to observe changes to our mix as it passes through speakers and the air, we can also observe changes to the source as it passes through microphones.

To do this, first position the mics so that their capsules are as close as possible to each other. It’s often easiest to place them on-axis with each other with the source at 90º.

Connect the monitor output of your mix console to an input of your audio interface. Create a new transfer function measurement pair using the console’s monitor output as the measurement signal and your measurement mic as the reference signal.

Start the measurement and signal generator in Smaart.

On the mix console, hit solo on the mic channel that you want to measure (sending it to the monitor buss), flatten the EQ, and turn up the monitor output until the measurement trace in Smaart centers around 0dB. For better data on global trends, create an average from several mics like I did.

Creating the snapshot

Adjust the channel EQ to your satisfaction.

Keep in mind that many (most?) microphones include purpose-built non-linearities like helpful EQ enhancements. Think of every kick mic you’ve ever used.

Here’s an RE320 I measured. I chose to make no EQ changes because I listened to it in headphones and the room and it sounded great.

Shure SM58

Here’s the pre and post EQ measurement for the SM58.

Here’s the manufacturer’s specification.

Here’s the EQ I came up with for a more transparent response.

And here’s the EQ I settled on after listening on a show. You can download all of the snapshots here.

Shure 185

Here’s the pre and post EQ measurement for the 185 capsule.

I couldn’t find the manufacturer’s specification. If you have it, let me know.

Here’s the EQ I came up with for a more transparent response.

And here’s the EQ I settled on after listening on a show. You can download all of the snapshots for X/M32 here.

Shure MX412

Here’s the pre and post EQ measurement for the MX412.

I couldn’t find the manufacturer’s specification. If you have it, let me know.

Here’s the EQ I came up with for a more transparent response.

And here’s the EQ I settled on after listening on a show. You can download all of the snapshots here.

Have you tried a mic compare measurement? What were your results?

Smaart® and the Smaart logo are registered trademarks of Rational Acoustics LLC and are not affiliated with Nathan Lively or Sound Design Live.

Invasion of the Phase Invaders: An Audio Engineer’s Guide to Battle Tactics and Big Scores

By Nathan Lively

If you lay awake at night thinking about improving your crossover alignments and winning big at Phase Invaders, then this guide is for you.

What follows are three proven battle tactics from beginner to advanced.

#1 – Beginner

Press all the buttons until you win.

a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will almost surely type any given text, such as the complete works of William Shakespeare.

Wikipedia

I have to thank my wife for this one. She is often my first beta tester and while not a professional audio engineer, she is driven by a competitive desire to win unlike most people I’ve met. While making random combinations of the delay slider and polarity switch, she keeps track of her score and returns to the combination that returned the highest score.

#2 – Intermediate

Make the pictures match.

The battle tactic I think most audio engineers will start with is the visual one. Naturally, we are all visual learners. One good picture is worth a thousand words. It’s one thing to talk about speaker coverage, but the first time you see its prediction in your modeling software or do your first speaker autopsy, a new level of understanding is reached.

In Phase Invaders, you’ll want to use coherence blanking to remove any noise, zoom into the crossover region, then adjust delay and polarity until the Sum matches the Target. Use the score to fine-tune.

Pro battle tactic: Click and drag to zoom in on the graph. Double click to zoom out.

#3 – Advanced

Find optimum alignment through the crossover region using the phase graph and phase delay formula. (scary!)

Finding alignment on the phase graph may be as easy as sliding the delay around a bit until the pictures match, but many times the data is so hard to read that it can be difficult to tell if you are a half or full rotation away from a better result.

Here are the steps:

  1. Pick a frequency (f) that is near the center of the crossover region, has near matching amplitude on main and sub, and relatively high coherence.
  2. Use the phase delay formula to calculate the delay needed to align main and sub at one frequency: (((Main Phase/360)(1000/f))*-1)-(((Sub Phase/360)(1000/f))*-1). Pop in the variables and you can paste it directly into Google. As far as I can tell, you can simplify this formula by removing the parentheses and it still works, but I gave you both just in case: (Main Phase/360*-1000/f)-(Sub Phase/360*-1000/f). For more on this formula read this article.
  3. Use the result as your delay in Phase Invaders. If the phase traces are aligned, Sum is on top of Target, and you have a big fat score, you’re done. If not, try something else. Add or subtract delay to rotate the phase at f by 180º+pol. inv. or 360º. Use 500/f and 1000/f, respectively. Keep going until you find the combination with best alignment, summation, and score.
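The steps above can be wrapped in a small helper: the phase delay formula from step 2 plus the half- and full-rotation candidates from step 3. The names are mine; phases are in degrees as read off the analyzer, f in Hz, results in ms.

```python
def alignment_delay_ms(main_phase_deg, sub_phase_deg, f_hz):
    """Delay (ms) that aligns main and sub at frequency f_hz.

    Each term converts a phase reading into a phase delay:
    (phase/360) cycles x (1000/f) ms per cycle, negated so that
    phase lag maps to positive delay.
    """
    main_delay = ((main_phase_deg / 360.0) * (1000.0 / f_hz)) * -1
    sub_delay = ((sub_phase_deg / 360.0) * (1000.0 / f_hz)) * -1
    return main_delay - sub_delay

def rotation_candidates(delay_ms, f_hz):
    """Other delays to audition once aligned at f_hz: add or subtract a half
    rotation (500/f, paired with a polarity inversion) or a full rotation
    (1000/f)."""
    half, full = 500.0 / f_hz, 1000.0 / f_hz
    return {
        "half_rotation_plus_pol": (delay_ms - half, delay_ms + half),
        "full_rotation": (delay_ms - full, delay_ms + full),
    }

# e.g. main reads -90º and sub reads +30º at 100Hz:
print(round(alignment_delay_ms(-90, 30, 100), 2))  # → 3.33
```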

Pro battle tactic: The phase delay and time period are calculated for you in the cursor read out at the top of the screen.

Walkthrough

Let’s work through it together. Here’s a four-mic average of a measurement I took of a Martin CDD-Live 12 (main) and an SXP118 (sub). The main is 18ft directly above the sub.

The first thing I’ll do is adjust the coherence blanking to get rid of some of the noise.

Now I’ll zoom into the area of interaction and pick a frequency. I’m going to pick 105Hz because it’s near the center of the crossover region, has matching magnitude, and high relative coherence.

If you are already familiar with the phase graph then this probably looks like a polarity inversion. If you’re not, you might look at the cursor readout and see that it says 184.72 °Δ.

(I know the font is hard to read. It seemed clever at first.)

I’ll insert a polarity inversion and we get a near perfect score of 9947396.

This is a pretty clear cut case, but let’s try a couple of other options.

Earlier, with the cursor on 105Hz, we saw that the phase delta was 184.72º and the phase delay was -4.89ms. I’ll take out the polarity inversion and try -4.89ms in the delay.

Right away I can see that while we are aligned at 105Hz, the phase slopes do not match, the sum trace is not on top of the target trace through the entire crossover region, and the score of 9719481 is lower than before.

Our alignment was not improved by going in this direction, but now that we are aligned at one frequency we can easily test other possibilities.

In the cursor readout I can see that the period of 105Hz is 9.48ms and half of that is 4.74. Let’s try 9.48ms.

Our current delay setting is -4.89ms. To get a 360º rotation: -4.89+9.48=4.59ms.

Now we have better alignment of slopes, better summation, and a score of 9913456. That’s still not better than our first score.

Let’s test one more. This time half a rotation at 105Hz with a polarity inversion. 4.59+4.74=9.33ms

This gives us worse alignment and a score of 9619582.

Through efficiently testing a handful of options we have discovered the option with maximum summation and improved our relationship with the phase graph.

Pro battle tactic: Have an empty text document open to keep track of your scores and settings.

Have you tried Phase Invaders? What’s your favorite battle tactic?


Copyright © 2021 Nathan Lively