Sound Design Live

Build Your Career As A Sound Engineer

Will adding a crossover filter create better phase alignment between speakers from different families?

By Nathan Lively

In an ideal world, we would always use matching speaker sets. Main from brand A is designed to work with Sub from brand A and we have confidence that when they are deployed in the field that they will work harmoniously together.

But what happens when we combine two speakers from different families or brands?

Takeaway

If phase slopes do not match, look for differences in phase divisible by 45º to identify filter opportunities.

Seeing actionable data in our audio analyzer for the main to subwoofer crossover frequency range is like seeing a shooting star. It’s rare and it never happens when someone else is looking.

Sometimes we get lucky and they match up pretty well.

[Image: DVA T4 and VRX918SP]

For example, a dB Technologies DVA T4 would normally be matched with something from the same family like a DVA S1518N subwoofer. But if there are no more S1518N available at the sound company, they may send a substitute. This exact situation happened to me and they sent a VRX918SP. I had not used one in a long time and I wondered, will they work together?

Luckily, they did.

[Image: T4 + VRX918SP. T4 in red, 918 in blue, summation in pink]

Anecdotally, I would say that 60% of the time, it works every time. You get lucky and they play together out of the box. So what do you do the other 40% of the time?

What if you go through your normal alignment process and you end up with something like this?

[Image: Milo + 650p with 3.66ms of delay]

I know, I know. Not the most dramatic example, but here we have a main+sub pair that will not achieve maximum summation through the crossover region. You could try a polarity inversion and different delay values, but no matter how much you fiddle with it, you won’t be able to grab that last 5%.

You might start wishing you were an expert at all-pass filters, but then you realize that your DSP doesn’t even have them. You start wondering if maybe adding a filter would produce better alignment.

To help satisfy your curiosity, here’s a helpful rule to memorize: filter order × 45º = total phase shift.

[Image: filter orders and their phase shifts. I’m too lazy to label them all.]

In practice, if you compare two phase measurements and observe phase shift of 45º, 90º, 135º or any multiple of 45º, consider adding a filter.
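If it helps to see the rule as code, here’s a minimal sketch (the function name and example offsets are mine, purely for illustration):

```python
def suggest_filter_order(phase_offset_deg: float) -> float:
    """Rule of thumb from above: each filter order accounts for roughly 45º of phase shift."""
    return abs(phase_offset_deg) / 45.0

# 90º of observed offset suggests trying a 2nd-order (12dB/oct) filter,
# 135º suggests a 3rd-order (18dB/oct) filter, and so on.
print(suggest_filter_order(90))   # 2.0
print(suggest_filter_order(135))  # 3.0
```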

In the example we started with above, it looks like we might need to add a filter to the red Sub trace. After all, it needs to be 45º steeper, right?

Not so fast. The measurement is misleading because I added 3.66ms to the sub measurement in pursuit of alignment. Let’s remove the delay first. I’ll also add a few milliseconds to the delay locator to unwrap the phase and make it easier to look at.

[Image: unwrapped phase]

From this graph we can identify a couple of things:

  1. The red Main trace is steeper than the blue Sub trace.
  2. They are 90º apart at 72Hz and 180º apart at 190Hz.

90 / 45 = 2

This makes me think that a 2nd order filter may work. The image below of a 2nd order (12dB/oct) Butterworth filter shows both the magnitude and the phase. You can see 90º of phase shift at 72Hz sloping down to 150º at 190Hz.
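If you want to check those numbers yourself, here’s a minimal Python sketch using scipy. The 72Hz corner frequency is my assumption, since that’s where the 90º point of a 2nd order Butterworth would land:

```python
import numpy as np
from scipy import signal

# 2nd-order (12dB/oct) analog Butterworth low-pass.
# Assumed corner frequency: 72Hz, where the traces above sit 90º apart.
fc = 72.0
b, a = signal.butter(2, 2 * np.pi * fc, btype='low', analog=True)

# Evaluate the filter at the two frequencies from the measurement.
freqs_hz = np.array([72.0, 190.0])
_, h = signal.freqs(b, a, worN=2 * np.pi * freqs_hz)

for f, resp in zip(freqs_hz, h):
    print(f"{f:5.0f} Hz: {np.degrees(np.angle(resp)):7.1f}º")
# ≈ -90º at 72Hz and ≈ -148º at 190Hz, close to the 90º and 150º noted above.
```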

Let’s insert it and see the result.

The slopes of the phase traces are now better matched (without electronic delay) and I have reduced the crossover interaction from 2oct to 1oct.

Now that we have a relative preset, let’s deploy these speakers and see how easy it is to restore alignment with a simple distance measurement.

[Image: room design]

If we did nothing, the prediction would look like this.

It looks like we have put the audience into a null while our precious summation blasts into the sky.

Here’s the magnitude at the measurement mic location.

[Image: null at the measurement mic position]

Let’s see if we can fix that.

  1. Distance to main = 22ft
  2. Distance to sub = 18.7ft
  3. 22 – 18.7 = 3.3ft
  4. 3.3ft × 0.9ms/ft = 2.97ms in the Sub output
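The 0.9 in step 4 is just the speed of sound expressed in milliseconds per foot. Here’s the same arithmetic as a minimal sketch (the constant and function name are mine):

```python
SPEED_OF_SOUND_FT_PER_S = 1126.0               # roughly, at room temperature
MS_PER_FT = 1000.0 / SPEED_OF_SOUND_FT_PER_S   # ≈ 0.89ms of delay per foot

def sub_delay_ms(dist_to_main_ft: float, dist_to_sub_ft: float) -> float:
    """Delay for the Sub output so its arrival lines up with the more distant Main."""
    return (dist_to_main_ft - dist_to_sub_ft) * MS_PER_FT

print(sub_delay_ms(22.0, 18.7))  # ≈ 2.9ms, matching the ~2.97ms above
```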
[Image: deployed prediction]

Wait, it didn’t work. What happened? Why is there still 180º of phase difference between main and sub??

Let’s check to see if I applied any extra processing.

This u-shaping filter says that it’s only applying 30º of phase shift, but our results argue otherwise. There may be another detail I’m missing here, but I think a polarity inversion in the Sub will fix it.

[Image: deployed prediction, aligned]

Better!

And here’s the final measurement of combined systems.

And we have pushed the coupling zone down onto the audience.

Big thanks to Mauricio Ramirez for the feedback! [why is there no luchador emoji??]

What about you? Have you tried improving alignments by adding crossover filters? What were your results?

Further questions

Mystery polarity inversion?

I discovered later that if I turn the delay integration (Lyon 55-70, not sure if that matters) on and back off again, the polarity inversion from the u-shaping filters disappears.

Why not just compare magnitude slopes instead of phase?

I thought about this, but the more I looked into it, the more confused I got. Measuring electrical filters makes pretty pictures that are easy to compare, but real speakers measured in the field are more difficult to compare.

Here’s an example speaker pair. Do the slopes match?

Looking at the phase graph, it seems clear, but I’m not so sure about the magnitude graph. Where should I start counting dB/oct and where should I stop? The red trace has a nice steady slope that looks like 24dB/oct, but the blue trace looks almost asymmetrical since it has a gradual slope and then a sudden drop-off. 🤷‍♂️

How Dave Gunness Brought Single-Driver Cardioid Subs To Market

By Nathan Lively

Subscribe on iTunes, SoundCloud, Google Play or Stitcher.

Support Sound Design Live on Patreon.

In this episode of Sound Design Live I talk with the vice president of engineering at Fulcrum Acoustic, David Gunness. We discuss the improved gain-before-feedback with coaxial speakers, the cell phone material that helped crack the code on passive cardioid subs, and ground-stacked vs flown subwoofers.

I ask:

  • “Fulcrum’s revolutionary coaxial designs allow for improved intelligibility, higher gain before feedback…” How does a coaxial design improve GBF?
  • When most of us think of a directional subwoofer, we think of multiple elements arrayed together. Even when it’s a single box, like the old Meyer Sound M3D subs, there were multiple drivers in the box to create the directional result. But your cardioid subs have a single driver. How does it work?
  • In The Best Place To Put Subwoofers Is… ? Jerrold Stevens writes that it’s best to ground stack your subs if the entire system is ground-stacked, but better to fly the subs if you’re flying the mains. In Comments On Half Space in the section on subwoofer deployment, you write about the height of the subs affecting the difference in path lengths between direct sound and ground bounce and the resulting comb filter. If my mains are flown and my subs are flown, then should the comb filter affect my design choices and limit the subwoofer height? Is there a maximum angle I can use relative to the listener’s perspective to remove the comb filter from the operating range of my subwoofer? Or does the improved SPL distribution with flown subs out-weigh the potential comb filter throughout the audience?
  • Tell us about the biggest or maybe most painful mistake you’ve made on the job and how you recovered.
  • From FB
    • Nathan Riddel: What is his process for developing the FIR presets? What type of tonal shaping EQ does he pre-bake into the presets for his speakers (if any)? What compromises do the passive-cardioid boxes have or where wouldn’t they be a good idea?
    • Bodo Felusch: What was your biggest challenge when you created the EAW focused NT series, and how did you solve it?
    • Menno Zijlstra: Fulcrum is based on DSP. How does he see the future developing in the years to come? What can we expect?
  • What’s in your work bag?
[Image: Dave Gunness]

I’ve encountered people over the years who when they developed an important skill, wanted to keep it close to the vest and not let someone else figure out how to do it. But then you’re stuck doing that for the rest of your career.

Dave Gunness

Notes

  1. All music in this episode by Hospitalized and JNGS.
  2. The Mom Test
  3. Hydrophobic: Tending to repel or fail to mix with water. The opposite of hydrophilic.
  4. Workbag: RF measurement mic
  5. Book: Acoustics, Acoustical Engineering
  6. Quotes
    1. The biggest area of mistakes is EQing finished systems.
    2. The house curve for Fulcrum is flat through the mids.
    3. It’s not how good you can make FOH. It’s how even you can make it everywhere.
    4. It’s easy to say, this is how you make a passive cardioid, but when you try to make it work you discover the material challenges.
    5. [Sub-cardioid] is generally more useful. Even with a subwoofer sitting on the floor, if you walk around behind it, your ears are not on-axis with the cabinet. You are above it. Having more attenuation at 135º makes it feel like more attenuation than from an active pure cardioid.
    6. How tympanic is the stage?
    7. In sheds, [subwoofers] are on the floor and in the air. In some cases that means you overlap the subs with the response of the mains. In a line array situation, you’re extending the line all the way to the floor. That’s particularly helpful in sheds with a metal roof. Extending that line makes it long enough that you’re putting less energy into the roof, which means it’s less likely to rattle and bounce back down 80ms late.
    8. The one I don’t like is flying [subs] at the top of a line array. They’re coupled tightly but can’t have the same phase relationship as you move forward and backward in the room. It’s better if they are next to the line array.
    9. I’ve encountered people over the years who when they developed an important skill, wanted to keep it close to the vest and not let someone else figure out how to do it. But then you’re stuck doing that for the rest of your career.
    10. The things that you do to make a loudspeaker flat out of the box don’t make it sound better. You can always make it sound better if you don’t try to make it flat out of the box. You let the compression driver run flat out. That means there’s no high impedance between the amp and the compression driver, etc.

Are you still trying to align your subwoofers with an audio analyzer?

By Nathan Lively

The challenge of collecting and interpreting high-quality, low-frequency data with modern audio analyzers leads to user error and misaligned subwoofer crossovers. We need a simpler, more reliable approach.

tl;dr

  1. Crossover alignment is hard.
  2. We need a database and mobile app to make it easier.
  3. Fill out this survey: Should I build a sub alignment app?

Seeing actionable data in our audio analyzer for the main to subwoofer crossover frequency range is like seeing a shooting star. It’s rare and it never happens when someone else is looking.

…unless you’re at a workshop! Like me, you’ve probably learned how to create a phase-aligned crossover at a workshop under controlled conditions. It’s gratifying to measure two unmatched slopes and then manipulate delay and polarity until they are aligned. Almost anyone can do it! (One of my discoveries from building Phase Invaders is that with a couple of simple tools even my wife, who struggles to join Zoom meetings, could easily beat the game through random trial and error.) So at the workshop, everything goes well and we think, “Easy! All I have to do is make the pictures match.”

Then we get into the field and it doesn’t work, not because of the workshop teacher, whose method was solid, but because the pictures literally cannot match. As smart as our modern audio analyzers have become, they can’t reject the room (resistance is futile), which has become an extension of the sound system in the frequency range where they share custody. You are stuck with an impossible puzzle.

I have tried many solutions to this problem. Let’s do a quick recap of my years of confusion and what I think we should do about it.

13 Things That Don’t Work

1 – Invert the polarity. Does it sound better?

This is what I learned in recording school. If you have a microphone on top and bottom of a snare drum or front and back of a guitar cabinet, you should try inverting the polarity to hear if any low end is restored.

This works fine for matched distances where the tonality of a single instrument is at stake, but not for larger distance offsets where large portions of the audience and frequency spectrum may be affected.

2 – Add delay while listening to a kick drum or reference track.

This doesn’t work because your hearing sensitivity changes by as much as 40dB through the operating range of the subwoofer (30-120Hz). Plus, a better sounding kick drum might not equal proper phase alignment. It might just be a cool sound.

I recently helped my friend Zeke with an alignment. The mains were initially set with 30ms of delay. Test tracks with lots of drums sounded, honestly, really cool. There was a really nice BOOoooming sustain that gave a super power to the low end.

After following a different alignment strategy (detailed below), we ended up with 9ms of delay in the sub instead. We were able to AB test the two results against each other and it was clear that one was an artistic choice (BOOoooom) and one was aligned (just Boom).

3 – Add delay while listening to a sine wave with the sub polarity inverted.

This is still the most common solution that people tell me they use in the field. The reason this one won’t work is that you are only listening to a single frequency. When perfectly aligned, it sounds the same at 0º, 360º, 720º and so on. Your alignment could be any number of cycles off and you wouldn’t know it. It is a tunnel vision solution that doesn’t tell you anything about the rest of the crossover region that typically takes place over a one octave wide frequency span, before main or sub once again becomes sole custodian over the remainder of the audible frequency range.
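Here’s a tiny sketch of that ambiguity. The 80Hz tone and the 2.9ms base delay are made-up numbers for illustration:

```python
f_hz = 80.0                  # a single crossover-region test tone
period_ms = 1000.0 / f_hz    # 12.5ms per cycle at 80Hz
base_delay_ms = 2.9          # some delay that nulls the inverted-polarity sine

# All of these sound identical on that one sine wave, yet at most one is actually aligned.
candidates = [round(base_delay_ms + n * period_ms, 1) for n in range(4)]
print(candidates)            # [2.9, 15.4, 27.9, 40.4]
```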

4 – Add delay while listening to a warble.

This is a step in the right direction, because now you are exposing yourself to the frequency spectrum where the main and sub share joint custody. It’s not a reliable solution for me, though, because I don’t hear the change. I remember when Merlijn van Veen first published a video about how to use the warble test. I was excited to try it. Unfortunately, I couldn’t hear the changes he demonstrated in the video and I have not been able to use it successfully in the field. That doesn’t mean that it might not work for you, though. The same is true for many of these tools, so don’t immediately discount them just because they didn’t work for me.

5 – Add delay while listening to band-limited pink noise.

This stimulus has definitely been more useful for me than the ones I’ve listed so far, but it won’t work for the same reasons I outlined related to the kick drum sound. If you want to try it, most audio analyzers allow you to band limit the pseudorandom pink noise generator.

6 – Add delay while listening to a tone burst.

This is the best stimulus I have found for listening tests. It’s great for doing an AB test of two different settings. My ears are not sensitive enough to dial in an exact delay value, though, so I wouldn’t try to use it alone.

[Image: distance offset]

7 – Measure distance offset and convert to delay.

I discovered at a recent workshop that many of my students are using this method as a “get in the ballpark” or “better than nothing” strategy. While I agree that it may get you into the ballpark, you have a 33% chance of ending up on the cancellation side of the phase wheel. Sometimes being a half-step off sounds worse than several steps off. Imagine that you are perfectly in time, but the polarity is inverted. You’re in the ballpark, but getting hit in the back of the head with the ball.
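To put a number on the ballpark, here’s a rough sketch. The 120º limit corresponds to the cancellation third of the phase wheel mentioned above, and the 80Hz crossover frequency is illustrative:

```python
f_xover_hz = 80.0           # illustrative crossover frequency
coupling_limit_deg = 120.0  # beyond this the phase wheel tips from summation into cancellation

# Maximum residual time error before the crossover region crosses into cancellation at 80Hz.
max_error_ms = (coupling_limit_deg / 360.0) * (1000.0 / f_xover_hz)
print(round(max_error_ms, 1))  # ≈ 4.2ms of slop, and a distance-only guess can easily miss by more
```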

8 – Ground plane microphone position.

This is another technique that can be helpful, but won’t remove reflections from the other 5 out of 6 boundaries/walls that typically constitute a room.

9 – Measure solo elements with a loud and long transfer function.

To get a highly coherent measurement (>95%), you need a good signal to noise ratio (>10dB). No problem. Turn up the signal generator. You can improve your coherence even more by using a higher number of averages. Great. Set the averages to infinity and let it run all day.

Both of these things may help, but can’t be confirmed until you energize the room and observe the results. If there’s a cancellation because of a detrimental reflection at 100Hz, you’ll never be able to get the coherence close to 100%. You can even get high average coherence with misleading data. This may be hard to believe until you realize that the time record used for the lowest frequencies (in modern analyzers that make use of multiple time records) typically lasts about 1 second, which means that a 1,000ft reflection will still be considered coherent signal power. For a deeper dive on this subject, please see my workshop Getting Work Done with an Audio Analyzer.
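Here’s that arithmetic as a quick sketch (the speed of sound and window length are round numbers, not from any particular analyzer):

```python
SPEED_OF_SOUND_FT_PER_S = 1126.0
time_record_s = 1.0           # roughly the longest window in a multi-window analyzer
reflection_path_ft = 1000.0   # extra path length traveled by the reflection

arrival_s = reflection_path_ft / SPEED_OF_SOUND_FT_PER_S
print(round(arrival_s, 2), arrival_s < time_record_s)
# 0.89 True: the reflection lands inside the window and gets counted as coherent signal.
```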

10 – Measure average phase.

This one was a real eye opener for me. I had heard people mention measuring average phase a few times in the past, but it always sounded like a dumb idea for some reason. Then I tried it.

Holy shit. Amazing improvement in measurement quality. If you were previously measuring crossover alignment with a single microphone and then tried three microphones deployed at strategic points through the audience, it’s like night and day.

Unfortunately, this technique falls apart for me in wide rooms with center subs. In this case, their phase response at each microphone is so different that they average into confusion.

Here’s one I struggled to decipher while at a hotel in New Orleans.

[Image: three-microphone average]

11 – Measure an IR and filter it around the crossover frequency.

I love this one. It’s fun! It works…sometimes. Unfortunately, a band-limited subwoofer only transmits about 0.5% (-46dB) of the information it is presented with, in comparison to a full-range speaker. Its IR has very little amplitude, leaving no prominent peak to track.
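For what it’s worth, that -46dB figure is just 0.5% expressed as an amplitude ratio:

```python
import math

ratio = 0.005                  # 0.5%
print(20 * math.log10(ratio))  # ≈ -46.0dB
```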

Here’s an IR of a full-range speaker.

[Image: full-range speaker IR]

And here’s a sub.

[Image: subwoofer IR]

You can see how it would be challenging to compare their peaks.

Of course, you could view the ETC graph instead. Here’s the same full-range speaker, this time with the ETC graph filtered around 80Hz.

[Image: full-range speaker ETC filtered around 80Hz]

And here’s the sub.

[Image: subwoofer ETC filtered around 80Hz]

We’ve achieved actionable data at the expense of losing polarity information. 🤷‍♀️

12 – Play Phase Invaders.

I built Phase Invaders to not only get more practice reading the phase graph, but also to help me align systems in the field. It allows you to upload measurements and quickly try several alignment options while observing summation against a target.

I was able to use it in the field a few times and sadly discovered that it doesn’t always work. Phase Invaders can only give you results that are as good as your data. If you’re measuring reflections, the result will be misleading. You can’t beat the room.

Here are the two speakers I used in the previous example. The best alignment I could find adds 13ms of delay to the Main, which is going in the wrong direction.

[Image: Phase Invaders]

It’s hard to trust LF measurements in far-field. Who knows what they have been through?

13 – Add delay while observing combined systems.

Normally this would be one of the last steps in your alignment process to verify summation. Unfortunately, it suffers from the same problems as measuring solo elements above. If the solo element data is misleading, the combined systems data will be as well.

Remember the alignment I mentioned in #2 with my friend Zeke? That was done with an audio analyzer. Here’s why it didn’t work.

[Image: M10 ground plane measurement]

Check out the phase. How many wraparounds do you count between 50Hz and 150Hz? Maybe four? That’s not normal, is it? I might notice that coherence is low at 70Hz and 140Hz and therefore ignore those wraparounds. Or more likely, I’ll miss those details when I’m in a hurry.

(Spoiler alert: There should only be one wraparound, but that isn’t obvious until you have a reflection-free comparison.)

[Image: reflection-free comparison]

[Bonus] Apply smoothing to make the graph easier to read

Some smoothing should clean up the graph and reveal the truth, right?

[Image: smoothing applied]

Unfortunately, gratuitous amounts of smoothing do not necessarily reveal the near anechoic trace as you might expect.

Back to combined systems. You might say, “Look at the combined measurement. That will verify it.”

[Image: combined measurement in green]

Doesn’t look too bad to me. I don’t love what’s happening 83-100Hz, but if I were getting a show up, I would run with it.

But take a look at the room.

[Image: the room]

And now the output delay.

[Image: output delay settings]

I’m not sure how easy it is to tell from the photo above, but the main arrays are physically farther away from FOH. Pushing them back an extra 30ms seems excessive, but the combined measurement seems to indicate that we’re in the ball park.

Here it is again compared with the combined systems measurement with the alternative alignment I calculated.

[Image: combined measurement, before and after the alternative alignment]

Although an improvement in summation is visible, I expected it to be much more significant because it sounded so different in the room (remember BOOooom).

You might be wondering how I created the alternative alignment. So far I’ve only covered what hasn’t worked. It may seem like I am criticizing you. I’m not. This is not about right and wrong. I always say that at the end of the day, if you can walk away happy and you got paid then you’re a success. No one is going to check your work and call the SPL Preservation Society to arrest you.

But, if you have already tried some of these methods and are nodding your head as you’re reading this, then you may be interested in a more efficient method that does work.

One thing that does work

I have only found one method that works every time: the relative/absolute method.

It goes like this:

  1. Create a relative alignment preset for a known distance offset.
  2. Modify that preset in the field using the speaker’s absolute distance offset.

I first learned this method from Merlijn van Veen at one of his workshops 3 years ago (which inspired me to build Zoid) and have yet to find anything better. You can read Merlijn’s article here, listen to our discussion here, and watch my video here.
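To make step 2 concrete, here’s a rough sketch of how the arithmetic could look in the field. The function, the 0.89ms-per-foot constant, and all of the numbers are mine for illustration, not values from Merlijn’s article:

```python
MS_PER_FT = 1000.0 / 1126.0   # ≈ 0.89ms of delay per foot of path-length offset

def field_sub_delay_ms(preset_delay_ms: float,
                       preset_offset_ft: float,
                       field_offset_ft: float) -> float:
    """Adjust the relative preset's sub delay for today's absolute distance offset."""
    return preset_delay_ms + (field_offset_ft - preset_offset_ft) * MS_PER_FT

# Hypothetical preset: main 4ft farther than the sub with 3.5ms in the sub output.
# In the field the main is 8ft farther, so the sub needs roughly 3.6ms more.
print(round(field_sub_delay_ms(3.5, 4.0, 8.0), 1))  # ≈ 7.1ms
```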

But, there’s a catch. 🤔

There’s always a catch. (no free lunch, etc.) 😐

Well, three of them. 😕

  1. Time: You need a block of unhurried focused time in a low-stakes environment so you can methodically work through each element and find the best relative alignment. The good news here is you only need to create each preset once.
  2. Practice: There are a handful of little details to get right. The first time you do it, it will make your brain melt. The second time, not so much. Chop wood, carry water.
  3. Resources: You need to be able to get your hands on those speakers.

Not all things are equal. If you are already well-practiced with the audio analyzer and creating phase aligned crossovers, then you might be able to get away with doing this in the field as speakers are being deployed. This has never worked for me. I’m always too nervous about being ready for soundcheck to dedicate time to set this up properly in the field.

At this point, you are probably starting to understand why this method hasn’t caught on like wildfire. It’s hard. It takes planning and forethought. It takes time and resources that you may not have access to. This is why most of us use one of the solutions mentioned above or just give up and do nothing at all. (Doing nothing at all does work, after all; you just don’t know where it worked.) Some of you may play the long game: spend years learning how to use an audio analyzer, track down the speakers you need to measure, and slowly build your own personal database of presets.

Of course, the solution is obvious: a magical warehouse in your backyard where you can pause time and experiment with any speaker in the world in any configuration you like. Oh, and at any SPL you please without bothering the neighbors.

Unfortunately, I’m not a wizard. 🧙🏼‍♂️

….yet. So until that happens, I have another idea. Maybe software can help.

What if a giant database of high-quality measurements could be consulted from anywhere in the world?

This would allow us to document pre-alignment values. Plus, you could compare real world data to near anechoic data to discern data from noise (reflection-full/free).

What if a mobile app could do some of the mathematical heavy lifting for quick results in the field?

This would allow you to use laser disto measurements or proxy loudspeakers to complete crossover alignment in about 30 seconds without getting out a calculator or audio analyzer.

What do you think?

Do you want to help me build it?

I’m still hesitant, though, because maybe there’s a reason there’s nothing like this, yet. Maybe people just prefer figuring out things on their own or feel like a mobile app won’t work for them.

I’ve started talking to a few other engineers about it and I think it might work. If we can combine a public database with an app for practical field work and a community where people can get answers, we might have something that would be helpful to a lot of live sound folks.

I don’t want to build something and then find out that nobody’s interested, though, so what do you think? Should we do it?

If yes, fill out this survey: Should I build a sub alignment app?

Acknowledgments: Thanks to Steve Smith, Lou Kohley, and Mike Reed for valuable feedback!

How Using Less EQ Can Stop Your Show from Sounding Horrible

By Nathan Lively

Subscribe on iTunes, SoundCloud, Google Play or Stitcher.

Support Sound Design Live on Patreon.

In this episode of Sound Design Live I talk with the Sound Designer for Broken Chord and Project Design Manager for Sound Associates, Phillip Peglow. We discuss the night Broadway was shut down by COVID-19, whether or not you should go to graduate school, how EQ is ruining your show and what to do about it, why you’ll never beat the room, and why you should give the producer whatever they want.

I ask:

  • What are your concerns about work because of the pandemic?
  • When you get a new system installed and calibrated in a theatre and you want to give it a test drive, what music do you listen to?
  • How did you get your first job in audio?
  • 8 years ago I interviewed your partners in Broken Chord, Aaron Meicht and Daniel Baker, after I saw their sound design for the Pulitzer Prize-winning play “Ruined”. So how did you meet those guys and what do you like about working in a team instead of working solo?
  • How did you get the job at Sound Associates?
  • Looking back on your career so far, what’s one of the best decisions you made to get more of the work that you really love?
  • What are some of the biggest mistakes you see people making who are new to sound design for theatre?
  • Tell us about the biggest or maybe most painful mistake you’ve made on the job and how you recovered.
  • What’s in your work bag?

You can’t beat the room. Trying to punch your way through with EQ or level is a fool’s errand.

Phillip Peglow

Notes

  1. All music in this episode by HouseFrau and RRound.
  2. Workbag: headlamp,
  3. Books: Sound System, Yamaha Sound Reinforcement Handbook
  4. Quotes
    1. Unless you have that opportunity [graduate school] at zero cost to you, I wouldn’t do it.
    2. If you really really really really really really really want to be on Broadway then you must move to NYC. It’s not an option.
    3. Use your ears first, before you put pink noise through anything. Start there.
    4. “If you’re making anything more than a 6dB cut, it’s probably time to reevaluate your decisions.” -Jamie Anderson
    5. Don’t ever use a GEQ.
    6. If you’re trying to make narrow narrow cuts, you are probably trying to optimize for a specific point in the room that has no bearing on 3-4 inches away from that position.
    7. You can’t beat the room.
    8. Trying to punch your way through with EQ or level is a fool’s errand.
    9. If I walk into a theatre style setup and I have 5 minutes to get it going, I’m going to delay the system before I do anything else.
    10. When the people who sign your checks say, “This is what I want,” then just do what they want. It’s as much a psychological issue as it is an audio issue.

Can you estimate line array splay in the field without software while the riggers are waiting?

By Nathan Lively

I have developed what seems to be a lesser-known method to find the target coverage angle and quickly estimate average splay for a line array in the field in relatively few steps. I discovered it by necessity while creating Pro Audio Workshop: Seeing Sound 3 years ago. Recently a student challenged me on a couple of points and it motivated me to take a closer look to see if I could make it more efficient.

Here’s how I have seen other people do it.

Bottom speaker down angle – Top speaker down angle = Target coverage angle

[Image: bottom speaker angle]
[Image: top speaker angle]

17º – 6.78º = 10.22º target coverage angle

[Image: target coverage angle]
[Image: result using auto-splay in MAPP]

This works fine when you are using modeling software, but I was looking for a solution for the field with a laser disto and a calculator while I have a team of people waiting on me. After playing around with some right triangles for a bit, I discovered a pretty simple method.

In short, if you know the array’s rigging height and where the audience starts and ends, you can find the target coverage angle without software.

Find target coverage angle without software

Here are the steps:

  1. Solve triangle Y. You need the length of two sides or one side and one angle. I would go with two sides since that seems to be more reliable.
  2. Solve triangle Z. You can find the length of the opposite side (6.07′) by subtracting the array height from the rigging height. You can estimate the array height by multiplying the number of boxes by a single box height.
[Image: triangles Y and Z]

Then plug those numbers into a triangle solver.

[Image: triangle solver results]

16.88º – 7.03º = 9.85º
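If you’d rather skip the triangle-solver app, the same math is one line of trigonometry. In this sketch only the 6.07ft drop and the resulting angles come from the example above; the horizontal throws are placeholder values chosen to land on the same angles:

```python
import math

def down_angle_deg(opposite_ft: float, adjacent_ft: float) -> float:
    """Down angle of a right triangle from its vertical drop and horizontal throw."""
    return math.degrees(math.atan2(opposite_ft, adjacent_ft))

angle_y = down_angle_deg(9.1, 30.0)     # ≈ 16.9º (placeholder side lengths)
angle_z = down_angle_deg(6.07, 49.2)    # ≈ 7.0º (6.07ft drop from the example)
print(angle_y - angle_z)                # ≈ 9.84º, essentially the 9.85º worked out above
```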

What about inclined audiences?

But that only works for flat audience planes. What if the audience is at an angle?

[Image: inclined audience]

The process is similar. To solve triangle Y, we’ll subtract the height of the end of the audience plane from the rigging height above the audience.

[Image: inclined audience geometry]

14.8 – 6 = 8.8ft

Solve for the missing angle. 4.19º

We already have the solution for triangle Z (16.88º).

16.8 – 4.19 = 12.61º target coverage angle

[Image: inclined audience coverage]
[Image: result in MAPP using auto-splay]

Now what?

With one more step we can calculate average splay.

target coverage angle / available splay angles = average splay

12.61º / 11 = 1.2º
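Here’s that last step as a sketch, including snapping to whatever splay options your hardware actually offers (the list of available angles below is illustrative):

```python
target_coverage_deg = 12.61
n_splay_angles = 11                       # from the example above

avg_splay = target_coverage_deg / n_splay_angles
print(round(avg_splay, 2))                # ≈ 1.15º, a little over 1º

# Snap to the nearest splay the hardware offers (illustrative options, check your rigging).
available = [0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 5.0]
print(min(available, key=lambda s: abs(s - avg_splay)))  # 1.0º, as chosen below
```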

[Image: total splay]

My speakers don’t offer a 1.2º splay, so I’ll round down to 1º and make up for the loss with a few of the last speakers. Now I have a plan to hand the riggers.

[Image: splay angle plan]

What is the result using average splay?

[Image: prediction using average splay]

It’s not great, but in a pinch I’d rather go with this result than leave everything at 0º or just guess.

[Image: prediction with all boxes at 0º splay]

The easiest way to improve this result is to use the automatic solvers built into your modeling software. The best way to refine the result manually for even more control is covered in detail in Pro Audio Workshop: Seeing Sound.

Warning: Software should always be used to double check rigging points and weight distribution. (Thanks Samantha Potter!)

Have you tried calculating line array splay in the field without software? How did you do it? What were your results?


