Collecting and interpreting high-quality, low-frequency data with modern audio analyzers is hard, and that difficulty leads to user error and misaligned subwoofer crossovers. We need a simpler, more reliable approach.
tl;dr
- Crossover alignment is hard.
- We need a database and mobile app to make it easier.
- I built TraceBook and SubAligner.
Seeing actionable data in our audio analyzer for the main to subwoofer crossover frequency range is like seeing a shooting star. It’s rare and it never happens when someone else is looking.
…unless you’re at a workshop! Like me, you’ve probably learned how to create a phase-aligned crossover at a workshop under controlled conditions. It’s gratifying to measure two unmatched slopes and then manipulate delay and polarity until they are aligned. Almost anyone can do it! (One of my discoveries from building Phase Invaders is that with a couple of simple tools even my wife, who struggles to join Zoom meetings, could easily beat the game through random trial and error.) So at the workshop, everything goes well and we think, “Easy! All I have to do is make the pictures match.”
Then we get into the field and it doesn’t work, not because of the workshop teacher, whose method was solid, but because the pictures literally cannot match. As smart as our modern audio analyzers have become, they can’t reject the room (resistance is futile), which becomes an extension of the sound system in the frequency range where main and sub share custody. You are stuck with an impossible puzzle.
I have tried many solutions to this problem. Let’s do a quick recap of my years of confusion and what I think we should do about it.
13 Things That Don’t Work
1 – Invert the polarity. Does it sound better?
This is what I learned in recording school. If you have a microphone on top and bottom of a snare drum or front and back of a guitar cabinet, you should try inverting the polarity to hear if any low end is restored.
This works fine for matched distances where the tonality of a single instrument is at stake, but not for larger distance offsets where large portions of the audience and frequency spectrum may be affected.
2 – Add delay while listening to a kick drum or reference track.
This doesn’t work because your hearing sensitivity changes by as much as 40dB through the operating range of the subwoofer (30-120Hz). Plus, a better sounding kick drum might not equal proper phase alignment. It might just be a cool sound.
I recently helped my friend Zeke with an alignment. The mains were initially set with 30ms of delay. Test tracks with lots of drums sounded, honestly, really cool. There was a really nice BOOoooming length that gave a super power to the low end.
After following a different alignment strategy (detailed below), we ended up with 9ms of delay in the sub instead. We were able to AB test the two results against each other and it was clear that one was an artistic choice (BOOoooom) and one was aligned (just Boom).
3 – Add delay while listening to a sine wave with the sub polarity inverted.
This is still the most common solution that people tell me they use in the field. The reason this one won’t work is that you are only listening to a single frequency. When perfectly aligned, it sounds the same at 0°, 360°, 720° and so on. Your alignment could be any number of cycles off and you wouldn’t know it. It is a tunnel-vision solution that doesn’t tell you anything about the rest of the crossover region, which typically spans about one octave before main or sub once again becomes sole custodian of the remainder of the audible frequency range.
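To make the ambiguity concrete, here’s a minimal Python sketch; the 80Hz tone and the delay values are made-up numbers for illustration, not a recommendation:

```python
# Minimal sketch: at a single tone, every whole-cycle delay error is invisible.
# The 80 Hz frequency and the delay values are assumptions for illustration.
f = 80.0                 # test tone in Hz
period_ms = 1000.0 / f   # one full cycle = 12.5 ms at 80 Hz

for cycles in range(4):
    delay_ms = cycles * period_ms
    phase_deg = (360.0 * f * delay_ms / 1000.0) % 360.0
    print(f"{delay_ms:5.1f} ms late -> {phase_deg:.0f} deg at {f:.0f} Hz (indistinguishable)")
```

Every one of those delays nulls identically with the inverted sub at 80Hz, even though only one of them is actually aligned across the whole crossover region.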
4 – Add delay while listening to a warble.
This is a step in the right direction, because now you are exposing yourself to the frequency spectrum where the main and sub share joint custody. It’s not a reliable solution for me, though, because I don’t hear the change. I remember when Merlijn van Veen first published a video about how to use the warble test. I was excited to try it. Unfortunately, I couldn’t hear the changes he demonstrated in the video and I have not been able to use it successfully in the field. That doesn’t mean that it might not work for you, though. The same is true for many of these tools, so don’t immediately discount them just because they didn’t work for me.
5 – Add delay while listening to band-limited pink noise.
This stimulus has definitely been more useful for me than the ones I’ve listed so far, but it won’t work for the same reasons I outlined related to the kick drum sound. If you want to try it, most audio analyzers allow you to band limit the pseudorandom pink noise generator.
6 – Add delay while listening to a tone burst.
This is the best stimulus I have found for listening tests. It’s great for doing an AB test of two different settings. My ears are not sensitive enough to dial in an exact delay value, though, so I wouldn’t try to use it alone.

7 – Measure distance offset and convert to delay.
I discovered at a recent workshop that many of my students are using this method as a “get in the ballpark” or “better than nothing” strategy. While I agree that it may get you into the ballpark, you have a 33% chance of ending up on the cancellation side of the phase wheel. Sometimes being a half-step off sounds worse than several steps off. Imagine that you are perfectly in time, but the polarity is inverted. You’re in the ballpark, but getting hit in the back of the head with the ball.
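The conversion itself is simple; the danger is in the error bars. Here’s a hedged Python sketch (the speed of sound and the 80Hz crossover frequency are assumptions) showing how a couple of meters of estimation error can push you onto the cancellation side:

```python
# Minimal sketch, assuming ~343 m/s speed of sound and an 80 Hz acoustic crossover.
# Converts a distance offset to delay, then shows how far around the phase wheel
# a distance-estimate error pushes you.
SPEED_OF_SOUND = 343.0   # m/s at roughly 20 C (assumption)
XOVER_HZ = 80.0          # hypothetical acoustic crossover frequency

def distance_to_delay_ms(offset_m: float) -> float:
    """Time-of-flight difference for a physical distance offset."""
    return offset_m / SPEED_OF_SOUND * 1000.0

def phase_error_deg(error_m: float) -> float:
    """Phase error at the crossover caused by a distance-estimate error."""
    return 360.0 * XOVER_HZ * distance_to_delay_ms(error_m) / 1000.0

print(distance_to_delay_ms(3.0))   # 3 m offset -> ~8.7 ms of delay
print(phase_error_deg(2.1))        # ~2 m of error -> ~176 deg: cancellation side
```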
8 – Ground plane microphone position.
This is another technique that can be helpful, but won’t remove reflections from the other 5 out of 6 boundaries/walls that typically constitute a room.
9 – Measure solo elements with a loud and long transfer function.
To get a highly coherent measurement (>95%), you need a good signal-to-noise ratio (>10dB). No problem. Turn up the signal generator. You can improve your coherence even more by using a higher number of averages. Great. Set the averages to infinity and let it run all day.
Both of these things may help, but can’t be confirmed until you energize the room and observe the results. If there’s a cancellation because of a detrimental reflection at 100Hz, you’ll never be able to get the coherence close to 100%. You can even get high average coherence with misleading data. This may be hard to believe until you realize that the lowest time record (for modern analyzers that make use of multiple time records) typically lasts about 1 second, which means that a 1,000ft reflection will still be considered coherent signal power. For a deeper dive on this subject, please see my workshop Getting Work Done with an Audio Analyzer.
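To put a rough number on that, here’s a back-of-the-envelope sketch; the speed of sound and the 1-second time record are round-number assumptions:

```python
# Back-of-the-envelope sketch: any reflection whose extra path length fits
# inside the analyzer's lowest time record still contributes "coherent" power.
# The speed of sound and the 1-second time record are round-number assumptions.
SPEED_OF_SOUND_FT_PER_S = 1125.0

time_record_s = 1.0
max_coherent_extra_path_ft = SPEED_OF_SOUND_FT_PER_S * time_record_s
print(max_coherent_extra_path_ft)   # ~1125 ft: a 1,000 ft reflection still "counts"
```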
10 – Measure average phase.
This one was a real eye opener for me. I had heard people mention measuring average phase a few times in the past, but it always sounded like a dumb idea for some reason. Then I tried it.
Holy shit. Amazing improvement in measurement quality. If you have only ever measured crossover alignment with a single microphone, try three microphones deployed at strategic points throughout the audience; it’s like night and day.
Unfortunately, this technique falls apart for me in wide rooms with center subs. In this case, their phase response at each microphone is so different that they average into confusion.
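If you want to experiment with the idea, one way is to average the complex transfer functions (a vector average) rather than the unwrapped phase itself. Here’s a small Python sketch with invented numbers that also shows why a wide room with center subs can average into confusion:

```python
# Vector (complex) averaging of transfer functions from several mics.
# Array shapes and the example phase values are assumptions for illustration.
import numpy as np

def average_transfer_function(tfs: np.ndarray) -> np.ndarray:
    """tfs: complex transfer functions, shape (n_mics, n_bins)."""
    return tfs.mean(axis=0)

# Three hypothetical mic positions measuring the same frequency bin:
# similar phase -> the vector average stays meaningful...
similar = np.array([[np.exp(1j * np.deg2rad(d))] for d in (-20.0, 0.0, 25.0)])
print(np.angle(average_transfer_function(similar), deg=True))   # roughly +2 deg

# ...but widely different phase (think wide room, center subs) averages toward
# a small magnitude and a phase that represents none of the seats.
different = np.array([[np.exp(1j * np.deg2rad(d))] for d in (-150.0, 0.0, 150.0)])
avg = average_transfer_function(different)
print(np.abs(avg), np.angle(avg, deg=True))   # small magnitude, ~180 deg
```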
Here’s one I struggled to decipher while at a hotel in New Orleans.

11 – Measure an IR and filter it around the crossover frequency.
I love this one. It’s fun! It works…sometimes. Unfortunately, a band-limited subwoofer only transmits about 0.5% (-46dB) of the information it is presented with, compared to a full-range speaker. Its IR has very little amplitude, leaving no prominent peak to track.
Here’s an IR of a full-range speaker.

And here’s a sub.

You can see how it would be challenging to compare their peaks.
Of course, you could view the ETC graph instead. Here’s the same full-range speaker, this time with the ETC graph filtered around 80Hz.

And here’s the sub.

We’ve achieved actionable data at the expense of losing polarity information. 🤷♀️
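If you want to try reproducing that, here’s a rough Python sketch of band-limiting an IR and reading its envelope as an ETC; the sample rate, the band edges around an assumed 80Hz crossover, and the stand-in `ir` array are all placeholders:

```python
# Rough sketch: band-pass an impulse response around the crossover and look at
# its envelope (ETC) instead of the raw peak. All parameters are assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 48000                       # sample rate (assumed)
ir = np.random.randn(fs)         # stand-in for a measured impulse response

# Band-pass roughly one octave around an assumed 80 Hz crossover
sos = butter(4, [56, 113], btype="bandpass", fs=fs, output="sos")
ir_band = sosfiltfilt(sos, ir)

# Energy Time Curve: log magnitude of the analytic envelope
etc_db = 20 * np.log10(np.abs(hilbert(ir_band)) + 1e-12)
peak_ms = np.argmax(etc_db) / fs * 1000.0
print(f"ETC peak near {peak_ms:.1f} ms")   # easy to see, but polarity is gone
```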
12 – Play Phase Invaders.
I built Phase Invaders not only to get more practice reading the phase graph, but also to help me align systems in the field. It allows you to upload measurements and quickly compare several alignment options while observing summation against a target.
I was able to use it in the field a few times and sadly discovered that it doesn’t always work. Phase Invaders can only give you results that are as good as your data. If you’re measuring reflections, the result will be misleading. You can’t beat the room.
Here are the two speakers I used in the previous example. The best alignment I could find adds 13ms of delay to the Main, which is going in the wrong direction.

It’s hard to trust LF measurements made in the far field. Who knows what they have been through?
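In spirit, this kind of comparison boils down to rotating one measured transfer function against the other and watching the summation. Here’s a stripped-down Python sketch; the flat `main_tf` and `sub_tf` arrays are placeholders for real measurements, and the delay candidates are arbitrary:

```python
# Stripped-down sketch of trying candidate delay/polarity values and observing
# the combined response. The flat transfer functions are placeholders.
import numpy as np

freqs = np.arange(40, 161, 1.0)                 # crossover region in Hz
main_tf = np.ones_like(freqs, dtype=complex)    # stand-in for measured data
sub_tf = np.ones_like(freqs, dtype=complex)

def summation_db(delay_ms: float, invert: bool) -> np.ndarray:
    """Combined magnitude for a candidate sub delay and polarity."""
    rotation = np.exp(-2j * np.pi * freqs * delay_ms / 1000.0)
    polarity = -1.0 if invert else 1.0
    combined = main_tf + polarity * rotation * sub_tf
    return 20 * np.log10(np.abs(combined) + 1e-12)

# Try a few candidates and keep the one with the most even summation.
for delay in (0.0, 9.0, 30.0):
    print(f"{delay} ms -> mean {summation_db(delay, invert=False).mean():.1f} dB")
```

Garbage in, garbage out applies here exactly as described above: the sketch is only as trustworthy as the transfer functions you feed it.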
13 – Add delay while observing combined systems.
Normally this would be one of the last steps in your alignment process to verify summation. Unfortunately, it suffers from the same problems as measuring solo elements above. If the solo element data is misleading, the combined systems data will be as well.
Remember the alignment I mentioned in #2 with my friend Zeke? That was done with an audio analyzer. Here’s why it didn’t work.

Check out the phase. How many wraparounds do you count between 50Hz and 150Hz? Maybe four? That’s not normal, is it? I might notice that coherence is low at 70Hz and 140Hz and therefore ignore those wraparounds. Or, more likely, I’ll miss those details when I’m in a hurry.
(Spoiler alert: There should only be one wraparound, but that isn’t obvious until you have a reflection-free comparison.)

[Bonus] Apply smoothing to make the graph easier to read
Some smoothing should clean up the graph and reveal the truth, right?

Unfortunately, gratuitous amounts of smoothing do not necessarily reveal the near anechoic trace as you might expect.
Back to combined systems. You might say, “Look at the combined measurement. That will verify it.”

Doesn’t look too bad to me. I don’t love what’s happening 83-100Hz, but if I were getting a show up, I would run with it.
But take a look at the room.

And now the output delay.

I’m not sure how easy it is to tell from the photo above, but the main arrays are physically farther away from FOH. Pushing them back an extra 30ms seems excessive, but the combined measurement seems to indicate that we’re in the ballpark.
Here it is again, compared with the combined-systems measurement using the alternative alignment I calculated.

Although an improvement in summation is visible, I expected it to be much more significant because it sounded so different in the room (remember BOOooom).
You might be wondering how I created the alternative alignment. So far I’ve only covered what hasn’t worked. It may seem like I am criticizing you. I’m not. This is not about right and wrong. I always say that at the end of the day, if you can walk away happy and you got paid then you’re a success. No one is going to check your work and call the SPL Preservation Society to arrest you.
But, if you have already tried some of these methods and are nodding your head as you’re reading this, then you may be interested in a more efficient method that does work.
One thing that does work
I have only found one method that works every time: the relative/absolute method.
It goes like this:
- Create a relative alignment preset for a known distance offset.
- Modify that preset in the field using the speaker’s absolute distance offset. (A sketch of the arithmetic follows this list.)
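Here’s a minimal Python sketch of the arithmetic with invented numbers; the sign convention (offsets expressed as how much closer the sub is to the listening position than the main) is an assumption, not a standard:

```python
# Minimal sketch of the relative/absolute arithmetic with invented numbers.
# Convention (an assumption): offsets are how much closer the sub is to the
# listening position than the main, in meters.
SPEED_OF_SOUND = 343.0  # m/s, an assumption for ~20 C air

def field_delay_ms(preset_delay_ms: float,
                   preset_offset_m: float,
                   field_offset_m: float) -> float:
    """Adjust the stored relative preset by the change in geometry."""
    extra_offset_m = field_offset_m - preset_offset_m
    return preset_delay_ms + extra_offset_m / SPEED_OF_SOUND * 1000.0

# Preset created once: with the sub 2 m closer than the main, 6.5 ms of sub
# delay produced the best relative alignment (numbers invented for illustration).
# In the field the disto says the sub is now 5 m closer:
print(field_delay_ms(preset_delay_ms=6.5, preset_offset_m=2.0, field_offset_m=5.0))
# -> ~15.2 ms of sub delay for this room
```

The point is that the hard part (the relative preset) is done once in a controlled environment, and the field-side math reduces to a single time-of-flight adjustment.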
I first learned this method from Merlijn van Veen at one of his workshops 3 years ago (which inspired me to build Zoid) and have yet to find anything better. You can read Merlijn’s article here, listen to our discussion here, and watch my video here.
But, there’s a catch. 🤔
There’s always a catch. (no free lunch, etc.) 😐
Well, three of them. 😕
- Time: You need a block of unhurried focused time in a low-stakes environment so you can methodically work through each element and find the best relative alignment. The good news here is you only need to create each preset once.
- Practice: There are a handful of little details to get right. The first time you do it, it will make your brain melt. The second time, not so much. Chop wood, carry water.
- Resources: You need to be able to get your hands on those speakers.
Not all things are equal. If you are already well-practiced with the audio analyzer and creating phase aligned crossovers, then you might be able to get away with doing this in the field as speakers are being deployed. This has never worked for me. I’m always too nervous about being ready for soundcheck to dedicate time to set this up properly in the field.
At this point, you are probably starting to understand why this method hasn’t caught on like wildfire. It’s hard. It takes planning and forethought. It takes time and resources that you may not have access to. This is why most of us use one of the solutions mentioned above or just give up and do nothing at all. (Doing nothing at all does work, after all; you just don’t know where it worked.) Some of you may play the long game: spend years learning how to use an audio analyzer, track down the speakers you need to measure, and slowly build your own personal database of presets.
Of course, the solution is obvious: a magical warehouse in your backyard where you can pause time and experiment with any speaker in the world in any configuration you like. Oh, and at any SPL you please without bothering the neighbors.
Unfortunately, I’m not a wizard. 🧙🏼♂️
….yet. So until that happens, I have another idea. Maybe software can help.
What if a giant database of high-quality measurements could be consulted from anywhere in the world?
This would allow us to document pre-alignment values. Plus, you could compare real-world data to near-anechoic data to discern signal from noise (reflection-laden vs. reflection-free).
What if a mobile app could do some of the mathematical heavy lifting for quick results in the field?
This would allow you to use laser disto measurements or proxy loudspeakers to complete crossover alignment in about 30 seconds without getting out a calculator or audio analyzer.
What do you think?
Do you want to help me build it?
I’m still hesitant, though, because maybe there’s a reason there’s nothing like this, yet. Maybe people just prefer figuring out things on their own or feel like a mobile app won’t work for them.
I’ve started talking to a few other engineers about it and I think it might work. If we can combine a public database with an app for practical field work and a community where people can get answers, we might have something that would be helpful to a lot of live sound folks.
I don’t want to build something and then find out that nobody’s interested, though, so what do you think? Should we do it?
If yes, fill out this survey: Should I build a sub alignment app?
UPDATE: The app is built! Learn more on the SubAligner homepage.
Acknowledgments: Thanks to Steve Smith, Lou Kohley, and Mike Reed for valuable feedback!