Sound Design Live

Build Your Career As A Sound Engineer

(remote) Home Theater Sound System Calibration with Smaart & Crosslite

By Nathan Lively

Back in November I had the opportunity to work on a multi-channel home theater sound system built with custom speakers. The client had their own Smaart rig so they measured the drivers and I worked on the EQ and crossover alignment within Crosslite.

What follows are clips from the meeting transcript.

We need both of these drivers to be captured with the exact same delay locator value. Copy the delay locator value from your HF measurement. Paste it into your TF delay, and this time measure your low frequency driver again, but don’t change the delay locator.

Disable phase smoothing and magnitude smoothing and coherence blanking. Now what you’re going to do is select both of those over in the data tab and export to ASCII.

I’ve marked our crossover region here and our sum is looking pretty good because I’ve done a little bit of work already. Let’s see how we’re doing in the phase. Yeah, it’s looking pretty good. So we’re adding both together, the natural roll-off plus this electronic filter that I’m implementing. And so the first thing was to just check: what if we just add steeper filters instead of adding delay?

The last thing would be to now apply some EQ to make this peak go down because we did an overlap crossover instead of a unity crossover. Now I can just move this filter around a little bit and try to get a nice result there.

Yeah, I’m going to say that that’s 12dB/oct. Our goal is 24. So we need to add another 12dB/oct. So this could be perfect. Okay, let’s find our crossover region. So I’ll look for anywhere where they are 10dB apart. Delta magnitude is ten. The way you can do that in Crosslite is with these cursors. And the way you do that in Smaart is you would do a trace offset. So you would offset one of these by ten and then look at this value where they still interact, where they still cross, and then you would go the other way, minus ten, and then just put some kind of a marker there.

Oh, shit. It matches already with only a polarity inversion. All right, that was easy. So now we just need a little bit more EQ. Heading back to the input EQ. Okay. Should be pretty good. We can just have a look and see if we like this result. Does that look pretty good to you?

Let’s deploy these settings into your DSP and verify the alignment.

No delay? Shit!

How to phase align main to sub in Smaart, REW, Open Sound Meter, SATlive, and Crosslite

By Nathan Lively

The audio analyzer functions primarily as a verification tool. For this reason this article will focus on creating alignment presets, which can then be modified in the field using simple distance measurements.

To fit this into a single article I will offer an overview of a single method for each software. Although the steps with each tool might differ slightly, in general they follow this pattern:

  1. Measure each source solo.
  2. Do whatever is necessary to achieve alignment.
  3. Measure sources combined and verify summation against a target. Listen.

The Setup

  • Ground-plane.
  • Grille-to-grille (coplanar, side by side).
  • Microphone placed equidistant from each LF driver at a reasonable overall distance in order to capture actionable data and still measure the entire loudspeaker as a whole instead of a single driver or port. For subwoofers, this usually means going outside unless you have a very large room. (approx 5x measurement distance)
Setup diagram used with permission from Merlijn van Veen.

Set Levels

If you are designing an overlap crossover (+0dB), this is easy. Simply match solo measurements to the target and EQ out the summation bump at the end.

If you are designing a unity class crossover (0dB), this is surprisingly one of the most difficult steps because you want the end result to hit the target, not the individual measurements themselves. The goal is to hit the target in a single step. With most tools you’ll be working in the dark, trying to imagine where the sum is going to end up. This is why there’s a whole subroutine in my SubAligner app dedicated to finding the perfect level relationship to hit the desired target. Shout out to SATlive for being the only software that I know of that includes a perfect addition trace so you can set initial levels without worrying about the alignment right away.

For everyone else, you can start by setting levels at -6dB relative to the target; you’ll probably need to make more adjustments at the end once you see the final result.
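
To see why -6dB each is a sensible starting point, here’s a minimal sketch: two coherent sources set 6dB below the target sum back to almost exactly the target.

```python
import math

def coherent_sum_db(*levels_db):
    """In-phase (coherent) sum of source levels given in dB, returned in dB."""
    total = sum(10 ** (level / 20) for level in levels_db)
    return 20 * math.log10(total)

# Two sources each set 6 dB below the target sum back to roughly 0 dB:
print(round(coherent_sum_db(-6, -6), 2))  # ~0.02
```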

Where is the spectral acoustic crossover?

For efficiency, focus on the area of interaction at greatest risk of cancellation, where magnitude values are within 10dB of each other, aka the combing and transition zones.
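
If you’ve exported the two traces as ASCII on a shared frequency axis, a rough sketch like this can flag that region automatically (the shared axis and the function name are my assumptions, not any analyzer’s feature):

```python
import numpy as np

def interaction_region(freqs, mag_main_db, mag_sub_db, window_db=10.0):
    """Frequencies where the two magnitude traces sit within window_db of
    each other -- the combing/transition zone at greatest risk."""
    freqs = np.asarray(freqs, dtype=float)
    delta = np.abs(np.asarray(mag_main_db) - np.asarray(mag_sub_db))
    mask = delta <= window_db
    if not mask.any():
        return None  # the traces never come within the window
    return freqs[mask].min(), freqs[mask].max()
```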

Make the pictures match

Use delay, polarity, and filters to achieve your desired result. Either follow manufacturer specifications or get creative and come up with your own path. Maybe create presets for both and see which one your colleagues prefer in a blind listening test.

A common first step is to achieve alignment at a single starting frequency within the crossover region where you have high confidence (coherence). Find the phase offset (ΔPhase) between main and sub, then close the gap. Since the sources are equidistant, you might want to start with filters, but try both ways. Again, if you’re using a manufacturer’s preset, always start by following their guidelines.

If you’d like to use filters (a quick calculator sketch follows this list):

  • ΔPhase / 45º = filter order to try. e.g. 90º / 45º = 2nd order (12dB/oct) filter (Butterworth, Bessel Normalized, and Linkwitz-Riley)
  • For all-pass filters (APF): ΔPhase / 90º = Filter order to try.
  • High-pass filters (HPF) will cause positive phase shift.
  • Low-pass filters (LPF) will cause negative phase shift.
  • It may be easier to see this in action on an unwrapped phase plot.
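
As a quick calculator for the rule of thumb above (the rounding and the minimum order of 1 are my assumptions):

```python
def filter_order_to_try(delta_phase_deg, all_pass=False):
    """Each filter order buys roughly 45 degrees of shift (90 for an APF)."""
    step = 90.0 if all_pass else 45.0
    return max(1, round(delta_phase_deg / step))

print(filter_order_to_try(90))                  # 2 -> try a 12dB/oct filter
print(filter_order_to_try(180, all_pass=True))  # 2 -> try a 2nd-order APF
```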

Applying filters is a big topic outside the scope of this article, but if you’re interested, please see Phase Alignment Science Academy.

If you’d like to use delay (see the sketch after this list):

  • ΔPhase / 360 / Frequency * 1000 = time in milliseconds
  • If you need to wrap around the top and bottom of the phase graph then use 360º – ΔPhase. e.g. If the measured phase offset between two points is 200º, but the traces are near the top and bottom of the graph and you suspect that they need to wrap around, then 360º – 200º = 160º ΔPhase.
  • Once you have a single frequency aligned, test out other variations at half and whole cycles away. For half cycles, add a polarity inversion. e.g. If you’re aligned at 100Hz then try variations at +5ms INV, +10ms, -5ms INV, -10ms.
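
Here’s that arithmetic as a minimal sketch, using the 200º wrap-around example and the 100Hz variations from above:

```python
def phase_to_delay_ms(delta_phase_deg, freq_hz):
    """Delay in ms that closes a phase gap at a single frequency."""
    return delta_phase_deg / 360.0 / freq_hz * 1000.0

# Measured 200 degrees at 100 Hz, but the traces may wrap around the graph:
print(round(phase_to_delay_ms(200, 100), 2))        # 5.56 ms as measured...
print(round(phase_to_delay_ms(360 - 200, 100), 2))  # ...or 4.44 ms if wrapped

# One cycle at 100 Hz is 10 ms, so also audition +5 ms INV, +10 ms,
# -5 ms INV, and -10 ms before settling.
```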

If you’d like to consult the Southern Oracle, you must first pass the Sphinxes’ Gate and the Magic Mirror Gate.

Verification

After you have tried several variations, choose the one whose combined result best matches your preferred target. To break a tie, use the option with less delay or less processing overall. Listen to the result or audition multiple presets to find the one that sounds best.

Smaart

One of the reliable things about Smaart is that stored data never changes (apart from the quick compare function). This means that any change you care to make must be implemented directly in your output processor and then measured in real time.

  1. Add 10ms of delay to both outputs. The amount of delay is arbitrary, but will save you time in step 6.
  2. Measure the Main solo and capture the trace.
  3. Without changing the compensation delay, measure the Sub solo.
  4. Set the sub level to match your target trace. Capture the trace.
  5. Find the spectral crossover using trace offsets.
  6. Make the pictures match.
  7. Verify alignment and summation. Listen.
  8. Remove any extra delay left over from step 1. 

Here’s an example combining an L-Acoustics X15-HiQ with an SB118. Initial measurements reveal a 38º phase offset between them. We might first attempt to close this gap with 1.16ms of delay on the sub (38º / 360 / 91Hz * 1000), but further tests would reveal an improved alignment with a half cycle of delay and polarity inversion in the main.

Recommendations from SubAligner and the L-Acoustics Preset Guide confirm this result. If you’re a SubAligner user you can open this direct link to the alignment.

Tips: For high quality actionable data I recommend setting temporal averaging to Inf and resetting the averages with each new measurement. Consider downloading measurements from the manufacturer, Tracebook, or SubAligner in order to have some expectations to work against.

REW

The rest of the audio analyzers covered in this article offer functions to simulate output processing. In REW the EQ window allows you to experiment with different filters and then generate a new measurement that includes those filters. Then you can experiment with gain, delay, and polarity using the Alignment Tool and its auto solver options.

  1. Measure Main solo.
  2. Estimate IR Delay. Shift and Update Timing Reference.
  3. Measure Sub.
  4. Find the spectral crossover using Measurement Actions.
  5. Experiment with filters and the Alignment Tool to make the pictures match. Generate an Aligned Sum for each variation.
  6. Compare all of the Aligned Sum variations for alignment and summation. Listen.

Tips: For high quality actionable data I recommend setting the number of measurement repetitions to 8 and the length to 256k.

Open Sound Meter

  1. Measure the Main solo and capture the trace.
  2. Without changing the compensation delay, measure the Sub solo. Capture the trace.
  3. Set the sub level to match your target trace.
  4. Find the spectral crossover using gain changes.
  5. Make the pictures match. You can click on a measurement and adjust its delay and polarity while watching a sum trace calculated with File > Add math source.
  6. Verify alignment and summation. Listen.

In this image you can see me creating the sum trace on the left and then manipulating the main trace on the right to achieve better summation.

SATlive

SATlive includes some of my favorite tools for crossover alignment, which were my inspiration for getting started with SubAligner. The Live Add trace gives you a real time crystal ball preview of what the combination of main and sub will look like. The Perfect addition trace creates a target so you can see how well you are doing. The Delay-Suggestion Tool will run an auto solver and make recommendations for delay and polarity. The Area Of Interaction Tool can be used to visualize the crossover region.

  1. Measure the Main solo and capture the trace.
  2. Without changing the compensation delay, measure the Sub solo. Capture the trace.
  3. Set the sub level to match your target trace while observing the Perfect Addition trace.
  4. Find the spectral crossover using the Area Of Interaction tool.
  5. Make the pictures match with the aid of the Delay-Suggestion Tool.
  6. Verify alignment and summation by comparing the Live Add Trace against the Perfect Addition Trace. Listen.

Crosslite

Crosslite also includes auto solver functions, but instead of using a brute force iterative approach, it will attempt to align the start or peak of the impulse responses, which can be filtered to focus on the crossover region. One of my favorite tools in Crosslite is the cursor. It can be enabled to find the phase difference between measurements and even converted into time for the alignment. Crosslite also offers various filter options and can be thought of as a full DSP simulator.

  1. Capture the Main and Sub solo.
  2. User Memories > Functions > Sum > Process Method > Sum Magnitude to generate a perfect addition trace. Adjust the sub level until the Sum Magnitude matches your target trace.
  3. Find the spectral crossover either using Gain or cursors.
  4. Make the pictures match. The most efficient starting point may be found by inserting a peak filter at the input around the center of the crossover region and running the Optimize Time function. Experiment with changing the alignment to rise or peak and the filter from normal phase to phase zero. The best option here may depend on the quality of the measurement data. Always check the phase graph afterwards.
  5. Verify alignment and summation. Listen.

Next Steps

Now that you’ve created an alignment preset, it can be deployed and modified in the field using distance measurements. If you’d like to send me the speaker measurements you took along the way, I’ll add them to the SubAligner app.

How to practice at home without a PA

You can download lots of high quality data from Tracebook to practice with.

Have you tried any of these softwares? What method do you use to optimize phase alignment between main and sub?

Subwoofer Alignment at The Redmoor Cincinnati

By Nathan Lively

Recently I had the opportunity to help my friend Nick work on the calibration of some new components at a great looking venue in Cincinnati called The Redmoor. We met on TeamViewer and recorded the entire thing so that it could serve as a combination of consultation and training. If you’d like my help on your project, you can schedule an appointment here.

In this post I’ll walk you through some of the EQ and crossover alignment steps we took.

Pre-production

First, gather materials. I checked Tracebook for the HDL10-A and STX828S. No luck. I found the GLL file for the HDL10-A on the RCF website, though. I opened that in the GLL viewer, built an array with the settings I expected to see, calculated the balloon, opened the transfer function, and exported it to a file.

To get the subwoofer data into my audio analyzer I used VituixCAD2 to convert the image on the spec sheet into response data. Then I imported everything into Crosslite.

The sub’s native response will allow me to experiment with different low-pass filters.

Next I needed to choose a target. Since we are looking at anechoic responses, it makes sense to use a flat target, but recently I have started using a +6dB slope in the low end because I have found that it pushes the crossover region up, which is a better representation of what will happen in the field when someone inevitably turns up the subs.

I’ll start by inserting those filters recommended by the manufacturer.

Next I’ll apply some initial EQ and gain to make a better match of the target.

Should we design an overlap or unity crossover? Let’s do both!

Before we even look at the phase graph, let’s measure some slopes. We know that the sub’s slope is 24dB/oct because it was a flat line before the LPF was inserted. Switching to view Data Pre in the bottom window, we can look at different HPF slopes on top of the HDL10-A response and find that it is 48dB/oct. This is a clue that the phase response of the main will be steeper than the sub’s.

Switching to the phase graph, we find that this is in fact the case.
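
If you’d rather sanity-check a slope numerically than by eye, a two-point estimate is enough; the readings below are hypothetical:

```python
import math

def slope_db_per_octave(f1_hz, mag1_db, f2_hz, mag2_db):
    """Estimate rolloff steepness from two points read off a magnitude trace."""
    return (mag2_db - mag1_db) / math.log2(f2_hz / f1_hz)

# Hypothetical readings one octave apart on a high-pass slope:
print(slope_db_per_octave(40, -60, 80, -12))  # 48.0 -> 48dB/oct, 8th order
```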

Let’s try adjusting the sub’s LPF to match the response of the main.

Now the sub is too steep. Let’s split the difference with 36dB/oct. Now we have a nice match with only about a 30º maximum phase offset between them.

The filter change required a small gain adjustment. Here’s the sum.

What should we do about that bump? We could leave it alone and say we like it, but let’s insert symmetrical filters to restore the response to the target.

Now let’s try the unity class crossover.

The HDL10-A already has a steep HPF so I am reluctant to make it even steeper, but I need main and sub to meet at -6dB. I know that the DSP at The Redmoor is a Venu360 so we’ll only have access to basic EQ filters. Let’s try the least steep HPF available, 6dB/oct. I adjusted the delay by 0.5ms for a slightly better phase alignment.

Let’s try a 12dB/oct option.

Let’s zoom in and compare them. Which one is better? I don’t know, but now we have some options.

Now let’s see how this actually played out in the field.

Production

We started by verifying all settings and taking eight measurements of the HDL10-A through their coverage area.

We applied EQ towards the target and took more measurements to verify the EQ and prepare for alignment. We exported averages from Smaart and imported them into Crosslite. Here’s the phase graph.

At this point I realized that we could have made our work a lot easier by starting out with a ground plane measurement very close to the speakers and without any processing to get cleaner data for alignment. But, we were running out of time so I decided to simply apply 3ms of delay to the sub and move forward with this solution.

Here’s what the final measurement looked like.

Post production

Let’s see if we can improve on the alignment I came up with in the field.

Interestingly, measured in the room, the HDL10-A appear to have a 24dB/oct slope, not 48dB/oct as expected. Maybe this is a result of one of the user definable settings on the back.

The STX828S appear to have an 18dB/oct slope, even though we used the recommended 24dB/oct slope on the LPF.

How can we equalize this relationship? 24 – 18 = 6, so we can add a 6dB/oct LPF to the sub, right?

But that will add another 3dB of attenuation at the cutoff frequency, which we don’t want because we are trying to simulate what would have happened if we had used a different LPF from the beginning.

One option is to simply switch the LPF to zero magnitude. That will give us a steeper slope without affecting the magnitude. Of course the magnitude won’t be accurate, but we can still research the phase alignment.

The result is better alignment without any additional delay.

I should make it clear here that a zero magnitude filter is not something you would normally find in a DSP. It is a special kind of simulation that Crosslite offers for research purposes. The closest thing you would find in a DSP is an all-pass filter or within a variable architecture FIR filter.

How do we know if we are making an improvement? We can see the phase come into better alignment and we can see the sum go up, but I find it helpful to have a goal of perfection to compare it to. In SATlive you would load the Perfect Sum trace. The workaround I used in Crosslite was to simply import the data a second time, but this time without the phase.

In this graph you can see the perfect sum target with two delay options. Both options include the new zero magnitude LPF.

How do you prepare for crossover alignments?

Do subwoofers need time alignment?

By Nathan Lively

It’s really important to get the low end right at live events. Research has shown that 3/4 of what people consider to be high quality sound comes from the low frequency content.

Subwoofers are a big part of that low frequency content, supporting and extending the system’s capabilities. However, subwoofers also require careful setup and alignment to ensure optimal performance.

If you’ve ever had trouble getting your low end right, then you might want to read this article. It will explain why subwoofers need to be aligned properly and how to do it.

What is subwoofer time alignment?

Subwoofer time alignment is the compensation for arrival time differences between sources at the listening position. The difference in arrival times may be caused by a physical distance offset or an electronic delay. It is not frequency dependent.

The journey of sound from transmitter to receiver is not instantaneous. If two sources are separated by any distance then their sound arrivals will also be separated. This is the common situation with mains in the air and subs on the ground. From the listener’s perspective the subwoofer is closer and must therefore be delayed (or physically moved) to be time aligned with the main.
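
As a minimal sketch of that compensation (the distances are hypothetical, and 343m/s assumes roughly 20ºC air):

```python
SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C

def arrival_ms(distance_m):
    """Acoustic travel time in milliseconds over a given distance."""
    return distance_m / SPEED_OF_SOUND * 1000.0

# Hypothetical geometry: main flown 10 m from the listener, sub on the ground 8 m away.
delay_for_sub = arrival_ms(10.0) - arrival_ms(8.0)
print(round(delay_for_sub, 2))  # ~5.83 ms of delay on the closer sub
```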

Does high frequency sound travel faster than low frequency sound?

In short, no.

The speed of sound in an ideal gas depends only on its temperature and composition. The speed has a weak dependence on frequency and pressure in ordinary air, deviating slightly from ideal behavior.

Wikipedia

To compare the speed of sound at different frequencies using the Rasmussen Report PL11b at 20ºC, 50% RH, 101.325kPa:

20Hz: 343.987m/s

20kHz: 344.1206m/s

From 20-20,000Hz the speed of sound changes by only 0.1336m/s.

What causes subwoofer time misalignment?

Subwoofer time misalignment can be caused by acoustic or electrical latency. Acoustic latency occurs when two sources do not have matched distances to the listener. Electrical latency happens upstream in the signal chain, often in a digital signal processor (DSP).

Unless the receiver sits equidistant from both sources, some amount of acoustic latency will always occur. Ignoring any boundary effects, imagine a situation where the entire audience stands at 1.6m height. With a subwoofer on the ground and a main speaker at 3.2m height there is no difference in distance from each speaker to the audience. Everywhere else, there is.

Electrical latency can occur anywhere in the signal chain, but often occurs when one source is processed separately and differently than the other. If two matching copies of a signal are sent to the main and sub then there is no latency, but if the signal for the sub is processed independently through ten plugins then there will be a difference in latency.
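
To make that concrete, here’s a toy sketch; all of the per-device latency figures are hypothetical:

```python
# Hypothetical per-device latencies (ms) in each signal path:
main_chain_ms = [0.5]           # console output only
sub_chain_ms = [0.5, 1.2, 0.8]  # console output plus two plugins

mismatch_ms = sum(sub_chain_ms) - sum(main_chain_ms)
print(mismatch_ms)  # 2.0 -> delay the main by 2 ms, or remove the extra processing
```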

How much latency or misalignment is too much?

When a main is stacked on top of a sub we don’t usually worry about the acoustic latency. When the sends for main and sub are split in a DSP we also don’t usually worry about the electrical latency.

Why is that?

Acoustical latency

The wavelengths of low frequencies are relatively large and require a big change for misalignment to bother us. For the purposes of this article I will define a significant misalignment as anything beyond 60º or 17% of a cycle, because it will reduce the summation of two equal sources by about 1.25dB relative to a perfect match.

How far apart do our speakers need to be to create a 60º misalignment?

The operating frequency range of a Meyer Sound 750-LFC is 35-125Hz. The highest frequency has the shortest wavelength and therefore the greatest risk. The wavelength of 125Hz is 2.75m, about the height of a male ostrich. 17% of 2.75m is 0.46m, about the length of your forearm.

If we return to our example of a standing audience with a sub on the ground, then the main would need to be raised to 5.15m to be 0.46m farther away than the subwoofer from the mic position.
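
Here’s that arithmetic as a small sketch so you can try other frequencies and tolerances (the 60º tolerance is the definition above):

```python
SPEED_OF_SOUND = 343.0  # m/s

def max_offset_m(freq_hz, max_phase_deg=60.0):
    """Largest distance offset that stays inside the phase tolerance."""
    wavelength_m = SPEED_OF_SOUND / freq_hz
    return wavelength_m * max_phase_deg / 360.0

print(round(max_offset_m(125), 2))  # ~0.46 m at the top of the sub's range
print(round(max_offset_m(35), 2))   # ~1.63 m at the bottom: far more forgiving
```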

Don’t try to generalize this example into a rule. You could just as easily put the sub in the air with the main, but 0.46m behind it to create the same misalignment or change the microphone position.

It is difficult to generalize, unfortunately, because the relationship between source and audience will always be different. However, I can see how it is helpful to translate alignment to distance. This is why the SubAligner app includes maximum distance offset in the Limits pop-up.

The opportunity here is that after you have performed an alignment for a single location then you can move out from that location in any direction while observing the change in distance offset to find the edges of the area of positive summation (aka the coupling zone, aka <120º).

Electrical latency

Matched electrical latency is maintained by splitting the send to main and sub at the last moment necessary. This doesn’t mean you can’t mix to subs on a group in your console if you prefer, just make sure that the sends to main and sub are coming out of the console with the exact same latency. You can verify this with an audio analyzer. 

Time alignment vs Phase alignment

Subwoofer time alignment can be confused with subwoofer phase alignment because the two are interconnected. Time offset causes phase offset, but phase offset doesn’t necessarily cause time offset.

In most cases the timing is set to “align” two (or more) signal sources so as to create the most transparent transition between them. The process of selecting that time value can be driven by time or phase, hence the relevant terms are “time alignment” and “phase alignment.” These are related but different concepts and have specific applications. It’s important to know which form to use to get your answers for a given application.

prosoundweb.com

Time alignment connotes a synchronicity of sources, e.g., they both arrive at 14 milliseconds (ms). Phase alignment connotes an agreement on the position in the phase cycle, e.g., they each arrive with a phase vector value of 90 degrees.

prosoundweb.com

We have already seen how acoustic and electronic latency can affect time alignment. Let’s look closer at what can affect phase alignment.

What is subwoofer phase alignment?

Phase alignment is the process of matching phase at a frequency and location.

If a sine wave is generated starting at the 0º position of its cycle and then fed into a subwoofer, will it come out at 0º?

That will only tell us the story at one frequency, though. How can we look at the story of the entire operating range?

What does sound look like before it goes into a subwoofer?

This video compares the input and output of a microphone cable passing sine waves at 41, 73, and 130Hz with an oscilloscope. Traveling at nearly the speed of light, the signal through the mic cable appears to create no time offset.

I could insert a video comparing the input and output of a microphone cable with an impulse response, but without anything in line, they look the same. I added a 1ms delay to put the IR in the middle of the graph.

This image shows the transfer function of a microphone cable with a magnitude and phase graph. The magnitude and phase traces are effectively flat. Exactly what we want from a cable.

What does sound look like when it comes out of a subwoofer?

This video compares the input and output of a subwoofer passing sine waves at 41, 73, and 130Hz with an oscilloscope. I have removed any latency so that we can focus on phase shift created by the sub.

This video compares the input and output of a subwoofer with an impulse response (IR). The IR seems to get stretched out as the amount of phase shift changes over frequency. This is the normal behavior of a transducer whose group delay, and therefore phase shift, varies with frequency, leaving it unable to reproduce every frequency at the same time through the operating range.

This video compares the input and output of a subwoofer with a magnitude and phase graph. Unlike most full-range speakers, the phase response of a sub never flattens out. It’s a moving target.

Do all subwoofers have the same phase response?

A subwoofer’s response will change with its mechanical and electrical design. Matching drivers in different boxes may have quite different responses. Even the same combination of driver and box might have a small contrast in response because a typical manufacturing tolerance is ±5dB.

For this reason it is important to avoid making assumptions based on a manufacturer’s spec sheet, but instead measure the final product and prove it to yourself.

Does the phase response of a subwoofer change with level?

A cold subwoofer operating within its nominal range should maintain a steady phase response against any change in level. But, as a sub approaches maximum SPL or begins to heat up, its response may become non-linear. This behavior will vary from subwoofer to subwoofer so it’s important to avoid driving two different subwoofers with the same channel.

Unfortunately, I don’t know a rule of thumb to guide you, but it would make sense to compare the response of a subwoofer when it’s cold to when it is hot. When I worked at Amex in Slovakia and we were setting up a new system, Igor would punish it outside playing loud music for a few hours and listen to it afterwards.

Of course you can measure this change with your audio analyzer, but another fun test is to push on the driver with your hand when it’s cold to feel how rigid it is. Run it at a maximum level for two hours. Push on it again. Feel how it has become less rigid (increased compliance).

Here is a graph from Heat Dissipation and Power Compression in Loudspeakers by Douglas J. Button showing a sample loudspeaker before and after full-term power compression. The solid line is the one with more heat and a worse quality rating.

Does the phase response of a subwoofer change over distance through the air?

…allow me to remind you that the loudspeaker’s phase response, within its intended coverage, typically doesn’t change over distance, unless you actually did something to the loudspeaker that invokes actual phase shift, i.e., applying filters of some sort which you should be able to rule out!

merlijnvanveen.nl

Here is the magnitude and phase response of a subwoofer measured at 1m and 100m. The only thing that has changed is the level due to the inverse square law.

Room interaction however, will make it appear like the loudspeaker’s phase response is changing over distance because the room makes the traces go FUBAR.

merlijnvanveen.nl

Here’s what that above measurement looks like if I enable four boundaries. At 100m the reflections have transformed the phase trace (blue).

Where is the acoustic center of a subwoofer?

Why does distance offset not correspond exactly with phase offset?

All other things being equal, the distance offset measured from your microphone to your subwoofer may not exactly correspond to the measured phase offset in your audio analyzer. This is due to an interesting acoustical phenomenon documented by John Vanderkooy.

As a useful general rule, for a loudspeaker in a cabinet, the acoustic centre will lie in front of the diaphragm by a distance approximately equal to the radius of the driver.

J. Vanderkooy, “The Low-Frequency Acoustic Center: Measurement, Theory, and Application,” Paper 7992 (May 2010).

This fact becomes important when estimating delay times for subwoofer arrays where a small distance in the wrong direction could compromise the results. It may also be important if you are attempting to estimate subwoofer phase delay from far away without prior access to its native response.
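
As a rough sense of scale, here’s Vanderkooy’s rule of thumb applied to a hypothetical 18-inch driver:

```python
SPEED_OF_SOUND = 343.0  # m/s

def acoustic_center_lead_ms(driver_diameter_m):
    """Vanderkooy's rule of thumb: the acoustic centre sits roughly one
    driver radius in front of the diaphragm."""
    radius_m = driver_diameter_m / 2.0
    return radius_m / SPEED_OF_SOUND * 1000.0

# An 18-inch (~0.457 m) subwoofer driver:
print(round(acoustic_center_lead_ms(0.457), 2))  # ~0.67 ms
```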

What is the subwoofer crossover frequency?

A subwoofer’s recommended crossover frequency may exist on its spec sheet, but when it comes to subwoofer alignment in the field we must look beyond a single frequency to the entire crossover region affected by the alignment. To make an exaggerated theoretical example, imagine if you turn the subwoofer up by 100dB. The crossover region where they interact will also move up.

The crossover region is commonly found where magnitude relationships are within 10dB because you have the highest risk of cancellation and the highest reward of summation. To find this region in your audio analyzer, insert a 10dB offset and find the magnitude intersection. Some audio analyzers offer other tools like cursors.

What causes subwoofer phase misalignment?

The most common reason for subwoofer phase misalignment is user error. This may seem like a bold or aggressive claim, but manufacturers have historically placed the responsibility on their customers.

There are many subwoofers in the world and only a small number of them have detailed instructions on phase alignment within a narrow set of limitations. The rest require the user to discover an optimal alignment for themselves. This is further complicated by the fact that reflections can make measurement and listening tests misleading or impossible when performed under typical field conditions.

We saw above that what comes out of a subwoofer is not what goes in due to system latency and phase shift. Some products take this fact into account and are specifically designed to work together and are phase aligned when equidistant, therefore only requiring compensation for any distance offset. Other products are designed to work together, but are not phase aligned when equidistant. The third, and most common, scenario is that sound engineers like me and you end up combining products from different generations, families, and manufacturers that were never designed to work together.

I should pause here for a moment to say that I’m not passing judgment or pointing a finger. I’m not aware enough of all the conditions to say why things are this way, just that the complications exist. And honestly, I enjoy the puzzle. See any of the videos on my YouTube channel from the past couple of years for evidence. 🙂

What are the consequences of subwoofer phase misalignment?

Let’s ask Nexo.

Consequences of badly aligned systems
Mis-aligned systems have less efficiency: i.e. for the same SPL you will be obliged to drive the system harder, causing displacement & temperature protection at lower SPL than a properly aligned system. The sound quality will decrease. The reliability will decrease as the system is driven harder to achieve the same levels. In certain situations you may even need more speakers to do the same job.

NXAMP4x1 User Manual v3.1

Do subwoofers need time alignment?

Yes, subwoofers need time alignment any time there is a distance offset creating acoustic latency. They also need phase alignment whenever they are combined with another source that is not already phase aligned when equidistant.

Do not assume that your main and sub are phase aligned when equidistant just because they came from the same manufacturer. You have a 33% chance of creating cancellation instead of summation.

How do you time and phase align a subwoofer?

Although there seem to be many methods, I have only ever found one that works reliably and has all three of the supposedly incompatible characteristics: fast, cheap, and good. It may sound like I’m about to go into some wild conspiracy theory you’ve never heard of, but the method I use is also recommended by L-Acoustics, d&b audiotechnik, RCF, and Coda Audio (and probably more). It involves two steps: first in the phase domain and then in the time domain.

  1. Create a relative equidistant alignment preset using filters, delay, polarity, etc. (this is the fun part)
  2. Modify that preset in the field using the speaker’s absolute distance offset by adjusting the output delay time or physical placement.

The method goes by various names, but I’ll give Merlijn van Veen the credit for the Relative Absolute Method since he introduced the idea to me. I then packaged the idea into an app called SubAligner. It not only includes alignments for many major brands, but a total of 39,183 possible combinations between different brands.

How do you verify subwoofer alignment?

How do you know if you’ve done it correctly?

A listening test should reveal higher SPL and a tighter response around the crossover region. SubAligner offers a black and red pulse to focus your ears in the right area.

An audio analyzer should show matching phase response between each speaker and expected summation in the magnitude response through the crossover frequency range. Appropriately filtered IR peaks should be aligned.

All of these methods should work, but can be ruined by reflections. In these worst case scenarios, I still rely on the Relative Absolute Method because I’d rather use something I know to be true than try to speculate on what might be true. I have written more about this in Don’t Align Your Subwoofer to a Room Reflection and Can you remove reflections from live measurements for more accurate alignments?

Have you tried this method? What were your results?

Acknowledgements

I want to thank Francisco Monteiro for the feedback and patience with my many questions and misunderstandings.

Know Your Audio Analyzer Averages

By Nathan Lively

After you take several measurements and average them together, what do you expect to see?

If these two measurements are averaged, what do you expect?

Is zero the average of -6dB and 6dB, or something else?

1 octave wide parametric filters at 1kHz

Here are four possible averages you may have guessed, depending on which audio analyzer you use.

tl;dr

  1. Know what kind of averaging your audio analyzer uses.
  2. Collecting more data is more important than the way it is averaged.

Here are demos from a selection of audio analyzers in alphabetical order.

Crosslite+ v2.0.0.8

Along with options for pre and post processing, Crosslite+ offers four different options.

“Arithmetic Average Complex” : Arithmetic mean in complex values.

“Quadratic Average Complex” : Quadratic average or RMS in complex values.

“Arithmetic Average Magnitude” : Arithmetic mean in real values in dB, with phase zeroed.

USER GUIDE CROSSLITE REV 1.1

Does the magnitude average change with trace offset? Yes. It appears that trace offset in Crosslite is the same as a gain change.

Does the magnitude average change with phase offset? Yes, except for Arithmetic Average Magnitude.

In this test I averaged the response of two microphone cables with a second order APF inserted on one of them at 1kHz.

If you want to know the average of separate coherent sources or a polar curve, it’s a good idea to use complex arithmetic.

If it’s an average for equalization, better the magnitude in dB.

[For Crosslite] I chose to offer the types that are most present in most software that have averaging functions.

Francisco Monteiro

L-Acoustics M1

M1 offers a single kind of average, which appears to be a simple average with phase zeroed.

Does the magnitude average change with trace offset? No. There is no trace offset option.

Does the magnitude average change with phase offset? No.

Open Sound Meter v1.0.5

Open Sound Meter offers vector and polar averaging.

Does the magnitude average change with trace offset? Yes, the results are the same for a measured gain change.

Does the magnitude average change with phase offset? Yes for vector. No for polar.

For in-space averaging I use polar method. For vector (complex) you need to have very close phase responses.

Pavel Smokotnin

REW v5.20

REW offers two options for averaging.

Vector average, which averages the currently selected traces taking into account both magnitude and phase. It can only be applied to measurements that have an impulse response.

RMS average, which calculates an rms average of the SPL values of those traces which are selected when the button is pressed. Phase is not taken into account, measurements are treated as incoherent. This does the same as the Average The Responses button. If the measurements were made at different positions (spatial averaging) it may be helpful to first use the Align SPL… feature to remove overall level differences due to different source distances.

REW Help

Does the magnitude average change with trace offset? Yes, but only after the data is permanently changed with the Add offset to data button.

Does the magnitude average change with phase offset? Yes for vector. No for RMS.

RiTA

RiTA offers a single option for averaging traces: Arithmetic Average Complex.

Does the magnitude average change with trace offset? Yes.

Does the magnitude average change with phase offset? Yes.

The next version of RiTA will include three options for averaging.

Complex AVG: magnitude estimation is greatly affected when complex averaging is performed. It is useful when you are interested, in close measurements, in knowing the constructive and destructive interference of the sound system.

ABS AVG and dB AVG are intended for spatial averaging of several microphones. Abs AVG tends to give priority to good data and less to data affected by reflections. dB AVG gives equal weight to all data.

By default, RiTA 2.5 uses ABS AVG

Pepe Ferrer

SATlive

SATlive offers three options for averaging: Create Sum Trace, Complex Add, and Weighted Average.

Does the magnitude average change with trace offset? No.

Does the magnitude average change with phase offset? Yes for Complex Add. No for Sum Trace.

SATlive offers 3 different approaches for averaging different measurements.

1. Complex averaging: Will calculate the sum using the amplitude values of each trace and the phase relation between the traces. It is intended to average measurements taken at the same mic position, like Sub/Top time align or interference of different sources. (quick traces -> sum trace complex averaging).

2. Amplitude based averaging: Will calculate the sum by normalizing (center at 0 dB) each trace and afterwards adding the amplitude values only. This is helpful when you want to average traces taken at different mic locations (and in most cases, using the same source).  (quick traces -> sum trace Create Sum Trace).

3. Weighted averaging: This is a special version of 2. where you can assign a weighting factor to each trace (three configurable settings). This was inspired by the Primary/Secondary/Tertiary measurement approach, which I first heard about during my SIM II seminar. In fact, it does not make much sense to add tertiary traces to the result, but it would be possible. (Trace Manager)

Hint: There is a Valid only if all traces valid option for 1. and 2. where you can define whether just one valid result at a certain frequency will be sufficient for a valid result or all traces averaged must contain a valid value to create a valid result.

Which of the options do you recommend to your users for judgement of tonality and EQ operations?

Only option 2. and 3. will make sense here. I rarely work with averaged measurements during EQing. Normally I’d use the Primary Location trace as the base for EQ while the other traces help me to distinguish if the problem is global or just local.

Big differences between the different mic-locations (primary/secondary) indicate a problem that you should fix before applying the eq (redirecting the speaker, additional speaker, speaker with a different directivity pattern).

For overall tonality I’d go for 2 and for Eq-ing for 3

Thomas Neumann

Smaart v8.5.0.2

Smaart offers two options of averaging with the second including built-in proprietary pre-processing.

Decibel spatial averaging, sometimes called arithmetic averaging, is a simple average of decibel magnitudes at each frequency. Spatial power averaging is the average of squared linear magnitudes at each frequency with the result converted to decibels.

Unweighted dB averaging works exactly the same way for both transfer function magnitude and spectrum averages. When you select Power averaging for transfer function measurements, however, Smaart automatically adjusts the overall level of all individual measurements going into the average according to their average decibel magnitudes in the range of 225 Hz to 8.8 kHz so that they are all approximately equal in level throughout that range.

Rational Acoustics Smaart v8 User Guide, Release 8.3

Does the magnitude average change with trace offset? No.

Does the magnitude average change with phase offset? No.

Our data is in dB, so we have to decide whether to average linearly or logarithmically, whether to normalize first, whether to weight by coherence (does it make sense that poor-quality data gets as much “say” in the final result as high-quality data?) and of course remembering that FFT math spits out complex data points, not simple integers.

So you can end up with a lot of approaches that are all valid from a mathematical standpoint, but the question becomes “which method gives us the most useful result?” (I could average together the number of socks in my drawer and the number of tires on my car, and even if my math was correct, it’s a meaningless answer for all practical purposes.) So at the end of the day, we want averaging that produces information that’s helpful to the user. If you have a bunch of traces and you average them, we have an expectation of what that final averaged response should look like. How well does it highlight the trends indicated by the individual traces? That’s what we’re looking for when we take an average, and so our averaging is designed with that in mind.

In terms of which to use, just like everywhere else in Smaart: if you’re not sure which setting you need, use the defaults. They’ve been carefully chosen over many years to give good results without the user having to tweak around. I actually reset the software to default configuration every time I use it, and I pretty rarely need to go in and change a bunch of things from that state. The primary advantage of power averaging would be if you’re averaging together a bunch of traces that have severe comb filtering (which hopefully doesn’t happen all that often). The math will give more weight to the peaks and less to the dips, so you end up with something that can be more representative of the overall response in that area and what your ear might tell you. But – in most circumstances, the differences between coherence-weighted dB average and power average end up being very small. If you create both types of average from the same dataset, and lay the two averages traces on top of each other, you’ll see they tend to agree very well. I think you’d have to come up with a pretty contrived situation or have pretty bad-quality measurement data to get a result where the power averaging and the dB averaging disagree.

Michael Lawrence

All together now

Here’s an overview of the different averages being discussed in high contrast. All of these are my own estimations since the math is not exposed and is in some cases proprietary.

Which one should I use?

Please follow the manufacturer guidelines and in most cases stick with the default settings.

The demos in this post average electrical measurements of symmetrical EQ filters in order to clearly expose the calculations being used. I want to be able to see clearly if the average of +6 and -6 is 0 or something else. Measurements of speakers in rooms will feature many wide peaks and narrow valleys instead of this symmetrical behavior.

As I worked through each demo I found myself wondering why I might use one average over another. Being visually inclined and looking at a graph, at first the simple magnitude average made the most sense.

(-6 + 6) / 2 = 0

M1 offers this as its only option and it is the default option in Smaart and SATlive.

Why do the other options exist?

If you had one subwoofer and I gave you another, how much would that be in decibels? You would add 0dB + 0dB to get 6dB, because doubling the linear amplitude adds 20 * log10(2) ≈ 6dB.

If I gave you another half a sub you would have 8dB because 20 * log10(1 + 1 + 0.5) = 8.

Following the same process of linear to log conversion, we should calculate the decibel average of -6dB and 6dB like this:

20 * log10((0.5 + 2) / 2) = 1.9dB

Maybe it makes more sense now why some audio analyzers like REW, RiTA, and Open Sound Meter show an average of 1.9dB instead of 0dB.
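
To make the contrast concrete, here’s a minimal sketch of three of the averaging flavours discussed above, applied to -6dB and +6dB. The function names are my shorthand, not any vendor’s exact math:

```python
import math

def db_average(levels_db):
    """Arithmetic mean of the decibel values themselves."""
    return sum(levels_db) / len(levels_db)

def magnitude_average_db(levels_db):
    """Mean of the linear magnitudes, converted back to dB."""
    linear = [10 ** (l / 20) for l in levels_db]
    return 20 * math.log10(sum(linear) / len(linear))

def power_average_db(levels_db):
    """Mean of the squared linear magnitudes, converted back to dB."""
    power = [10 ** (l / 10) for l in levels_db]
    return 10 * math.log10(sum(power) / len(power))

print(round(db_average([-6, 6]), 1))            # 0.0, the simple dB mean
print(round(magnitude_average_db([-6, 6]), 1))  # 1.9, the result shown above
print(round(power_average_db([-6, 6]), 1))      # 3.3, a power-style mean
```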

Interestingly, Bob McCarthy finds even this form of average to be lacking since it does not take psychophysics into account.

Studying summation revealed that 20–40 dB dips are likely to stay down in only a small area, whereas 6 dB peaks may spread over a wide area. Studying perception revealed greater tonal sensitivity to wide peaks over narrow dips. Therefore we should be wary of accepting 0 dB as the best representative here. When samples agree, the averaging builds confidence. When samples differ, the average is suspect. There’s safety in numbers when math averaging is used: get a lot of samples.

McCarthy, Bob. Sound Systems: Design and Optimization: Modern Techniques and Tools for Sound System Design and Alignment (p. 453). Taylor and Francis. Kindle Edition.

In this case “0 dB” refers to the average of a 6dB peak and a -40dB valley.

My takeaway from all of this is that more measurements combined with optical averaging (looking at them all at once) is more important than the specific form of mathematical averaging you choose.

What do you think?
