Sound Design Live

Build Your Career As A Sound Engineer


(remote) Home Theater Sound System Calibration with Smaart & Crosslite

By Nathan Lively

Back in November I had the opportunity to work on a multi-channel home theater sound system built with custom speakers. The client had their own Smaart rig so they measured the drivers and I worked on the EQ and crossover alignment within Crosslite.

What follows are clips from the meeting transcript.

We need for both of these drivers to be captured with the exact same delay locator value. Copy the delay locator value from your HF measurement. Paste it into your TF delay, and this time measure your low frequency driver again, but don’t change the delay locator.

Disable phase smoothing and magnitude smoothing and coherence blanking. Now what you’re going to do is select both of those over in the data tab and export to ASCII.

I’ve marked our crossover region here and our sum is looking pretty good because I’ve done a little bit of work already. Let’s see how we’re doing in the phase. Yeah, it’s looking pretty good. So we’re adding both together: the natural roll-off plus this electronic filter that I’m implementing. And so the first thing was to just check: what if we just add steeper filters instead of adding delay?

The last thing would be to now apply some EQ to make this peak go down because we did an overlap crossover instead of a unity crossover. Now I can just move this filter around a little bit and try to get a nice result there.

Yeah, I’m going to say that that’s 12dB/oct. Our goal is 24, so we need to add another 12dB/oct. So this could be perfect. Okay, let’s find our crossover region. I’ll look for anywhere where they are 10dB apart: delta magnitude is ten. The way you can do that in Crosslite is with these cursors. The way you do that in Smaart is with a trace offset: offset one of the traces by ten, look at where they still cross, then go the other way, minus ten, and put some kind of a marker there.

Oh, shit. It matches already with only a polarity inversion. All right, that was easy. So now we just need a little bit more EQ. Heading back to the input EQ. Okay, should be pretty good. We can just have a look and see if we like these results. Does that look pretty good to you?

Let’s deploy these settings into your DSP and verify the alignment.

No delay? Shit!

How to phase align main to sub in Smaart, REW, Open Sound Meter, SATlive, and Crosslite

By Nathan Lively

The audio analyzer functions primarily as a verification tool. For this reason this article will focus on creating alignment presets, which can then be modified in the field using simple distance measurements.

To fit this into a single article I will offer an overview of a single method for each software. Although the steps with each tool might differ slightly, in general they follow this pattern:

  1. Measure each source solo.
  2. Do whatever is necessary to achieve alignment.
  3. Measure sources combined and verify summation against a target. Listen.

The Setup

  • Ground-plane.
  • Grille-to-grille (coplanar, side by side).
  • Microphone placed equidistant from each LF driver at a reasonable overall distance in order to capture actionable data and still measure the entire loudspeaker as a whole instead of a single driver or port. For subwoofers, this usually means going outside unless you have a very large room (roughly 5x the measurement distance).
with permission from Merlijn van Veen

Set Levels

If you are designing an overlap crossover (+0dB), this is easy. Simply match solo measurements to the target and EQ out the summation bump at the end.

If you are designing a unity class crossover (0dB), this is surprisingly one of the most difficult steps because you want the end result to hit the target, not the individual measurements themselves. The goal is to hit the target in a single step. With most tools you’ll be working in the dark, trying to imagine where the sum is going to end up. This is why there’s a whole subroutine in my SubAligner app dedicated to finding the perfect level relationship to hit the desired target. Shout out to SATlive for being the only software that I know of that includes a perfect addition trace so you can set initial levels without worrying about the alignment right away.

For everyone else, you can start by setting levels at -6dB relative to the target and you’ll probably need to do more adjustments in the end once you see the final result.
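That -6dB starting point can be sanity-checked with a little arithmetic: two coherent, perfectly phase-aligned sources each set 6dB below the target sum to approximately the target. A minimal sketch (Python; the function name is mine):

```python
import math

def coherent_sum_db(*levels_db):
    """Combined level of coherent, perfectly phase-aligned sources:
    convert each level to linear amplitude, add, convert back to dB."""
    return 20 * math.log10(sum(10 ** (level / 20) for level in levels_db))

# Two sources at -6dB relative to the target land within ~0.02dB of it:
print(round(coherent_sum_db(-6, -6), 2))  # 0.02
```

In practice the sum lands lower wherever the phase alignment is imperfect, which is why a final level adjustment is usually still needed.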

Where is the spectral acoustic crossover?

For efficiency, it is recommended to focus on the area of interaction at greatest risk of cancellation where magnitude values are within 10dB of each other, aka the combing and transition zones.
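Given exported magnitude traces for main and sub, that interaction zone is easy to locate programmatically. A sketch (Python; the names and toy data are mine, the 10dB window comes from the paragraph above):

```python
def crossover_region(freqs, main_db, sub_db, window_db=10):
    """Return the frequencies where the two magnitude traces are within
    window_db of each other -- the zone most at risk of cancellation."""
    return [f for f, m, s in zip(freqs, main_db, sub_db)
            if abs(m - s) <= window_db]

# Toy traces: the sub rolls off as the main comes in.
freqs   = [50, 63, 80, 100, 125, 160, 200]
main_db = [-30, -20, -12, -6, -3, -1, 0]
sub_db  = [0, -1, -3, -6, -12, -20, -30]
print(crossover_region(freqs, main_db, sub_db))  # [80, 100, 125]
```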

Make the pictures match

Use delay, polarity, and filters to achieve your desired result. Either follow manufacturer specifications or get creative and come up with your own path. Maybe create presets for both and see which one your colleagues prefer in a blind listening test.

A common first step is to achieve alignment at a single starting frequency within the crossover region where you have high confidence (coherence). Find the phase offset (ΔPhase) between main and sub, then close the gap. Since the sources are equidistant, you might want to start with filters, but try both ways. Again, if you’re using a manufacturer’s preset, always start by following their guidelines.

If you’d like to use filters:

  • ΔPhase / 45º = Filter order to try. e.g. 90º / 45º = 2nd order (12dB/oct) filter (Butterworth, Bessel Normalized, and Linkwitz-Riley)
  • For all-pass filters (APF): ΔPhase / 90º = Filter order to try.
  • High-pass filters (HPF) will cause positive phase shift.
  • Low-pass filters (LPF) will cause negative phase shift.
  • It may be easier to see this in action on an unwrapped phase plot.
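Those rules of thumb translate directly into a tiny helper. A sketch (Python; this is only a rough first guess, since the actual phase shift depends on the filter family and corner frequency):

```python
def filter_order_to_try(delta_phase_deg, all_pass=False):
    """Rough first guess: each conventional filter order shifts phase
    by about 45 degrees; each all-pass order by about 90 degrees."""
    per_order = 90 if all_pass else 45
    return round(delta_phase_deg / per_order)

print(filter_order_to_try(90))                  # 2 -> 12dB/oct filter
print(filter_order_to_try(180, all_pass=True))  # 2nd-order all-pass
```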

Applying filters is a big topic outside the scope of this article, but if you’re interested, please see Phase Alignment Science Academy.

If you’d like to use delay:

  • ΔPhase / 360 / Frequency * 1000 = time in milliseconds
  • If you need to wrap around the top and bottom of the phase graph then use 360 – ΔPhase. e.g. If the measured phase offset between two points is 200º, but the traces are near the top and bottom of the graph and you suspect that they need to wrap around, then 360º – 200º = 160º ΔPhase.
  • Once you have a single frequency aligned, test out other variations at half and whole cycles away. For half cycles, add a polarity inversion. e.g. If you’re aligned at 100Hz then try variations at +5ms INV, +10ms, -5ms INV, -10ms.
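The delay formula and the half- and whole-cycle variations can be sketched together (Python; function names are mine):

```python
def delay_ms(delta_phase_deg, freq_hz):
    """Delay needed to close a phase gap at a single frequency."""
    return delta_phase_deg / 360 / freq_hz * 1000

def alignment_variations(base_ms, freq_hz):
    """Half- and whole-cycle alternatives around an aligned starting
    delay; half-cycle options need a polarity inversion (True)."""
    half_cycle = 1000 / freq_hz / 2
    return [(base_ms + n * half_cycle, n % 2 != 0) for n in (-2, -1, 1, 2)]

print(round(delay_ms(90, 100), 2))   # 2.5 ms to close a 90º gap at 100Hz
print(alignment_variations(0, 100))  # [(-10.0, False), (-5.0, True), (5.0, True), (10.0, False)]
```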

If you’d like to consult the Southern Oracle, you must first pass the Sphinxes’ Gate and the Magic Mirror Gate.

Verification

After you have tried several variations, choose the one whose combined result best matches your preferred target. To break a tie, use the option with less delay or less processing overall. Listen to the result or audition multiple presets to find the one that sounds best.

Smaart

One of the reliable things about Smaart is that the data will never change after it is stored, aside from the Quick Compare function. This means that any change you care to make must be implemented directly in your output processor and then measured in real time.

  1. Add 10ms of delay to both outputs. The amount of delay is arbitrary, but will save you time in step 6.
  2. Measure the Main solo and capture the trace.
  3. Without changing the compensation delay, measure the Sub solo.
  4. Set the sub level to match your target trace. Capture the trace.
  5. Find the spectral crossover using trace offsets.
  6. Make the pictures match.
  7. Verify alignment and summation. Listen.
  8. Remove any extra delay left over from step 1. 

Here’s an example combining an L-Acoustics X15-HiQ with an SB118. Initial measurements reveal a 38º phase offset between them. We might first attempt to close this gap with 1.16ms of delay on the sub (38º / 360 / 91Hz * 1000), but further tests would reveal an improved alignment with a half cycle of delay and polarity inversion in the main.

Recommendations from SubAligner and the L-Acoustics Preset Guide confirm this result. If you’re a SubAligner user you can open this direct link to the alignment.

Tips: For high quality actionable data I recommend setting temporal averaging to Inf and resetting the averages with each new measurement. Consider downloading measurements from the manufacturer, Tracebook, or SubAligner in order to have some expectations to work against.

REW

The rest of the audio analyzers covered in this article offer functions to simulate output processing. In REW the EQ window allows you to experiment with different filters and then generate a new measurement that includes those filters. Then you can experiment with gain, delay, and polarity using the Alignment Tool and its auto solver options.

  1. Measure Main solo.
  2. Estimate IR Delay. Shift and Update Timing Reference.
  3. Measure Sub.
  4. Find the spectral crossover using Measurement Actions.
  5. Experiment with filters and the Alignment Tool to make the pictures match. Generate an Aligned Sum for each variation.
  6. Compare all of the Aligned Sum variations for alignment and summation. Listen.

Tips: For high quality actionable data I recommend setting the number of measurement repetitions to 8 and the length to 256k.

Open Sound Meter

  1. Measure the Main solo and capture the trace.
  2. Without changing the compensation delay, measure the Sub solo. Capture the trace.
  3. Set the sub level to match your target trace.
  4. Find the spectral crossover using gain changes.
  5. Make the pictures match. You can click on a measurement and adjust its delay and polarity while watching a sum trace calculated with File > Add math source.
  6. Verify alignment and summation. Listen.

In this image you can see me creating the sum trace on the left and then manipulating the main trace on the right to achieve better summation.

SATlive

SATlive includes some of my favorite tools for crossover alignment, which were my inspiration for getting started with SubAligner. The Live Add trace gives you a real time crystal ball preview of what the combination of main and sub will look like. The Perfect addition trace creates a target so you can see how well you are doing. The Delay-Suggestion Tool will run an auto solver and make recommendations for delay and polarity. The Area Of Interaction Tool can be used to visualize the crossover region.

  1. Measure the Main solo and capture the trace.
  2. Without changing the compensation delay, measure the Sub solo. Capture the trace.
  3. Set the sub level to match your target trace while observing the Perfect Addition trace.
  4. Find the spectral crossover using the Area Of Interaction tool.
  5. Make the pictures match with the aid of the Delay-Suggestion Tool.
  6. Verify alignment and summation by comparing the Live Add Trace against the Perfect Addition Trace. Listen.

Crosslite

Crosslite also includes auto solver functions, but instead of using a brute force iterative approach, it will attempt to align the start or peak of the impulse responses, which can be filtered to focus on the crossover region. One of my favorite tools in Crosslite is the cursor. It can be enabled to find the phase difference between measurements and even converted into time for the alignment. Crosslite also offers various filter options and can be thought of as a full DSP simulator.

  1. Capture the Main and Sub solo.
  2. User Memories > Functions > Sum > Process Method > Sum Magnitude to generate a perfect addition trace. Adjust the sub level until the Sum Magnitude matches your target trace.
  3. Find the spectral crossover either using Gain or cursors.
  4. Make the pictures match. The most efficient starting point may be found by inserting a peak filter at the input around the center of the crossover region and running the Optimize Time function. Experiment with changing the alignment to rise or peak and the filter from normal phase to phase zero. The best option here may depend on the quality of the measurement data. Always check the phase graph afterwards.
  5. Verify alignment and summation. Listen.

Next Steps

Now that you’ve created an alignment preset, it can be deployed and modified in the field using distance measurements. If you’d like to send me the speaker measurements you took along the way, I’ll add them to the SubAligner app.

How to practice at home without a PA

You can download lots of high quality data from Tracebook to practice with.

Have you tried any of these tools? What method do you use to optimize phase alignment between main and sub?

Subwoofer Alignment at The Redmoor Cincinnati

By Nathan Lively

Recently I had the opportunity to help my friend Nick work on the calibration of some new components at a great looking venue in Cincinnati called The Redmoor. We met on TeamViewer and recorded the entire thing so that it could serve as a combination of consultation and training. If you’d like my help on your project, you can schedule an appointment here.

In this post I’ll walk you through some of the EQ and crossover alignment steps we took.

Pre-production

First, gather materials. I checked Tracebook for the HDL10-A and STX828S. No luck. I found the GLL file for the HDL10-A on the RCF website, though. I opened that in the GLL viewer, built an array with the settings I expected to see, calculated the balloon, opened the transfer function, and exported it to a file.

To get the subwoofer data into my audio analyzer I used VituixCAD2 to convert the image on the spec sheet into response data. Then I imported everything into Crosslite.

The sub’s native response will allow me to experiment with different low-pass filters.

Next I needed to choose a target. Since we are looking at anechoic responses, it makes sense to use a flat target, but recently I have started using a +6dB slope in the low end because I have found that it pushes the crossover region up, which is a better representation of what will happen in the field when someone inevitably turns up the subs.

I’ll start by inserting those filters recommended by the manufacturer.

Next I’ll apply some initial EQ and gain to make a better match of the target.

Should we design an overlap or unity crossover? Let’s do both!

Before we even look at the phase graph, let’s measure some slopes. We know that the sub’s slope is 24dB/oct because it was a flat line before the LPF was inserted. Switching to view Data Pre in the bottom window, we can look at different HPF slopes on top of the HDL10-A response and find that it is 48dB/oct. This is a clue that the phase response of the main will be steeper than the sub’s.

Switching to the phase graph we find that that is in fact the case.

Let’s try adjusting the sub’s LPF to match the response of the main.

Now the sub is too steep. Let’s split the difference with 36dB/oct. Now we have a nice match with only about a 30º maximum phase offset between them.

The filter change required a small gain adjustment. Here’s the sum.

What should we do about that bump? We could leave it alone and say we like it, but let’s insert symmetrical filters to restore the response to the target.

Now let’s try the unity class crossover.

The HDL10-A already has a steep HPF, so I am reluctant to make it even steeper, but I need main and sub to meet at -6dB. I know that the DSP at The Redmoor is a Venu360, so we’ll only have access to basic EQ filters. Let’s try the least steep HPF available, 6dB/oct. I adjusted the delay by 0.5ms for a slightly better phase alignment.

Let’s try a 12dB/oct option.

Let’s zoom in and compare them. Which one is better? I don’t know, but now we have some options.

Now let’s see how this actually played out in the field.

Production

We started by verifying all settings and taking eight measurements of the HDL10-A through their coverage area.

We applied EQ towards the target and took more measurements to verify the EQ and prepare for alignment. We exported averages from Smaart and imported them into Crosslite. Here’s the phase graph.

At this point I realized that we could have made our work a lot easier by starting out with a ground plane measurement very close to the speakers and without any processing to get cleaner data for alignment. But, we were running out of time so I decided to simply apply 3ms of delay to the sub and move forward with this solution.

Here’s what the final measurement looked like.

Post production

Let’s see if we can improve on the alignment I came up with in the field.

Interestingly, measured in the room, the HDL10-A appear to have a 24dB/oct slope, not 48dB/oct as expected. Maybe this is a result of one of the user definable settings on the back.

The STX828S appear to have an 18dB/oct slope, even though we used the recommended 24dB/oct slope on the LPF.

How can we equalize this relationship? 24 – 18 = 6, so we can add a 6dB/oct LPF to the sub, right?

But that will add another 3dB of attenuation at the cutoff frequency, which we don’t want because we are trying to simulate what would have happened if we had used a different LPF from the beginning.

One option is to simply switch the LPF to zero magnitude. That will give us a steeper slope without affecting the magnitude. Of course the magnitude won’t be accurate, but we can still research the phase alignment.

The result is better alignment without any additional delay.

I should make it clear here that a zero magnitude filter is not something you would normally find in a DSP. It is a special kind of simulation that Crosslite offers for research purposes. The closest thing you would find in a DSP is an all-pass filter or within a variable architecture FIR filter.

How do we know if we are making an improvement? We can see the phase come into better alignment and we can see the sum go up, but I find it helpful to have a goal of perfection to compare it to. In SATlive you would load the Perfect Sum trace. The workaround I used in Crosslite was to simply import the data a second time, but this time without the phase.

In this graph you can see the perfect sum target with two delay options. Both of the options include the new zero magnitude LPF.

How do you prepare for crossover alignments?

Single channel RTA targets to improve your mixes

By Nathan Lively

How good is your live stream mix? What does it sound like on the audience end?

How do you quantify that? Loudness metrics are helpful. I discovered the Youlean Loudness Meter, like a lot of other sound engineers who have been doing more broadcast gigs during the pandemic. I monitor the live stream on a mobile device with earbuds to experience what the target audience might be hearing.

I still found myself wishing I had another form of reference.

For live in-person events I normally leave Smaart running in two channel transfer function mode to keep track of how the sound in the room might be deviating from the mix I’m pushing out of the console. In broadcast, there is no second channel to compare.

Now that I think about it, I suppose there might be a clever way to set up a transfer function that references the console output and measures a streaming output, but would they stay in sync? Would the signal even be coherent anymore? Anyway, I find myself mostly looking at RTA and Spectrograph graphs these days. If an annoying resonance comes up I can find it quickly, but otherwise I’m not sure how to take action on a single channel measurement.

It occurred to me to create a target using some reference material. I can measure a quality broadcast that I enjoy or find something recommended by the client. Since having this idea I have had time to test both. Here’s a screen capture from a recent gig.

Create the target

Creating the target is relatively simple. Just play your reference material, measure it, and store it.

But if you’re like me, you enjoy a certain level of complexity. Maybe you’ve noticed.

Here’s how I did it.

  1. Record a WAV file of the reference material.
  2. Import it into Tonal Balance Control.
  3. Convert the JSON file into three separate spectrum measurements.
  4. Import them into Smaart.

Someone on FB recommended Later with Jools Holland from BBC2. I recorded some long clips of male dialogue with lavalier mics and male hip-hop with hand-held mics. You can imagine the many variations of microphone, mic placement, instrument, and style available.

Tonal Balance Control is a plugin you would typically insert on the master buss of your DAW to take a look at the average frequency spectrum of your mix and compare it to some common genre targets. It will also allow you to import an audio file to generate your own targets. Those targets are stored as JSON files on your computer.

The JSON files cannot be imported directly into Smaart. Smaart wants to see a column of frequency and magnitude values, so you’ll need to reorganize the data. There aren’t that many values, so you could do it manually, but I’ve been trying to learn MATLAB so I decided to use that.

%% JSON to TXT
% Decode file
filename = 'your file path'; % File from Tonal Balance Control.
text = fileread(filename); % Read contents of file as text.
S = jsondecode(text); % Decode JSON-formatted text
% Convert structure to table
f = struct2cell(S.frequencies_hz.Value); % Convert structure to cell array.
f(1,:) = []; f=f.'; f=cell2mat(f); % Clean and convert to matrix.
% Pull out the three traces (high, mid, and low)
high = struct2cell(S.high_normalized_mag_dB.Value);
high(1,:) = []; high=high.'; high=cell2mat(high);
low = struct2cell(S.low_normalized_mag_dB.Value);
low(1,:) = []; low=low.'; low=cell2mat(low);
mid = struct2cell(S.normalized_mag_dB.Value);
mid(1,:) = []; mid=mid.'; mid=cell2mat(mid);
% I ran makima here to make sure the frequency resolution matches that of Smaart, but it's probably optional.
% Create tables
highTbl = table(f,high,'VariableNames',{'frequency','magnitude'});
lowTbl = table(f,low,'VariableNames',{'frequency','magnitude'});
midTbl = table(f,mid,'VariableNames',{'frequency','magnitude'});
% Write table output
writetable(highTbl,'high.txt','Delimiter','tab');
writetable(lowTbl,'low.txt','Delimiter','tab');
writetable(midTbl,'mid.txt','Delimiter','tab');
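If MATLAB isn’t handy, the same reorganization can be sketched in Python. The field names below mirror the MATLAB snippet above, but the actual JSON layout from Tonal Balance Control may differ, so treat this as a template rather than a drop-in converter:

```python
import csv
import json

def json_to_smaart_txt(json_path):
    """Pull the frequency axis and the three magnitude traces out of the
    Tonal Balance Control JSON and write tab-delimited text files with
    the frequency/magnitude columns that Smaart expects."""
    with open(json_path) as fh:
        data = json.load(fh)
    freqs = data["frequencies_hz"]["Value"]
    traces = {"high": "high_normalized_mag_dB",
              "mid": "normalized_mag_dB",
              "low": "low_normalized_mag_dB"}
    for name, key in traces.items():
        with open(f"{name}.txt", "w", newline="") as out:
            writer = csv.writer(out, delimiter="\t")
            writer.writerow(["frequency", "magnitude"])
            writer.writerows(zip(freqs, data[key]["Value"]))
```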

Results

I like it! It’s nice to have a second opinion. How does my mix compare to someone else’s?

I know that none of the targets I’m using were created under the same circumstances, but I have used them on my last ten gigs and I’ve found them helpful. I can find something that’s bothering me or get ideas for improvements.

The most recent example was a colleague asking me if I could make the mix sound more hyped. Hyped does mean something to me, but what does it mean to them? Luckily, they sent me an example.

I measured it. I found that it was different in some specific ways. I made some changes in pursuit of a compromise. It seemed like an objective way to get more of what they wanted.

Ideas

This gave me an idea for a plugin. If we can compare a measurement against a target, the logical next step is a filter suggestion to move the measurement closer to the target. That’s what I’m doing with my eyes anyway. It would just be nice to know the exact filter gain, width, and center frequency to get there.

I was able to come up with a non-realtime function as a proof of concept. It just finds the point of greatest contrast between measurement and target and then the filter that best reduces it. Sophisticated auto-EQ algorithms probably do this in a smarter way, but this seemed to work for now.

%Find a filter
micCompare = micMagnitude - targetMagnitude; % Find the contrast between the measurement and the target.
[pks,loc] = findpeaks(micCompare,'NPeaks',1); % Find the single highest peak.
peakFrequency = w(loc); % What frequency is it at?
startF = round(peakFrequency); % Start looking at the peak.
startGain = round(pks * -1,2); % Start gain at the inverse of the peak height.
startQ = 1; % Start Q at 1.
x0 = [startF,startGain,startQ]; % All starting values.
fun = @(x) paramEQmagOnly(x(1),x(2),Fs,x(3),w,targetMagnitude,micMagnitude); % Custom function to find a parametric EQ based on magnitude only.
x = fminunc(fun,x0); % Minimize the function.

Here’s an example plot showing a filter inserted at 6.7kHz.
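Here is how the same idea might look outside MATLAB: implement the magnitude response of a standard (RBJ cookbook) peaking EQ, then brute-force search for the filter that best flattens the contrast, instead of using fminunc. Everything here is a proof-of-concept sketch with names of my own invention:

```python
import cmath
import math

def peak_filter_db(f, f0, gain_db, q, fs=48000):
    """Magnitude response in dB at frequency f of an RBJ-cookbook
    peaking EQ centered at f0 with the given gain and Q."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    z1 = cmath.exp(-1j * 2 * math.pi * f / fs)   # z^-1 on the unit circle
    num = (1 + alpha * a) + (-2 * math.cos(w0)) * z1 + (1 - alpha * a) * z1 * z1
    den = (1 + alpha / a) + (-2 * math.cos(w0)) * z1 + (1 - alpha / a) * z1 * z1
    return 20 * math.log10(abs(num / den))

def suggest_filter(freqs, mic_db, target_db, fs=48000):
    """Grid search for the single peaking filter that minimizes the
    squared contrast between the filtered measurement and the target."""
    def error(f0, g, q):
        return sum((m + peak_filter_db(f, f0, g, q, fs) - t) ** 2
                   for f, m, t in zip(freqs, mic_db, target_db))
    candidates = ((f0, g, q)
                  for f0 in freqs                           # center frequency
                  for g in [x / 2 for x in range(-24, 25)]  # -12..+12 dB in 0.5 dB steps
                  for q in (0.5, 1, 2, 4))
    return min(candidates, key=lambda c: error(*c))
```

Feeding it a flat target and a measurement with a synthetic +6dB, Q=1 bump at 1kHz returns the exact inverse filter (1000, -6.0, 1), since an RBJ cut with the same Q is the exact reciprocal of the boost.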

I spent a few days trying to build a plugin prototype in MATLAB, but I didn’t get very far. There are lots of examples out there of how to easily build a plugin to modify the audio passing through it, but not many to just measure the audio.

Have you tried something like this already? What were your results?

Know Your Audio Analyzer Averages

By Nathan Lively

After you take several measurements and average them together, what do you expect to see?

If these two measurements are averaged, what do you expect?

Is zero the average of -6dB and 6dB, or something else?

1 octave wide parametric filters at 1kHz

Here are four possible averages you may have guessed, depending on which audio analyzer you use.
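The spread in those answers comes down to where in the math the average is taken: in dB, in linear power, or as complex vectors. A sketch at a single frequency point (Python; illustrative only, the function names are mine):

```python
import cmath
import math

def db_average(levels_db):
    """Arithmetic mean of the decibel values themselves."""
    return sum(levels_db) / len(levels_db)

def power_average(levels_db):
    """Mean of the squared linear magnitudes, converted back to dB."""
    mean = sum(10 ** (l / 10) for l in levels_db) / len(levels_db)
    return 10 * math.log10(mean)

def complex_average(levels_db, phases_deg):
    """Vector (complex) mean -- phase offsets pull the result down."""
    vectors = [10 ** (l / 20) * cmath.exp(1j * math.radians(p))
               for l, p in zip(levels_db, phases_deg)]
    return 20 * math.log10(abs(sum(vectors) / len(vectors)))

print(round(db_average([-6, 6]), 2))                 # 0.0
print(round(power_average([-6, 6]), 2))              # 3.26
print(round(complex_average([-6, 6], [0, 0]), 2))    # 1.93
print(round(complex_average([-6, 6], [0, 120]), 2))  # -0.93
```

Same two traces, four defensible answers, which is exactly why it pays to know which method your analyzer uses.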

tl;dr

  1. Know what kind of averaging your audio analyzer uses.
  2. Collecting more data is more important than the way it is averaged.

Here are demos from a selection of audio analyzers in alphabetical order.

Crosslite+ v2.0.0.8

Along with options for pre- and post-processing, Crosslite+ offers four different averaging options.

“Arithmetic Average Complex” : Arithmetic mean in complex values.

“Quadratic Average Complex” : Quadratic average or RMS in complex values.

“Arithmetic Average Magnitude” : Arithmetic mean in real values in dB, with phase zeroed.

USER GUIDE CROSSLITE REV 1.1

Does the magnitude average change with trace offset? Yes. It appears that trace offset in Crosslite is the same as a gain change.

Does the magnitude average change with phase offset? Yes, except for Arithmetic Average Magnitude.

In this test I averaged the response of two microphone cables with a second order APF inserted on one of them at 1kHz.


If you want to know the average of separate coherent sources or a polar curve, it’s a good idea to use complex arithmetic.

If it’s an average for equalization, better the magnitude in dB.

[For Crosslite] I chose to offer the types that are most present in most software that have averaging functions.

Francisco Monteiro

L-Acoustics M1

M1 offers a single kind of average, which appears to be a simple average with phase zeroed.

Does the magnitude average change with trace offset? No. There is no trace offset option.

Does the magnitude average change with phase offset? No.

Open Sound Meter v1.0.5

Open Sound Meter offers vector and polar averaging.

Does the magnitude average change with trace offset? Yes, the results are the same for a measured gain change.

Does the magnitude average change with phase offset? Yes for vector. No for polar.

For in-space averaging I use polar method. For vector (complex) you need to have very close phase responses.

Pavel Smokotnin

REW v5.20

REW offers two options for averaging.

Vector average, which averages the currently selected traces taking into account both magnitude and phase. It can only be applied to measurements that have an impulse response.

RMS average, which calculates an rms average of the SPL values of those traces which are selected when the button is pressed. Phase is not taken into account, measurements are treated as incoherent. This does the same as the Average The Responses button. If the measurements were made at different positions (spatial averaging) it may be helpful to first use the Align SPL… feature to remove overall level differences due to different source distances.

REW Help

Does the magnitude average change with trace offset? Yes, but only after the data is permanently changed with the Add offset to data button.

Does the magnitude average change with phase offset? Yes for vector. No for RMS.

RiTA

RiTA offers a single option for averaging traces: Arithmetic Average Complex.

Does the magnitude average change with trace offset? Yes.

Does the magnitude average change with phase offset? Yes.

The next version of RiTA will include three options for averaging.

Complex AVG: magnitude estimation is greatly affected when complex averaging is performed. It is useful when you are interested, in close measurements, in knowing the constructive and destructive interference of the sound system.

ABS AVG and dB AVG are intended for spatial averaging of several microphones. Abs AVG tends to give priority to good data and less to data affected by reflections. dB AVG gives equal weight to all data.

By default, RiTA 2.5 uses ABS AVG.

Pepe Ferrer

SATlive

SATlive offers three options for averaging: Create Sum Trace, Complex Add, and Weighted Average.

Does the magnitude average change with trace offset? No.

Does the magnitude average change with phase offset? Yes for Complex Add. No for Sum Trace.

SATlive offers 3 different approaches for averaging different measurements.

1. Complex averaging: Will calculate the sum using the amplitude values of each trace and the phase relation between the traces. It is intended to average measurements taken at the same mic position, like Sub/Top time align or interference of different sources. (quick traces -> sum trace complex averaging).

2. Amplitude based averaging: Will calculate the sum by normalizing (center at 0 dB) each trace and afterwards adding the amplitude values only. This is helpful when you want to average traces taken at different mic locations (and in most cases, using the same source).  (quick traces -> sum trace Create Sum Trace).

3. Weighted averaging: This is a special version of 2. where you can assign a weighting factor to each trace (three configurable settings). This was inspired by the Primary/Secondary/Tertiary measurement approach, which I first heard about during my SIM II seminar. In fact, it does not make much sense to add tertiary traces to the result, but it would be possible. (Trace Manager)

Hint: There is a Valid only if all traces valid option for 1. and 2. where you can define whether just one valid result at a certain frequency will be sufficient for a valid result or all traces averaged must contain a valid value to create a valid result.

Which of the options do you recommend to your users for judgement of tonality and EQ operations?

Only option 2. and 3. will make sense here. I rarely work with averaged measurements during EQing. Normally I’d use the Primary Location trace as the base for EQ while the other traces help me to distinguish if the problem is global or just local.

Big differences between the different mic-locations (primary/secondary) indicate a problem that you should fix before applying the eq (redirecting the speaker, additional speaker, speaker with a different directivity pattern).

For overall tonality I’d go for 2 and for Eq-ing for 3

Thomas Neumann

Smaart v8.5.0.2

Smaart offers two options of averaging with the second including built-in proprietary pre-processing.

Decibel spatial averaging, sometimes called arithmetic averaging, is a simple average of decibel magnitudes at each frequency. Spatial power averaging is the average of squared linear magnitudes at each frequency with the result converted to decibels.

Unweighted dB averaging works exactly the same way for both transfer function magnitude and spectrum averages. When you select Power averaging for transfer function measurements, however, Smaart automatically adjusts the overall level of all individual measurements going into the average according to their average decibel magnitudes in the range of 225 Hz to 8.8 kHz so that they are all approximately equal in level throughout that range.

Rational Acoustics Smaart v8 User Guide, Release 8.3

Does the magnitude average change with trace offset? No.

Does the magnitude average change with phase offset? No.

Our data is in dB, so we have to decide whether to average linearly or logarithmically, whether to normalize first, and whether to weight by coherence (does it make sense that poor-quality data gets as much “say” in the final result as high-quality data?), all while remembering that FFT math spits out complex data points, not simple integers.
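One of those choices, weighting by coherence, might look something like this (a hypothetical illustration only; none of the analyzers discussed publish their exact math):

```python
def coherence_weighted_db_average(mags_db, coherences):
    # Each trace's dB magnitude at this bin is weighted by its
    # coherence (0..1), so low-quality data gets less "say".
    total_weight = sum(coherences)
    if total_weight == 0:
        return None  # no usable data at this bin
    return sum(m * c for m, c in zip(mags_db, coherences)) / total_weight

# Two traces at one bin: a clean measurement and a noisy one.
# The noisy -12 dB trace pulls the result down only slightly.
print(coherence_weighted_db_average([0.0, -12.0], [0.95, 0.20]))
```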

So you can end up with a lot of approaches that are all valid from a mathematical standpoint, but the question becomes “which method gives us the most useful result?” (I could average together the number of socks in my drawer and the number of tires on my car, and even if my math was correct, it’s a meaningless answer for all practical purposes.) So at the end of the day, we want averaging that produces information that’s helpful to the user. If you have a bunch of traces and you average them, we have an expectation of what that final averaged response should look like. How well does it highlight the trends indicated by the individual traces? That’s what we’re looking for when we take an average, and so our averaging is designed with that in mind.

In terms of which to use, just like everywhere else in Smaart: if you’re not sure which setting you need, use the defaults. They’ve been carefully chosen over many years to give good results without the user having to tweak around. I actually reset the software to default configuration every time I use it, and I pretty rarely need to go in and change a bunch of things from that state. The primary advantage of power averaging would be if you’re averaging together a bunch of traces that have severe comb filtering (which hopefully doesn’t happen all that often). The math will give more weight to the peaks and less to the dips, so you end up with something that can be more representative of the overall response in that area and what your ear might tell you. But in most circumstances, the differences between a coherence-weighted dB average and a power average end up being very small. If you create both types of average from the same dataset and lay the two averaged traces on top of each other, you’ll see they tend to agree very well. I think you’d have to come up with a pretty contrived situation or have pretty bad-quality measurement data to get a result where the power averaging and the dB averaging disagree.

Michael Lawrence

All together now

Here’s an overview of the different averages being discussed in high contrast. All of these are my own estimations since the math is not exposed and is in some cases proprietary.

Which one should I use?

Please follow the manufacturer guidelines and in most cases stick with the default settings.

The demos in this post average electrical measurements of symmetrical EQ filters in order to clearly expose the calculations being used. I want to be able to see clearly if the average of +6 and -6 is 0 or something else. Measurements of speakers in rooms will feature many wide peaks and narrow valleys instead of this symmetrical behavior.

As I worked through each demo I found myself wondering why I might use one average over another. Being visually inclined and looking at a graph, at first the simple magnitude average made the most sense.

(-6 + 6) / 2 = 0

M1 offers this as its only option and it is the default option in Smaart and SATlive.

Why do the other options exist?

If you had one subwoofer and I gave you an identical second one, how much louder would that be in decibels? You wouldn’t add 0dB + 0dB to get 0dB; the coherent sum of two equal sources is 6dB, because 20 * log10(1 + 1) = 6.

If I gave you another half a sub you would have 8dB, because 20 * log10(1 + 1 + 0.5) = 8.

Following the same process of linear to log conversion, we should calculate the decibel average of -6dB and 6dB like this:

20 * log10((0.5 + 2) / 2) = 1.9dB

Maybe it makes more sense now why some audio analyzers like REW, RiTA, and Open Sound Meter show an average of 1.9dB instead of 0dB.
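All three of those numbers check out; here they are spelled out in a few lines of Python (my own demo of the linear-amplitude averaging described above, not any analyzer's code):

```python
import math

def linear_amplitude_average_db(mags_db):
    # Convert each dB value to linear amplitude, take the simple mean,
    # then convert the mean back to decibels.
    linears = [10 ** (m / 20) for m in mags_db]
    return 20 * math.log10(sum(linears) / len(linears))

# Two equal subs summing coherently: +6 dB.
print(round(20 * math.log10(1 + 1), 1))        # 6.0
# Two and a half subs: +8 dB.
print(round(20 * math.log10(1 + 1 + 0.5), 1))  # 8.0
# Average of -6 dB and +6 dB in the linear domain: 1.9 dB, not 0 dB.
print(round(linear_amplitude_average_db([-6.0, 6.0]), 1))  # 1.9
```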

Interestingly, Bob McCarthy finds even this form of average to be lacking since it does not take psychophysics into account.

Studying summation revealed that 20–40 dB dips are likely to stay down in only a small area, whereas 6 dB peaks may spread over a wide area. Studying perception revealed greater tonal sensitivity to wide peaks over narrow dips. Therefore we should be wary of accepting 0 dB as the best representative here. When samples agree, the averaging builds confidence. When samples differ, the average is suspect. There’s safety in numbers when math averaging is used: get a lot of samples.

McCarthy, Bob. Sound Systems: Design and Optimization: Modern Techniques and Tools for Sound System Design and Alignment (p. 453). Taylor and Francis. Kindle Edition.

In this case “0 dB” refers to the average of a 6dB peak and a -40dB valley.

My takeaway from all of this is that more measurements combined with optical averaging (looking at them all at once) is more important than the specific form of mathematical averaging you choose.

What do you think?



Copyright © 2022 Nathan Lively
