FACT: The delay through a FIR filter depends entirely on its design, which makes FIR filters flexible and practical to apply.
In this interview with Michael John from Eclipse Audio we dispel FIR filter myths and discuss what live sound engineers need to know about their application in the field.
Nathan
So, I definitely want to talk to you about FIR Designer and filters and stuff, but first: when you are doing listening tests, what’s one of your go-to reference tracks?
Michael
I tend to use a lot of Sting and maybe even some Dire Straits, because the production quality on those tracks, especially on some of the older Sting albums, is just exceptional. Maybe the more recent Sting albums have become more highly compressed but if you go back a decade or two… And even “On Every Street,” which I think was the last major Dire Straits release, is just phenomenally mixed and some brilliant musicians there as well: tonally quite broad, dynamically very good, and the production quality is just top notch. It may not be quite as critical as solo piano pieces or solo voice, but they’re my go-tos.
Nathan
How did you get your first job in audio?
Michael
I’ve always played with electronics and audio gear since my teenage years; particularly assembling small PA systems, running sound at school events, and the like. That led me down a path to do Electrical Engineering at university, and my first audio research job came straight from that. In the 3rd and 4th years at university, advanced signal processing was offered. I found that fascinating and gravitated towards it. That then led to audio research jobs here and in the USA. It felt like a foregone conclusion—starting with a lot of live sound interest and then going through electrical engineering.
I’ve always found live and pro audio a lot of fun. I remember even during school years I was encouraged a lot by my father to do many work-experience stints. I worked a few weeks at Australian Monitor—who used to make some really leading-edge high-power power amplifiers—and other places like Trackdown Studios, a tracking studio here in Sydney. (They later moved to Fox Studios which is now one of the big film production areas here in Australia, where they have space to record orchestras and other audio for film.) So audio, particularly live stuff, has always been in my blood and Electrical Engineering helped fulfil the signal processing / maths side.
Nathan
Nice. So, having started out working on shows, I’m curious if maybe the genesis of your software can be connected back there? Were there times when you were listening and you were like, “Well the speaker sounds really bad. I wish I could do something about it.” And later on you created software to do something about it?
Michael
Sometime in the last 10-15 years I started making a number of cabinets myself, particularly high-power wedges. I wanted to use DSP for those rather than make passive designs. Most of the DSP amps had only IIR-based filtering, but I could see user-loadable FIR coming to pro audio: in fully programmable DSPs (like the Analog Devices SigmaStudio parts), and even in the pro amplifiers and plate modules. But there were really no tools for flexibly designing FIR filters and loading them into those amplifiers. From my signal processing and programming background, I thought, “It’s actually quite, I wouldn’t say easy, but I’m quite capable of building GUI software to do this.” So that set me on a path of developing my own tools, which I started using in some of my own small designs. I didn’t sell anything, but I was making some nice wedges and some modest-size FOH cabinets. Through conversations with a variety of people, including Bennett Prescott, we started to realise there was something in the software beyond my own personal use; that maybe we could provide tools to the broader audio community.
Also, a lot of loudspeaker tools and many DSP programming environments involve setting up filters, making a measurement, seeing if the result is as intended, and then iterating. I didn’t see why the work process needed to be that way. I felt you could load a measurement, simulate and apply all the filtering you want, and confidently see what response you’re going to get if you actually put the filters back into the speaker processor. So that became our workflow.
Also, I was really big on wanting the user to be able to immediately see the effect of the changes they’re making.
Nathan
Right. And when you say real-time, you mean see the results in the graph?
Michael
Yes, absolutely. And again, many tools don’t do that. You press “calculate” and wait a few seconds, then copy the settings over somewhere else and run a measurement. I intentionally designed the workflow and the compute engine in the software to be able to give the user immediate results. It’s the same for everything in the workflow, including things like the wavelet view.
Which leads me to something else we do a little differently in our tools. Again, rather than setting up processing, measuring, and iterating, we can put a target response into the software and actually force each loudspeaker driver towards the intended target. And from the way the targets sum together we know exactly how the (processed) loudspeaker drivers will sum together.
Maybe other software products do this, but we had not seen this before.
Nathan
That’s great. So it sounds like, where maybe people in the past were looking at more of an isolated, like, “Let’s see what each of these filters do electronically,” you started thinking of maybe a more holistic approach, like, “Let’s see what they do together and what’s going to be the acoustic result?”
Michael
Yes, and so the workflow involves loading—particularly in say “FIR Designer M,” although it can be done in “FIR Creator” as well—measurements for each driver and then emulating the processing and looking at the sum. Also, we can take a bunch of measurements for a driver and average them down to a single response before running that through the workflow. “FIR Creator” doesn’t have averaging, but we have a separate “Averager” tool. “FIR Designer” and “FIR Designer M” both have integrated averaging capability.
Some people have asked us to simulate the processing on other measurements in addition to the main measurement and display the responses. That’s something we’re looking to add. This could be used to see the effect of processing on, for example, off-axis measurements. Or we could even show a whole balloon plot and show the effect the crossover and processing have on the overall radiation pattern of the cabinet.
Nathan
I’m interested in this topic because I’ve been showing this game that I’m working on called Phase Invaders to people and one person said, “Hey, if we’re doing this alignment and I tell you about the audience plane that I’m doing the alignment for, you should be able to tell me the expiration date of that alignment,” and I was like, “Yeah, sure. That’s actually a good idea.”
Michael
Yes. I haven’t thought about extending that to the audience plane, but certainly, based on our current workflow we could eventually show that in terms of radiation pattern, which eventually would translate to the audience plane.
Nathan
Looking back on your career so far, what do you think is one of the best decisions you made to get more of the work that you really love?
Michael
I think the decision, and again it was somewhat of a foregone conclusion, to do Electrical Engineering. I have some friends who have gone down the live sound production path rather than going to university first, and they’re doing really well and they’re loving what they do. But for me personally, going to university while indulging my passion for live sound on the side has helped my understanding of loudspeaker processing, ultimately resulting in the software we have today.
Nathan
Some follow-up questions: first of all, you don’t have any merch on your website yet, but if you did I have a suggestion. A t-shirt with Ned Stark with a thought bubble that says, “FIR is coming.”

Michael
Interesting, I like that.
Nathan
Ok, consider it. Number two, you mentioned several names of software but a lot of people reading this interview have no idea who you or Eclipse Audio are, so could you just go through each of the pieces of software?
Michael
Sure. Firstly, there’s “FIR Designer M,” our flagship product for integrated loudspeaker processing design. It enables the design and simulation of up to 6-way loudspeaker processing. It can inherently show the combined response of all the channels, and it’s our largest and most comprehensive tool.
“FIR Designer” does everything “FIR Designer M” does but for a single channel. With “FIR Designer” it’s possible to do multi-way designs; it just requires multiple projects, one for each output channel. I think we see “FIR Designer” possibly being used more for creating filters for broad cabinet EQ or for installations.
Both products have unlimited filter capabilities and unlimited auto magnitude and auto phase bands.
“FIR Creator EX” and “FIR Creator” are similar to “FIR Designer” but with some feature limits, including limits on the number of filters and auto mag and phase bands. They’re designed to be more cost effective, including for hobbyists who may want to experiment with FIR and mixed IIR+FIR processing. Also, in “FIR Creator EX” we’ve provided professional export options: output formats for pro processors—such as DFM & Linea Research—whereas “FIR Creator” exports filters only as TXT & WAV files and some broad open formats. So, in “FIR Creator EX” we provide some of the pro capabilities of our flagship products but at a reduced price.
I should also mention “Averager.” We have measurement averaging within “FIR Designer” and “FIR Designer M.” But we also provide averaging as a separate, cheaper tool. It provides four or five different averaging modes. This is maybe more for hobbyist folks who wish to make a bunch of measurements in their space and distill them down to one measurement that they might want to use in some other tools, or in our tools; we don’t mind.
Nathan
You use these two acronyms: IIR and FIR.
Michael
Initially, I would point people to a paper we put on the website, About FIR Filtering, which provides a moderately technical perspective on FIR filters and their uses, as well as IIR filters.
In short, a FIR (finite impulse response) filter has a finite, limited-length impulse response, whereas an IIR (infinite impulse response) filter’s impulse response can go on “infinitely.” How does the IIR filter do that? I’ll defer to the paper on our website. The longer the filter impulse response, the more effective the filter is at EQ’ing lower frequencies. Because a FIR filter has a fixed length, it has a fundamental limit as to how low in frequency it can start to effect magnitude and phase changes.
An IIR filter is much more efficient at going lower in frequency because of its infinite length. However, with IIR filtering, fine-grained control isn’t very easy, and there’s no independent control of magnitude and phase. On the other hand, FIR filtering can do some fairly fine-grained EQ and has fully independent control of magnitude and phase. That’s the 30-second answer.
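[Editor’s note: a rough illustration of the length-versus-low-frequency trade-off, as a small Python sketch. The 48 kHz sample rate and the fs/N rule of thumb are assumptions for the example, not figures from the interview.]

```python
# Rule-of-thumb sketch: an N-tap FIR at sample rate fs spans N/fs seconds,
# so it can't do much response shaping below roughly fs/N Hz.
fs = 48000                                   # assumed sample rate, Hz
for taps in (256, 1024, 4096, 16384):
    length_ms = 1000 * taps / fs
    approx_low_limit_hz = fs / taps          # very rough lower reach of the filter
    print(f"{taps:6d} taps -> {length_ms:7.1f} ms long, "
          f"useful down to roughly {approx_low_limit_hz:6.1f} Hz")
```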
Nathan
What are some myths that you would like to dispel about FIR filters?
Michael
FIRs have often, I think, been associated with linear-phase processing—e.g. linear-phase brick-wall crossover filters. Linear-phase filters are inherently symmetric in their coefficients, and so the delay through the filter is half the filter length. And so often people think, “I’m going to have all this high latency / delay and that’s no good.”
In reality a FIR filter can be anything. It can have minimum-phase behaviour, maximum-phase behaviour, linear-phase behaviour. It can have a variety of multi-phase or, what I call, arbitrary-phase behaviour. And so, the delay through the filter is arbitrary. It’s really how you design the filter. The delay is not limited to the middle point of the tap length.
Also, people often associate FIR filters with things like horn correction. That’s definitely one use, but there are many more uses and we talk about them in the paper.
Nathan
Can you compare linear-phase, minimum-phase, and maximum-phase?
Michael
A minimum-phase filter imparts an EQ profile with the least amount of delay on the signal. IIR filters are, for the most part, minimum-phase and that includes any IIR processing in a speaker processor. By the way, you can measure IIR filtering from a processor (sampling the processing as a FIR filter) and then achieve exactly the same filtering. The measurement is a minimum-phase FIR filter.
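[Editor’s note: a minimal NumPy/SciPy sketch of the idea above: capture an IIR filter’s impulse response and use it directly as a minimum-phase FIR. The 100 Hz Butterworth high-pass and the 4096-tap length are hypothetical, and this is not any processor’s actual implementation.]

```python
import numpy as np
from scipy.signal import butter, lfilter, freqz

fs = 48000
b, a = butter(2, 100, btype="high", fs=fs)        # a minimum-phase IIR high-pass

impulse = np.r_[1.0, np.zeros(4095)]
fir = lfilter(b, a, impulse)                      # "sampled" as a 4096-tap FIR

w, h_iir = freqz(b, a, worN=2048, fs=fs)
_, h_fir = freqz(fir, worN=2048, fs=fs)
mask = w >= 20                                    # skip the near-DC bins of the high-pass
diff_db = 20 * np.log10(np.abs(h_fir[mask]) / np.abs(h_iir[mask]))
print("max magnitude difference above 20 Hz: %.6f dB" % np.max(np.abs(diff_db)))
```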
A linear-phase filter can impart a particular EQ profile but without any change in delay across frequency. The delay is constant across the whole frequency spectrum.
Maximum-phase is a term not used very often because it’s really something that’s born out of FIR filtering, where the length is finite. Imagine you have a minimum-phase filter and you literally time-reverse the impulse response. Rather than every frequency point having the minimum amount of delay added to it as part of the EQ, every frequency point now has the maximum amount of delay added to it, up to the tap length of the FIR filter. Maximum-phase filters are not normally used directly. Maximum-phase filter prototypes (at least in our software) are combined with linear-phase and/or minimum-phase filter prototypes to make a single FIR filter that pushes a loudspeaker’s phase towards flat or whatever target you desire. You can even use the phase of another loudspeaker (if the aim is to match two loudspeakers).
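[Editor’s note: a small sketch of the time-reversal idea using a made-up minimum-phase impulse response. Reversing it keeps the magnitude response but maximises the delay; at every frequency the two group delays add up to the filter length minus one sample.]

```python
import numpy as np
from scipy.signal import group_delay

h_min = 0.9 ** np.arange(64)          # a simple minimum-phase impulse response
h_max = h_min[::-1]                   # time-reversed copy: maximum-phase, same magnitude

w, gd_min = group_delay((h_min, [1.0]), fs=48000)
_, gd_max = group_delay((h_max, [1.0]), fs=48000)
print(np.allclose(gd_min + gd_max, len(h_min) - 1))    # True
```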
Minimum-phase and linear-phase are the two most common filter types.
BTW, it’s probably not obvious, but in some of the leading processors the system EQ is actually implemented as a very long minimum-phase FIR filter. It’s just easier, as the system engineer is adjusting the EQ, for the processor or control software to convert the desired EQ into a minimum-phase FIR filter, rather than to try to emulate the desired EQ with a bunch of biquad IIR filter changes. These implementations make the user experience smoother, in terms of quickly delivering exactly the EQ the user wants.
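[Editor’s note: one common way to turn a desired EQ magnitude into a minimum-phase FIR is the real-cepstrum construction. Below is a minimal NumPy sketch of that general technique; it is only an assumption that any particular processor works this way, and the +6 dB boost below 200 Hz is a made-up example target.]

```python
import numpy as np

def min_phase_fir_from_magnitude(mag_db, n_fft):
    """Build a minimum-phase FIR whose magnitude approximates mag_db.
    mag_db must be sampled on the full (symmetric) n_fft-point FFT grid."""
    log_mag = np.log(10.0 ** (mag_db / 20.0))
    cep = np.fft.ifft(log_mag).real          # real cepstrum of the target magnitude
    fold = np.zeros(n_fft)                   # fold the cepstrum to make it causal
    fold[0] = 1.0
    fold[1:n_fft // 2] = 2.0
    fold[n_fft // 2] = 1.0
    h_min_spec = np.exp(np.fft.fft(cep * fold))
    return np.fft.ifft(h_min_spec).real      # minimum-phase impulse response

fs, n_fft = 48000, 8192
f = np.fft.fftfreq(n_fft, 1.0 / fs)
target_db = np.where(np.abs(f) < 200.0, 6.0, 0.0)   # crude +6 dB boost below 200 Hz
fir = min_phase_fir_from_magnitude(target_db, n_fft)
# In practice this long response would then be windowed/trimmed to the tap count
# the processor can accept.
```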
Nathan
Great. So now that we’ve defined some of these terms, I think the next thing a lot of people are going to be wondering is, “how do I use them? Where do FIR filters go?”
Michael
In a lot of cases the FIR filters are being created specifically for the speaker, not for the room. And they’re loaded via the control software for particular brands of amplifiers.
Nathan
What about you personally?
Michael
For me personally, I have amplifiers – the Lab Gruppen PLMs – that can run FIR filters (in the “FIR 3-way” module) so I would be loading FIRs there. I don’t use custom arbitrary-phase FIR for broader system EQ in these or in a separate processor. I tend to personally use the FIR filtering for driver-based adjustments, as part of a loudspeaker preset.
However, to answer the broader question about where to use FIR filters, we do see some customers using FIR filters in installations. Especially where the system is made up of different makes and models of loudspeakers, each with different phase characteristics. When arraying different cabinets in a larger system, it’s helpful to have the phase matching between those cabinets as part of the full system optimization. If the phase doesn’t match, there’s the risk of getting response holes in overlapping coverage areas in the room. Or at least some lowering of energy in overlapping coverage areas, particularly at frequencies where the phase is dramatically different between the two cabinets. Phase matching the cabinets first makes the tuning of the coverage easier in a larger installation.
We’re also seeing some loudspeaker manufacturers phase-match all their product lines, so that end users can mix and match their products without having to think about it.
Nathan
I know that almost every Meyer Sound speaker can play together nicely because of matching phase characteristics.
Michael
Now that we have FIR filters, it makes the matching process a little easier across product lines. Yes, I think Meyer may have been one of the first to do this, but other manufacturers are definitely doing this too.
Some speaker processors and amplifiers are starting to provide dual FIRs. That is, every output has a FIR filter for the loudspeaker preset, and a second FIR filter for array processing. A line array is the best use case, where the first filter is used to tailor the cabinet response, and then the second bank of FIR filters (across all amplifiers feeding the line array) is used for coverage optimization within the space. Array processing is becoming a big thing in live. I think Martin’s MLA was one of the early ones, and there’s AFMG’s FIRmaker solution. EAW’s Anya is another example of a fully steered array. And they’re not the only ones.
Nathan
Do you think FIR filters have a place in the work of a live sound engineer? Are there some of these things that are field applicable? Or are these only for manufacturers?
Michael
That’s a tough one. I honestly don’t know. In the installation scenario, I think it makes a lot of sense where there’s plenty of time to do many measurements and synthesize the filters to achieve a certain system result.
I think in a touring context, I’m not sure it’s as applicable or useful. If the loudspeakers are inherently phase matched and work well together anyway, and given the time constraints in a tour or a typical live setup, just getting the system EQ to be nice is top priority.
That said, considering the current minimum-phase system EQ in a large system, as you start to make minimum-phase EQ changes – particularly in a multi-zone system – you may start to get slight phase changes between regions of the system, or even between the short-, medium-, and long-throw sections of the line arrays. So maybe in the future the system EQ will move towards something that at least can maintain phase consistency across the broader PA system. I don’t know. That’s something I think might be worth considering and investigating.
Nathan
I’m sure you get a significant number of support emails. When I first started using “FIR Creator” I had a lot of questions—I was emailing you a lot—so you saw me make a lot of mistakes, and you see other people making mistakes. What do you think are some of the biggest mistakes that people are making who are new to “FIR Creator,” or any of the software or FIR filter use in general?
Michael
I wouldn’t necessarily classify it as a mistake, but I do caution people not to lean too heavily on the auto-correction functions: the Auto Magnitude tab and the Auto Phase tab.
Nathan
But that’s the most fun. It’s the single-button solution.
Michael
Yeah, I completely get it because suddenly everything magically goes flat and the response becomes just the way you want it.
But I guess my caution is because, as you know, drivers change their behavior with level and with temperature, and a measurement taken at one spot in a room is very different to a measurement in another spot in a room. There’s so much variability in the measurement process and in the loudspeaker. A loudspeaker is a mechanical device. It wears out. It changes its behavior over time. If you start to correct for very fine grain structure that’s in your measurement, you may be correcting perfectly for one measurement location in the room on a specific day and time, but you may make things slightly worse at other points in the room or at other levels….
Nathan
Or later in the day; temperature changes.
Michael
Yes. And so, I was even quite nervous putting those features in the software in the first place. I know that’s what people want. They want auto-correction.
Nathan
That sells a lot of software, I’m sure. I mean, the first time I saw someone use it, that’s the thing I remember. Like “oooooh, I need to get that,” and then I did, and then you made some money.
Michael
I completely get it. And yes, it is very satisfying for that measurement and you do notice it. If you’re listening at the spot where your microphone was, you notice it become flatter or clearer, or whatever other perceptual attributes you use.
However, the pro manufacturers know not to over-EQ. They have hundreds or thousands of units of a cabinet, all potentially with subtle variations (including production variations), and so they know that EQ’ing finely for one cabinet is not necessarily the right thing to do for a loudspeaker preset. That said, I have actually seen a couple of very fine-detail production FIR filters come to me from notable companies who I thought might have been a little more subtle in the correction in their presets.
Nathan
Here comes my second t-shirt pitch to you. How about Spiderman with a big title across the top in block letters “Auto Magnitude”, and then the subtitle, “With great power comes great responsibility”?

Michael
That’s a very relevant one!
Nathan
Ok, just consider it. You don’t have to decide now.
I want to ask you about a lot of things specific to your software. I know that a lot of people reading this aren’t going to care because they don’t use that software right now. But that’s ok; they can skip over this part and then we’ll wrap up at the end with some other stuff.
In the very first step in the Import (and also in the Auto Magnitude tabs) you have smoothing options Complex and Power, and in the Auto Magnitude you have Complex and Mag. What’s the difference between Complex and Power, and when would I use one or the other?
Michael
You’re probably familiar with Smaart and they have a similar option to use one or the other…
Nathan
They have Complex and Polar, right?
Michael
Their Complex and Polar options relate to averaging over time. “Polar” does a dB average over time for each frequency point, which makes the time averaging more stable where wind or other mechanical disturbances are causing fluctuations, particularly phase fluctuations, over time. “Complex” does a true complex average over time and is better where the measurement is stable over time. If the phase is fluctuating over time, the complex-averaged result can have fluctuations in level due to frequency points that have different phase partially cancelling-reinforcing-cancelling-etc.
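[Editor’s note: a toy NumPy illustration of the difference, not Smaart’s actual code. One frequency bin is “measured” 50 times with constant magnitude but jittery phase; a complex average loses level, while a dB (polar-style) magnitude average does not. The amount of jitter is made up.]

```python
import numpy as np

rng = np.random.default_rng(0)
phase_jitter = rng.normal(0.0, 1.0, size=50)            # radians, e.g. wind outdoors
frames = np.exp(1j * phase_jitter)                      # magnitude 1.0, wobbly phase

complex_avg = frames.mean()                             # "Complex": average real + imaginary
polar_avg_db = np.mean(20 * np.log10(np.abs(frames)))   # dB average of magnitude only

print("complex-averaged level: %.1f dB" % (20 * np.log10(abs(complex_avg))))  # below 0 dB
print("dB-averaged level:      %.1f dB" % polar_avg_db)                       # 0 dB
```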
Nathan
So if I’m doing TF measurements outside, my magnitude trace in Smaart will be more stable if I’m using polar averaging because it ignores phase, while complex averaging will give me more accurate results when I’m measuring inside in a stable environment?
Michael
Yes, Polar will ignore phase when averaging over time to create the magnitude plot, but I suspect Smaart might be doing full complex averaging as well for calculating the phase for the phase plot. You probably want to clarify that with someone from Rational Acoustics.
The concept is very similar to measurement smoothing in “FIR Designer,” only we’re averaging over frequency (rather than time). Measurement smoothing involves frequency-localized averaging. When frequency components are averaged together, if the phase is changing quite dramatically across frequency, frequency components can partially cancel each other, lowering the energy of the smoothed result. And that’s often more evident at high frequencies. When using complex smoothing, if you start adding some delay to the measurement, you’ll start to see the energy drop quickly at the higher frequencies. However, power smoothing discards the phase completely and smooths just the energy across frequency, resulting in a magnitude that better matches what we’re hearing.
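[Editor’s note: a toy sketch of complex versus power smoothing over frequency, not FIR Designer’s algorithm. The “measurement” is just a flat response with 1 ms of delay: its magnitude is 1 everywhere, but complex fractional-octave smoothing loses energy at high frequencies, where the phase rotates quickly within the smoothing band, while power smoothing does not.]

```python
import numpy as np

def octave_smooth(tf, f, frac=6, mode="complex"):
    """Naive 1/frac-octave smoothing on a linear frequency grid (illustration only)."""
    out = np.zeros_like(tf, dtype=complex)
    for i, fc in enumerate(f):
        if fc <= 0.0:
            out[i] = tf[i]
            continue
        lo, hi = fc * 2 ** (-0.5 / frac), fc * 2 ** (0.5 / frac)
        band = tf[(f >= lo) & (f <= hi)]
        if mode == "complex":
            out[i] = band.mean()                        # phase rotation can cancel energy
        else:                                           # "power": discard phase, average energy
            out[i] = np.sqrt(np.mean(np.abs(band) ** 2))
    return out

fs, n = 48000, 2048
f = np.linspace(0.0, fs / 2, n)
tf = np.exp(-2j * np.pi * f * 0.001)                    # flat magnitude, 1 ms of delay

print(abs(octave_smooth(tf, f, mode="complex")[-1]))    # well below 1: HF energy "lost"
print(abs(octave_smooth(tf, f, mode="power")[-1]))      # 1.0: energy preserved
```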
Now, to the difference in labels between the Import tab and the Auto Mag tab, the ‘Mag’ smoothing is power smoothing. I’ll update that.
Nathan
I see. So if 10kHz is at 0º and somehow 10.1kHz is at 180º and they are averaged together, there might be a cancellation?
Michael
Yes, definitely.
Nathan
And could you talk about then when I might need to use one or the other?
Michael
Use “Power” smoothing on the Import tab (or “Mag” smoothing on the Auto Mag tab) if you have a very messy measurement. By that I mean not just messy in level but particularly messy in phase. If the measurement is very messy in phase, you run the risk of losing energy in the frequency smoothing. And messy phase—messy measurements in general—often come from measurements in real rooms, as opposed to measurements done in an anechoic chamber. Loudspeaker preset measurements, often in an anechoic chamber, tend to be very clean. In-room measurements, as you know, tend to be messier, particularly from reflections.
Nathan
In the Magnitude Adjustment tab, how are the minimum-phase filters different from the IIR filters?
Michael
They’re the same. That’s an easy one. So, in the Mag Adjustment tab we provide filters everyone’s familiar with, but we give the option of changing the phase of those filters. For example, you can use a Linkwitz-Riley magnitude response, but with either linear-phase, minimum-phase, or maximum-phase. On the Magnitude Adjustment tab, I tend to refer to those as ‘filter prototypes’ (rather than ‘filters’) because they all get added together to create the larger FIR filter response.
The lists are slightly different. There are certain filters on the Mag Adjustment tab that can’t be implemented as IIRs, like the Keele-Horbach filters.
Nathan
Ah, yes, Keele-Horbach. My favorite. 😳
We’ve made it to the last tab, the Export tab. Sometimes I get a warning that says “Warning. Sample beyond +- one.” I was worried that I might be doing something really wrong, but I was exporting filters and using them and they seemed ok. You told me that it’s only a problem if I’m exporting to a WAV file. Otherwise, if I’m exporting to something like a CSV file that shouldn’t be a problem. Did I get that right?
Michael
That’s absolutely right. WAV files, at least in how they’re used for audio samples, are limited to values between -1.0 and 1.0. (They’re converted to fixed point in terms of the file format, but that’s a separate issue.) If you have FIR filter coefficients that are outside the range -1.0 to 1.0, and you export them as WAV and load them into a processor, there’s a good chance that the response will just not at all be what you expected. The higher value FIR filter coefficients are at best truncated or at worst may be wrapping around digitally. Either way, when you measure the response from the processor, you’ll see it just doesn’t match what you expect. Not many speaker processors and speaker processor control software products use WAV. Most use CSV or proprietary formats to get the FIR filter values over and retain the integrity of the coefficients.
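[Editor’s note: a minimal sketch of one way to guard against this before a WAV export, assuming NumPy/SciPy. The coefficient values are made up; normalising trades the out-of-range peak for an overall gain drop, which is also the point made in the reader comment at the end of this article.]

```python
import numpy as np
from scipy.io import wavfile

fir = np.array([1.3, -0.7, 0.2, 0.05])     # hypothetical taps; the peak is above 1.0

peak = np.max(np.abs(fir))
if peak > 1.0:
    fir = fir / peak                        # costs 20*log10(peak) dB of overall gain
wavfile.write("fir_filter.wav", 48000, fir.astype(np.float32))
```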
Nathan
A cool thing I saw Merlijn van Veen do in one of his seminars was export to WAV and load it into a convolution reverb plugin in a DAW.
Michael
That’s absolutely right.
Nathan
So, the last thing that we look at when we are trying to optimize the filter length is the total error. Is the ideal total error zero?
Michael
No, not at all. It really comes down to what you can hear and in what parts of the spectrum the differences occur. I guess “error” is a bit of a misnomer. It’s not really an error in the filter. It’s simply a deviation of the response of the truncated filter versus the ideal filter. We call it error because it’s easier to put a label on that plot, but it’s just a filter difference.
When making a FIR filter, particularly from filter prototypes that are derived from IIR-like prototypes, the FIR filter can have a very, very long impulse response. Too long for a typical FIR-capable speaker processor. The ideal filter has to be trimmed shorter, and it’s the trimming process that results in differences between the trimmed filter and the original infinite-length filter.
It’s up to the user what amount of discrepancy is important. Again, variations in loudspeakers will inherently be on the order of modest fractions of a dB or larger, so once the FIR filter, or even the IIR processing, is accurate to within 0.25 dB of “ideal,” you start to wonder whether the filter differences are in the noise relative to the variations in the loudspeakers themselves.
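[Editor’s note: a small SciPy sketch of what that “error” represents: the deviation of a trimmed filter from the long, nearly ideal one. The 30 Hz first-order high-pass, the 512-tap trim, and the 20 Hz comparison limit are all made-up example values.]

```python
import numpy as np
from scipy.signal import butter, lfilter, freqz

fs = 48000
b, a = butter(1, 30, btype="high", fs=fs)                 # hypothetical low-frequency EQ shape
ideal = lfilter(b, a, np.r_[1.0, np.zeros(16384 - 1)])    # very long, nearly ideal FIR
trimmed = ideal[:512]                                     # trimmed to a practical tap count

w, h_ideal = freqz(ideal, worN=4096, fs=fs)
_, h_trim = freqz(trimmed, worN=4096, fs=fs)
mask = w >= 20                                            # ignore the near-DC bins
dev_db = 20 * np.log10(np.abs(h_trim[mask]) / np.abs(h_ideal[mask]))
print("worst-case deviation above 20 Hz: %.2f dB" % np.max(np.abs(dev_db)))
```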
Nathan
That’s a good guideline. So, you’d say that anything below about 0.5 dB is probably amazing?
Michael
I would say, start to question what’s important when differences are that small. 🙂
Nathan
Ok. Since I didn’t understand that until now, initially I thought “I need to get that error number to zero.” And I just started making my filter longer, and longer, and longer, and longer and I noticed that no matter how long I made the filter, even with tens of thousands of taps, I could never get the error down to zero. We covered this already a little bit but just to wrap it up: if it’s impossible to get to zero, how do I optimize the filter length with respect to total error? Is it kind of that guideline that once you get below about half a dB, who’s going to hear that?
Michael
That’s one way. There are two things that result in really long FIR filters. One is EQ’ing very low in frequency, and the other is EQ’ing regions that have a very high magnitude or phase slope. So when you see obvious FIR filter differences in the error plot, it will always be either at the low end of the spectrum or in high-slope regions. Making changes in those aspects of the filter reduces the error.
This highlights one of the key differences between IIRs and FIRs. IIRs are just so efficient at reaching low in frequency, whereas it takes a very long FIR to match the capabilities of some simple low-frequency IIR filters.
In our paper we provide an example of unwrapping the low-frequency phase of a subwoofer using a FIR filter: to potentially improve the perceptual impact of the sub. The subwoofer inherently has an acoustic roll-off—maybe 2nd or 3rd order—as well as 2nd or 3rd order IIR HPF protection filters. Unwrapping the phase at around 20 to 30 Hz requires more than 8000 taps at 48 kHz and results in significant delay, which may be too long for live applications but possible in cinema, for example.
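[Editor’s note: a back-of-envelope check of that latency, assuming a round 8192 taps near the “more than 8000” mentioned above and that roughly half the filter length ends up as delay.]

```python
fs, taps = 48000, 8192
print(f"filter length: {1000 * taps / fs:.0f} ms; "
      f"roughly {1000 * (taps // 2) / fs:.0f} ms of delay if about half of it "
      f"sits ahead of the main impulse")
```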
Nathan
What I’m starting to understand is that some of this is hardware dependent. For example, in the BSS BLU160 that I’ve been playing with, the limit for a filter that I can load is 8000 taps, so I can’t go beyond that. At first I thought, “If I’m using all minimum-phase filters, why would I ever limit myself? I’ll just make a 100,000 tap filter and do whatever I want, and have zero error.” And it turns out that’s not realistic in terms of hardware capabilities.
Michael
Yes, FIR filters are inherently very computationally intensive and I have some numbers in the paper on the website. That’s one reason you don’t often see user-loadable super-long filters in speaker processors and amplifiers. That will keep changing as the cost/MFLOP drops over time – and we’ve seen a dramatic change in that already in terms of the capabilities of many processors and amps, and even the processors in our phones.
In something like Q-SYS and BSS, the overall architecture is a bit more flexible. FIR filter computation can be traded with other processing. In the more fixed architecture speaker processors, the manufacturer has to make a call on where to put the limits in tap lengths.
Now there are techniques for doing essentially zero-latency but very, very long convolution. They involve different implementation techniques, such as multiple sample rates or partitioning an impulse response into many chunks, doing time domain convolution on the front end, and doing FFT-based computation for all the other chunks.
The Lake Huron, for example, back in I think the late 90’s, was one of the first boxes that could do seconds of convolution. So, there are techniques to do this (particularly for arbitrary phase FIR filters), they’re just more difficult to implement. Speaker processor implementations of user-loadable FIR filters are typically regular time-domain convolution. But no doubt that will change as processing power increases.
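[Editor’s note: a simplified NumPy/SciPy sketch of the partitioning idea only: split a long impulse response into blocks, convolve each block via the FFT, and delay-and-sum the results. A real low-latency convolver streams the input and mixes time-domain and FFT processing, which this sketch does not attempt; the signal lengths and block size are arbitrary.]

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(1)
x = rng.standard_normal(10_000)          # input signal
h = rng.standard_normal(4_096)           # long impulse response
block = 512                              # partition size

direct = np.convolve(x, h)               # reference: direct convolution
partitioned = np.zeros(len(x) + len(h) - 1)
for k in range(0, len(h), block):
    part = fftconvolve(x, h[k:k + block])          # FFT convolution of one partition
    partitioned[k:k + len(part)] += part           # re-insert at the partition's delay

print(np.allclose(direct, partitioned))            # True: the sum matches direct convolution
```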
Nathan
Taps are the same as samples?
Michael
Yes. I use the terms interchangeably.
Nathan
Back to FIR Creator, I’ve noticed that the window function has an effect on the total error. Do you have any guidance around the window function? Or should it just match the window I’m using in Smaart, where I made the measurements? How do I decide which window to use?
Michael
The only guidance I would give is not to use the square-edged box-car, only because you’ll end up with a hard transition at the edges of the filter that may become audible. What the window is doing is crossfading into the filter and crossfading back out again. And the longer the cross-fade region, the more it’s suppressing the coefficients that are providing you the EQ “goodness” in terms of how far the filter can reach down in frequency (and how well it can affect regions of high magnitude or phase slope). So that’s why windows with longer ramps appear to have more error, and those with shorter edges less error.
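[Editor’s note: a small NumPy sketch of the window as a crossfade. A cosine ramp fades the filter in and out; the longer the ramp, the more of the coefficient “area” it suppresses. The tap count and ramp lengths are arbitrary.]

```python
import numpy as np

def cosine_ramp_window(taps, ramp):
    """A flat window with cosine fade-in and fade-out regions of `ramp` taps."""
    w = np.ones(taps)
    fade = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))   # 0 -> 1 cosine ramp
    w[:ramp] = fade              # fade in
    w[-ramp:] = fade[::-1]       # fade out
    return w

for ramp in (1, 64, 512):        # ramp = 1 is effectively a box-car (hard edges)
    w = cosine_ramp_window(2048, ramp)
    print(f"ramp {ramp:4d} taps: window keeps {100 * w.sum() / w.size:5.1f}% "
          f"of the coefficient 'area'")
```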
Personally, I tend to stick with the cosine ramp. The other ones I put there to give options and because people asked for them. By the way, for a fixed window ramp length, the effect of differences in window types can be fairly subtle.
Nathan
What’s one book that’s been really helpful to you?
Michael
Most recently I read an amazing book, The Bond. It’s a memoir by mountain climber Simon McCartney about his climbing life, including first ascents with climbing partner Jack Roberts of Mount Huntington and Mt McKinley (Denali) in Alaska. Truly incredible and I highly recommend it.
Nathan
What about podcasts?
Michael
NPR “Science Friday” is one I listen to a lot. Also “Fresh Air” (Terry Gross) and “Here’s the Thing” (Alec Baldwin). On YouTube, “Curious Droid” and “Scott Manley.” In terms of sport, I’m a huge follower of Formula 1 motor racing.
Nathan
Where’s the best place for people to follow your work?
Michael
We do most of our notices through Facebook and then directly on the website.
I monitor the “FIR Designer” threads on SoundForums.net, miniDSP forums, and ProSoundWeb. I encourage folks to ask questions on these, so that everyone can benefit from the responses and feedback.
Nathan
Michael, thank you so much for joining me on Sound Design Live!
Aaron (reader comment)
I don’t see “over-unity” as a problem at all, when exporting FIR coefficients to a WAV file. Just normalize it before writing the file. The only difference is an overall gain reduction. The sound itself is exactly the same.
Thanks Aaron!