Sound Design Live

Build Your Career As A Sound Engineer


Console Prep: Simplicity is Key (FREE lesson from Guerrilla Mixing on Digital Consoles)

By Nathan Lively

Learn more about Guerrilla Mixing on Digital Consoles.

Quick notes:

  • Start with system output setup, so you can later focus on inputs
  • Only set up the inputs you need, plus one (stay minimal, with a spare lead vocal channel)
  • Use the shortest signal path from input to output (if building from scratch) – get the signal to the listener
  • Make sure you have all the channels you need (consider recording, intro, playback, communication, pink noise, DCA setup, etc.)
  • A 1:1 patch is usually the simplest and most logical (avoid complexity!)
  • A convenient patch is sometimes dictated by the console (e.g., only adjacent odd/even channels can be linked for stereo; fader bank layout)
  • Have DCAs ready for controlling more channels

Choosing an online mixing course: Essential Live Sound Training vs Guerrilla Mixing

By Nathan Lively

  • Essential Live Sound Training
  • Guerrilla Mixing
  • My review of Essential Live Sound Training

Should audience depth influence crossover frequency between main and sub?

By Nathan Lively

Hypothesis: By choosing a lower crossover frequency I can expand the coupling zone between main and sub.

Conclusion: While lowering the crossover frequency does expand the coupling zone between main and sub and this fact may influence the system design, its advantages are secondary to the efficient functionality and cooperation of both drivers.

Coupling zone: The summation zone where the combination of signals is additive only. Phase offset must be <120º to prevent subtraction.

Bob McCarthy, Sound Systems: Design and Optimization

While working on a recent article about crossover slopes I started thinking about main+sub alignment and its expiration. If we know that ⅔ of the phase wheel gives us summation and ⅓ of it gives us cancellation, and we know the point in space where the two sources are aligned, then we should be able to predict the expiration date of the alignment, compare it to the audience plane, and consider whether lowering the crossover frequency to expand the area of interaction will benefit coverage.

If two sources are aligned at 100Hz and the wavelength of 100Hz is 11.3ft, then a 3.8ft distance offset will create a ⅓ λ (wavelength) phase shift (120º). If we have two sources at opposite ends of a room and they are aligned in the center, then we have a 7.6ft coupling zone. From one edge of the coupling zone to the other is ⅔ λ (240º).

80Hz has a λ of 14.13ft and would give us a coupling zone of 9.4ft, an expansion of 1.8ft.
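
To make the arithmetic above concrete, here's a minimal Python sketch of the coupling-zone width calculation (assuming a speed of sound of roughly 1130 ft/s and McCarthy's ±120º criterion):

```python
# Coupling-zone width for a given frequency, per the 1/3-wavelength
# (120º) criterion above. Speed of sound is an assumption.
SPEED_OF_SOUND = 1130.0  # ft/s, roughly at room temperature

def coupling_zone_width_ft(freq_hz):
    """Width of the zone where the main/sub distance offset stays
    within 1/3 wavelength on either side of the alignment point."""
    wavelength = SPEED_OF_SOUND / freq_hz
    return 2.0 * wavelength / 3.0

print(coupling_zone_width_ft(100))  # ~7.5 ft
print(coupling_zone_width_ft(80))   # ~9.4 ft
```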

Lowering the crossover frequency to expand the coupling zone

Here’s a section view of a main+sub alignment where you can clearly see a cancellation at 24ft. The coupling zone is 29ft, which is 65% of the audience plane.

I can lower the crossover frequency and expand the coupling zone by 4ft to 33ft, which is 71% of the audience plane.

This process can be sped up using Merlijn van Veen’s Sub Align calculator. Here’s the same system design observing the relative level difference at 100Hz.

And here it is at 80Hz. Notice that the checkered pattern indicating the coupling zone has expanded.

Instead of putting every design through every potential crossover frequency, I made a new calculator that shows the percentage of audience within the coupling zone by frequency.

I am now able to quickly compare the potential benefit of selecting one crossover frequency over another by how much the coupling zone will expand or contract. Using the example from above we can see that changing the crossover frequency from 100Hz to 80Hz only provides a 7% improvement. That doesn't seem significant enough to drive a system design decision on its own, but it could be weighed alongside other factors in the decision-making process.
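
My calculator isn't published as code, but the idea behind it can be sketched in a few lines. This is an illustrative reconstruction with made-up geometry, not the actual tool:

```python
import numpy as np

SPEED_OF_SOUND = 1130.0  # ft/s

# Hypothetical section geometry (all numbers illustrative): main flown
# at 20 ft, sub on the deck, ears at 4 ft, audience from 10 to 55 ft.
main = np.array([0.0, 20.0])
sub = np.array([0.0, 0.0])
depths = np.linspace(10.0, 55.0, 200)
listeners = np.stack([depths, np.full_like(depths, 4.0)], axis=1)

def pct_in_coupling_zone(freq_hz, align_point):
    """Percent of listeners whose main-minus-sub path difference stays
    within 1/3 wavelength of its value at the alignment point."""
    wavelength = SPEED_OF_SOUND / freq_hz
    path_diff = (np.linalg.norm(listeners - main, axis=1)
                 - np.linalg.norm(listeners - sub, axis=1))
    ref = (np.linalg.norm(align_point - main)
           - np.linalg.norm(align_point - sub))
    return 100.0 * np.mean(np.abs(path_diff - ref) < wavelength / 3.0)

align = listeners[len(listeners) // 2]  # align at mid-depth
for f in (120, 100, 80):
    print(f"{f} Hz: {pct_in_coupling_zone(f, align):.0f}% in coupling zone")
```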

Let’s look at another example. In this case the vertical distance offset is reduced and the audience depth is increased.

The calculator reveals that a 120Hz crossover would include 58% of the audience in the coupling zone, but a 75Hz crossover gives us a 13% improvement.

Should I use this calculator to pick my crossover frequency?

No. When it comes to choosing a crossover frequency there are other more important factors to consider like mechanical and electrical limitations. If your design only puts a small portion of the audience in the coupling zone, changing the crossover frequency is not going to save you.

Instead, start by observing the manufacturer’s recommendations, then the native response of each speaker, and the content of the show and its power requirements over frequency.

All that being said, knowing more about the expected performance of a sound system is powerful. I might make design changes based on the calculator's predictions. I might do nothing. Either way, I walk into the room with fewer surprises during the listening and optimization steps.

If lowering the crossover frequency increases the coupling zone, why not just always make it as low as possible?

I don’t have a great answer for this question. As I mentioned already, there are limitations to how low you can go. One major tradeoff is that your main speaker will need to handle more and more power as the crossover frequency lowers, making it less efficient.

One clear benefit I can see is estimating the viability of an overlap crossover. If you are planning a system with an overlap crossover that goes all the way up to 120Hz and you look at the calculator and see that 120Hz will only be coupling through 50% of the audience, you might decide on a unity crossover to limit the main+sub interaction into those higher frequencies, making it more stable over distance.

What about aligning at 3/4 depth?

Right! I included a phase offset option to test this and it makes a big difference. In the most recent example, if I use a ⅓ λ offset (120º), the portion of the audience in the coupling zone goes up to 88%.

MYTH: FIR filters always add a lot of delay and are impractical for live sound

By Nathan Lively

FACT: FIR filters have an arbitrary amount of delay based on their design, making their application flexible and practical.

In this interview with Michael John from Eclipse Audio we dispel FIR filter myths and discuss what live sound engineers need to know about their application in the field.

Nathan

So, I definitely want to talk to you about FIR Designer and filters and stuff, but first: when you are doing listening tests, what’s one of your go-to reference tracks?

Michael

I tend to use a lot of Sting and maybe even some Dire Straits, because the production quality on those tracks, especially on some of the older Sting albums, is just exceptional. Maybe the more recent Sting albums have become more highly compressed but if you go back a decade or two… And even “On Every Street,” which I think was the last major Dire Straits release, is just phenomenally mixed and some brilliant musicians there as well: tonally quite broad, dynamically very good, and the production quality is just top notch. It may not be quite as critical as solo piano pieces or solo voice, but they’re my go-tos.

Nathan

How did you get your first job in audio?

Michael

I’ve always played with electronics and audio gear since my teenage years; particularly assembling small PA systems, running sound at school events, and the like. That kind of led me down a path to do Electrical Engineering at university, and my first audio research job came straight from that. In the 3rd and 4th years at university, advanced signal processing was offered. I found that fascinating and gravitated towards it. That then led to audio research jobs here and in the USA. It felt like a foregone conclusion—starting with a lot of live sound interest and then going through electrical engineering.

I’ve always found live and pro audio a lot of fun. I remember even during school years I was encouraged a lot by my father to do many work-experience stints. I worked a few weeks at Australian Monitor—who used to make some really leading-edge high-power power amplifiers—and other places like Trackdown Studios, a tracking studio here in Sydney. (They later moved to Fox Studios which is now one of the big film production areas here in Australia, where they have space to record orchestras and other audio for film.) So audio, particularly live stuff, has always been in my blood and Electrical Engineering helped fulfil the signal processing / maths side.

Nathan

Nice. So, having started out working on shows, I’m curious if maybe the genesis of your software can be connected back there? Were there times when you were listening and you were like, “Well the speaker sounds really bad. I wish I could do something about it.” And later on you created software to do something about it?

Michael

Sometime in the last 10-15 years I started making a number of cabinets myself, particularly high-power wedges. I wanted to use DSP for those rather than make passive designs. The DSP amps (most of them) had only IIR-based filtering, but I could see user-loadable FIR coming to pro audio: in fully programmable DSP’s (like the SigmaStudio Analog Devices parts), but even in the pro amplifiers and plate modules. But there were really no tools for flexibly designing FIR filters and loading them into those amplifiers. From my signal processing and programming background, I thought, “It’s actually quite, I wouldn’t say easy, but I’m quite capable of building GUI software to do this.” So that set me on a path of developing my own tools which I started using in some of my own small designs. I didn’t sell anything but I was making some nice wedges and some modest size FOH cabinets. Through conversations with a variety of people, including Bennett Prescott, we started to realise there’s something in the software, beyond my own personal use; that maybe we could provide tools to the broader audio community.

Also, a lot of loudspeaker tools and many DSP programming environments involve setting up filters, making a measurement, seeing if the result is as intended, and then iterating. I didn’t see why the work process needed to be that way. I felt you could load a measurement, simulate and apply all the filtering you want, and confidently see what response you’re going to get if you actually put the filters back into the speaker processor. So that became our workflow.

Also, I was really big on wanting the user to be able to immediately see the effect of the changes they’re making.

Nathan

Right. And when you say real-time, you mean seeing the results in the graph.

Michael

Yes, absolutely. And again, many tools don’t do that. You press “calculate” and wait a few seconds, then copy the settings over somewhere else and run a measurement. I intentionally designed the workflow and the compute engine in the software to be able to give the user immediate results. It’s the same for everything in the workflow, including things like the wavelet view.

Which leads me to something else we do a little differently in our tools. Again, rather than setting up processing, measuring, and iterating, we can actually put a target response into the software and actually force each loudspeaker driver towards the intended target. And from the way the targets sum together we know exactly how the (processed) loudspeaker drivers will sum together.

Maybe other software products do this, but we had not seen this before.

Nathan

That’s great. So it sounds like, where maybe people in the past were looking at more of an isolated, like, “Let’s see what each of these filters do electronically,” you started thinking of maybe a more holistic approach, like, “Let’s see what they do together and what’s going to be the acoustic result?”

Michael

Yes, and so the workflow involves loading—particularly in say “FIR Designer M,” although it can be done in “FIR Creator” as well—measurements for each driver and then emulating the processing and looking at the sum. Also, we can take a bunch of measurements for a driver and average them down to a single response before running that through the workflow. “FIR Creator” doesn’t have averaging, but we have a separate “Averager” tool. “FIR Designer” and “FIR Designer M” both have integrated averaging capability.

Some people have asked us to simulate the processing on other measurements in addition to the main measurement and display the responses. That’s something we’re looking to add. This could be used to see the effect of processing on, for example, off-axis measurements. Or we could even show a whole balloon plot and show the effect the crossover and processing have on the overall radiation pattern of the cabinet.

Nathan

I’m interested in this topic because I’ve been showing this game that I’m working on called Phase Invaders to people and one person said, “Hey, if we’re doing this alignment and I tell you about the audience plane that I’m doing the alignment for, you should be able to tell me the expiration date of that alignment,” and I was like, “Yeah, sure. That’s actually a good idea.”

Michael

Yes. I haven’t thought about extending that to the audience plane, but certainly, based on our current workflow we could eventually show that in terms of radiation pattern, which eventually would translate to the audience plane.

Nathan

Looking back on your career so far, what do you think is one of the best decisions you made to get more of the work that you really love?

Michael

I think the decision, and again it was somewhat of a foregone conclusion, to do Electrical Engineering. I have some friends who have gone down the live sound production path rather than going to university first, and they’re doing really well and they’re loving what they do. But for me personally, going to university while indulging my passion for live sound on the side has helped my understanding of loudspeaker processing, ultimately resulting in the software we have today.

Nathan

Some follow-up questions: first of all, you don’t have any merch on your website yet, but if you did I have a suggestion. A t-shirt with Ned Stark with a thought bubble that says, “FIR is coming.”

Michael

Interesting, I like that.

Nathan

Ok, consider it. Number two, you mentioned several names of software but a lot of people reading this interview have no idea who you or Eclipse Audio are, so could you just go through each of the pieces of software?

Michael

Sure. Firstly, there’s “FIR Designer M,” our flagship product for integrated loudspeaker processing design. It enables the design and simulation for up to 6-way loudspeaker processing. It can inherently show the combined response of all the channels, and it’s our largest and most comprehensive tool.

“FIR Designer” does everything “FIR Designer M” does but for a single channel. With “FIR Designer” it’s possible to do multi-way designs; it just requires multiple projects; one project for each output channel. I think we see “FIR Designer” possibly being used more for creating filters for broad cabinet EQ or for installations.

Both products have unlimited filter capabilities and unlimited auto magnitude and auto phase bands.

“FIR Creator EX” and “FIR Creator” are similar to “FIR Designer” but with some feature limits, including limits on the number of filters and auto mag and phase bands. They’re designed to be more cost effective, including for hobbyists who may want to experiment with FIR and mixed IIR+FIR processing. Also, in “FIR Creator EX” we’ve provided professional export options: output formats for pro processors—such as DFM & Linea Research—whereas “FIR Creator” exports filters only as TXT & WAV files and some broad open formats. So, in “FIR Creator EX” we provide some of the pro capabilities of our flagship products but at a reduced price.

I should also mention “Averager.” We have measurement averaging within “FIR Designer” and “FIR Designer M.” But we also provide averaging as a separate, cheaper tool. It provides four or five different averaging modes. This is maybe more for hobbyist folks who wish to make a bunch of measurements in their space and distill them down to one measurement that they might want to use in some other tools, or in our tools; we don’t mind.

Nathan

You use these two acronyms: IIR and FIR.

Michael

Initially, I would point people to a paper we put on the website, About FIR Filtering, which provides a moderately technical perspective on FIR filters and their uses, as well as IIR filters.

In short, a FIR filter has a finite or limited time length impulse response whereas an IIR impulse response can go on “infinitely.” How does the IIR filter do that? I’ll defer to the paper on our website. The longer the filter impulse response, the more effective the filter is at EQ’ing lower frequencies. Because a FIR filter has a fixed length, it has a fundamental limit as to how low in frequency it can effect magnitude and phase changes.

An IIR filter is much more efficient at going lower in frequency because of its infinite length. However, with IIR filtering, fine grain control isn’t very easy, and it doesn’t have independent control of magnitude and phase. On the other hand, FIR filtering can do some fairly fine grain EQ and has fully independent control of magnitude and phase. That’s the 30 second answer.
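
For readers who want to see the length/latency tradeoff rather than take it on faith, here is a small scipy sketch (my illustration, not Eclipse Audio code). A single biquad reaches 60 Hz with a handful of coefficients, while a linear-phase FIR needs on the order of a thousand taps to work that low, and its delay grows with its length:

```python
from scipy import signal

fs = 48000

# IIR: one biquad (a few coefficients) reaches 60 Hz easily.
b_iir, a_iir = signal.iirpeak(60.0, Q=2.0, fs=fs)

# FIR: a linear-phase filter delays the signal by (numtaps - 1) / 2
# samples, and reaching low frequencies demands many taps.
numtaps = 1001
b_fir = signal.firwin(numtaps, 100.0, fs=fs)  # 100 Hz low-pass example
delay_ms = (numtaps - 1) / 2 / fs * 1000.0
print(f"linear-phase delay: {delay_ms:.1f} ms")  # ~10.4 ms
```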

Nathan

What are some myths that you would like to dispel about FIR filters?

Michael

FIR’s have often, I think, been associated with linear-phase processing—e.g. linear-phase brick-wall crossover filters. Linear-phase filters are inherently symmetric in their coefficients and so the delay through the filter is half the length. And so often people think, “I’m going to have all this high latency / delay and that’s no good.”

In reality a FIR filter can be anything. It can have minimum-phase behaviour, maximum-phase behaviour, linear-phase behaviour. It can have a variety of multi-phase or, what I call, arbitrary-phase behaviour. And so, the delay through the filter is arbitrary. It’s really how you design the filter. The delay is not limited to the middle point of the tap length.

Also, people often associate FIR filters with things like horn correction. That’s definitely one use, but there are many more uses and we talk about them in the paper.

Nathan

Can you compare linear-phase, minimum-phase, and maximum-phase?

Michael

A minimum-phase filter imparts an EQ profile with the least amount of delay on the signal. IIR filters are, for the most part, minimum-phase and that includes any IIR processing in a speaker processor. By the way, you can measure IIR filtering from a processor (sampling the processing as a FIR filter) and then achieve exactly the same filtering. The measurement is a minimum-phase FIR filter.

A linear-phase filter can impart a particular EQ profile but without any change in delay across frequency. The delay is constant across the whole frequency spectrum.

Maximum-phase is a term not used very often because it’s really something that’s born out of FIR filtering, where the length is finite. Imagine you have a minimum-phase filter and you literally time-reverse the impulse response. Rather than every frequency point having the minimum amount of delay added to it as part of the EQ, every frequency point now has the maximum amount of delay added to it, up to the tap length of the FIR filter. Maximum-phase filters are not normally used directly. Maximum-phase filter prototypes (at least in our software) are combined with linear-phase and/or minimum-phase filter prototypes to make a single FIR filter that pushes a loudspeaker’s phase towards flat or whatever target you desire. You can even use the phase of another loudspeaker (if the aim is to match two loudspeakers).

Minimum-phase and linear-phase are the two most common filter types.

BTW, it’s probably not obvious, but in some of the leading processors the system EQ is actually implemented as a very long minimum-phase FIR filter. It’s just easier, as the system engineer is adjusting the EQ, for the processor or control software to convert the desired EQ into a minimum-phase FIR filter, rather than to try to emulate the desired EQ with a bunch of biquad IIR filter changes. These implementations make the user experience smoother, in terms of quickly facilitating exactly the EQ the user wants.
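
Michael’s time-reversal description translates directly into code. Here is a hedged sketch using scipy (the homomorphic construction is one common way to derive a minimum-phase version of a linear-phase filter; it approximates the magnitude and is not necessarily FIR Designer’s algorithm):

```python
from scipy import signal

fs = 48000
h_linear = signal.firwin(513, 1000.0, fs=fs)  # linear-phase prototype
h_min = signal.minimum_phase(h_linear)        # ~same magnitude, least delay
                                              # (returns a shorter filter)
h_max = h_min[::-1]                           # time reversal -> maximum phase
```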

Nathan

Great. So now that we’ve defined some of these terms, I think the next thing a lot of people are going to be wondering is, “how do I use them? Where do FIR filters go?”

Michael

In a lot of cases the FIR filters are being created specifically for the speaker, not for the room. And they’re loaded via the control software for particular brand amplifiers.

Nathan

What about you personally?

Michael

For me personally, I have amplifiers – the Lab Gruppen PLM’s – that can run FIR filters (in the “FIR 3-way” module) so I would be loading FIR’s there. I don’t use custom arbitrary-phase FIR for broader system EQ in these or in a separate processor. I tend to personally use the FIR filtering for driver-based adjustments, as part of a loudspeaker preset.

However, to more broadly answer the question about where to use FIR filters, we do see some customers using FIR filters in installations. Especially where the system is made up of different makes and models of loudspeakers, each with different phase characteristics. When arraying different cabinets in a larger system, it’s helpful to have the phase matching between those cabinets as part of the full system optimization. If the phase doesn’t match, there’s the risk of getting response holes in overlapping coverage areas in the room. Or at least some lowering of energy in overlapping coverage areas, particularly at frequencies where the phase is dramatically different between the two cabinets. Phase matching the cabinets first makes the tuning of the coverage easier in a larger installation.

We’re also seeing some loudspeaker manufacturers phase-match all their product lines, so that end users can mix and match their products without having to think about it.

Nathan

I know that almost every Meyer Sound speaker can play together nicely because of matching phase characteristics.

Michael

Now that we have FIR filters, it makes the matching process a little easier across product lines. Yes, I think Meyer may have been one of the first to do this, but other manufacturers are definitely doing this too.

Some speaker processors and amplifiers are starting to provide dual FIR’s. That is, every output has a FIR filter for the loudspeaker preset, and a second FIR filter for array processing. A line array is the best-use case, where the first filter is used to tailor the cabinet response, and then the second bank of FIR filters (across all amplifiers feeding the line array) is used for coverage optimization within the space. Array processing is becoming a big thing in live. I think Martin’s MLA was one of the early ones and there’s AFMG’s solution FIRmaker. EAW’s Anya is another example of a fully steered array. And they’re not the only ones.

Nathan

Do you think FIR filters have a place in the work of a live sound engineer? Are there some of these things that are field applicable? Or are these only for manufacturers?

Michael

That’s a tough one. I honestly don’t know. In the installation scenario, I think it makes a lot of sense where there’s plenty of time to do many measurements and synthesize the filters to achieve a certain system result.

I think in a touring context, I’m not sure it’s as applicable or useful. If the loudspeakers are inherently phase matched and work well together anyway, and given the time constraints in a tour or a typical live setup, just getting the system EQ to be nice is top priority.

That said, considering the current minimum-phase system EQ in a large system, as you start to make minimum-phase EQ changes – particularly in a multizone system – you may start to get slight phase changes between regions of the system, or even between the short, medium, and long throw sections of the line arrays. So maybe in the future the system EQ will move towards something that at least can maintain phase consistency across the broader PA system. I don’t know. That’s something I think might be worth considering and investigating.

Nathan

I’m sure you get a significant number of support emails. When I first started using “FIR Creator” I had a lot of questions—I was emailing you a lot—so you saw me make a lot of mistakes, and you see other people making mistakes. What do you think are some of the biggest mistakes that people are making who are new to “FIR Creator,” or any of the software or FIR filter use in general?

Michael

I wouldn’t necessarily classify it as a mistake, but I do caution people not to lean too heavily on the auto-correction functions: the Auto Magnitude tab and the Auto Phase tab.

Nathan

But that’s the most fun. It’s the single-button solution.

Michael

Yeah, I completely get it because suddenly everything magically goes flat and the response becomes just the way you want it.

But I guess my caution is because, as you know, drivers change their behavior with level and with temperature, and a measurement taken at one spot in a room is very different to a measurement in another spot in a room. There’s so much variability in the measurement process and in the loudspeaker. A loudspeaker is a mechanical device. It wears out. It changes its behavior over time. If you start to correct for very fine grain structure that’s in your measurement, you may be correcting perfectly for one measurement location in the room on a specific day and time, but you may make things slightly worse at other points in the room or at other levels….

Nathan

Or later in the day; temperature changes.

Michael

Yes. And so, I was even quite nervous putting those features in the software in the first place. I know that’s what people want. They want auto-correction.

Nathan

That sells a lot of software, I’m sure. I mean, the first time I saw someone use it, that’s the thing I remember. Like “oooooh, I need to get that,” and then I did, and then you made some money.

Michael

I completely get it. And yes, it is very satisfying for that measurement and you do notice it. If you’re listening at that spot where your microphone was, you notice it become flatter or clearer, or whatever other perceptual attributes you use.

However, the pro manufacturers know not to over-EQ. They have hundreds or thousands of a cabinet, all potentially with subtle variations (including production variations), and so they know that EQ’ing finely for one cabinet is not necessarily the right thing to do for a loudspeaker preset. That said, I have actually seen a couple of very fine-detail production FIR filters come to me from notable companies who I thought might have been a little more subtle in their correction in their presets.

Nathan

Here comes my second t-shirt pitch to you. How about Spiderman with a big title across the top in block letters “Auto Magnitude”, and then the subtitle, “With great power comes great responsibility”?

Michael

That’s a very relevant one!

Nathan

Ok, just consider it. You don’t have to decide now.

I want to ask you about a lot of things specific to your software. I know that a lot of people reading this aren’t going to care because they don’t use that software right now. But that’s ok; they can skip over this part and then we’ll wrap up at the end with some other stuff.

In the very first step in the Import (and also in the Auto Magnitude tabs) you have smoothing options Complex and Power, and in the Auto Magnitude you have Complex and Mag. What’s the difference between Complex and Power, and when would I use one or the other?

Michael

You’re probably familiar with Smaart and they have a similar option to use one or the other…

Nathan

They have Complex and Polar, right?

Michael

Their Complex and Polar options relate to averaging over time. “Polar” does a dB average over time for each frequency point, which makes the time averaging more stable where wind or other mechanical disturbances are causing fluctuations, particularly phase fluctuations, over time. “Complex” does true complex average over time and is better where the measurement is stable over time. If the phase is fluctuating over time, the complex averaged result can have fluctuations in level due to frequency points that have different phase partially cancelling-reinforcing-cancelling-etc.

Nathan

So if I’m doing TF measurements outside, my magnitude trace in Smaart will be more stable if I’m using polar averaging because it ignores phase, while complex averaging will give me more accurate results when I’m measuring inside in a stable environment?

Michael

Yes, Polar will ignore phase when averaging over time to create the magnitude plot, but I suspect Smaart might be doing full complex averaging as well for calculating the phase for the phase plot. You probably want to clarify that with someone from Rational Acoustics.

The concept is very similar to measurement smoothing in “FIR Designer,” only we’re averaging over frequency (rather than time). Measurement smoothing involves frequency-localized averaging. When frequency components are averaged together, if the phase is changing quite dramatically across frequency, frequency components can partially cancel each other, lowering the energy of the smoothed result. And that’s often more evident at high frequencies. When using complex smoothing, if you start adding some delay to the measurement, you’ll start to see the energy drop quickly at the higher frequencies. However, power smoothing discards the phase completely and smooths just the energy across frequency, resulting in a magnitude that better matches what we’re hearing.

Now, to the difference in labels between the Import tab and the Auto Mag tab, the ‘Mag’ smoothing is power smoothing. I’ll update that.

Nathan

I see. So if 10kHz is at 0º and somehow 10.1kHz is at 180º and they are averaged together, there might be a cancellation?

Michael

Yes, definitely.
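
To put numbers on that exchange, here is a two-line Python check (my illustration): averaging a 0º bin with a nearly 180º bin of equal magnitude almost cancels in a complex average, while a power average keeps the level.

```python
import numpy as np

# Two equal-magnitude components with nearly opposite phase.
bins = np.array([np.exp(1j * 0.0), np.exp(1j * np.deg2rad(170.0))])
print(np.abs(bins.mean()))                # ~0.09: complex average cancels
print(np.sqrt(np.mean(np.abs(bins)**2)))  # 1.0: power average keeps level
```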

Nathan

And could you talk about then when I might need to use one or the other?

Michael

Use “Power” smoothing on the Import tab (or “Mag” smoothing on the Auto Mag tab) if you have a very messy measurement. By that I mean not just messy in level but particularly messy in phase. If the measurement is very messy in phase, you run the risk of losing energy in the frequency smoothing. And messy phase—messy measurements in general—often come from measurements in real rooms, as opposed to measurements done in an anechoic chamber. Loudspeaker preset measurements, often in an anechoic chamber, tend to be very clean. In-room measurements, as you know, tend to be messier, particularly from reflections.

Nathan

In the Magnitude Adjustment tab, how are the minimum-phase filters different from the IIR filters?

Michael

They’re the same. That’s an easy one. So, in the Mag Adjustment tab we provide filters everyone’s familiar with, but we give the option of changing the phase of those filters. For example, you can use a Linkwitz-Riley magnitude response, but with either linear-phase, minimum-phase, or maximum-phase. On the Magnitude Adjustment tab, I tend to refer to those as ‘filter prototypes’ (rather than ‘filters’) because they all get added together to create the larger FIR filter response.

The lists are slightly different. There are certain filters on the Mag Adjustment tab that can’t be implemented as IIR’s, like the Keele-Horbach filters.

Nathan

Ah, yes, Keele-Horbach. My favorite. 😳

We’ve made it to the last tab, the Export tab. Sometimes I get a warning that says “Warning. Sample beyond +- one.” I was worried that I might be doing something really wrong, but I was exporting filters and using them and they seemed ok. You told me that it’s only a problem if I’m exporting to a WAV file. Otherwise, if I’m exporting to something like a CSV file that shouldn’t be a problem. Did I get that right?

Michael

That’s absolutely right. WAV files, at least in how they’re used for audio samples, are limited to values between -1.0 and 1.0. (They’re converted to fixed point in terms of the file format, but that’s a separate issue.) If you have FIR filter coefficients that are outside the range -1.0 to 1.0, and you export them as WAV and load them into a processor, there’s a good chance that the response will just not at all be what you expected. The higher value FIR filter coefficients are at best truncated or at worst may be wrapping around digitally. Either way, when you measure the response from the processor, you’ll see it just doesn’t match what you expect. Not many speaker processors and speaker processor control software products use WAV. Most use CSV or proprietary formats to get the FIR filter values over and retain the integrity of the coefficients.
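
A defensive check along those lines is easy to script before export. This is a generic sketch, not an Eclipse Audio feature, and the scale-down-plus-make-up-gain workaround is my assumption about common practice:

```python
import numpy as np

def prepare_for_wav(coeffs):
    """Scale FIR coefficients into [-1.0, 1.0] for WAV export and report
    the make-up gain (dB) to restore downstream in the processor."""
    peak = np.max(np.abs(coeffs))
    if peak <= 1.0:
        return coeffs, 0.0
    return coeffs / peak, 20.0 * np.log10(peak)

coeffs = np.array([0.2, 1.7, -0.4])  # example filter that would clip
scaled, makeup_db = prepare_for_wav(coeffs)
print(makeup_db)                     # ~4.6 dB to make up elsewhere
```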

Nathan

A cool thing I saw Merlijn van Veen do in one of his seminars was export to WAV and load it into a convolution reverb plugin in a DAW.

Michael

That’s absolutely right.

Nathan

So, the last thing that we look at when we are trying to optimize the filter length is the total error. Is the ideal total error zero?

Michael

No, not at all. It really comes down to what you can hear and in what parts of the spectrum the differences occur. I guess “error” is a bit of a misnomer. It’s not really an error in the filter. It’s simply a deviation of the response of the truncated filter versus the ideal filter. We call it error because it’s easier to put a label on that plot, but it’s just a filter difference.

When making a FIR filter, particularly from filter prototypes that are derived from IIR-like prototypes, the FIR filter can have a very, very long impulse response. Too long for a typical FIR-capable speaker processor. The ideal filter has to be trimmed shorter, and it’s the trimming process that results in differences between the trimmed filter and the original infinite-length filter.

It’s up to the user what amount of discrepancy is important. Again, variations in loudspeakers will inherently be on the order of modest fractions of a dB or larger, so if you get the FIR filter, or even the IIR processing, accurate to within 0.25 dB of “ideal,” you start to wonder whether filter differences are in the noise, relative to the variations in the loudspeakers themselves.
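
To reproduce the flavor of that “error” plot at home, you can compare a trimmed copy of a long filter against the original (a rough sketch; FIR Designer’s exact metric may differ):

```python
import numpy as np
from scipy import signal

fs = 48000
h_full = signal.firwin(8001, 100.0, fs=fs)  # stand-in for a long ideal filter
h_trim = h_full[3500:4501]                   # keep only the middle 1001 taps

w, H_full = signal.freqz(h_full, worN=8192, fs=fs)
_, H_trim = signal.freqz(h_trim, worN=8192, fs=fs)

passband = w < 80.0                          # compare below the corner only
dev_db = 20.0 * np.log10(np.abs(H_trim[passband]) / np.abs(H_full[passband]))
print(np.max(np.abs(dev_db)))                # peak deviation, trimmed vs ideal
```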

Nathan

That’s a good guideline. So, you’d say that anything below about 0.5 dB is probably amazing?

Michael

I would say, start to question what’s important when differences are that small. 🙂

Nathan

Ok. Since I didn’t understand that until now, initially I thought “I need to get that error number to zero.” And I just started making my filter longer, and longer, and longer, and longer and I noticed that no matter how long I made the filter, even with tens of thousands of taps, I could never get the error down to zero. We covered this already a little bit but just to wrap it up: if it’s impossible to get to zero, how do I optimize the filter length with respect to total error? Is it kind of that guideline that once you get below about half a dB, who’s going to hear that?

Michael

That’s one way. There are two things that result in really long FIR filters. One is EQ’ing very low in frequency, and the other is EQ’ing regions that have very high magnitude or phase slope. So when you see obvious FIR filter differences in the error plot, it will always be either at the low end of the spectrum or in high slope regions. Making changes in those aspects of the filter reduces the error.

This highlights one of the key differences between IIR’s and FIR’s. IIR’s are just so efficient at reaching low in frequency, whereas it takes a very long FIR to match the capabilities of some simple low frequency IIR filters.

In our paper we provide an example of unwrapping the low-frequency phase of a subwoofer using a FIR filter: to potentially improve the perceptual impact of the sub. The subwoofer inherently has an acoustic roll-off—maybe 2nd or 3rd order—as well as 2nd or 3rd order IIR HPF protection filters. Unwrapping the phase at around 20 to 30 Hz requires more than 8000 taps at 48 kHz and results in significant delay, which may be too long for live applications but possible in cinema, for example.

Nathan

What I’m starting to understand is that some of this is hardware dependent. For example, in the BSS BLU160 that I’ve been playing with, the limit for a filter that I can load is 8000 taps, so I can’t go beyond that. At first I thought, “If I’m using all minimum-phase filters, why would I ever limit myself? I’ll just make a 100,000 tap filter and do whatever I want, and have zero error.” And it turns out that’s not realistic in terms of hardware capabilities.

Michael

Yes, FIR filters are inherently very computationally intensive and I have some numbers in the paper on the website. That’s one reason you don’t often see user-loadable super-long filters in speaker processors and amplifiers. That will keep changing as the cost/MFLOP drops over time – and we’ve seen a dramatic change in that already in terms of the capabilities of many processors and amps, and even the processors in our phones.

In something like Q-SYS and BSS, the overall architecture is a bit more flexible. FIR filter computation can be traded with other processing. In the more fixed architecture speaker processors, the manufacturer has to make a call on where to put the limits in tap lengths.

Now there are techniques for doing essentially zero-latency but very, very long convolution. They involve different implementation techniques, such as multiple sample rates or partitioning an impulse response into many chunks, doing time domain convolution on the front end, and doing FFT-based computation for all the other chunks.

The Lake Huron, for example, back in I think the late 90’s, was one of the first boxes that could do seconds of convolution. So, there are techniques to do this (particularly for arbitrary phase FIR filters), they’re just more difficult to implement. Speaker processor implementations of user-loadable FIR filters are typically regular time-domain convolution. But no doubt that will change as processing power increases.

Nathan

Taps are the same as samples?

Michael

Yes. I use the terms interchangeably.

Nathan

Back to FIR Creator, I’ve noticed that the window function has an effect on the total error. Do you have any guidance around the window function? Or should it just match the window I’m using in Smaart, where I made the measurements? How do I decide which window to use?

Michael

The only guidance I would give is not to use the square-edge box-car, only because you’ll end up with a hard transition at the edges of the filter that may become audible. What the window is doing is crossfading into the filter and crossfading back out again. And the longer the cross-fade region, the more it suppresses the coefficients that are providing you the EQ “goodness” in terms of how far the filter can reach down in frequency (and affecting regions of high mag or phase slope). That’s why the longer window ramps appear to have more error, and the shorter window edges less error.

Personally, I tend to stick with the cosine ramp. The other ones I put there to give options and because people asked for them. By the way, for a fixed window ramp length, the effect of differences in window types can be fairly subtle.
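
For the curious, the crossfade Michael describes behaves like a Tukey (cosine-tapered) window. A minimal sketch, assuming scipy and made-up filter parameters:

```python
from scipy import signal

h = signal.firwin(2001, 100.0, fs=48000)       # filter to be trimmed/exported
ramp = signal.windows.tukey(len(h), alpha=0.2)  # 10% cosine ramp on each edge
h_windowed = h * ramp                           # fade in/out, no hard box-car edges
```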

Nathan

What’s one book that’s been really helpful to you?

Michael

Most recently I read an amazing book, The Bond. It’s a memoir by mountain climber Simon McCartney about his climbing life, including first-ascents with climbing partner Jack Roberts of Mount Huntington and Mt McKinley (Denali) in Alaska. Truly incredible and I highly recommend it.

Nathan

What about podcasts?

Michael

NPR “Science Friday” is one I listen to a lot. Also “Fresh Air” (Terry Gross) and “Here’s the Thing” (Alec Baldwin). On YouTube, “Curious Droid” and “Scott Manley.” In terms of sport, I’m a huge follower of Formula 1 motor racing.

Nathan

Where’s the best place for people to follow your work?

Michael

We do most of our notices through Facebook and then directly on the website.

I monitor the “FIR Designer” threads on SoundForums.net, miniDSP forums, and ProSoundWeb. I encourage folks to ask questions on these, so that everyone can benefit from the responses and feedback.

Nathan

Michael, thank you so much for joining me on Sound Design Live!

6 Smart, Proven Methods To Control Feedback Onstage (Without EQ)

By Nathan Lively


There is nothing worse than spending an entire event struggling with feedback demons. You may have been taught to fight feedback with a graphic EQ, but there is a better way. Actually, that’s not true: there are six better ways. Use my guide to controlling feedback onstage and mix in fear no more.

“The feedback frequency is determined by resonance frequencies in the microphone, amplifier, and loudspeaker, the acoustics of the room, the directional pick-up and emission patterns of the microphone and loudspeaker, and the distance between them.” –Wikipedia

Method #0 – Psychology

I had to include this step 0 because the more I thought about it and the more I talked to other sound engineers, the more this came up. When it comes to improving your GBF (gain before feedback), start with the beginning of your signal chain and work forwards.

Example 1: Jason works as an AV tech on city council meetings. He was having lots of feedback problems and asked for my help. After we went through everything in the signal chain and made improvements where we could, the most important change we made was simply explaining to the council members the importance of proper microphone positioning. Nothing else we did made more of an impact than getting that first step right.

Example 2: When Brian Adler works as a monitor engineer in situations where he expects the GBF to be an issue, he will purposely start with the vocal mics way too loud in the mix. This gives the performer a little shock and starts the sound check with them asking for their mix level to be turned down, instead of what normally happens.

Probably the biggest tip I can give in this area is to be proactive and be a pack leader. You don’t want to wait until the stage is all set up and you are halfway through the sound check before you approach the guitarist about potentially moving his amp for a less face-melting experience. Instead, while you’re giving them a hand loading in, mention that “What we normally do here is put the guitar amp on this stand so that you can hear it well and I can get a better mix out front.”

Or for vocalists: “We’ve found that the ideal position for the monitor is with this microphone in this position. If you want it to be somewhere else, I’m totally fine with that, but it might not be able to get as loud, so we’ll have to work around that.”

Method #1 – Microphone Placement

Close Miking

For loud stages and busy rooms, close miking is generally the way to go. It might not always be the best for sound, but for the maximum gain before feedback, you have to kiss the mic. Remember, sound pressure level drops 6dB with each doubling of distance from the source. Plus, if you’re working mostly with Shure SM58 and SM57 microphones, that’s how they are designed to be used anyway.
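
That distance rule is just the inverse square law, which you can sanity-check in a couple of lines (distances are illustrative):

```python
import math

def level_change_db(d_near, d_far):
    """SPL change for a point source moved from d_near to d_far."""
    return -20.0 * math.log10(d_far / d_near)

print(level_change_db(1.0, 2.0))  # mouth-to-mic 1 in -> 2 in: -6.0 dB
```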

For corporate audio this usually means teaching your presenter how to handle the mic. For theatre this means adjusting headworn capsule placement. I have seen sound designers successfully mic a play without headworn microphones, but it’s tricky (see How To Mic An 800 Seat Theatre With Floor Mics).

Polar Pattern

[Polar pattern diagram from SoundOnSound]

For concert sound you almost never use an omnidirectional mic. Microphones with a cardioid pickup pattern have the most rejection at the rear of the mic capsule, which should be pointed at the stage monitor.

Don’t cup the mic! This will defeat the directional pattern, turning it into an omnidirectional mic.

Corporate and theatre events require specific and stable placement of the microphone capsule. Some sound engineers argue in favor of using omnidirectional capsules on the grounds that they are easier to place and produce more reliable results with the movement of the actor. My experience is that none of that matters when the audience can’t hear the actor because you can’t get enough gain.

I’ve done a lot of musicals and concerts with omnidirectional head-worn microphones in the past, though, and it’s always a struggle. The performers can’t hear themselves, and if the audience starts clapping or singing along, chaos ensues. Why did I do this? Because it was what I had available. These days I try to let directors and event producers know way ahead of time about the limits of working with certain equipment. If possible, I’ll schedule a test so they can hear the difference in the performance space.

Method #2 – Speaker Placement

Stage Monitors

Floor wedges should be placed on-axis and as close to the performer’s head as possible. I’ve heard people suggest moving the monitor away from the performer for better gain before feedback, but don’t do that. That just creates lower sound levels at their ear level, so you’ll have to turn it up louder. Most live stages are loud enough as it is, so anything you can do to lower the stage monitor level will be helpful.


Have you ever seen those little Hotspot monitors? I haven’t seen them in a few years, but I love the idea. Put a small monitor on a stand and you significantly reduce its distance to the performer.

Sometimes, because of sightline issues or stage layout, you can’t get a monitor right in front of a performer where a cardioid microphone’s rear rejection null is. This happens often with drummers and keyboard players whose instruments take up so much space, and lead vocalists who want clear sightlines. This is when you need a hypercardioid or supercardioid microphone, and this is why many live music venues have a collection of Shure SM58 (cardioid) and Beta 58A (supercardioid) microphones, or similar.

If you find yourself stuck with a drummer or piano player whose stage monitor is at a 90° angle to a cardioid microphone, try cheating the microphone out closer to 45° to get more rejection. If an artist requests a monitor position that is less than ideal for your microphone selection, go ahead and do it, but warn them that you may run into feedback problems and need to reconfigure the speaker and mic.

I’ve seen some pretty creative microphone and monitor placement that allow for very high gain before feedback. If you are working with acoustic instruments, ask the performers if they have any tips for placement. I used to work with a cello player in Portugal who placed the stage monitor a little behind himself so that it wasn’t pointed at his microphone but it was still aimed at his head. It worked great.

Stage monitor placement for theatre deserves its own article, but my number one tip is to start the conversation early. Explain your limitations to the production team and discuss ways to best accommodate the actors. You don’t want to realize in tech rehearsals that the actors can’t hear the musicians and that the director won’t allow downstage speakers. I often lobby for small downstage monitors straight out of the gate. I also try to make friends with the set director and builder as quickly as possible, alerting them to the fact that I’ll probably need help hiding speakers around the stage.

FOH

Make sure your FOH speakers are covering the house and not the stage. This means checking the speakers’ off-axis angles to make sure they are not spilling onto the stage or creating strong wall reflections. (See also: How To Tune A Sound System In 15 Minutes.) I’ve heard people say that all microphones must be at least six feet behind FOH, but I’ve seen it done many different ways. Some situations call for more separation and control, others less.

Method #3 – Instrument/Source Placement


If you are working with a loud rock band and you place the lead vocalist right in front of the drummer, guess what happens? Your vocal mic will be full of drums and your vocalist won’t be able to hear. This happens all the time, and explains why you see the bands on Saturday Night Live using a drum shield on that very small stage.

Your goal is to balance every source input for the performers and audience. Now let’s talk about the most frequent offenders.

Drums

Drums are loud. Some drummers are interested in harmony and balance, and will change their technique, use brushes, and dampen their instruments. Those drummers are in the minority. Why? Well, have you ever played drums? It’s fun as hell to play loud, and boring as shit to play soft, or so goes my personal experience.

If you’re on tour, you’ll need a rug and a drum shield. If you’re full-time at a venue, put absorption everywhere. Two of the noisiest venues I’ve worked at have pulled the same trick and covered their ceiling and walls with black semi-rigid duct insulation or vinyl that screws right into the wall. It made a big difference.

For more on this topic, see 5 Pro Drummers Explain How to Make a Drum Kit Quieter on Stage.

Electric Guitars

I’m a guitarist, and as such I’m fully aware of how hard it is to hear myself without the amplifier blaring. The only way I was able to handle this in my band was to learn to play without hearing. In the real world, getting a guitarist’s amp as close to their head as possible will help. Put it on a chair or milk crate. Most are open-back, so put a bunch of absorption back there.

In my interview with Larry Crane he mentions a guitarist who built a Plexiglass shield for his amp that redirected the sound upward at an angle so that he could play with feedback and do fancy things with his amp without blasting the stage. Pretty smart.


I worked on a show last year where the guitarist made a shield for his amp from case lids and jackets. This helped it not bleed into other microphones as much.

Buford Jones is famous for doing whole tours mixing from inside a truck outside of the venue. (He’s even more famous for mixing some band called Pink Floyd.) These were large venues where they had little acoustic sound coming from the stage. The guitar amps were all in doghouses off-stage and all of the performers were on IEMs (in-ear monitors). Most of us won’t experience that, but it gives you an idea of how far people will go to control sound levels on stage. If you are worried about approaching a guitarist to discuss changing their setup, just remember that asking them to turn down their amp and put it on a stand is nothing compared to removing it from the stage entirely.

Method #4 – Mix

Stage Monitor

Most performers these days are wise to the challenges of microphone feedback on stage and will make specific requests for their monitor mix. I’ve made it a practice to not add anything to a stage monitor mix until expressly asked to, except for vocalists who almost always need reinforcement. When musicians walk in the door saying, “Just give me a mix of everything,” they likely don’t know what they need. Smile and nod.

I’ve made it through entire shows without adding anything to some performers’ stage monitors because the stage layout allowed them to hear everyone. I’ve also worked on shows where the band has skipped sound check then walked on stage expecting a complete mix. I try not to work off of assumptions and I give people only what they need, because the lower your stage volume, the better your FOH mix will be, and everyone will be happier.

FOH

In small to medium venues, you aren’t “mixing” in the classical sense, you are doing sound reinforcement. You are balancing the acoustic energy in the room for a more pleasant musical experience. From my interview with Howie Gordon:

The other thing I hear a lot about [is] guys setting the whole mix base from the drums, and in my opinion that’s the last thing you should do because the thing that immediately suffers is vocals. It’s the one instrument that can’t control its own stage volume. -Howie Gordon

And from my interview with Larry Crane:

How many times have you been blown out of the water by the mains because you’re trying to keep up with the stage? It’s like, “No, no, no! That’s not necessary.” You’re not building the mix up from the kick drum at that point. You’re building the mix down from what’s happening on the stage, and you’re filling in what’s missing, just a little bit. -Larry Crane

If you need definition on the bass guitar, roll off the low end and mix it in. If you are missing the melody from the keyboard, bring up the right hand. If the guitarist is too loud then invert the polarity and lower his volume in the house with destructive interference. That’s how noise cancelling headphones work.

(Just kidding! You know I’m kidding, right? If you actually try that and it works, keep it to yourself.)

Compression

Normally, I love compressors, but they raise the noise floor and reduce dynamic range, and therefore reduce gain before feedback. I would really like to use compression on lapel mics during corporate presentations, for example, but I’m often on the verge of feedback and can’t spare the gain.

Method #5 – The Holy Grail

IEMs, e-drums, synths. Done! 😉

Method #6 – Don’t Give A Fuck

“These setups that we’re working on, there’s EQs everywhere. If there’s still feedback, it’s too loud. So lower it or let it ring all night. I don’t give a fuck.” —Dimitris Sotiropoulos

I laugh every time I read this quote, but there is plenty of truth to it. Half of what I write on Sound Design Live is about psychology. People don’t trust sound because they can’t see it. That also means they don’t trust you because they can’t see what you’re doing. Letting the monitor feed back for a second before you bring it down communicates to the artist that it has reached its maximum level and that you are turning it down.

Q: But you do use EQ, right?

A: Um, yeah, most of the time. At least to attenuate some low end.

#ObligatoryBonus – EQ

This is your last tool in the war on feedback. Use high-pass filters to remove the rumble from guitars and the proximity effect from vocals. Use narrow-band filters on a parametric EQ to surgically remove problem frequencies. Although it’s your last step, it’s also necessary. Temperature, humidity, and performance changes throughout the night will require compensation.

I recently worked with a sound engineer who would cut the low end from all of his vocals up to 200Hz in the stage mix. That’s a lot! But it worked. A few years ago I worked on an outdoor event where everything would be balanced during the afternoon sound check, then explode into feedback at night because of environmental changes.

So I think we can agree that some amount of EQ is necessary, but watch out for assuming too much. There is a process that we sound engineers call “ringing out the monitors” that often takes place before any artists have arrived. We use this process to lessen the amount of time we will need to chase feedback during sound check. I gotta tell you that over years of working on live events I do this less and less. Why? Because if you do it before sound check then you are making a lot of assumptions about the sound that can all be ruined by changing a mic or its placement. You’re also making changes to the speakers’ performance and sound quality without due cause. A better technique is to test for feedback, make note of those frequencies, but hold off on making changes until you need more gain.

I sort of hate the fact that “ringing out” is supposed to be a normal part of our job. Under normal circumstances, with high-quality equipment and a properly optimized system, you shouldn’t have to do this. The fact that it is a normal part of our job makes me realize that there are a lot of sound systems out there that need your TLC.

Pulling half the bands down on a graphic EQ is like removing a tumor with a wiffle ball bat.


If this is the first article you’ve ever read from me, you may wonder what I have against graphic EQs. For system EQ, their fixed frequency, bandwidth, and logarithmic spacing make them unhelpful. They remain popular because they seem to give you a visual (graphical) representation of the changes you are making. Unfortunately, the visual is misleading. While you appear to be making surgical incisions, you are really making ⅓ octave tonal changes. You can prove this to yourself by measuring one. Here’s a step-by-step guide.

I hate graphic EQs. I don’t use them unless I don’t have a better choice. You’re talking about ⅓ of an octave. That’s like a C to an F on a piano.

Michael Lawrence – Fighting Microphone Feedback WITHOUT a Graphic EQ While Mixing Monitors from FOH in a Reverberant Room

Basically, the only things that graphic EQs are good for are ear training and maybe use in the battlefield that is Monitorland. For more, see my interview with Dave Swallow, my interview with Bob McCarthy, and my review of McCarthy’s book.

Another consideration is where you will insert these EQ filters. Your first idea might be to insert them on the master output buss of the mixing console. Consider that this has global repercussions on the entire mix. You are affecting the system response and mix balance. If possible, scale your changes back to the smallest local change possible. Is the feedback originating from a single microphone to all outputs? Insert your EQ on that microphone’s input channel first. Is the feedback frequency present to varying degrees in all vocal microphones? Insert the EQ on the vocal buss.

In the world of my dreams, I would be able to insert filters on a per-send basis from each input channel for maximum transparency. Unfortunately, the only way I know to accomplish that on modern mixing consoles is to create a duplicate input channel for each send, which is overly complicated.

Ambient Changes

Humidity and temperature changes throughout the night will require compensation, especially if you are outside. My first big lesson in this came while working for the band O’QueStrada in Portugal at an outdoor concert at the Centro Cultural de Belém. I had all of my monitor mixes set just on the edge of feedback, which seemed fine during soundcheck. We came back that night to start the show and as soon as I unmuted the band I also unleashed a storm of microphone feedback.

At the time I didn’t understand that a rise in relative humidity at that location would result in less high-frequency air absorption. I could have compensated for the change in humidity with a high-shelf filter.

The lesson: Don’t mix your stage monitors to the edge of feedback if you expect a rise in relative humidity and be prepared to compensate with a high shelf filter.

Temperature changes are less obvious. It would take a 20ºF change in temperature to produce a 2% change in the speed of sound, which may be only enough to shift your acoustic crossover point by one seat. Unless you are working outside with some very large changes in temperature, I wouldn’t worry about its effect on microphone feedback.
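
You can sanity-check that 2% figure with the standard ideal-gas approximation for the speed of sound in air:

```python
def speed_of_sound_fps(temp_f):
    """Approximate speed of sound in air (ft/s): c ≈ 49.03 * sqrt(ºRankine)."""
    return 49.03 * (temp_f + 459.67) ** 0.5

c_68, c_88 = speed_of_sound_fps(68.0), speed_of_sound_fps(88.0)
print(100.0 * (c_88 - c_68) / c_68)  # ~1.9% for a 20ºF rise
```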

Other Tricks To Try

Feedback Eliminator

If you look up reviews for feedback eliminators they are almost equally bad and good. You never see them on professional productions. Part of the issue is that sound engineers don’t like things to be out of their control, but the main problem is that these units just don’t work that well. Everyone who has used them has horror stories.

That being said, sometimes pro audio feels like a war zone, and I will never judge you for using one. Especially for corporate events where you have several lapel mics walking around a stage and you only need to stop one frequency from feeding back on one microphone for three seconds at a time. Or small setups where you are very limited in the way of EQ.

Frequency Training

Imagine the show-stopping seconds you could save if you could identify feedback frequencies immediately without using an analyzer. There are some nice apps out there that will train you to identify frequencies. This is not the same skill as having perfect pitch. It’s pitch memory and anyone can learn it. Most of them train you using the 32 bands of a standard graphic EQ, which isn’t ideal, but is a great place to start, helping you avoid a frequency-wide sweep. I’ve used Audio Frequency Trainer and Quiztones. Read more about my experience here: My Results from 30 Days of Ear Training.

Microphone Splitter


Don’t have a digital mixer or a separate mix console for the stage? Try splitting a few channels for more control. Let’s use the lead vocal microphone as an example. Right before it comes into the mixing board, connect a splitter or use a Y cable for the most basic passive version. This will give you two copies of the lead vocal coming into the mixing board. Mix one for the house, mix one for the stage. This will allow you to roll off way more low end than you normally would and make other adjustments to the stage mix without significantly affecting the house.

Polarity and Delay

It has been suggested to me that you can invert polarity or add small amounts of channel delay to get more gain before feedback. I’ve never had success with this. It just moves the feedback to a different frequency, makes it attack slower, or makes no change at all.

Separate Speakers

From the Meyer Sound Design Reference by Bob McCarthy ©1998:

One solution is to double the number of stage monitors and separate them into music and vocal systems. This has the advantage of allowing for separate EQ and, in addition, the musicians find it easier to localize their voice and their instruments since they come from different positions.

Thanks to ra byn for tipping me off to that one.

Turn Down

While on tour with Ringling Bros., I found that in some arenas I had GBF for days and in others I could barely get the main vocal up above the band. Our system and performers being the same, I had to accept the fact that my headroom changed from week to week. The audience didn’t know it was different, so as long as the balance was good I could adjust the overall level as necessary.

Conclusion

Your best tool for controlling feedback in live sound is stage layout. That means microphone placement, speaker placement, and instrument/source placement. Then you can work on the mix and if you still can’t get enough gain before feedback, use EQ. If you’re lucky, you’ll work with a synth-pop band (call me, Active Child!). If you’re unlucky, challenges abound, everyone’s a dick, and you just let it ring all night, cause fuck it.

What are your best tips for fighting feedback on stage? Comment below!


Copyright © 2021 Nathan Lively