Sound Design Live

Build Your Career As A Sound Engineer

When will this ‘immersive’ fad be over?

By Nathan Lively

Subscribe on iTunes, SoundCloud, Google Play or Stitcher.

Support Sound Design Live on Patreon.

In this episode of Sound Design Live my guests are the Director of System Optimization and Senior Technical Support Specialist at Meyer Sound, Bob McCarthy and Josh Dorn-Fehrmann. We discuss the perception of immersive sound systems, from marketing nonsense to powerful system design tool.

I ask:

  • When is this fad going to go away?
  • How is it possible for each audience member to receive uncorrelated signals? If every array source is covering the entire audience, won’t every audience member experience a 5x comb filter?
  • From FB:
    • Robert Scovill: Is Galaxy, when it is used in immersive systems, considered a “spatializer” by a given definition? I know Meyer are incorporating delay matrixing within the unit to achieve the spatial aspects of their Spacemap Go application, but I’m curious if units like Astro Spatial and L-ISA, TiMax, etc., are functionally – or mathematically – different than what Galaxy has to offer. How does Meyer define an “object” – is it a speaker output? Or an input source to the spatializing device?
    • Aleš Štefančič: I was wondering how far into the audience the immersive experience can be achieved before all those separated signals become combined, and does that then cause cancellations in the back of the room?
    • Lou Kohley: When will this fad pass? 😉 Seriously though, Does Meyer see Immersive being commonplace or as a special thing for specific spaces.
    • Gabriel Figueroa: What do you see as the pros and cons of immersive in theaters that cater to both music and spoken word? Especially rooms with difficult dimensions, where traditionally you would add a speaker zone/delay, but now you could theoretically add not just coverage but imaging as well!
    • Robert McGarrity: Total novice for immersive programming, but where do you delay to? Is there a 0 point?
    • Angelo Williams: Where Do we place audience mic’s in the room for capture as objects?
    • Lloyd Gibson: I thought 6o6 was against stereo imaging in live sound because of the psychoacoustics and delay/magnitude discrepancies seat to seat. Does this not apply here or is there a size threshold where it can be successful?
    • Sumeet Bhagat: How can we create a good immersive audio experience in venues with low ceiling heights?

It’s a totally different experience mixing in a wire versus mixing in the air. That’s the beauty of immersion, but you have to be able to pull it off.

Bob McCarthy

Notes

  1. All music in this episode by LXIV 64.
  2. Spacemap Go
  3. Quotes
    1. Noise is the number one complaint at restaurants.
    2. There’s no upside to unintelligibility, but…intelligibility isn’t the only thing. We’re willing to give up some of that approach [of mono center clusters] in order to get some horizontal spread. People are willing to give up perfection and intelligibility in order to get that horizontal experience.
    3. Spacemap is a custom panner, basically.
    4. Can I use smaller arrays if I use more of them? The answer is yes. Consider the Fender Twin Reverb. It does only one thing: reproduce the guitar, and it can ruin the experience for everybody because it’s so freakin’ loud. So how do those two twelve-inch speakers outdo our whole $100,000 PA? It’s an object device that only streams a single channel, while [the sound system] is reproducing 32 channels or something like that.
    5. Time doesn’t scale.
    6. It’s a totally different experience mixing in a wire vs mixing in the air. That’s the beauty of immersion, but you have to be able to pull it off.
    7. One place I throw up a big red flag is people wanting to play matrix games with their under balconies and front-fills. It’s like, stop it stop it stop it.

Transcript

This transcript was automatically generated. Please let me know if you discover any errors.

Welcome to Sound Design Live, the home of the world’s best online training in sound system tuning that you can do at your own pace from anywhere in the world. I’m Nathan Lively. And today I’m joined by the Director of System Optimization and the Senior Technical Support Specialist at Meyer Sound: Bob McCarthy and Josh Dorn-Fehrmann. Bob and Josh, welcome to Sound Design Live.

Hi, Nathan. Welcome. Thanks for welcoming us, I guess.

Yeah, good to be here.

Okay. So I definitely want to talk to both of you about immersive system design. That’s what we’re here to talk about. A lot of people sent in questions. It is an exciting or polarizing topic, depending on how you look at it right now. But I hope by the end of today’s conversation you may have some more information about it, and you may feel differently about it. We’ll see; I may feel differently about it. But before we do that, I would like to know from each of you: what was the very first concert you ever attended?

Can you remember? Whoever can remember first.

That’s easy, if you consider a concert at my elementary school gymnasium. That was Charlie Fer and his band, and they played “I don’t give a [blank] about a greenback dollar.” And they literally did that because it was in the Catholic school auditorium, so they couldn’t say “damn.” And I thought, wow, this is really cool. We’re all at this concert together and everybody’s cheering and they’re playing Peter, Paul and Mary songs just like my records. And I didn’t even know such a thing was possible. This is really cool.

Oh, man, that’s way better than mine. Mine was a Christian artist of some sort. I was really involved in the church when I was a kid, and I think it was Rebecca St. James. A Christian concert was what I did first.

And then you were both steeped in religion from a young age.

Yes. I grew up in Louisiana, and I moved to Texas.

The thing about my first concert was that it was people I knew; I went to school with their younger brother. So it was, like, real people. That planted the seed in me that real people can play music for an audience. And that was like, okay, this is awesome. I want to be part of this. And there you go. Right there.

Then, of course, my first thing... go ahead.

My first big rock show was Grand Funk Railroad.

Oh, yeah.

“I’m Your Captain.”

I’m glad you pointed it out, because I think it seems like a magic trick for a long time. Right? There are, like, these magic things happening on stage that are making us feel feelings. And it kind of seems distant. We put the artists up higher, like we’re down lower. We’re disconnected from them in a way. So when you start to meet those people and see that they were once like you, and also maybe knew nothing about music or how to play music or audio or physics or anything,

and then they learned that stuff, then in your brain you start to see, like, oh, maybe I can get involved.

Yeah.

Totally. I also got into theater really young. I remember watching shows at high school productions, and being in elementary school and going to see Anne Frank or whatever. And it was funny. We saw Anne Frank in Lafayette, Louisiana, at the big performing arts center. And at the end of the show, we got on the school buses, and the person playing Anne Frank was smoking a cigarette outside, and it totally ruined the magic and the spectacle. And that was probably the first memory I have of: oh, this is something that actual human beings do.

It’s very interesting.

There’s a person inside that mouse costume.

Yeah.

Well, another seminal event like that for me was when John Huntington wrote his book on control networks and control systems. Exactly. And it’s like, I had known John for ten years, and it’s like, well, gee whiz, if John Huntington can write a book, I can write a book. Seriously. It was that much of a jolt on the head. And that was a big piece of pushing me forward to write. Yeah.

It’s really helpful when we see our colleagues doing something like, oh, this person can do it. I can do it.

You don’t have to be a college professor, or have mixed the Beatles’ albums, or be Harry Olson to write a book about audio. You can write if you’ve got something to say.

Yeah.

And I remember when he then went on to self-publish a later edition. So he’s been a good role model for a lot of us who want to, like, publish and stuff.

Exactly.

So, Josh, when are you coming out with your book?

Oh, man. I wrote a thesis for graduate school while I was on tour, and that was hard enough. And that was about 50, 75 pages. And it’s on restaurant sound design. So it was a great excuse to tour around the country and eat at great restaurants and talk about noise and how to elevate the dining experience.

Would you mind sharing a couple of pieces from that? Like, what was one of your biggest takeaways from looking into a lot of restaurant sound design?

Well, yeah. So noise is the number one complaint in restaurants. Right.

And they tend to just make that worse by putting sound into the space.

Oh, yeah. And we deal with this all of the time in installs: churches, theaters, wherever. But the same thing happens with restaurants. And one of the things about noise is that at a certain SPL it starts activating your fight-or-flight sort of mentality. And they see that as things get louder, the rate of consumption of food and drink actually goes up. And I think it’s somewhere near, like, 20% in some of the studies I was looking at.

That’s actually good for the bottom line.

So imagine something like a Chipotle.

People are stressed out.

Yeah. That’s why you go into a Chipotle, and it’s just concrete walls and glass everywhere.

Really. They did that on purpose.

Partially. I don’t know. You can walk into these fast-casual restaurants and that’s the architecture. And then that architecture trend has carried over. And so there’s all sorts of synesthesia-type research going on about how frequencies affect taste and all sorts of different things. It was a very interesting thesis. I went to grad school at UC Irvine in California for sound design for theater. But, yeah, I was very interested in that. And then it all sort of came together right before the pandemic, at a restaurant called Verse in Los Angeles.

Manny Marroquin, a very famous mix engineer who owns Larrabee Studios, took over a restaurant space right next to his recording studio, and we put a Constellation system in there for full acoustics. You can use Spacemap Go, and you can also have a PA; there’s an ULTRA-X40 system of PA on sticks. It’s basically my thesis in a restaurant, and it actually exists now. They have a fiber line connecting to the recording studio, and the RT60 of the room is like 0.5 seconds. So it’s like a studio inside the restaurant.

And we adjust the acoustics for whatever bands are playing. And then we also use a Constellation technology that we call voice masking, so we can sort of isolate the tables. That way you’re having a nice conversation with someone and you don’t have to yell across the room or hear other people’s conversations.

I feel like we should do a whole other podcast on this, because now I’m wondering. I was thinking that I could sort of pitch customers and clients on my work, and on sound systems in general, by saying: hey, the better the sound, the more money you’ll make. But it sounds like that’s not always true. So really, we should make the sound worse to help them make more money? But then their customers are also going to be stressed out. Where’s the connection there?

Yeah. I think it depends on the goal of the restaurant. It’s like any good system design: what are your goals and what are you trying to accomplish? And then physics gets involved as well. Everyone will love a better acoustic up to a certain point in a room. If your room sounds too dead and you don’t energize it with reverb, and it sounds almost anechoic, like you walk into a cinema and you’re eating, that’s not a good dining experience. But if it’s got a little bit of an uplift, where it elevates and has a little bit longer reverb time and more early reflections, then you have an energetic room.

The problem with restaurants is you have too many reflections, you have so many hard surfaces, so many things. And then you have people that start trying to talk over each other.

Got it.

And it just creates white noise. So, yeah, there is a balance, and you have to find it. That’s the job of the architect and the acoustician. The cool thing about Constellation is you can build a dead room, and then we can make the room whatever you want it to be and change it at the push of a button. So you can do the tables when people are dining, and at the push of a button when the band gets on stage, it can now be a concert hall or a theater or whatever you want it to be.

And Bob, would you agree that there is this balance, where the sound needs to be good enough in a commercial or restaurant space that you feel safe and you want to stay there, but not so dead that you’re not interested in being there and you don’t want to, I guess, drink and eat? Have you seen that in the wild?

Well, I think that if a restaurant has overly absorbent acoustics, which is so rare; where do you find that? Maybe in the old-school steakhouse with the furry booths kind of thing. So if it’s really dead, you’d better have people far apart, because if it’s completely dead and people are close, then you’re literally hearing everything, exactly and clearly, that everybody else is saying. So the dead restaurant and the booth, those sort of go together, because you’ve got separation then.

But what I find is that you have this situation where the background noise tries to fill up that void, making people feel like they are not alone, that the place is alive. But some of these places will have these sensor mechanisms that raise the background music to make sure it’s over the talking, and that’s, of course, in my mind, backwards. It should go down. If the place is already so full of people talking and having a good time, don’t send the music up; they’re already having a good time. Just bring that thing down and de-escalate, so that people don’t have to have the shouting experience, and the “what?” and the “what?”

And that feeling where you’re with a party of eight and you really are only able to talk to the person on your left or on your right. And that’s really it.

We’re talking about immersive experiences, and a restaurant experience is an immersive experience. You’re surrounded by people. You’re dealing with the various acoustics in the room. Verse restaurant in LA, and a couple of other restaurants, have a ton of speakers, and they do a lot of other crazy things with them. But the experience of being surrounded and experiencing what’s happening in the restaurant is key. And so we do things like raising the acoustic to make it a little more vibrant and energetic in the bar area. So it’ll be more vibrant by the bar.

And then the rest of the restaurant will be a little more quiet, with less reverb, so that people can have a better conversation. And you can sculpt all of this with the technology, with Constellation. But that’s one of the many tools of immersive audio. And I think reverb, reverberation, and room acoustics are a side of immersive audio that people are starting to get into more and more. But then you have the other side, the more familiar one: speakers across the stage, speakers all around you, moving sounds around and doing things like that.

And so what is immersive audio? That’s a big question. To me, it’s a marketing term, and whatever term you use, whether it’s hyperreal, immersive, or whatever, it all goes into the same bucket: it’s an experience for people, live and in the real world.

So for me, the ultimate restaurant-plus-immersive-audio experience has got to be Chuck E. Cheese, man.

Exactly.

With those animatronic cheese balls on the stage.

There you are, so in it. There are no windows; it’s just a warehouse, everything’s blacked out. So it’s just this experience they’ve created.

That’s probably my first concert experience, actually robots at Chuck E. Cheese.

Okay, Josh, thank you very much. You’re a great co-host. We needed a transition from restaurants into immersive. And the first thing we need to talk about is the tough stuff, because there are a lot of people listening right now, like me, who are thinking: when is this fad going away? Why do I need to care about this? And people like me typically try to ignore things until it’s something they have to know tomorrow. So I’m not going to look up the directions to the airport until I have a ticket to leave tomorrow.

And so I’ve been ignoring all this stuff about immersive for years. A couple of years ago, I was in Orlando for a conference whose name I can’t remember, and everyone was showing off their immersive systems, and I thought: this is really fun, but I don’t need to worry about it, because this will never be a part of my life. It’s so escalated in terms of complexity and expense that I’m never going to work on something like that. Fast forward to this year’s Live Sound Summit.

And we’ve got Robert Scovill presenting about why he thinks immersive systems are so cool, and why he’s pitching them to event producers for tours that he has coming up. And it turned into kind of a polarizing thing, where it felt like we had people who had drunk the Kool-Aid, or were on that side of the fence, saying: this is so cool. And then people like me, who are still on the other side of the fence, or on the fence, saying: but wait, is this just marketers trying to sell me more speakers?

So we’re all friends here, and I know you guys don’t take any offense to me saying things like that. But I feel like we need to go through this conversation before we get into some of the fun system design stuff. So I want to give each of you a chance to say what excites you about this idea of immersive sound, and what we can do to allay people’s fears, to take away the fear that this is something that is going to be forced on people.

Is that a weird thing to say?

No, I’m there with you, man. And I was there with you up until about two years ago. What’s interesting is, from our company’s perspective, we’ve been doing immersive audio for 30 years. One of the first products John Meyer made was a subwoofer for the touring production of Apocalypse Now in quadraphonic. That was one of the first. The cinema world has been doing it for a long time. Theatrical sound designers have been doing it for years and years and years. And so in this live audio world, where we have a stereo environment or even a mixed mono environment, now we’re moving into something different.

It’s scary, and I think people have every right to be scared. But before I talk more, let’s bring in Bob, because he’s been working with stereo and mono systems for all of his career. So, Bob, go for it.

I am not an immersive evangelist. It’s not my role. What I do is try to give you guidelines so that if you’re going to make an immersive system, you make one that’s going to work and achieve your goals without ruining your other goals. For me, the laws of physics still apply. The laws of human perception still apply; the acoustic realities, the interactions between speakers, all those things still apply. So now you’ve decided you’re going to be immersive. Here are the rules that you have to go by. Or the guidelines.

I more likely look at them as guidelines than rules, because nobody wants rules in a whole trade of cowboys. And so if you’re going to do this, these are some guidelines to help you succeed. So to me, I go back to the easiest thing to think of. Okay, if we’re going to make a successful system, the first thing it has to be is intelligible enough for people to understand the material. In the world of theater, they have to understand the words. In the world of houses of worship,

they have to understand the words. In the world of rock and roll, it’s pretty helpful to understand the words, although a lot of times it’s not sung with that kind of clarity, and you can bend on that. But it is really helpful to have. There’s no upside to unintelligibility. But if you look at why we don’t have mono center clusters all around the world doing all of our shows, it’s because intelligibility isn’t the only thing. We’re willing to give up some of that perfection,

and the absolute bestness of that approach, in order to get some horizontal spread. And I think a big piece of it is that most people are born with two functioning ears, and you want to hear a horizontal panoramic spread, because having things spread over a horizon tickles your brain in a really positive and engaging way. So left and right is here to stay. It’s not going away. And people are willing to give up perfection and intelligibility to get that horizontal experience. And then that brings you to the next big chunk, going to three channels:

LCR. The world of cinema crossed that road a long time ago, and they were very troubled by the fact that if you just go left and right, as soon as you’re one seat off the center, you image to that side anything that’s panned to the center. And it’s an unsolvable equation, no matter how much somebody tells you they’ve just invented a new magic filter that time-smears and blah, blah, blah. I don’t want to hear about it. You sit one seat off the center in an arena, and everything mixed mono is on the left side.

Okay, everybody knows this. We don’t want to admit it, but everybody knows. So the deal is, if you want the vocal or some center image to stay in the center, you need a center channel. That’s why you have a dialogue channel. But you now have to not go and put everything in all three channels. You can sort of put a lot of things in left and right, but when you start putting things in left, center, and right, now you’ve got a problem, because they are going to have all sorts of fights:

the correlated comb-filter fights that we all know about. My life’s work is screaming and putting up flags about this subject. Once you go to this, you’ve crossed the line, and now you need to take a decorrelation approach. That is, I’ve got to put different things in the center than I put in left and right. And if I’m going to take that approach, that center channel has to reach all the seats if it’s going to carry the big voice, the big star, the lead of the show. If it’s theatrical, it’s going to carry the vocal content of the show.

It can’t just be a 90-degree speaker that covers one half of the room, which you can get away with on your left and on your right. So now you have a pretty hard and fast rule: if you’re going to make a channel as a standalone, it has to actually cover the whole room. And that is the key thing. Once you’ve crossed to three and you’ve got left, center, and right, well, now, crossing over to adding surrounds on your sides and on your rears and on your overhead.

Those are just more versions that follow a similar set of guidelines.
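
To put a number on Bob’s “correlated comb filter fights”: here is a minimal sketch (mine, not from the episode; the numbers are illustrative) of where the cancellations land when the same signal arrives from two speakers over different path lengths.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def comb_notches(path_difference_m, f_max=2000.0):
    """First comb-filter notch frequencies (Hz) when the same correlated
    signal arrives from two sources whose path lengths to the listener
    differ by path_difference_m meters."""
    dt = path_difference_m / SPEED_OF_SOUND  # arrival-time offset, seconds
    notches = []
    k = 0
    while (2 * k + 1) / (2 * dt) <= f_max:
        notches.append(round((2 * k + 1) / (2 * dt), 1))  # cancellation at odd half-cycles
        k += 1
    return notches

# One seat off center, with the left main about 1.7 m closer than the right:
print(comb_notches(1.7))  # first notch near 100 Hz, then repeating every ~200 Hz
```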

Yeah. Bob and I joke a lot about it: okay, you’ve spent all of your career separating coverage and making sure that everything is separate but equal, and now we’re doing the exact opposite and just overlapping everything. And people are like, well, what about the comb filter? And that’s where the processors are doing all of the magic. So, on your question, is this a fad? I think it’s a tool. It’s not the right tool for everything. It’s not the right tool for every situation. And for the exact reasons that you laid out: cost sometimes is prohibitive.

There are arguments from different manufacturers about why one is better than another and how you can save money. Some people say you can have a smaller line array. Some people say, since your headroom is spread out among your five across the front or whatever, you can use smaller speakers, because you’re distributing that through multiple loudspeakers. And there is snake oil in the industry. As a mentor once told me, audio is a series of compromises and snake-oil salesmen, and you have to figure out what is true and what isn’t.

And there’s a lot of snake oil in our industry, from gold and platinum power cables to all sorts of other things. And marketing is a thing. People are trying to sell speakers with this, and I don’t think they’re being honest if they say they aren’t trying to do that. But with immersive audio systems, what we did with Spacemap Go: this is a technology that’s been around for almost 20, 25 years, and what we said with Spacemap Go was, okay, let’s just make it free. So it’s a free update to your Galaxy. And when we get to system design, what’s really happened from a marketing perspective is that these new up-and-coming immersive systems require you to have a lot of fixed loudspeaker locations, and they say you must have five across the front.

You can have seven across the front. You can have this many on the sides. You can have this many above you. Dolby has a spec on how to design sound systems for cinema. And so people are used to these rules, and they treat them as static: I have to do this, and I have to have this many speakers in order for this to work.

All right, let me pause you, Josh, because we’re about to bust a myth. So let me introduce the myth, which is something that I believed until a couple of months ago: that immersive meant five times the expense and five times the complexity, because you take your normal mono system, and then when you upgrade to immersive, everything gets multiplied by five. And that makes it really easy for me to ignore and say, oh, this is a fad, because no one can actually support this kind of expense and complexity.

We can barely get mono and stereo systems right; how can we do this? And so you have been a big proponent of pointing out to people how flexible this is, and that it’s a container for new system locations and system designs, not rules. Okay, so continue.

Yeah, not rules. The only rules that we like are physics, and those physics rules still apply: pick the right speaker, put it in the right place, and point it in the right direction. Now, that’s different for mixed mono and stereo systems than it is for immersive systems. Those are the only three rules. Pick the right speaker, put it in the right place, point it in the right direction. Now, we at Meyer Sound with Spacemap Go have a lot more flexibility in terms of what you can make an immersive system out of, because of our algorithm, and we can get into the weeds about this.

But the Spacemap algorithm, what a Spacemap is, is a custom panner, basically. So you can make a Spacemap system out of one speaker, and that’s a panner that you make. And the difference between Spacemap and what everyone else is doing is that we allow you to make the panner. So let’s say you have a theater and you spend a ton of money on a five-across-the-front system, with speakers on the sides and all around you, like a full 360-degree shell of loudspeakers. Where most of these immersive systems are failing is that they only let you drive that system one way.

So if I use their GUI and my object panner, and I move the object for my guitar to the top left corner of that panner, it’s only going to come out of the top left side of the sound system. The difference between that and Spacemap is that you can draw the Spacemap to control the loudspeakers however you want. Otherwise it’s like having a Ferrari and driving it like a Prius, because you’ve spent all this money on loudspeakers, but then you’re only allowed to move sound around in very certain ways.

Whereas if I can draw a Spacemap for that room, I can have a sound zigzag and zip around every other loudspeaker, send to all loudspeakers, send to just the vertical and then crossfade down to the sides. You can do some incredible things with the Spacemap technology, because Spacemaps are abstractions of loudspeaker layouts that you draw. So instead of having one fixed layout, you can draw the Spacemap to be whatever you want it to be, which is very different from how those other systems work.

But ultimately, the technology that all of these companies are using, including us, is a big crosspoint matrix, and they’re using either level and delay together, just level, or just delay. And then there are all sorts of other algorithms that people do and do not tell you about. Most companies don’t show you what’s going on under the hood, whereas you can see the matrix values in the Galaxy while this is happening, to see what math is actually going on. So, yeah, this is something we can get into, but we can make a Spacemap system, an immersive system, out of three speakers. Put them in a triangle,

and if you’re in the middle of that (and those speakers can be on sticks), you can pan sound around those three speakers. It’s like a sandbox of system design. And the reason for that is very particular, because when Spacemap first got started, it was designed for a geodesic dome. Back in 1979, Steve Ellison was in Australia, and he had to work on an Apple II computer. There were speakers all along this geodesic dome, and he had to figure out a way to mathematically move a sound around to each one of these speakers.

And it was inspired by the geodesic dome. And then a couple of years later, he and Jonathan Deans started a company called Level Control Systems. And the first show that the technology got deployed on was an arena touring show called the George Lucas Super Live Adventure. Yeah. And so there were over 150 people in the audience for the first show that they deployed this technology on. That was, like, in the early 80s. And so since then, what we’ve done is work with sound designers, really in theater and big spectacles, and keep adding to the tool set that’s needed. Again, audio is a series of compromises, and live sound...

What we do as live sound practitioners is incredibly difficult, and so we need to have a system that is flexible enough to overcome the challenges that we face on a day-to-day basis. Oh, I can’t put my speaker there because there’s a wall? Okay, well, just draw a virtual node in the Spacemap and make a virtual speaker there. So all of these tools that have been added to Spacemap over the years have really evolved with the mindset that it’s a live sound tool.

It needs to be flexible and scalable and easy to deploy. What we didn’t do for years and years and years was make it easy and accessible to use. It was very expensive. And some of the new immersive processors out there from other companies are incredibly expensive, and they require you to have two of them, and they almost handcuff you. So you buy this Ferrari’s worth of loudspeakers for your room, and you buy this processor, and then you can only drive it like a Prius, because they only let you move sound around in the way the room looks.

You’re getting all worked up.

I know. It’s just frustrating, because the rules are a marketing thing that these companies are pushing. What’s cool about the Galaxy is that we can... it’s just marketing.

You want to make something that people can reliably make work, so you put some guardrails on it. Their approach is to make a thing that shoots straight down the middle of the road, and it will work in middle-of-the-road applications, and it will be repeatable, and it stays in this safe, repeatable place. What we have done, because it goes back to the start, to this creative place, is to make a non-guardrail version, but it comes in kit form that you have to assemble yourself.

So you have to say, okay, here it is, there’s a pile of stuff on the floor. It’s like a bunch of Legos. You can build it into anything, but you have to build it. You have to conceive of the sound design; it’s not something that just pops up into your brain. And as for that one-size-fits-all sort of mentality, that runs into realities such as the shape of the physical room and where you can put speakers. If you make it so that it’s always just for a standard arena shape,

okay, there you go. But we have made a thing that is ready to go in whatever shape you’re in. My first one was in a literal planetarium. It was Under the Sea, The Little Mermaid. And we had speakers around the circle, 360 degrees of laterals. And we had speakers in the center and speakers up in the dome. And the mermaid flew up and down, swam up and down, and the sound image came up with her as you turned on the lower speakers or the upper speakers.

And the characters all ran and swam around the dome. You could image to these things. And this was in 2001 at Tokyo DisneySea, and we literally built that thing for that place. And those trajectories are only for that application. So it’s not universal; it’s a custom fit. I don’t want to take the approach of disparaging other platforms. My thing is that we have a platform that can make a five-channel system with laterals, and that can also make six channels or four channels or two mains and 19 surrounds or whatever it is.

We’re ready to go. Give me an application, and I’ll bet we can do what you’re looking for. That’s what I have to say: I bet we can do it. It just might take a little time, but we can build something to that shape.

Yeah, and we can shape the Play-Doh however we need to. If we need to make the panner look like the room and behave the way all of these other panners behave, then we can do that. But that’s just a fraction of what a Spacemap can do, and it’s really about creative imagination. The other day we had someone come up to us and talk about a need for an escape room, a maze, to sort of guide people along. It’s a very intricate, zigzagging type of room with loudspeakers everywhere.

But the way you would do that with most panners is very difficult. Well, with Spacemap, since they’re abstractions, we drew the layout as it would look with loudspeaker nodes. But then we used what are called virtual nodes, and we just made a linear fader. So as you drag your finger across the bottom of the Spacemap, it activates the speakers in the linear order that you want the user to experience as they’re walking around that room. So this abstraction is really cool, because you can move beyond the plan-view 2D representation of the loudspeakers that these other products have.

You could do that; it’s still fine, and it’s totally useful, especially when you’re first grasping how to deal with immersive systems. But then you can do things like: I want this sound to play out of the speaker in front, and then the speaker completely behind me, and then above me, and then zigzag. And you can make these really fun Spacemaps. I have one called a randomizer that I show in some of our work. The randomizer was designed to emulate crowd noise in a stadium during the pandemic.

And what it does is randomly send level to about six loudspeaker locations, adding random level changes in an existing room. In this case, we used it to represent stadium audience sound with a mono signal.

We made it sound like it was all surrounding you, coming from everywhere, 100,000 people.

That’s cool.
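
Here is a toy sketch of the randomizer idea Josh describes, assuming a simple random walk on per-output levels; the function and parameters are illustrative, not Meyer’s actual implementation.

```python
import random

def randomizer_gains(num_outputs=6, steps=8, max_step_db=3.0, floor_db=-12.0):
    """Toy take on the 'randomizer' idea: random-walk each output's level
    so a single mono crowd track stops sounding like one point source.
    Returns one gain vector (in dB) per time step."""
    levels = [random.uniform(floor_db, 0.0) for _ in range(num_outputs)]
    frames = []
    for _ in range(steps):
        levels = [
            min(0.0, max(floor_db, g + random.uniform(-max_step_db, max_step_db)))
            for g in levels
        ]
        frames.append([round(g, 1) for g in levels])
    return frames

for frame in randomizer_gains():
    print(frame)  # one row of matrix crosspoint levels per time step
```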

I want to address one thing for a moment: can I use smaller arrays if I use more of them? And the answer is yes. Think about that. If you want an object lesson in that, consider the Fender Twin Reverb. The Twin Reverb does only one thing: it reproduces the guitar, and it can ruin the experience for everybody because it’s so freaking loud. Okay, so how do those two 12-inch speakers outdo our whole giant $100,000 PA? Because it’s an object device that’s only streaming one single channel, and we are reproducing 32 channels or something.

Okay, so if you go to five mains and you partition your band into fifths, well, now each of those has headroom available because of the decreased complexity and density of the waveforms it’s reproducing. And I can tell you from experience, going back to 1974, listening to the Grateful Dead’s Wall of Sound, which was truly an object-based sound system: each instrument had separate columns of speakers, and if you had put them all together, it would have been a big giant blur.

But as separate events, blended and mixed in the air instead of mixed in the wire: there you go. Now you have the ability to spatialize, and you can still put the same amount of acoustic energy into the space. But of course, when you scale the thing and it gets too big and too far apart, now you’ve started to offset time, and you have a problem when you put the guitar 100 milliseconds away from the piano. Now you’re starting to get the experience of listening to the marching band at halftime at the football game, which, let’s face it, is not tight.

A marching band is not tight. So the thing about scale is that time doesn’t scale. You get this thing overly large, you get it into stadiums and things, and time doesn’t go proportional. It goes in milliseconds, and hello... hello... It’s a real issue.
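
Bob’s point that time doesn’t scale is easy to check against the speed of sound. A quick sketch (my arithmetic, not from the episode):

```python
SPEED_OF_SOUND = 343.0  # m/s

def offset_ms(extra_path_m):
    """Arrival-time offset in milliseconds for an extra path length in meters."""
    return 1000.0 * extra_path_m / SPEED_OF_SOUND

# Spreading sources farther apart adds delay that never "scales away":
for spread in (3.0, 10.0, 34.0, 100.0):
    print(f"{spread:5.0f} m apart -> {offset_ms(spread):6.1f} ms offset")
# 34 m of separation is already about the 100 ms guitar-to-piano gap Bob mentions.
```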

Yeah.

I was watching something with Robert Scovill, actually, talking about when he first did Rush in quadraphonic, and he tried moving Neil Peart’s drum kit around the room in the arena. And he said Neil stopped. Neil Peart stopped. And Robert will have to tell you the story, but he said Neil stopped and was like, what is that? And it was the propagation time of the cymbals or whatever coming back across the arena. And of course, he said that Neil Peart was good enough that he figured out the time offset and adjusted his drumming to match what was coming back from the other side of the arena, which is amazing.

Yeah.

Thanks for bringing that up, Bob. I think that speaks to my question: isn’t this just a 5x in expense, or is it more like redistributing complexity and expense? Maybe there are some examples each of you could share, because I think the application for sound design when it comes to theater and circus events is really clear. The sound designer says, I want this, this, and this to happen, or it’s in the script: it says this happens and the sound moves around. But have you seen successful applications?

Are there interesting applications for concert, corporate, some of these other places that a lot of us work in, where we might be wondering: is there an application I should know about as an option for me as a sound designer or system designer? Have you guys seen that? Could it be a tool in those environments?

Oh, yeah, absolutely. We just did the AES Nashville event, and there was a spring training event. And one of the experiments that I personally wanted to perform was to take someone who has worked in stereo most of their life, give them as minimal training as possible, put them in front of a fully immersive system, and see how easy it is for them to work. And so we invited Pooch to come in and work on it. And we did five across the front.

There were also existing line arrays, so we tied into those as well. We did a full surround system running on only two Galaxys; processing-wise, it was two Galaxys’ worth of outputs, 32 outputs, I think, and speakers. And that experiment seemed to work pretty well. What Pooch found was that he had to reduce the amount of dynamics and compression he was putting on things, he had to use less EQ, and he could space things out the way he wanted. One thing that also came from that was that instead of using five across the front, we found ourselves wanting a little more width on the outside of the stage, and so we could easily have done a left-center-right and then had two sort of mid hangs to really bring out the width of the image.

To understand how all of these systems work, let’s talk about what we’ve done in stereo, which is: we’ve had inputs, those inputs have gone into something like a console, and out of the console we’ve always had either one or two channels, stereo or mono or mixed mono. And those have been distributed to loudspeakers, amplifiers, whatever, across the stage. Now, with immersive systems, you have your inputs, they still go into a console, but then out of the console, instead of having one channel or two channels, you now have 32 channels, sometimes 96 channels’ worth of outputs; whether it’s buses or auxes, you decide.

And so all of those new channels can be sent different things. You now have 32 pipes going into the loudspeakers, so there are 32 separate pathways in the instance of Spacemap Go. So my drum kit could be on three channels: maybe my kick and snare are one, maybe my overheads are a stereo channel, and now I can move my drum kit around as a group of things. While that sound is moving around those pipes, what’s happening is that these immersive processors are adjusting the level of a matrix row, and sometimes they’re adjusting the delay of a matrix row as well.

That’s what’s called cross-fading delay. So that’s sort of the basics of how immersive audio works. And then everyone’s got their marketing term and secret sauce for what math they’re using to do it. You’ll hear terms like wave field synthesis. We’re using Spacemap, which is manifold-based amplitude panning and barycentric panning. A manifold is a map, and you can actually look this up: there’s an AES white paper on it about manifold-based amplitude panning.
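
As a rough illustration of level-based crosspoint panning, here is a generic equal-power sketch; this is not Meyer’s manifold-based algorithm, just the simplest level-only panner that fits the description above.

```python
import math

def pan_gains(position, num_outputs):
    """Generic level-only crosspoint panner: one input panned across
    num_outputs speakers on a line, with position in [0, 1]. The adjacent
    speaker pair gets an equal-power crossfade; every other crosspoint
    in this input's matrix row stays at zero."""
    gains = [0.0] * num_outputs
    x = position * (num_outputs - 1)   # where we sit between speaker indices
    i = min(int(x), num_outputs - 2)   # lower speaker of the active pair
    frac = x - i                       # 0.0 at speaker i, 1.0 at speaker i + 1
    gains[i] = math.cos(frac * math.pi / 2)
    gains[i + 1] = math.sin(frac * math.pi / 2)
    return gains

# Sweep one object across a five-across-the-front system:
for pos in (0.0, 0.3, 0.5, 0.7, 1.0):
    print([f"{g:.2f}" for g in pan_gains(pos, 5)])
```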

So if you think about what a Spacemap is, Bob, it’s a map of the room. A manifold is technically a map, and the math that goes behind that is all there. All of that is to say that I think the expense of this is really in the processor. And the expense then carries over to other things. You’re now dealing with an element per output. So in a system where the amplifiers are not in the speakers, you then have to have a lot more speaker cable, which is way more expensive than XLR, up to each line array.

Well, you have to have separate channels. Say you’re doing side surrounds, with six surrounds going along the wall. If it’s cinema style, old school, that can be run at three speakers per output; you can run it on two channels, one two-channel amplifier. But if you’re going to do full immersive, you’ve got six channels. It’s going to take you three times as many amplifiers, and there’s no jumpering of speakers and cables to the next thing. So it’s all home runs. It’s all individual channels.

If it’s a two-way, now there’s a crossover involved; all of those things add up. So if you want to make things move all around, it’s going to cost you channels to do it. You have to have a discrete audio channel for each location.
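
Bob’s amplifier arithmetic in miniature; a toy calculation with an invented function name:

```python
def amp_channels(num_boxes, boxes_per_channel):
    """Amplifier channels needed for a ring of surround speakers.
    Old-school cinema zones daisy-chain several boxes on one channel;
    discrete immersive needs a home run per box."""
    return -(-num_boxes // boxes_per_channel)  # ceiling division

print(amp_channels(6, 3))  # cinema-style zone: 2 channels
print(amp_channels(6, 1))  # discrete immersive: 6 channels, 3x the amplifiers
```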

Yeah. And there’s this other concept in immersive audio that Bob and I talk about a lot: granular movement versus wider movement. And the way to think about this: let’s say you have four speakers, and you put those four speakers in the four corners of the room you’re in. If I want a sound to move around, it will move around. But depending on how far apart my speakers are spread, how close together they are, my ear-brain mechanism and the internal FFT transfer functions that are happening will determine where that sound is.

And we have some fudge factor. Audiologists call it the cone of confusion. Right in front of you, you can locate to within about one degree, but out toward your peripheral vision it goes to something like 15 degrees, and behind you it’s worse. We as mammals, much like with vision, can really only localize finely on the horizontal. But anyway, that’s the sidebar. So you have four speakers, and you move it around. Now, let’s add three speakers on each side.

That is more granular. And if I move the sound around, I can locate a lot more easily where that sound is coming from.

It seems like you’re going from coarse to fine.

Yeah, coarse to fine grain.

I kind of look at it as: do you have hours on the clock? Do you have minutes on the clock? Or do you just have the cardinal directions, north, east, south, west, that sort of thing? You look at your basic old-school cinema surrounds, your 5.1: that’s just the cardinal directions. There’s left surround, right surround, rear surround. And then there’s the front, which is three channels. So the front is more granular, but the sides are not. Whereas as you break into more discrete channels, you increase the granularity and your ability to move and locate things individually and to have separation.

I think it’s a really important thing to consider, just from a creative point of view: what are we trying to do? Because that was Nathan’s original question here; in order for this not to just be a fad, what’s the creative drive behind it? So one thing is the ability to place audio content in locations. And those can be static, so you can separate things out: you can go five mains across the front and separate out the band. You can hear a bluegrass band, and you can hear them all separated and then mixed and blended in the room, very much like a magnified version of what you would experience if you were standing with those musicians in your living room, like enhanced realism.

But I don’t need that mandolin player to be running around on the ceiling; that’s not really part of the creative event. Okay, so there’s moving things, and then there’s static separation. And left and right is not enough, because we end up with that perpetual problem: as soon as you’re off the center, everybody’s brain pans things differently. So the panning is only right for somebody that’s exactly on the center, and everybody else is governed by the physics of their binaural listening system, and it’s never going to be solved, no matter how much somebody tells you they’ve solved it.

So then, when it comes to motion, there’s a whole lot of stuff, but you’re getting into creative content and special effects. There can be things like theme parks: stunt shows, or animation like Pirates of the Caribbean, where it’s basically this gigantic projection screen in front of you that’s 360 degrees, a full planetarium dome that you’re in, in your little boat. Well, you can place the sound image all along that dome, and there’s video flying across.

You can make the movement of the cannonball coming across. That’s fantastic usage of this medium, making motion link up to video. Now, we’re asking: is this all just a fad? I’ve concluded, in terms of the five-times expense, that it’s video that’s the fad. And once video is done and people are tired of it, they’re going to give all that money that the video people normally had to us, and then we can do our five times. All right, the dream. But seriously, you have this capability to move things.

Now, what are you going to do with that? You have to have something that makes sense. If you’re doing a classical music concert, moving things around is stupid, but spreading them out is fantastic, because when you listen to a real symphony, it isn’t all of the violins coming together in one point with the oboe. It’s not that way. They are coming from separate locations. So it’s a beautiful thing to hear that. I can tell you, one of my most truly exciting immersive experiences,

I’m talking a full goosebump experience, was at Natasha, Pierre & the Great Comet of 1812, a theater production running on Broadway that had ten sound systems distributed through the room, each of them capable of covering the whole room. And the actors would come out not only from the stage; they would actually have parts where they were on the balcony, singing to you from the balcony. Well, there’s this great wedding scene, and everybody, the whole cast of 36 or whatever, is spread out over the room.

And then they sing this song together. It’s this very gospel kind of chant thing, and it’s coming out of all ten sound systems, but it’s a choral blend that’s not all collapsed down into left and right, or down into one pipe. It’s literally blended in the room, which is what you get when you stand in a church with a choir. And it just blew my mind. And that’s through a sound system. That’s the thing: using the ability to mix in the space, because it’s a totally different experience mixing in a wire than mixing in the air.

That’s the beauty of immersion, but you have to be able to pull it off and have the thing scale right. Yeah, as a choral blend with long sustains, it’s a beautiful thing. Now imagine if the same thing they tried to do was a super-tight, intelligible, hip-hop Hamilton rap coming from ten sound systems spread all over the room.

What do you say about corporate, and then the church? You mentioned those two examples for why this tool could be important. I think corporate is very useful. We have a couple; the Audi Experience Center, I think, is one that just opened up, and sound art museums. But let’s think about these corporate car shows. That’s a great place for this. When you have your CEO that’s about to walk out, and they need a spectacle of sound and movement and stuff, that’s great. But when they start speaking, we need everyone to focus on our presenter.

Let’s say you’re doing a big presentation of a product, and your CEO is on a microphone and walking around the stage. Well, you could put them on a tracker and have the sound follow them as they walk around. How distracting that is depends on how you feel about it. I find it extremely distracting sometimes when the sound is moving as the person walks around, but it’s totally possible. And if there’s a band on stage, we can spread the band out and make them sound like where they’re actually standing, very realistic, and just add depth to the feeling. For corporate, that’s one way. And the same sort of rules apply for churches.

It helps out with houses of worship to really bring the focus to the pastor, wherever the pastor is. And then during worship, spreading out that music, spreading out where the choir is and where the drummer is and where the bass player is, really just helps immerse people. And then on the other side of that, with things like Spacemap: if you have a couple of lateral speakers out in the room, you can goose in some reverb from your console there.

And now you’re enveloping, using the reverb on the outer speakers and the dry signal on the inner ones, and you can really start mixing the room as a room. And that’s the thing: we’ve been putting things down these one or two very large pipes for so long, and those pipes are great. Stereo systems are great. Mono systems are even better in most live sound applications. But those pipes can only be so big. And what our whole careers as mix engineers have been about is carving out space, in the limited frequency spectrum that we have, for every single instrument.

And so what’s cool about this is that you don’t have to do that as much, because now you have 32 pipes instead of two. And now you can sculpt just by separating the pathways into the loudspeakers. And that’s the most important thing: you’re no longer frequency masking. What you’re doing is overlapping your speakers and separating your signals.

Mixing in the air. One thing I just want to mention about the houses of worship is that we need to talk to architects, because they love that fan shape, that super-wide fan-shaped room, and then they close the volume down with a fairly low ceiling. And with those two things, you want an immersive experience? Well, how are you going to do that? It’s a shape that really defies immersion, because your audience is spread across this super-wide thing.

You’ve got 160 degrees of audience, and to get from the far left all the way across to reach the far right, you’ve got to go across the whole middle. And it’s a really difficult thing. So you have to be realistic and calibrate your expectations. Balconies, those create a real problem. And then there’s the other really important thing. Let’s say, okay, you’ve got the budget for five mains, except here’s this one little proviso: they have to have clear sight lines. You used to be able to have your left and right down nice and sweet, in the right place in the room.

Now you’re going to have five mains, all at the same height as you would put a center cluster, so that the people on the third balcony don’t have their sight lines blocked. Right? And so now everything is 100 feet up. And to me, that’s a trade-off. You’d have a hard time telling me that’s a good trade-off, because you’re so disconnected from the show. You can’t beat the physics of how late you are to the floor, where all your prime seats are.

The sound system is arriving tomorrow with today’s newspaper.

Well, that’s a great transition, and maybe we can look at Gabriel Figueroa’s question. His question is a little bit long, but basically he’s working in a church and he’s wondering: is this an opportunity for immersive? He says right now there are only two arrays and a couple of side fills and some balcony fills, no delays or proper center coverage, so he’s looking at the differences between a new, correctly deployed system versus immersive for their next PA. Now, I should point out that immersive also has to be correctly deployed. But I actually have some pictures of his space, and I can send them to you if you want, if that would be helpful for you to talk about this.

But the important thing that you just mentioned is that the ceilings get lower and lower as you get to the back. And so to me, that seems like it’s probably not going to work for them, or at least not for the people in the back, if they don’t have a good space to consider immersive. Right?

When you have a low ceiling in the back, you have to take an inverted delay approach. So you basically have little speakers in the back that take a non-granular approach, the more traditional surround approach, and then maybe six rows forward you mount a larger speaker that’s high enough to make a granular surround to cover the main part of the room. I’d have to look at the exact physics of the room, but essentially those get linked together by Spacemap as derived nodes, as linked signals, so that you could pan the signal around and it would light up the big surrounds in a granular way, and then the ones on the outside perimeter,

those light up as groups, so they perform just a sort of overall rear, whereas the people in the center get the granular version. I did a church design recently. It’s a fan-shaped church, a very popular shape, and it has a fairly flat floor, but with these ramps on the sides that go up, and then there’s a balcony over the 160 degrees. Okay, so there you go. What you’re left with (there’s the dog) is the ability to do a full granular surround in the floor center, and then non-granular cardinal directions on the upper balcony and on the ramps on the sides.

You’re forced into that by the physics. You’d have to kill the people in the rear to get those speakers to fire all the way to the front, and the complaints alone are going to stop your surround fantasies.
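
Here is a guess at how the derived-node linking Bob mentions might behave; this is purely illustrative, and the real Spacemap implementation may differ.

```python
def derived_level(panner_gains, links):
    """Toy 'derived node': a fill output whose level follows other
    outputs, so panning into the big side surround also brings up the
    under-balcony speakers linked to it. links maps output name -> weight."""
    return sum(panner_gains[name] * w for name, w in links.items())

# The panner currently sends most of the object to the left surround:
gains = {"surround_L": 0.8, "surround_R": 0.1, "rear": 0.0}
under_balcony_L = derived_level(gains, {"surround_L": 0.7, "rear": 0.3})
print(round(under_balcony_L, 2))  # 0.56, tracking the big left surround
```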

So, Gabriel, I think you should definitely take a look at the three recent videos on the Meyer Sound YouTube channel about system design, with Josh and Bob going through some of this stuff; that should answer some of your questions. Because, as Bob’s talking about here, you’ll see that all of the sources need to cover all of the audience, and if they can’t, if there are blocked sight lines, as is the case with your people sitting deep under that balcony where the ceiling gets lower, then there’s going to need to be some reinforcement somehow.

And so as you’ll see in these videos with Josh and Bob, they explain how you solve all these problems. But it does start to generate some complexity as you have blocked sight lines and portions of the audience that are not visible to all sources.

Yeah, in the church market, under-balconies are another big thing, but there are tools to deal with these, built into most immersive sound systems. I agree: five across the front in a fan-shaped room feels almost like a marketing dream, especially on those extreme sides. But there's a way to do it in Spacemap: have a left-center-right across each section of seating and then control each section as a left-center-right together from a front-of-house perspective, or even just stereo systems, but not stereo in the traditional way.

Stereo here is where the left and the right covering one section of that fan are overlapped. The one cool thing about this is, let's say you have a smaller budget and something like a Galaxy: you have 16 available outputs. So you do a traditional PA up front like you normally would, and then for your Christmas Spectacular production you bring in a couple of extra loudspeakers. Well, if you have extra outputs on your Galaxy, just plug those XLRs into those speakers, and now you can send sound around for your special Christmas Spectacular sound effects, as well as still maintain the mix that you're using.

So yeah, there’s tons of options really depends on back to this course versus granular what do you want to do? What is the goal and the intent of the sound system?

Okay, so let’s get into some of these and let’s just see how far we can get. And then maybe we’ll even come back to some of my questions. But people are so nice to sending questions that I want to make sure we get to those. So Robert Scoville, I asked him, what do you want me to ask them about their system? And he said in Galaxy when it is used in Immersive systems, considered a spatializer by a given definition, and he doesn’t give the definition. So I’m hoping someone can say something about what a spatializer is.

He says: I know Meyer are incorporating delay matrixing within the unit to achieve the spatial aspects of their Spacemap Go application, but I'm curious if units like Astro Spatial, L-ISA, TiMax, et cetera, are functionally or mathematically different from what Galaxy has to offer.

Hey, Robert, I hope you're doing well. First question, the spatializer: Spacemap Go, and the Galaxy itself, is a loudspeaker processor. So the cool thing is, the Galaxy will still tune your PA and do all of the things that Galaxy has done for years, and now, with the free update to Spacemap Go, you can also use level changes. I don't know what the definition of spatializer would be, but it is an immersive audio platform like all of these others, in addition to being a loudspeaker processor. And we're not using delay. Galaxy does have a delay matrix, and you can set static delay times on a cue basis or snapshot basis.

But we’re using level based planning very similar to what all these other companies are doing. And the difference between the three companies that you mentioned is, yes, their math is different. They’re not talking about what math they’re using, and timeax is delay based. Lisa, I believe, is only level based with a little bit of delay. And then Astral spatial, I don’t know enough about to really, I think it does both. I think it does delay and level based, but there’s ways around it. I’m working on a project right now that is going to be using a sort of static cross fade delay matrix to move someone from an A stage to a B stage, so as they move, the delay time changes and steps for the outputs.

But Spacemap Go is level based. It's not controlling the delay matrix of a Galaxy. You can still control that with Compass.
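
To make the level-based idea concrete, here is a minimal toy sketch of level panning across loudspeaker nodes. To be clear, this is my own illustration with an arbitrary gain law and made-up node positions; Meyer doesn't publish Spacemap's actual math.

```python
import numpy as np

def level_pan(obj_xy, node_xy, rolloff=2.0):
    """Toy level-based panner: per-node gain falls off with distance
    from the object position. No delay is involved anywhere."""
    d = np.linalg.norm(node_xy - obj_xy, axis=1)   # object-to-node distances
    g = 1.0 / (1.0 + d) ** rolloff                 # arbitrary distance law
    return g / np.sqrt(np.sum(g**2))               # constant-power normalize

# Four hypothetical loudspeaker nodes at the corners of a unit room
nodes = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
print(level_pan(np.array([0.2, 0.2]), nodes))      # strongly favors (0, 0)
```

Moving the object coordinates over time, by hand or along a drawn trajectory, is all it takes to "move" the sound, since only the gains change.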

I hope that answered it. I can't comment on the Astro Spatial because I got Pfizer.

Okay. Robert says, secondly, ask him how Meyer defines an object. Is it a speaker output or an input source to the spatializing device?

Yes. So an object, in general terms, represents a channel. If I have an output or a bus from my console that's plugged into a Galaxy (I have an XLR from my console, and that XLR cable plugs into the input of a Galaxy), the object is that input. That object then moves around the Spacemap, and the Spacemap is the custom panner that you design. So if I have 30 loudspeakers, I can have a Spacemap that has all 30 in it.

Or I can have a Spacemap that has just four of those 30 loudspeakers in it, and you draw the Spacemap. Then on top of that we have what are called trajectories. Trajectories are pathways that you draw, and they control the objects automatically. So you can have tap tempo. If I want to move a trajectory around at a certain BPM, let's say I want the sound of my drum kit to go side to side, my cymbals need to move in time with the music.

I just tap in that tempo, and there we go: it's moving left and right. And you can draw them to be as complex or as fun as you want. For example, I show an example all the time called the sound-source T-Rex. My wife basically just drew a T-Rex as a trajectory, and I can load that on a channel and it controls the object and moves that sound around, whatever it is, in the shape of the T-Rex. And since it's on an iPad, you can expand it, contract it.

And this all happens in real time. That's one thing that no one else in the industry can do right now, which makes Spacemap really fun. So that's an object: an object is an input.

Okay.

So Aleš Štefančič has this question that I think will need a little bit of unraveling, because it expresses some assumptions, but I think it's good to get into, because they're probably assumptions a lot of people have. Most people are familiar with the phenomenon that as you move farther away from a speaker, its coverage seems to get wider, unless you have an asymmetrical horn. I think this is the thinking behind his question. He says, I'm wondering how far into the audience the immersive experience can be achieved before all those separated signals become combined.

And does that then cause cancellations in the back of the room? Now, right away: we've already talked a little bit in this conversation about the system design, and we actually want all of our sources to be covering the entire audience. So I think he's assuming that we want them to all be separate signals. So, Josh or Bob, do you want to try to speak to this question a little bit?

Well, of course, a speaker's coverage from an angular point of view stays constant over distance, but as a width in terms of meters or feet it's getting wider. That's the simple physics of it. When you're too close in, you're going to find that you're simply prohibitively close to something; the inverse square law is going to prevent you. You're just too close. If you get up on a ladder and stand next to the side surround, yeah, you're not going to have an immersive experience.

What we do is define the room from a design point of view. We have this thing called the go zone, and that gives you a fairly good guideline: you're going to have a 100% immersive experience inside of that go zone. From there, it's a gray progression out of full immersion. There isn't a place where it suddenly just locks in and you have it. As you get closer to the perimeter, you're necessarily getting closer to those laterals and farther away from the others.

And the physics are simply going to catch up with you eventually. The more that a signal is individuated, the more everybody, if they were all blindfolded, would point to the sound source. Where is the frog coming from? Everybody would point in the same direction, to the frog direction where you placed it in a Spacemap. And that's the key thing: are people consistently experiencing the same localization content? If you then map things out so that you have immersed them in a swamp full of frogs and cicadas and all these things around them, then everybody could point to this and that location. That's really the goal, and the more that you are towards the center, the more sure that experience is going to be.

Yeah.

And I would also say, one thing that people get wrapped up on is: okay, well, what do I do about fills? What do I do about all my subsystems? Five across the front in a lot of rooms won't cover the whole entire room, regardless of how pretty it looks in the prediction software, and the subsystems are still real. So if I'm sitting underneath a balcony and, for whatever reason, my five across the front are very high up, I could maybe have two front fills in front of me and, using what are called derived nodes,

do a stereo mix-down of what's happening up above me. One thing that I use derived nodes for a lot is, let's say, for whatever reason, you can't have your console in the room. So what you do is set up a 5.1 surround system in the booth where your mix console is, and that uses derived nodes. Whatever happens out in the room translates and mixes down to a 5.1 mix in the booth. We do that with under-balconies as well.

And so you’ll have this sort of main system that is covering as much of the room as possible. But then you’ll have these subsystems that are doing immersive mix downs, whether it’s down to Monos, stereo or whatever. And a lot of time. That’s very helpful for especially when these speakers have to get hung so high across the proscenium. Front fills become really important for imaging. Just imaging that voice down.

I’m just gain to mention, though, is this that you can’t get stupid about these things? Okay, front fills are only going to cover two rows and about five people wide. Right. So they are not part of your spatialization system. You’re not going to be zinging things around the front fields and have everybody go, wow, look at it across the front. That’s not going to happen. That’s not going to happen in your under balcony speakers, because if the under balcony speakers are designed correctly at all, they are designed for correlated signal for minimal overlap, because their job is to bring up Intelligibility.

They have a very clear mission. One of the places I really throw up a big red flag is people wanting to play matrix games with the matrix delays and silliness under balconies and in front fills. It's like, stop it, stop it, stop it. Those things are combat audio. You must make them simple and intelligible. Let them do their job and don't screw them up. Yeah.

And now, with the 32 pathways, what's cool is that speaker becomes a multi-use tool. It could be that delay doing correlated signal for the mains, but it can also be used in a separate pathway for some version of a mix-down.

It can become an overhead, and suddenly people are looking up, because now it's not merging with what's coming from the front. All by itself it's a Peter Pan over your head saying, look at me. Yeah.

And so under balconies, of course, and up on upper balconies, you're going to have less of a granular immersive experience, but you can still design a system to provide an immersive experience.

Okay, cool. Let’s get to Robert McGarry’s question. He says total novice for immersive programming. Where do you delay to? Is there a zero point? And just for some context, IEM going to make an assumption here about what Robert’s talking about. I think he’s thinking of a practice in theater where we might have a center point on stage, in theater, where we want our voices to Sonic image source back to or we may have a concert sound stage where we want where we kind of time back to the drums.

I think that’s kind of what he’s thinking of. As I’m learning more from you both about immersive systems. I’m thinking that this question is actually not applicable to this, but yeah, what do you have to say about where is the zero point?

It's the same as if it were a left-right system or five systems across. If you want to make it timed to events on stage, and you don't already have too much delay because you've got a digital console and a digital this and a digital that that have already stacked up your latency, then sure, if you're going to add a little bit more, the drum kit is a usable place. Or if it's theatrical, you can time to a point on stage. But in our world those become essentially a static event, or it can be set up through that delay matrix as a set of presets.

If you wanted to make it so that you had a moment where an actor was downstage left for some dramatic moment, you could have a separate delay matrix timing for that, but that's a static part of the tuning process, and then the immersive movement would come on top of that rather than be changed by it.

Yeah, in immersive systems I think of two different delay types. There's system delay, which is what we need for time alignment of systems, whether that's the main-sub relationship or the front-fill relationship; that all gets handled, and you can use the delay matrix on the Galaxy or the outputs for each speaker on the Galaxy. And then there's artistic delay. If I have an actor moving from downstage at the proscenium to upstage, I can fire a snapshot that changes that input's delay time, or I could do it on the console: have a snapshot on my console and adjust their input when they're not singing, which instantly swaps their delay to a new zone.

This is very typical of what we would do in musical theater, having three zones or four zones across the stage. There are fancy devices that are very expensive that do that automatically for you. But with a Galaxy, it’s a free update, and you can just do a snapshot change.
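
As a rough sketch of how those static zone delays fall out of the geometry (the distances and the speed of sound here are my assumptions for illustration):

```python
SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C

# Hypothetical distances (m) from each stage zone back to the main arrays
zone_distance_m = {"downstage": 2.0, "midstage": 5.0, "upstage": 8.0}

# Delay the PA back to the performer's zone so the reinforced sound
# arrives just after the acoustic sound from the stage
for zone, d in zone_distance_m.items():
    delay_ms = d / SPEED_OF_SOUND * 1000.0
    print(f"{zone}: {delay_ms:.1f} ms")  # upstage -> about 23 ms
```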

Cool. Let’s try to squeeze in two more questions here and then we’ll start to wrap up. I don’t know if you have anything to say about this, but Angela Williams says, Where do you place audience mics in the room for capture as objects?

I kind of don’t understand the question, but let’s talk about how do we capture surround information that’s happening in a room? There are two different scenarios. One scenario is I have an artist on making a recording or I just want to have some microphones laid out to capture the audience noise and send it back in. That could be wherever you want. And if you wanted to, you could put them on face map and then send them to all loud speakers or just some loud speakers. The laterals you can make them an object and move that audience sound around.

I think the question is about analysis of the object placement. That's the impression I got from that question.

Yeah, it could be. I also don't totally understand it, and I'm realizing now I should have asked him to clarify a little bit. But it did make me think about mixing those in, though I don't know how you would mix those in. So yeah, do you want to say something about that, Bob?

If it’s mixing things in, my answer is no, I don’t do that. That’s Constellation’s job. And that’s what you’re getting into. If you want to start recirculating audience mics ambient mics in, that’s a whole nother thing. But if it was to in order to analyze, you place a virtual mic into a real mic to analyze the localization, my answer would be anywhere you want, anywhere you want to know the answer.

Yeah. And there are tons of different mic styles to do that. You could do it with a binaural microphone headset. You can do it with an Ambisonics microphone. You can do it with whatever, if that's just for capturing a recording of what's happening in the room.

And in MAPP 3D, we do it through virtual SIM mics. I do that as part of the analysis. I'll go and place a mic when I'm designing in MAPP 3D, and then I'll run the different speakers and lay one trace over the others. Like, okay, I'm consistently seeing all of my laterals reaching within three dB at this location. Okay, that's cool. I know this spot has really consistent spatialization.

I wanted to get to Lloyd Gibson's question because even though we've already talked about this some in the first part of the interview, I wanted to do it again to make sure this is clear, since there are probably other people out there who have this question. And I want to give Bob a chance to maybe correct some misunderstandings about his own teaching. So Lloyd Gibson says, I thought Bob was against stereo imaging in live sound because of the psychoacoustics and delay/magnitude discrepancies seat to seat.

Does this not apply here, or is there a size threshold where it can be successful?

Okay, so stereo in live applications: let's get into the semantics. There's a left main and a right main. You can call that stereo. I call that left-right, because stereo is something that happens when you put on headphones, or when you sit there in your living room, because you're inside of the five milliseconds that you have to play with in the physics of your brain and its ability to make a panoramic stereo image. There's very little of the room that is inside that five-millisecond window.

In our world of PA (it doesn't have to be a big PA, doesn't have to be an arena or a stadium; even in a small theater), there's very little that fits inside that window. So everybody else can call it stereo all they want, but I design left and right systems, and I design them to have no more overlap between left and right than they have overlapping into the walls. That's my thing. Basically, I don't want to invite the wall into the thing any more than the virtual wall, which is the correlation point where the two speakers meet in the middle, and which all physical acoustics models as a wall.
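
For a sense of scale on that five-millisecond window (my numbers, not Bob's): at roughly 343 m/s,

$$\Delta d = c\,\Delta t \approx 343\ \mathrm{m/s} \times 0.005\ \mathrm{s} \approx 1.7\ \mathrm{m},$$

so a listener sitting even a couple of meters closer to one main than the other has already fallen out of the panning window. That's why only a narrow center stripe of most rooms can form a true stereo image.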

So that’s where I aim systems. I don’t aim your left and right deeply inward. Unless you can promise me that you’re going to put left completely separate material than in the right. Like if you’ve got left, center and right and they are now discrete and separate channels. Now I’m going to turn that thing inward. Now I’m going to cover the whole room with left and the whole room with right and the whole room with center and the whole room with left, center and right center and 17 whatever they are.

I’m back to the I’m the whole show. So if I’m the whole show fine. But if we are left and right and 99% of the material outside of the littlest bits are going to be pushed this way when all the really stuff that matters, the Fader with the big star on it is going to be mixed center. I’m going to make your left and right system so that it does the best performance that it can as correlated signal. Okay, so that’s my simple answer to that.

I haven’t changed on that. But if you go to a full multi channel as soon as you introduce two multi channel and that’s what happens when you add that third one, that center register functioning center channel. Now when you’ve gone to full multi channel, if that’s the way you want to address it. Now we can go and play decorating, but a lot of times what you really see in an LCR is you’re going to do LR are still going to be a very LR system. Very little gets panned out, but the center is its own thing.

So now you have a decorrelated center, but a semi-correlated left and right. I hope that answer wasn't too unclear.

I thought that was clear. Yeah, that's great.

I don’t tell people how to mix. Right.

And immersive is a new way to mix. Instead of sending things down two pipes, you now have 32 or however many channels. You no longer view it as LCR; you view it as a canvas that you can put objects on. That's really the way you have to start viewing it: I'm looking at a stage, and now I'm painting where I want to put my artists or my objects.

Okay, Josh and Bob, thank you so much for all of your time today. And I should end by asking, where is the best place for people to go who want to learn more about Spacemap Go and immersive systems?

Yeah. So meyersound.com is a great location for all information concerning Meyer Sound. We also have the thinkingsound YouTube channel; that's our YouTube page. We've done about six hours' worth of Spacemap Go content as well as MAPP 3D content. There's tons of information there. Like every other company, we participated in Webinar Wars.

And I never heard it called that. That sounds so violent.

Nick from DMV called it that the other day while we were hanging out, and I thought it was hilarious. So shout out to Nick. But anyway, Webinar Wars was what happened, and there's tons of content, not just about immersive audio. The last resource for Spacemap Go is the Spacemap Go Help website. That's basically the operating instructions for Spacemap Go. And the cool thing is, this is all free, so you can download Compass, download Spacemap Go onto your iPad, and mess around with it.

Play with it. You don’t need hardware to start looking at what this can do.

I want to just throw in one more thing. I hope I don't get in trouble, but there are also some physical places where you can go to experience Spacemap. There are some locations with operating systems, at least at the moment that we're making this recording. There's one here in New York, and I believe there's still one in Nashville.

Nashville.

Yeah.

At our office at Soundcheck in Nashville, and then Center Staging in Burbank.

Yeah. So we have an LCR system, a left, center, and right, there for the United States, and I think there might be one in Europe. I think there's one in Europe.

Yeah, all across the world, really. We have a touring Spacemap Go road show, which is happening across the US.

When is that coming to my town?

I don’t know, man.

I think it should be called Spacemap a Go Go.

Yeah, it should be called Spacemap a Go Go. But if you look on our website, there's an article about it, and you can reach out to sales at meyersound dot com to find out when it's coming to a city near you. They're thinking about doing one in Europe very soon, and Australia and New Zealand have been touring Spacemap Go systems around for a while now.

So you can’t go to Australia.

They won’t let you leave that exactly.

Yeah.

And then in our dealer and distributor network across the world, some have set up Spacemap Go systems, so reach out to sales at meyersound dot com if you want to hear this. If you want to hang out, we're open to giving you a demo. And the New York room is really cool. Bob might meet you there.

Oh, wow. Just throwing Bob’s hat in there. Great. Thank you.

The other thing is we will be at InfoComm this year, and so there will be a Spacemap system there. We can't talk about it too much yet, but it's going to be cool. I'm excited about it.

Well, Josh and Bob, thank you so much for joining me on Sound Design Live.

Subwoofer Alignment at The Redmoor Cincinnati

By Nathan Lively

Recently I had the opportunity to help my friend Nick work on the calibration of some new components at a great looking venue in Cincinnati called The Redmoor. We met on TeamViewer and recorded the entire thing so that it could serve as a combination of consultation and training. If you’d like my help on your project, you can schedule an appointment here.

In this post I’ll walk you through some of the EQ and crossover alignment steps we took.

Pre-production

First, gather materials. I checked Tracebook for the HDL10-A and STX828S. No luck. I found the GLL file for the HDL10-A on the RCF website, though. I opened that in the GLL viewer, built an array with the settings I expected to see, calculated the balloon, opened the transfer function, and exported it to a file.

To get the subwoofer data into my audio analyzer I used VituixCAD2 to convert the image on the spec sheet into response data. Then I imported everything into Crosslite.

The sub’s native response will allow me to experiment with different low-pass filters.

Next I needed to choose a target. Since we are looking at anechoic responses, it makes sense to use a flat target, but recently I have started using a +6dB slope in the low end because I have found that that will push the crossover region up, which is a better representation of what will happen in the field when someone inevitably turns up the subs.
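
If you want to experiment with a tilted target yourself, here's a minimal sketch of one way to construct it. The 100 Hz corner and the exact slope are my assumptions for illustration, not necessarily the curve I used.

```python
import numpy as np

# Log-spaced frequency axis, 20 Hz to 20 kHz
f = np.logspace(np.log10(20.0), np.log10(20000.0), 500)

# Flat target with a +6 dB/oct rise below an assumed 100 Hz corner
corner_hz = 100.0
target_db = np.where(f < corner_hz, 6.0 * np.log2(corner_hz / f), 0.0)
# e.g. target is +6 dB at 50 Hz and +12 dB at 25 Hz
```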

I’ll start by inserting those filters recommended by the manufacturer.

Next I’ll apply some initial EQ and gain to make a better match of the target.

Should we design an overlap or unity crossover? Let’s do both!

Before we even look at the phase graph, let's measure some slopes. We know that the sub's slope is 24dB/oct because its response was a flat line before the LPF was inserted. Switching to view Data Pre in the bottom window, we can overlay different HPF slopes on the HDL10-A response and find that its slope is 48dB/oct. This is a clue that the phase response of the main will be steeper than the sub's.
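
Slope and filter order are tied together: each filter order contributes 6 dB/oct, so 24 dB/oct is 4th order and 48 dB/oct is 8th order, and the steeper magnitude comes with faster phase rotation. Here's a sketch of that relationship using standard Butterworth filters as stand-ins for the measured rolloffs (the 80 Hz corner is an assumption):

```python
import numpy as np
from scipy.signal import butter, sosfreqz

fs = 48000                                    # sample rate for the simulation
f = np.logspace(np.log10(20.0), np.log10(200.0), 200)

# 4th order (24 dB/oct) LPF standing in for the sub,
# 8th order (48 dB/oct) HPF standing in for the main
sub = butter(4, 80.0, btype="low", fs=fs, output="sos")
main = butter(8, 80.0, btype="high", fs=fs, output="sos")

_, h_sub = sosfreqz(sub, worN=f, fs=fs)
_, h_main = sosfreqz(main, worN=f, fs=fs)

phase_sub = np.degrees(np.unwrap(np.angle(h_sub)))
phase_main = np.degrees(np.unwrap(np.angle(h_main)))
# The 8th-order main rotates about twice as fast through the crossover
```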

Switching to the phase graph we find that that is in fact the case.

Let’s try adjusting the sub’s LPF to match the response of the main.

Now the sub is too steep. Let’s split the difference with 36dB/oct. Now we have a nice match with only about a 30º maximum phase offset between them.
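
To put a number on "nice match", compute the phase difference between the two responses through the crossover region and take the maximum. Continuing the Butterworth stand-ins from the sketch above:

```python
# 6th order = 36 dB/oct, splitting the difference on the sub's LPF
sub36 = butter(6, 80.0, btype="low", fs=fs, output="sos")
_, h_sub36 = sosfreqz(sub36, worN=f, fs=fs)

# Phase offset wrapped to +/-180 degrees, checked over 50-120 Hz
offset_deg = np.angle(h_main * np.conj(h_sub36), deg=True)
band = (f > 50.0) & (f < 120.0)
print(f"max offset in band: {np.abs(offset_deg[band]).max():.0f} deg")
```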

The filter change required a small gain adjustment. Here’s the sum.
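
The summed trace in these plots is just the complex sum of the two transfer functions, magnitude and phase together, which is easy to reproduce in the same sketch:

```python
# Small gain trim on the sub (the -1.5 dB here is an assumed value),
# then sum the two complex responses
g_sub = 10 ** (-1.5 / 20)
sum_db = 20 * np.log10(np.abs(h_main + g_sub * h_sub36))
```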

What should we do about that bump? We could leave it alone and say we like it, but let’s insert symmetrical filters to restore the response to the target.

Now let’s try the unity class crossover.

The HDL10-A already has a steep HPF, so I am reluctant to make it even steeper, but for a unity crossover main and sub need to meet at -6dB. I know that the DSP at The Redmoor is a Venu360, so we'll only have access to basic EQ filters. Let's try the least steep HPF available, 6dB/oct. I also adjusted the delay by 0.5ms for a slightly better phase alignment.
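
In simulation terms that move is: cascade a 1st-order HPF onto the main's existing rolloff, then apply the 0.5 ms as a pure phase term. Continuing the same sketch (the 80 Hz filter frequency is again an assumption):

```python
# Add a 6 dB/oct (1st-order) HPF to the main path
hpf1 = butter(1, 80.0, btype="high", fs=fs, output="sos")
_, h_hpf1 = sosfreqz(hpf1, worN=f, fs=fs)

# 0.5 ms of delay is a linear phase shift: exp(-j * 2*pi * f * t)
t = 0.5e-3
h_main_unity = h_main * h_hpf1 * np.exp(-2j * np.pi * f * t)
```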

Let’s try a 12dB/oct option.

Let’s zoom in and compare them. Which one is better? I don’t know, but now we have some options.

Now let’s see how this actually played out in the field.

Production

We started by verifying all settings and taking eight measurements of the HDL10-A through their coverage area.

We applied EQ towards the target and took more measurements to verify the EQ and prepare for alignment. We exported averages from Smaart and imported them into Crosslite. Here’s the phase graph.

At this point I realized that we could have made our work a lot easier by starting out with a ground plane measurement very close to the speakers, without any processing, to get cleaner data for alignment. But we were running out of time, so I decided to simply apply 3ms of delay to the sub and move forward with this solution.
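
For context on what 3 ms means near a crossover, taking 80 Hz as an assumed crossover frequency, the phase shift it introduces is

$$\Delta\phi = 360°\, f\, \Delta t = 360° \times 80\ \mathrm{Hz} \times 0.003\ \mathrm{s} \approx 86°,$$

so small delay moves swing the phase relationship quickly in this region.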

Here’s what the final measurement looked like.

Post production

Let’s see if we can improve on the alignment I came up with in the field.

Interestingly, measured in the room, the HDL10-A appear to have a 24dB/oct slope, not 48dB/oct as expected. Maybe this is a result of one of the user definable settings on the back.

The STX828S appear to have an 18dB/oct slope, even though we used the recommended 24dB/oct slope on the LPF.

How can we equalize this relationship? 24 – 18 = 6, so we can add a 6dB/oct LPF to the sub, right?

But that will add another 3dB of attenuation at the cutoff frequency, which we don't want, because we are trying to simulate what would have happened if we had used a different LPF from the beginning.
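
That 3 dB figure comes straight from the first-order low-pass magnitude evaluated at its own cutoff:

$$|H(jf_c)| = \frac{1}{\sqrt{1 + (f_c/f_c)^2}} = \frac{1}{\sqrt{2}} \approx -3\ \mathrm{dB}.$$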

One option is to simply switch the LPF to zero magnitude. That will give us the steeper phase slope without affecting the magnitude. Of course the summed magnitude won't be accurate, but we can still research the phase alignment.
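
Crosslite handles this internally, but the idea is easy to reproduce: keep the filter's phase and discard its magnitude by dividing the complex response by its own absolute value. Continuing the earlier sketch with an assumed extra 6 dB/oct LPF:

```python
extra = butter(1, 80.0, btype="low", fs=fs, output="sos")
_, h_extra = sosfreqz(extra, worN=f, fs=fs)

# Phase-only ("zero magnitude") version: unit magnitude, original phase
h_phase_only = h_extra / np.abs(h_extra)
h_sub_steeper = h_sub * h_phase_only   # steeper phase, unchanged magnitude
```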

The result is better alignment without any additional delay.

I should make it clear here that a zero magnitude filter is not something you would normally find in a DSP. It is a special kind of simulation that Crosslite offers for research purposes. The closest thing you would find in a DSP is an all-pass filter, or something built within a variable-architecture FIR filter.

How do we know if we are making an improvement? We can see the phase come into better alignment and we can see the sum go up, but I find it helpful to have a goal of perfection to compare it to. In SATlive you would load the Perfect Sum trace. The workaround I used in Crosslite was to simply import the data a second time, but this time without the phase.
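
The perfect sum is simply what you would get if the two responses were in phase at every frequency: add the magnitudes instead of the complex values. In the running sketch, the comparison looks like this:

```python
actual_db = 20 * np.log10(np.abs(h_main + h_sub_steeper))
perfect_db = 20 * np.log10(np.abs(h_main) + np.abs(h_sub_steeper))
shortfall_db = perfect_db - actual_db   # 0 dB everywhere = perfect alignment
```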

In this graph you can see the perfect sum target with two delay options. Both options include the new zero magnitude LPF.

How do you prepare for crossover alignments?

EQ Your Vocal Reverb Return

By Nathan Lively

Subscribe on iTunes, SoundCloud, Google Play or Stitcher.

Support Sound Design Live on Patreon.

In this episode of Sound Design Live my guest is producer, engineer, and FOH mixer for artists such as Counting Crows, Goo Goo Dolls, and Avril Lavigne, Shawn Dealey. We discuss finding your sound, tips for mixing live vocals, and sound system calibration.

I ask:

  • What series of events led to you mixing FOH for the Counting Crows? How did you get the gig?
  • What are some of the biggest mistakes you see people making who are new to FOH mixing?
  • Finding Your Sound in the Studio
    • “Walking into the studio isn’t the right time to decide what your record is going to sound like. I know the feeling of being a kid in a candy store is enticing, but this is the time where having a clear idea of what you want your recording to sound like is really key.”
    • Can you walk us through what a conversation like that might look like for a live event and how that would influence your decision making process related to the mix or the sound system?
  • Tips for Mixing Live Vocals
    • “A loud stage would be a more appropriate choice for a dynamic microphone.”
    • How do I set the HPF on a vocal?
    • How do I set the pre-delay on a vocal reverb?
    • How do I EQ the reverb send to help it sit well in the mix?
  • System design and calibration
    • Tell me about the system design for the Clyde Theater.
    • Is there some way to guarantee that the client will be happy? Is there some way for me to get inside their head to predict the characteristics of good sound to them and produce a result that they are happy with? Or is the only way to sit next to them in the audience and audition changes until they are happy?
    • Do you worry at all about system latency? Does total system latency influence your decision making when setting up the console and specing the sound system?
  • From FB
    • Bobby B: Best practices for building a rider
    • Gabriel P: how was it working with Avril Lavigne?
    • Greg McVeigh: Ask him when he is going to ditch the “real job” and mix touring acts again.

If I’m watching a show and I see someone play something and I can’t hear it it’s like, where is that? Why is it not in the mix? What’s going on?

Shawn Dealey

Notes

  1. All music in this episode by Glitch.
  2. 34 mic snare drum shootout
  3. Audio Test Kitchen
  4. Work bag: adapter kit, Ultimate Ears, Universal Audio interface and Apollo satellite
  5. Books: Recording The Beatles, Chairman of the Board
  6. Podcast: Working Class Audio, Under The Skin
  7. Quotes
    1. Play with sound every day.
    2. I’ve seen people mix a show while they’re watching Smaart and it’s like, this is not working.
    3. If I’m watching a show and I see someone play something and I can’t hear it it’s like, where is that? Why is it not in the mix? What’s going on?
    4. The best way to fix a problem is at the source.
    5. A condenser microphone in front of a wedge is a completely unstable situation.
    6. Raise the HPF until it sounds bad.
    7. I just didn’t want to have a PA that sounded bad for the show.
    8. I have a career trajectory that I can link to 3 people for fifteen years of work.

Transcript

This transcript was automatically generated. Please let me know if you discover any errors.

I’m Nathan Lively, and today I’m joined by producer engineer in front of house mixer for such artists as Counting Crows, Goo Goo Dolls, and Avril Lavigne. Shawn Dealey, welcome to Sound Design Live.

Thanks for having me.

So, Shawn, I definitely want to talk to you about finding your sound, tips for mixing live vocals, and sound system calibration. But before I do that: after you get a sound system set up, what's one of your favorite pieces of music to play through it to get more familiar with it?

So I have a few different pieces I listen to. I obviously have somewhat of a tuning playlist, but there's one specific song that I use pretty consistently that's slightly obscure, by a band called Spy Mob. They're out of Minneapolis, and the record came out in the mid-2000s. Lord-Alge mixed it, and it has an extensive amount of midrange information. When I was mixing shows for Counting Crows, there was a lot of midrange information: Adam had a lot of midrange in his vocal, there are three guitar players, keyboards, and so it was a really dense sort of sonic landscape.

And so this song, 2040 by Spy Mob, is something that I would listen to after we'd done some tuning on the PA. I could throw it up and make sure that I was hearing all of the stuff that I needed to pull off a show. It was a song I would get a lot of questions about from guys that were around, or people in the venue, like, hey, what is that song you played? Because it's an interesting song.

It’s pretty catchy, kind of quirky sounding, but really kind of defined. If a PA could kind of handle what I was about to put at it as far as definition and detail in a mid range, heavy live music mix. So that’s kind of my go to. There’s a couple of others. Thomas Dolby song that was from, I think, Aliens at my Buick that I picked up from another engineer that was mastered in the 80s. So it’s actually quite quiet, but has a lot of really nice high end high frequency information where I could sort of tell if there’s the crispness of a rig and then some really nice sub information, that’s not overwhelming, but you can really hear the definition if it goes all the way down, so specific songs for specific things.

And that’s kind of something I try and share with people, too, as developing a playlist of songs that really kind of establish what you need to get out of a PA and listen to those. You can kind of pull out the elements that you’re looking for and understand if it’s not going to work for you and you need to go back to the drawing board or back to Smart and your Lake and hack away or add. Those are a couple of songs I would dive into.

Other than that, I would play the intro of For Those About to Rock and then always stop it as the song started, which would make people quite angry that I didn't play the whole thing through. There are a couple of kick drum hits, and it really reinforced the fact that the PA can rock and that it's functioning. It was a test procedure for me; it wasn't about enjoying the song, even though I do like to listen to that song quite loud on large-scale PAs. But I didn't get too deep into it.

I didn’t listen to it all day long. It was pretty quick. I could tell if I was getting what I needed from the boxes, and if it wasn’t, then it was into fix mode, and then who knows what things we would uncover at that point. But those are kind of the few songs that I would really rely on to get me to where I needed to be with the PA. Nice.

Sounds like it has some important milestones in there that you can quickly key into and get some actionable data, like: is this going to work? Where do I need to do some more work?

Yeah. And the snare drum on the Spy Mob track has this little bit of a midrange knock, a lower-midrange thing. If it was too pronounced, I knew the boxes were going to sound a little bit boxy, and if it was gone, I knew that I was missing some of that low-mid information, which I like to introduce into my drums. There are all these little things that I could pull out of the songs, because I was so familiar with them on so many different systems, that would establish how much fun I would have later in the day.

Very cool, Minneapolis natives. All right. So Shawn, as I was preparing for this interview, I was looking through your Instagram feed, which is pretty fun: lots of cool pictures of gear and shows and stuff, also of you going to restaurants and what to me looked like breweries or potentially bars. I saw this photo that looks like a bunch of beers, so I was going to ask you, what's your favorite beer?

Well, I don’t drink, but I think the photo you’re referencing is there’s a shop down in Huntington, Indiana. I moved to Indiana a few years ago to work at Sweetwater. And in Huntington, Indiana, there is a soda shop called Antichology, and they have 700 flavors of soda. It’s like a vintage ice cream and soda shop. So I haven’t been there much, but it was an impressive wall. And, yeah, I tend to gravitate more towards soda than beer, but that was an interesting spot. Moving to Indiana, kind of getting used to the Midwest.

There’s a lot of interesting stuff. I know there is a lot of breweries around here, and there’s a lot of good food. I mean, Fort Wayne is kind of exploding. We’re growing as a city. So there’s a lot of really cool restaurants that are popping up and stuff. So, yeah, I do enjoy that. I enjoy cooking, and I enjoy eating. So, yeah, that’s one of my favorite pastimes. I mean, that’s something I missed from my touring days is being able to get out and get around and try different restaurants and stuff like that.

Tell me about a couple of things that you tried at this 700-flavor soda shop, or one of your favorite sodas. Just curious, what are your tastes?

Yeah, big fan of black cherry. I actually had a friend that drove down to Kentucky (no, he went up to Michigan) and went to some sort of cherry orchard and brought back six or eight different kinds of cherry-flavored soda, ginger ale, ginger beer, all that stuff. So I got some treats from my friend Lynn Houston, who is someone I work with here. Lynn and I work on a lot of very cool marketing projects. He is the manager of written content here. If anyone isn't familiar with Lynn Houston, they should be.

He’s a total geek. And we do a lot of shootouts together. And so he travels a lot on the weekend. So he brought back a bunch of super cool flavored sodas, which we got to try this week. But we’re also in the middle of doing a 34 Mike snare drum shoot out at the moment where we’re shooting 34 different mics. So in Studio A right now, we have a pretty interesting photo shoot going on with a lot of microphones around one drum. But we do a lot of cool stuff like that.

If you get a chance, Google Lynn. He did a lot of engineering in Nashville for years, and he ended up here, and we do a lot of really cool content that makes it onto Sweetwater's website.

Are you familiar with Audio Test Kitchen?

Yes, I am. Okay. Cool.

I interviewed Alex Oana a while back, and his project was so interesting, seeing how they did all of the recording. So I'm assuming you're having a human actually hit the snare drum in this case?

So, yeah, I think that Lynn’s ultimate goal in life is to find someone to develop the robot drummer that actually can hit a drum consistently. But yeah, we have our in house session musician and content creator Nick Deaver Gillo hitting our drums for us. And so we try and keep we’ve done this on a lot of different things. We’ve done speaker cabinets, ribbon mics, vocal mics comparisons to virtual mics, a lot of different things, and we try and keep the control factors to a minimum. So everything is really consistent.

We like to use lasers, take precise measurements, calibrate things. We try to eliminate as many variables as we can from any of these processes. I think we had 19 dynamic microphones around the snare, and we had twelve condensers, and we individually aligned them all and placed them as best we could around the drum, at the same distance and the same height from the head.

Then we captured one pass of the recording of the drum, so the variable of the performance is not a factor; the placement is the variable that we had to give in to. But it should be cool. I think it'll give people a good understanding of what different microphones sound like on the same drum. That's something that's going to come out in the next month or so, when Sweetwater's drum month comes out. There's going to be some cool content with that.

Yeah. I would love to see what my own tastes are and do a blind A/B test. And I'd also love to hear what you would end up picking if you listened to a bunch of them, not knowing what they were, and just picked the one that you thought sounded the best for that specific situation.

Yeah, and I think I want to do that, too. I gravitate towards things that I know and trust from repeated use and success. But that's one of those things where you can sit back and listen. It's been funny: the few times that I have blind taste-tested microphones, I usually end up picking the ones that I go to, so I feel like I'm usually using my ears and not my eyeballs for listening, which I think is a good thing. But yeah, the snare drum thing.

I think it’ll be pretty cool, but there’s some mics I would never even think to put on the drum or just haven’t had a chance to in a long time. So it also gives an opportunity to refresh my memory of things I like or haven’t used in a while or finding new things that would be a different flavor than what I’d go to consistently, because I’ve been known to be fairly set in my ways about certain things, but I’m trying to have more open mind about gear these days.

So, Shawn, you worked with the Counting Crows. At least for me and a lot of people who love the Counting Crows (in my imagination, of course, never having experienced working with them as humans), it seems like a dream gig. So I would love to talk about how you get work. In this specific case, how did you get the gig? What sort of relationships and series of events led you to work for the Counting Crows?

So I feel like I’m going to tell a pretty unique story about that. I started with The Counting Crows as their drum and keyboard technician after I had toured with them for a summer, working with the Guggodols as drum and playback technician for them. I really hit it off with a drummer. He loved the way I tuned the drums. Side Note I started as a drummer as a teenager and spent a lot of time playing and then a lot of time tuning drums. So I’m very much a nerd about all of that stuff.

I started with Counting Crows, I think in 2008, as a drum and keyboard tech. That developed into recording their shows and archiving them for them. A couple of years into that, I had stepped back over to the Goo Goo Dolls and was guitar teching for them on a tour when I got a phone call from the guitar player from Counting Crows, Dave Bryson, who reached out and said, hey, we're going to go into the studio. Would you like to come record us? And I was like, well, that seems like a great idea.

So I jumped on that. I had a pre-established relationship with the band; I got along with everyone really well. We got into the studio and it went really well. I had a really good handle on what the band sounded like and what I felt they should sound like, and I think we gelled on that in the studio. So they really trusted what I brought to the table as far as sonics in the studio, and that translated into an offer to mix the band live.

Just as a quick backstory, I had dabbled in live sound my entire career, but I'd never had a large-format gig mixing a band. I had always worked as a backline technician, because those were the gigs I was getting. When I was at home, I was mixing live sound at clubs and bars, and when I was on the road, I was mixing opening acts. So I was getting a lot of experience, and I was always the annoying guy at front of house asking the engineers, hey, what's that do?

What are you doing? I was always lurking in the audio department, even though that wasn't my job, so I picked up a lot of that stuff along the way. When I got the opportunity to mix Counting Crows, obviously I jumped at it, because that was where I was aiming to be, but I hadn't had the opportunity to get there. I had put in the groundwork for a gig like that, but I just hadn't been working in the field as a touring front of house engineer.

So I went from co-producing a record with them to mixing front of house, after being a backline technician. I feel like that's a slightly strange trajectory for anybody in the touring world; usually you start somewhere and work your way up. I landed what I feel was a dream gig in sort of an obscure way, but at the same time it was built on trust and engagement with the band, being able to communicate with them and get them what they were looking for.

What’s interesting for me, an interesting point about this story is that you didn’t start out in your life saying, I want to be front of house mixer for Kelly Crows. You were just working on shows and you were a drummer. And then I guess at some point there was an opportunity for you to become drum tech. And so it sounds like you were just sort of open to learning all things. And then as you were around and you build relationships, like, sort of opportunities came up, it doesn’t sound like to me that you were sort of lobbying for any particular position.

You weren’t calling that guitar player every day and say, hey, when are you going to give me that front of house mixing gig? When are you going to give me that recording gig? And then he finally called you.

Yeah, and I think that comes with a positive attitude. I work hard, I try to stay engaged with people, and I think that being around and having a good attitude towards things is really beneficial. But I jumped on the opportunities that were presented to me. Straight out of high school, I hit the road. I had a bit of a helping hand from my father, who owned a road case company, so I was already established working in the industry; I got a job at about the age of 16 doing backline and stuff at a local backline company.

So I had already started engaging with people. I met a front of house engineer who took me on my first tour when I was 18, and he kind of took me under his wing and showed me some stuff, and I even started mixing some of the opening acts on that first tour. So I was always very interested in that. I mean, I had established that I could build a career on being a backline technician and that the paychecks would come if I did that.

And it gave me the opportunity to learn a lot and engage with a lot of people. My second tour was with Avril Lavigne, working with her as a drum technician and taking care of the playback rig. I met one of my mentors on that tour, Jim Yakabuski, who's a world-famous sound engineer, also another Canadian, and he was really helpful. I was really able to get a lot from him. And it's funny how it went full circle: on my last tour with Counting Crows, he was mixing Matchbox Twenty, and so we were able to mix side by side and have a fun tour out on the road.

Those opportunities that I got myself into, I tried to take advantage of, and to get as much information and knowledge from the people I was working with, because there are a lot of really talented people I crossed paths with. That was something I realized early on: these people know everything that I want to know, and if I'm nice enough to them or ask them enough questions, I'm sure they'll share some of this knowledge. And so I was able to extract enough out of that to put a skill set together for myself.

That’s great. And that’s like a whole lifetime of learning. I just wonder if I look at this. Is there anything I could take away from this for my own career? So if you were my mentor and we had a mentor mentee relationship and I was asking you, Shaun, I want to get to a place where I can be mixing some of my favorite artists and doing these kinds of tours. Is there any sort of anything I could be doing in order of, like, taking action? Is there anything that I can do, or am I just kind of waiting for the phone to ring and hope that those opportunities come up for me.

Play with sound every day? I don't know. I feel like I don't ever stop trying to improve my skill set and trying to learn. If someone's waiting around for a gig, I don't think those things happen very quickly. If you're pushing yourself to improve your skill set, to expand your horizons, to learn new things, to get engaged: I mean, the only way that you're going to have a good handle on mixing a show in a bunch of different venues is if you've mixed a bunch of shows in a bunch of different venues.

So I have to say, to me, one of the most important things is to get yourself into a position where you get an opportunity to do some of the work you like to do, as many times as possible, in as many different situations as you possibly can. Because once you get into, I would say, the bigger leagues, when you get an upside-down situation and you've kind of painted yourself into a corner, you need to have the skills to get out of that and still put on a good show.

So the experience is really what I think establishes people that are successful because they know how to deal with all of the problems or at least have a skill set to adapt and overcome, which is something I think is necessary in our industry.

So making mistakes and having the skill set to adapt and overcome. Speaking of that, what do you think are some of the biggest mistakes you see people making who are new to front of house mixing? You've been around: you started out, then you were mixing the headliner and seeing other people come up who were just getting started, and now you're even in a position where you're doing more education. So could you talk about maybe one or two of the most common mistakes you see people making who are getting started?

Yeah, probably kind of a twofold answer on that. Some of the skill set needed to be an audio engineer is based in science and some of it is based in art, and I think the blend of the two is really the key to success. I feel like a lot of people look at audio a lot; they rely on real-time analyzers and measurement data, which, obviously, is going to tell you what's going on, and you can trust the information as long as you know how to measure things properly.

But I’ve seen people mix a show while they’re watching Smart, and it’s like, this is not working. And so I think that there’s a reliance on the science side of thing. And then the other side of it is like, you need to have an understanding of what music should sound like. I really had an uphill battle with the County Crows, where we had seven guys on stage, seven people singing, and everybody was playing a bunch of different instruments. And I think that my goal whenever there was a song being played was that I could look on the stage and I could hear everything that was being played.

And I think that that’s something that one of my biggest pet peeves is if I’m watching a show and I see someone play something and I can’t hear it, where is that? Why is it not in the mix? What is going on? And that’s something that takes the art of understanding how the music should be represented and then also knowing all the parts, like, are you missing cues that you have? Are you not unmuting instruments? Things like that. So that kind of is something that I think is a mistake people make when they’re, like, still worried about the drum sound.

And it’s like, nobody cares about how the drum sound right now. If the lead vocalist isn’t over the mix. Or if there’s, like, a lead guitar part or some sort of something going on, that’s interesting. That’s integral to the song, and you can’t hear it. The fans are used to the record. They need to have those sort of elements in the mix so that they can enjoy the show. So I think that those are kind of a couple of things that I see that bother me when I hear people mixing where I’m not getting engaged by everything that’s going on.

I’m like, Man, I wish I could hear what he’s playing because I can’t hear that right now and then also the reliance on visual stimulation instead of using your ears and kind of making that judgment call of like, okay, yeah, it looks bad, but sounds good. So we’re going to move on. And that’s something I’ve seen people do in the past. And for me to enjoy a show, that’s something that really I don’t know. It’s kind of a bummer, because with having high standards like that, it’s hard to go to a show that’s not mixed well and enjoy it.

Totally.

Okay.

So we’re going to talk more about you mentioned vocals. We’re going to talk more about that in a second. 1st, let’s just talk about finding your sound. And you mentioned just enjoying the show, looking at the stage, things sort of makes sense as they’re happening on the stage in the audio. So you wrote this article called Finding Your Sound in the Studio, and I wanted to see if maybe we could use the same topic, but for talking about live production. So I’m going to read this quote from the beginning of the article that says, Walking into the studio isn’t the right time to decide what your record is going to sound like.

I know the feeling of being a kid in a candy store is enticing, but this is the time where having a clear idea of what you want your recording to sound like is really key. And in another interview I've heard you talk about being in positions where you get to spec the sound system. So when you get to that position where you can say, these are the microphones that I want, this is the mixing console that I want, this is the sound system that I want,

that is kind of the same kid-in-a-candy-store situation as a recording studio, right? You could pick all these things. You might just say, I want all the most expensive microphones and all the most expensive gear, like going into a restaurant and saying, bring me the most expensive wine. So I wondered if you could talk about how that conversation might go when you are first getting into a live production. Was there a time, for example, with some of the artists you've worked with: how did you figure out what sound quality is going to make them happy, make the show successful, and make the audience happy?

And how does that influence your decision-making process? How does that kind of conversation go?

So I feel I’m a pretty big proponent of getting the sound right at the source. And to me, that’s something that has to be a conversation with the musician involved. And that’s something that I established with the County Crows early on, when I was working in the studio with them, we worked on guitar tones. We worked on sound, drum sounds. I was already working with the drummer, and he was really happy with the way I was making the drum sound. And so all of these components that I was working on with the band were building blocks for a great sounding show.

We were working through different guitar amps to achieve different guitar tones. We got into different bass DIs; we found one we really liked and ended up getting some for the touring rig. So I was able to work hand in hand with the band. I had their trust to give input on some of the band's equipment that would establish the way things sounded, which made my job from the microphone out easier. But also, those tonal choices early on gave me the flexibility to add microphones where I could get the best out of each source and get what I was looking for.

So there’s a lot of things that I did in the live realm that I think some people would shy away from. I used a lot of studio microphones. I used a lot of ribbons. I used a lot of vintage microphones. But I was capturing sources that I was familiar with and that sounded really great. So I was using that to my advantage, and I could take it up another level by using a microphone that I liked, and the tone it imparted on that instrument would translate to my mix.

And so I think if you have the opportunity to work with an artist who trusts you, getting the right sound, getting guitar tones and drums that work for whatever style of music or whatever artist you’re working with, is going to set up everything else down the line to be more effectively mixed. And then, yeah, I was a kid in the candy store, and I did have tons of gear, and I toured with a bunch of outboard stuff, and all of it was based on things that I liked to use and a lot of experience with some of the more esoteric outboard gear in the studio.

But all of that stuff, I felt, helped translate my vision of the show to the audience. So I was trying to get an album-quality mix in a live setting as best as I possibly could. The flip side of that, too, is that all of the shows I mixed with the Counting Crows got released. So livecountingcrows.com has all of my mixes that go out to the world and get sold. So I was trying to bring as much to the table as I could from a sonic standpoint, but then establishing what I was working with and making sure that it was helping me get the results I was looking for.

Now, the flip side of that: being here at Sweetwater for the past three years, I was head of audio at the Clyde Theater, which is a venue that we own in town, and I got a chance to mix everything under the sun for a few years. That’s a lot different than having the ability to start from scratch, or choose from a library of vintage guitar amplifiers and make sure the drummer’s using cymbals that aren’t too loud. It really gave me the opportunity to learn how to deal with anything that got thrown at me.

And the best way to fix a problem, still to this day, is at the source. If the bass amp is too loud, the bass amp is too loud. Go up to the bass player and say, hey, if you’d like your show to sound a bit better, I’d love it if you could help me out here and turn down your bass amp. And that’s something that I think is always going to help you. If you can establish a relationship with the artists, or at least get their trust, even if it’s a short-term relationship, even if they’re an opening act, if you can do what you can to help them sound better on stage, then you can make a better show for them, and hopefully they can get some more fans and keep coming back.

And if they trust you and you’ve given them your best effort, hopefully that translates into a relationship that can maybe be long term, maybe not. So especially when you’re working in a venue and able to engage with touring artists: sort of make friends, give them a good experience, and show that you care. I think that really helps the final product be better.

Do you think that comes through with the artist? Do they hear you say that and think, oh, Shawn’s here every night. He’s been here for three years. He knows as much as anyone about this room, so I should listen to him because I want to have a great show tonight. Or do they think, oh, no, this guy’s going to try to make me turn down. I need to fight him off somehow because I’m worried about my own performance. How does that usually play out?

Well, I mean, there’s a preconceived notion of the angry sound man coming in and kind of pushing people around, and I try and establish that I’m here to help. And when you deal with professionals, I think you can hopefully, as a musician, let your guard down a bit. But I feel like there’s a lot of musicians that have their guard up because they’ve had bad experiences in the past with people trying to control them, not to make the show better, but just to make their own lives, quote unquote, less miserable.

And I think that really establishes it, if you’re like, hey, I’m on your team: if you guys sound good, that makes me look good, and vice versa. So all this stuff hopefully works in, like, a symbiotic relationship with the artists, where if I give them my good effort, they’ll give me their good effort, and we put on a good show, and hopefully the fans enjoy it. And then they can come back and play more shows and have a larger fan base. And I think that on a smaller scale, when you’re not working with a headliner, you put in the effort that you would if it was a headliner. Like, I had a mix template that had all of my bells and whistles.

Even if I had three inputs, I had everything ready at my fingertips on my console, so if we got into something that was a large-scale thing, I had all this stuff there. If it was an acoustic guitar and a vocal, I still had some effects and things I could do. So I had my palette at my fingertips, and I was working on an Avid S6L, which is not a common house desk. I was pretty spoiled with that. But at the same time, mixing on an X32 or an M32, you have the ability to get something going for any of the artists where you can give them a little bit more than the bare minimum.

And I think that hopefully translates: if you show that you care, hopefully they appreciate that, and in the end, hopefully the audience appreciates it.

Okay, Shawn. So you made this great video for Sweetwater called Tips for Mixing Live Vocals, and you and I have known each other for a long time now, I think about 38 minutes, so I hope you don’t mind if we potentially disagree a little bit. One of the things that you say in this video is that a loud stage would be a more appropriate choice for a dynamic microphone. And this kind of caught my ear when I was listening to it, because I interviewed Philip Graham from Ear Trumpet Labs, who makes some really cool looking condenser microphones, a few years ago.

And I basically told him all of my ideas about why dynamic microphones are better for live events and concert stages, and he disagreed with me on all of those things. So I wondered if you wouldn’t mind just sort of defending your statement here about dynamic microphones.

Yeah, happy to do that. I feel that dynamic microphones are the go-to choice for a live situation. Now, there are situations where a condenser microphone may be appropriate, but I would say 99% of the shows I’ve mixed in the past ten years have been on dynamic microphones. There is one specific occasion where I mixed a show with a condenser microphone, and it was spectacular. It was a singer and an acoustic piano on a stage, and it allowed me to have a microphone with a more sensitive pickup on that stage, because there was no real noise floor.

It was a piano and vocal, I wasn’t mixing loud, and everything kind of fell into place with that. But dynamic microphones: if you have wedges on stage, which some people still do, and especially if you’re dealing with local artists or opening acts on tours in a house sound person environment, a condenser microphone in front of a wedge is a completely unstable situation to try and manage, especially if someone has bad mic technique. So, yeah, I would probably take it to my grave that I would put a dynamic microphone in front of a vocalist on a stage almost any day of the week.

I’m a huge fan of the Telefunken microphones; the M80 and the M81 are extremely amazing microphones, and I’ve used those for years and had amazing success. Part of what comes into play with that is that those microphones have an extremely tight polar pattern, so the pickup is very directional and you don’t get a lot of bleed on the deck. That’s something I fought for years, especially with the Counting Crows: having a bunch of guitar amps, seven people playing, drums, and all that stuff going on.

We had a couple of people still on wedges, and so there was a lot of noise going on. So finding a microphone that was pretty much the cleanest thing, where I could get the most direct signal without a lot of interference and bleed, those microphones really made my job easier. But you get into situations where you have artists that are maybe using in-ears and have a strange in-ear mix, and if you’re using a dynamic microphone compared to a condenser microphone, the ambience in a venue really changes.

I mean, it’s pretty surprising: if someone has a really loud vocal in isolated in-ears, they pick up a lot of ambience of the venue, even on a dynamic microphone. So if you get into the realm of having something that’s even more sensitive, that picks up some of the cavernous sounds of an arena, that can put your artist in an unfamiliar place, and the performance may suffer. So to me, I mean, I don’t know. I would be hard pressed to find a solution with a condenser microphone that would make sense for me in a live situation.

And I can’t even think of one that would make me happy in an audio world. So I think I’m going to hold my ground on that one.

Shawn, you’re making me realize that to pick a microphone, just talking about vocals, I guess you can’t really just audition those in a studio environment. You would really have to try them on a show, because there are so many different factors going on on a live stage. So I guess you just really have to try it on a show and see if it works. Is that kind of how you’ve picked vocal microphones over the years? Like, you tried something new, and you did a whole show, and you’re like, you know what?

For many different reasons, this really works or for many different reasons, this really doesn’t work in this situation.

Yeah. I mean, I went through a few different microphone changes with the Counting Crows. We landed on the Telefunken stuff, but we had an opportunity to do it, and it had to be something that both ends of the snake agreed on. Me and the monitor guy had to sort of be like, okay, we’re going to try this today. He would have to get it done, and he would have to be happy with it being something he could work with. So that was as important as it was for me to have a good sound up front.

And so there were times when we disagreed. I mean, sometimes it’s hard to pry a 58 out of someone’s hands and give them something else, just because of familiarity, especially for an engineer that’s been with a band for a long time. Changing things like that, it’s sort of like, you don’t want to take the blankie away from the singer or something like that. So I think you have to establish that there’s a reason to change, and you have to have an opportunity to try something in a somewhat controlled environment.

You have to know that the venue is not terrible. I’ve done it in the past where we’re like, hey, let’s try this. We try it, it just sounds super weird, and it’s like, today is not the day to make the change. And there’s also the psychology of dealing with a musician. If a singer rolls in and he’s totally checked out, not engaging with the sound check or just not giving it his all, and you’re switching microphones on him, then he gets into a show situation and he’s like, what is this thing?

And why does it sound so weird? You don’t want to be the one that gets blamed for that. So I think that establishing those changes has to be something that’s sort of a team effort, but it has to be justified, and you have to have an end goal of success: it sounds better or works better for whatever you need. That being said, a lot of what I would do in tuning a PA, and even in implementing PA design in a tour situation, would be to keep the center vocal position as clean as possible on stage.

So there’s a lot of work we did to keep the stage as quiet as possible, to leave the least amount of low-end rumble and stuff like that. That would establish a consistent space for the singer to work in, too, and then the microphone reacts more efficiently, with less interference from all those other factors. So it’s a loaded question; there are a lot of different variables that go into it.

Shawn, how do I set the high pass filter on a vocal? As high as it’ll go until it sounds bad?

Yeah, I know.

When does it sound bad?

Yeah. And I think I would probably go higher rather than lower on a high pass, just to keep that clarity. I mean, it’s easier to get a thinner vocal above a mix anyway. But 150 is somewhere I would hover: between 120 and 150. And then it depended on things like, if I was pushing the gain on something, I would tighten it up. I mean, I’m not scared of using the high pass filter or lots of EQ.

There was a point in time when I was actually using a channel of a Lake processor to EQ my vocals. So I was getting surgical, slicing things out and cleaning it up so that I could get the most gain before feedback. So it really depends on what you need to do and what the vocalist sounds like. That’s sort of the thing: if somebody’s got a really low voice and you high pass too much of it, you lose all the character. It’s that balance of what works for the singer and then what works for your mix, too.

I mean, what you want is for people to hear the vocal. I’ve been told to turn the vocal down or bury it before, if someone’s not super competent. But for the most part, I think people want to hear the voice, and you need to be able to get it up there. So, yeah, I pretty much raise it till it starts sounding thin, then roll it back just a little bit, just so it has some body. But in a live situation, you wouldn’t be adding, like, 100 Hz to the vocal; that sort of stuff that gives you some girth in a studio recording just doesn’t need to exist

in most live situations, I think. But obviously that’s stylistically dependent, and room and system dependent.
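
For readers who want to hear this move rather than imagine it, here is a minimal sketch, assuming Python with NumPy and SciPy and a 48 kHz sample rate; the 140 Hz cutoff is a hypothetical starting point inside the 120 to 150 Hz range Shawn describes:

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 48_000  # assumed sample rate in Hz

def vocal_highpass(x, cutoff_hz=140.0, order=4):
    """High-pass a vocal track; raise cutoff_hz until the voice
    starts sounding thin, then back it off slightly."""
    sos = butter(order, cutoff_hz, btype="highpass", fs=FS, output="sos")
    return sosfilt(sos, x)

# One second of noise as a stand-in for a vocal recording.
vocal = np.random.randn(FS)
filtered = vocal_highpass(vocal, cutoff_hz=140.0)
```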

I mean, if there’s a bunch of wedges on stage and side fills and they’re standing near the sound system, then you have loads more low frequency buildup than if the stage is super quiet.

The monitor guys really hate it when you go to their console and roll their high pass up, too. But that is something that happens when you’ve got a bunch of big wedges on a deck and the vocals on stage aren’t high-passed enough, which is sometimes monitor engineers trying to get a lot of level, trying to get that chesty feeling out of it. But the blowback from that to front of house is sometimes pretty gross, and it actually affects the phase relationship with the microphone and with the low frequency stuff coming out of the wedges.

So that’s always something, too: if you can stay friends with your monitor engineer, hopefully you can work together and be like, hey, man, that’s really boomy out here, or really thumpy, and that’s going to compete with the mix. Because as much as you high pass a vocal, if it’s still super chesty or super thumpy on stage and there’s wedges up there, that’s going to fight you all night long. So that’s something to be aware of. But, yeah, I’m not scared of getting rid of that stuff. And that kind of goes for everything: the high pass filter is your friend when you’re mixing.

Shawn, how do I set the pre-delay on a vocal reverb?

Well, I like to keep it fairly tight. I’m not a big fan of really upfront effects. I like to make them sound like I’m creating space around a vocal, but I don’t like to hear reverb. I’m not a fan of long, Lexicon-sounding things that are very apparent. So I end up using a few different reverbs sometimes, or some short delays. I like to keep my pre-delay on a reverb usually under 20 milliseconds; anything longer than that gets it too far out from the rest of the vocal.

And really what I’m trying to do is create a sound stage with my vocal effects. I’m not creating a really prominent effect, just giving the vocal a place to sit in the mix. That’s something I’ve been doing for a long time. I’m either scared of really loud effects or I just tastefully don’t like them; there’s nothing wrong with that. But I like to make it sound natural, so that I’m literally just pushing away some stuff so the vocal can sit in the middle of either the reverb or some short delays.
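
As a rough illustration, here is a minimal sketch, again assuming Python with NumPy and a 48 kHz sample rate, of delaying a reverb send by 15 ms, inside the under-20 ms window mentioned above:

```python
import numpy as np

FS = 48_000  # assumed sample rate in Hz

def predelay(send, ms=15.0):
    """Delay the reverb send by `ms` milliseconds before it reaches
    the reverb, so the effect sits just behind the dry vocal."""
    n = int(FS * ms / 1000.0)  # 15 ms -> 720 samples at 48 kHz
    return np.concatenate([np.zeros(n), send])[: len(send)]

dry = np.random.randn(FS)           # stand-in for a vocal track
to_reverb = predelay(dry, ms=15.0)  # feed this to the reverb, not `dry`
```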

So, yeah, that sounds great.

It sounds like you’re really balancing there between two sides: making an artistic choice, which would be to have some big effect that’s really visible, really audible, and then over here doing sound reinforcement, which is, what needs to happen with this effect to help this mix work?

Yeah, and I take that into the studio work I do as well. I’m always hesitant to go overboard with effects. I’m not a big delay-throw kind of guy, and obviously with the Counting Crows there’s not a lot of affected vocal. It’s pretty prominently trying to hear what Adam is singing or saying and bringing the lyrical content out. So I’m not trying to make it sound artistically affected; it’s trying to represent what’s going on. And I think intelligibility, in any situation where the lyrics are important, is really important, which goes back to the high pass filter and having the intelligibility of a vocal be there in the mix.

Just so the fans who are there can understand what the singer is saying. I don’t think there’s anything worse than showing up and being like, what did you just say? I can’t hear what he’s singing. So it’s those things: not overly affecting the voice, not pushing it too far into the mix, and then also keeping it clean with effective EQ that allows for the clarity and intelligibility of the vocal.

So let’s talk about clarity, intelligibility, and the reverb return. In that video about Tips for Mixing Live Vocals, you also mentioned that it’s important to EQ the reverb return so that it sits well in the mix. Can you tell me more about how to do that?

Yeah. My approach to EQing a reverb return, or any effects return: depending on the preset you pulled up, there’s a certain amount of, I wouldn’t quite say digital artifacts, but non-realistic-sounding space that comes back from a reverb a lot of the time. What I try to do is usually carve out some of the harsher, higher mid frequencies, take off some of the top end, and high pass some of the low frequencies in order to fit that space into what I’m doing.

And I think that’s another piece of the puzzle that allows me to create the space around the voice: I’m tailoring it to support the vocal rather than just saying, hey, there’s the reverb, all of it’s there. I usually take off anything that picks up esses, where you hear the S in the reverb. Sometimes I’ll even de-ess a reverb, put a de-esser in front of a reverb plugin, to take off some of the esses on the vocal. If I’m keeping them in the actual vocal sound, then at least they’re not hitting the effect as hard.

And then when it comes back, I’m just taming some of that high frequency information; there’s a lot of it that just doesn’t need to be there. And I think that’s a mistake people make, when they leave a full-frequency reverb in a mix, and you’re like, that’s a lot of reverb. It’s just because it’s all of those frequencies all of the time. When you tailor it to the sound you’re looking for, I think it gives you a more natural sounding reverb. So that’s usually my approach to that.

And the same thing if you have, like, a delay: a filtered effect, where you high pass and low pass and find a spot where it accentuates the voice and makes it something that can be a little bit ghosty in the mix, is a little more tasteful than a blasting delay that’s full frequency range.
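
Here is a minimal sketch of band-limiting an effects return along these lines, assuming Python with SciPy and a 48 kHz sample rate; the corner frequencies are hypothetical, and the parametric high-mid cut a console would also offer is omitted for brevity:

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 48_000  # assumed sample rate in Hz

def eq_reverb_return(ret, hp_hz=250.0, lp_hz=8_000.0):
    """Trim a reverb return: high-pass the lows and roll off the top
    so the tail supports the vocal instead of washing over the mix."""
    hp = butter(2, hp_hz, btype="highpass", fs=FS, output="sos")
    lp = butter(2, lp_hz, btype="lowpass", fs=FS, output="sos")
    return sosfilt(lp, sosfilt(hp, ret))

wet = np.random.randn(FS)  # stand-in for a reverb return signal
shaped = eq_reverb_return(wet)
```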

Okay, let’s talk about the Clyde Theater. I’ve never been there; I’ve just looked at some photos online. Let’s start with some of your favorite things. What’s one of your favorite things about working at the Clyde Theater? And maybe what’s one of the most challenging things about working there?

Well, my favorite part about the whole Clyde project was the fact that I was involved early on, before the theater even opened up. So myself, along with the stage manager and the monitor engineer we worked with there, Drew Consolvo, we worked together to adapt and deal with any problems that we foresaw before we even opened. We weren’t part of the design process, but when we got in there to do the final install and commission the system and get everything going, we modified a few things to make it so that people on the road rolled in there and were super happy with everything that was there.

We made things accessible. We made things flexible. It was clean. All of the cables and adapters, everything was ready to go. Everything was dialed in, and that was really a nice thing. And we got feedback from a lot of different artists that rolled through, like, this is one of the most awesome venues we’ve been to. Which is cool, because we’re in the Midwest. We’re stuck in between a lot of people coming from different established venues that may not be the most fun places to do shows.

And so we really made it so people had a great day at the office, and that’s something I think we’re both really proud of: having a venue where people could show up and just have an easy day. There was a ramp load-in; our truck dock was 20 feet from the stage. There were no stairs. There were no stupid things to make your day miserable. We had everything we needed, it was accessible, and it was a nice facility that sounded really good.

We had all of the things that make a day on the road easier for someone that’s been out for six weeks or whatever, so when they roll in, you can give them a bit of a rest because everything’s covered. That’s something we strived for, and it made it great for doing shows. It was an awesome experience with pretty much everyone that walked in, and, yeah, I don’t think there’s really anything I didn’t like about it.

I mean, it was a learning experience being a house guy for a change. I had a lot of great experiences mixing a bunch of random bands I never really would have gotten a chance to mix, and having fun with that. So, yeah, it’s a great venue. Hopefully you’ll get to see it at some point.

Yeah, that’d be great. Well, I just want to say thank you, and I have so much gratitude for people who care about this stuff, because I have been on tours where we’ve showed up at tiny places where we’re figuring out how to turn the electricity on and we’re carrying cases up tiny stairwells, and giant places where you are pushing things up and down giant ramps and there’s not enough forklifts and all these problems. And then when you show up at a place that’s just easy to work at and seems like it’s designed with this kind of work in mind, it’s like, oh, God, tears start to come to your eyes.

Between Drew and myself, we’ve been to every venue on Earth that sucked. We’ve been to all the good ones. And so we were able to bring that experience. I was advancing some shows with some people, and I was like, yeah, I know, we got you. And I could tell they didn’t believe me when I said, yeah, I know, we got you. We’re in Fort Wayne; it’s a hard thing to believe. But it’s like when you tell someone, yeah, this is a great venue, it’ll be an easy day.

And they’re like, are you sure about that? And I’m like, yeah, it’s great. And so it’s nice to be able to do that, because I know how it wears on you when you’re on the road, back to back in miserable venues, up the stairs, you can’t fit things. Oh, there’s an elevator, but it doesn’t fit any of our cases. All these sorts of things just pile up. So when you can roll into a venue where there’s hot water and clean showers and good food and a nice PA and a clean backstage and lots of storage, all of those things, I don’t know.

It’s just a reprieve from slugging it out. And I feel like there are more venues like that now. My understanding is that through the pandemic, a lot of venues put a little more effort into doing some upgrades and getting some stuff together. So hopefully when touring is fully fired back up, which is looking like sooner rather than later, there will be more awesome venues on the circuit. And I think in the end the fans appreciate a band and a crew that’s happy, because they hopefully put on a better show when everyone’s in as good a mood as they can be.

Okay.

Shawn, you mentioned the final install. You mentioned having everything dialed in, and you mentioned making people happy. So let’s talk about that. Recently, a friend of mine installed and calibrated a sound system in a small church, and I actually came up to observe a little bit and do some tests of my own. So I happened to be there and could see the whole thing go down. And when I asked them about it later, it turned out that the client was unhappy with their work. Now, the client wasn’t there while I was there.

My friend just did all their work and then left, and later on the client listened to it. So my friend heard secondhand that the client had asked for it to be improved. The company sent out another tech, and my understanding is that that tech basically reset everything back to the manufacturer defaults, and the client was happy with that. So I don’t think there’s really a right and wrong here, but I do wonder if there’s some way that I could end up on the side of the client liking it more often than not.

So is there some way for me to get inside of their head to kind of predict the characteristics of what is good sound to someone and then try to produce a result that they’re happy with, or is the only way to really guarantee that to just sit with the client in the room and basically audition changes until they’re happy?

Well, I don’t know. With it being so subjective, I feel like it is a personal decision. People can like things that sound bad to me and think they sound good. I also feel that, even on an install, when we commissioned the Clyde system specifically, we had one of the techs from JBL come out, Raul Gonzalez, who’s an awesome systems engineer, all over the world doing JBL stuff all the time. We went through it, we tuned it, and he did some of his tricks to get it to where he thought it was cool.

We got into it, and after mixing a few shows on it, I felt it needed some changes, so we modified it. But I never really stopped modifying it. If an engineer came in who I could tell was a good engineer, we’d talk about the system. I’d be like, hey, what did you like about this? What didn’t you like? What could we improve? And we had limitations: we had a sub cavity under the stage, and we only had so much space. The subs specifically were always kind of a tricky thing, but those were physical limitations, so we could only do so much with them. And so with that venue, it was always a constant: can I make it half a percent better? Can I make it a little bit better? Can I tweak this? So it was kind of a work in progress. But I can see that someone imparting personal taste on a sound system could backfire, because if it’s not something that the end user or the client is accustomed to or likes, or if someone is mixing on it and they don’t really understand what you’ve done to it, no fault of yours, but just the general concept of tuning a PA or a room specifically,

and they’re just not understanding what you did, I could see that going sideways. But in a venue or an install, I roll into a lot of places and they’re like, oh, yeah, this is done. We had it installed. That’s fine, don’t touch it. But hold on, what is going on here? You ask a few why questions, and then you’re like, hey, can we improve on this? It’s a never ending process. I mean, even still, in the studios I’m mixing in: I got new speaker stands last week for the studio so I could move the subs we’re trying out underneath the speakers, so they’re in phase with everything, and it totally changed the dynamics of this room. So I’m constantly tweaking stuff, even in a studio capacity.
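
The “in phase with everything” move comes down to matching arrival times from the subs and the mains at the listening position. A back-of-the-envelope sketch in Python, with made-up distances:

```python
SPEED_OF_SOUND = 343.0  # m/s, assumed room-temperature value

main_dist_m = 2.0  # hypothetical main-monitor distance to the mix chair
sub_dist_m = 2.6   # hypothetical subwoofer distance to the mix chair

# Delay the closer source so both arrive at the same time.
offset_ms = abs(main_dist_m - sub_dist_m) / SPEED_OF_SOUND * 1000.0
print(f"Delay the closer source by roughly {offset_ms:.2f} ms")
```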

I’ll pull out Smaart, do some measurements, move some mics or speakers around. And so I always feel that there’s an opportunity to improve, even on an install, even on something that someone says is amazing. With so many people having so many good ideas and so much technology to improve stuff, I don’t think any one person can get everything completely perfect. And with the subjectiveness of it, you’ll get someone that comes in and is like, well, I need more sub. I’m mixing a DJ gig, and this just doesn’t have enough sub. We need to bring in more subs. And you’re like, that’s crazy, there’s tons of sub; but for that guy, there’s not.

And so those are the kinds of things where you have to adapt. But at the same time, I always feel that there’s room for improvement, even on something that I think might have been some really good work on my end. Someone’s like, hey, this is what I’m hearing, and I’m like, oh, cool. That gives me a new perspective to listen to it, and then I can possibly approach modifying or adapting what I’m doing to accommodate a fix, rather than being like, you’re stupid, I don’t need to listen to you.

Let me try to hear that through your ears.

Yeah. And I think that mixing by committee a little bit kind of helps. I mean, not too many cooks in the kitchen, but some trusted ears. I always appreciate it, even on mixes, when someone comes out to a show and it’s like, hey, what do you think? I would actually like to hear some feedback. It’s nice to get collaboration on things. So, that being said, about install stuff: I think there’s a lot of room for improvement in a lot of different situations, and I’m sorry that you guys didn’t win with your client there. But you could have someone that has a completely different approach to understanding or enjoying sound than you do, and when they’re paying, they’re in charge.

So there’s those dynamics as well. But yeah, I don’t know. I’m always up for trying new things or trying to improve on a situation.

No, this is great. As you’re talking, I’m realizing that if you’re my client, probably a better way to go about it, instead of me saying, hey, Shawn, I’m really special and I’m really smart, and therefore I’m going to knock it out of the park on the first try, would be to say, hey, I’d like to build a relationship with you. With this first pass, I’m going to get it to a place where I feel like it’s consistent across the space, and then I want to get you in here and have you listen to it.

So then I can make adjustments. And I don’t know what the right way to say it is, but I’d love to have a relationship where I can help you improve this over the next X days or X weeks. Maybe we have a relationship where I come back in after you’ve had a few shows and take your notes and make changes. That’s probably a better way to approach it than trying to pretend I know what’s best for everyone in the world.

Yeah, and everyone has a different workflow. Especially, like, House of Worship stuff is tricky, because you have a lot of volunteer sound people, too, that are trying to make the best of what they know how to do. So having something that’s easy to control without a lot of bells and whistles is always a better approach in those sorts of situations. But yeah, checking in three months down the line, popping in and checking the mix: hey, how’s this working for you? What can we do? I don’t know, those sorts of improvements.

I’m always looking for that. And every time I hear something, the more I listen to things and the more I experience things and the more information or knowledge I gain, I think the better equipped I am to either make suggestions or question, what did I do last year? Is that the end-all, be-all of what I’ve done, or should I improve on it? Should I look for a solution? Not that I’m trying to create work for myself, just baby steps on stuff. But yeah, with a client relationship, like I mentioned before about dealing with artists, I think a lot of success in audio engineering is based on relationships, whether it’s with the artist, whether it’s with the venue, whether it’s with your A2. Having a team of people, or a good set of relationships with people, I think leads to success.

So I think that shows up there as well. It’s being able to go back and continue working on something, and removing a little bit of the ego that gets in the way of, hey, I did the best job I possibly could. Well, for you, it might be, but maybe in that case what the client needed was something else.

Sure. Shawn, let’s talk about system latency for a second. I’ve had a couple of people talking to me about this recently. I’ve seen Robert Scovill talking about different things going through the console and console latency a lot recently, and at the last Live Sound Summit. Were you looking at system latency when you were thinking about what would go into the Clyde Theater? Does system latency influence your decision making when you’re setting up the console, speccing the sound system, that kind of stuff?

I think the only time I really get super concerned about it is when I start getting into, like, crazy plugin world on a digital console, and I start pushing the mix so far away from the band that they can hear it. I think that’s a dangerous situation to get into, where you’re inducing more latency than you need to just to throw some more bells and whistles on your left-right bus and things like that. As far as driving a PA, I try to keep things as efficient as possible.

Everything we were driving at the Clyde was AES, so we tried to keep that as tight as we could. There wasn’t a lot of latency induced besides the processing I would do. And I got deep into that when I was mixing on an Avid VENUE and had a ton of plugins and stuff like that; there were workarounds to try and get delay compensation to work. But I feel like, for the most part, with a lot of the networked audio systems, audio is moving pretty quick in the digital realm these days, so I’m not as concerned as I would be if I was just piling up plugins on the console and creating my own problem at front of house.

I think that’s where it becomes an issue for me: when I start affecting other people’s performances, when I’m doing things and someone’s like, why can’t I hear? We actually talked about it the other day. We were mixing in one of the Sweetwater theaters here; it’s an older venue with a bunch of processing, because it’s split out for broadcast and split out for hearing assist and all these different things. But then we also mix wedges off that for people on stage, and it’s like, I can’t play on it.

It’s so weird. I’m like, we have to turn all that stuff off in order for you to get sound that’s quick enough for you to perform properly. And so someone brought it up the other day, and I was like, yeah, well, we’re not always doing things like that in all the rooms, but I’m aware of it. Like, if you’re mixing monitors from a console that’s got a ton of plugins on it, someone’s going to be like, this is so strange. When you’re adding 100 milliseconds of latency in processing and stuff, it’s not fun for anyone to deal with.

But yeah, other than that, there wasn’t much of a concern. Most of the system at the Clyde was spec’d before I got brought in, through Harman, with the JBL and Crown stuff that was brought in. JBL has been a big part of Sweetwater for many years, so it was the logical choice for us to get that rig into that room. And that was my first time really working with the A12s and all that stuff. It was impressive. I mean, I had mixed on a lot of rigs and spent a lot of time working on them to make them sound how I was hoping they would sound, and hearing the A12 out of the box

was a pleasant surprise. And from there it was a really enjoyable experience. There was a little bit of learning the BSS world and some of the processing that I hadn’t been so familiar with; I’d been pretty much strictly on Lake processing and all of that sort of stuff. But that being said, with all of those things, latency wasn’t ever really an issue for us. It was just whether I was causing problems by trying to do too many happy things.

Okay. So it’s fixable. It’s not like you’ve baked it into the whole system and realized that you’re stuck now.

Yeah, there’s a few things, and I always know that there are variables that you can’t control in every situation, and I have to be accepting of those things. Otherwise, I would lose my mind trying to think of everything that could possibly not be perfect. So adapting to what those are is fine. But, yeah, the latency is my own doing for the most part, if it’s ever going on.
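
To put the 100 millisecond figure in perspective, here is a quick back-of-the-envelope sketch in Python, with hypothetical per-plugin latencies, converting processing delay into the equivalent extra distance from the PA:

```python
SPEED_OF_SOUND = 343.0  # m/s, assumed room-temperature value

plugin_latencies_ms = [2.1, 5.8, 12.0, 80.0]  # hypothetical plugin chain
total_ms = sum(plugin_latencies_ms)

# ~100 ms of processing delay sounds like standing ~34 m farther
# from the PA, which is why performers feel it immediately.
equivalent_m = SPEED_OF_SOUND * total_ms / 1000.0
print(f"{total_ms:.1f} ms of latency is about {equivalent_m:.1f} m of extra distance")
```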

So, Shawn, you’ve mixed in some of the worst places in the world and some of the best places in the world. You’ve just had a lot of experience, and I’m sure you made a ton of mistakes along the way that you learned a lot of lessons from. So I wondered if maybe you could pick out one of them to share with us, maybe something that was especially painful or an especially big lesson for you, and just walk us through what happened.

I think probably the most embarrassing audio mistake I made was when we were doing Pinkpop, a large festival, I think in Belgium, maybe five or six years ago. We had a decent slot; there were probably 40,000 people in front of the stage, a sizable European festival. And we had a rotating front of house zone, so everybody had their console on wheels, and we’d get pole position and roll the consoles into place. So I built my rig, and it got forklifted out there.

And it was bouncing around, and I had lots of outboard gear and a whole bunch of different things. So I got it all together, wired it all up, strapped it all together with big truck straps. So I had this big rolling island of gear that just went into place. And I had a couple of friends and a couple of people from some other bands come stand by me while I was mixing; they wanted to hear the band, and I was all excited. It was getting broadcast, so I was mixing a broadcast feed as well, so the pressure was on. And so I line checked everything through headphones, so I got the PFL and everything.

All my inputs were good, but I never checked anything through the PA, which was my big mistake, and which I would probably never do again. So they start a song. The guitar intro goes on for a while, and the drums kick in. I got no drums. Shit, no drums whatsoever.

You can see a tiny person in the distance hitting things.

But no drums playing. I’m like, oh, my God, where’s the drums? And I’m looking down, and I look over, and I’ve got, like, outboard compressors on the drums, and they’re both dark. I hadn’t even looked over there. So I ripped the strap apart, ripped the cases open, threw off the lids, and the IEC cables had just fallen out of the back. I plugged both back in, the drums came to life, and I think by the chorus I had everything together. That was kind of one of my more embarrassing moments.

But also, I should have checked all of this stuff; this was my fault. And everyone was kind of impressed at how fast I moved to fix my own problem. But when you get into a complicated situation and you have a lot of complicated extra bells and whistles, make sure that you have them functioning, because it’s not cool when someone’s like, hey, why didn’t that work? Well, I had all this stuff, and then it wasn’t plugged in, and that’s my fault. And I really doubt that anyone in the audience really noticed, but it broke my rule of seeing someone play something and not hearing it through the PA, so that kind of bummed me out. I don’t know. That would probably be the biggest, clumsiest thing.

Why did the PFL work, but not the signal going through the system?

Well, I didn’t check it. I didn’t open the PA up; I just played some music through it and was like, okay, my outputs work. But I didn’t pass signal through all of my messy routing and the things I was doing for fun, which maybe should have been a little more straightforward. So simplicity probably wins in live sound. I would say that taught me some lessons about double checking, or just being simpler in my process. And I think it opened my eyes: it’s more important that everything just works than to have all of these cool things. Better to have a show than no show.

Shawn, I have a few questions here that came in from the Internet. Bobby B says, best practices for building a rider. So, kind of a general question: are there any tips you want to give Bobby B for building a rider?

Put on things that you can use, and then make sure you very much highlight what you’re not willing to use. I think that’s pretty much the biggest takeaway. I mean, I was pretty spoiled in the sense that we traveled with control all the time, so we always had a console and processing and stuff like that with us for a whole tour, for the most part. But there were some PAs that I just refused to mix on after a while of repeatedly bad shows and bad coverage. Obviously, I could mix a show on them.

But in the case of a few of them, it really gives the audience a bad experience, because the coverage is so inconsistent. So that was something I was really adamant on, if there was a speaker system that I didn’t like to use. I won’t name names.

Wait, no. What is it? Are you not going to tell us which speakers you’re not willing to mix on? Well, I guess I’ll have to do a show with you then and get your rider, and that’s how I’ll find out.

Yeah. And I really feel that with the development of sound systems in the past ten years, if anyone has a PA that’s 15-plus years old, it’s going to be beat down. So do anything you can to make sure that you have new, functioning speakers: I’m not doing a show until someone signs off that all the drivers are functioning and everything’s working as it should. Because at that point you get large format point source PAs and stuff like that in rooms where it should be a line array.

Or if it’s a venue that brings in a PA, making sure you get something that fits your needs, rather than whatever the cheapest option is, is always a fight with a promoter and a venue, too. But other than that, know what you need to pull off your show. I think there’s been a large transition from people being reliant on house consoles to people traveling with an X32 Rack for ears and maybe an X32 at front of house.

And that seems to be mid-level touring in so many places now. I see so many bands that have their own gear, all dialed in and ready to rock, so there’s less reliance on a venue to provide consoles and things like that. But that being said, if you’re really particular, then bring what you need. I think that’s the takeaway about relying on other people to provide you things: if you need it, you need to bring it with you, because if you get into touring the world and you need a specific piece of gear, it’s going to be tough to find.

People are going to blow past that part of your rider when they’re not super concerned about the weird audio gear you need to make your show happen. But a functioning PA that represents your mixes is, I think, a paramount piece of the puzzle. So outline what you need to do your job and figure out what that is. Like, if you’re mixing an acoustic act: I need a DI and a 58 and some speakers that function. If you’re mixing an orchestra and you need, like, 60 DPAs, then you’re going to have to make sure those people provide what you need.

So it’s figuring out what your limitations are for providing your services or doing your job. What’s the least you’re able to do your job with? And then, what are you comfortable doing your job with? And make sure that your production manager, or whoever is advancing the shows, fights for that for you. I think that kind of covers it without going too deep into the details. Really, the most important thing is that I just didn’t want to have a PA that sounded bad for the show, and that was kind of it. It depends on the size of venues you’re working in, too.

I mean, all of that is scalable to some extent. Okay.

Gabriel P says, how was it working with Avril Lavigne?

It was awesome. I spent about 18 months touring with her; I think I had just turned 19. We went to 49 countries in those 18 months, did a six-week tour of just Japan, and bounced all over the place. We pretty much did every TV show that was being broadcast at the time, all of the daytime and nighttime shows and MTV and all that stuff. So it was an amazing experience. I saw a lot of places, met a lot of really great people, and made it to a bunch of different venues and a bunch of different festivals.

And so it was a really awesome experience. The guys in the band were really great. She sort of changed up her band after the tour that we did, but the bass player, Charles Moniz, scooped a gig with Bruno Mars just after that and has been working with him since then as an engineer. I think he’s got five Grammys now. So there’s a few people that went on to do some great things. And that was the tour I did with Jim Yakabuski, who’s obviously an industry legend; I learned a lot of things from him and still stay closely in touch with him.

So, yeah, it was a great experience. I learned a lot of things. There was an opening act on that tour: Butch Walker was opening up, who’s a pretty successful producer and solo artist himself. But his engineer, Paul Hager, is also a super talented live sound and studio engineer. And so I learned a lot from Paul, and it was a really cool experience to be around him. He would go into studios on days off, and I would tag along and stuff like that. So it was an interesting thing.

And that continued on. He was actually the front of house mixer for the Goo Goo Dolls when I worked with them. So I made some long-term relationships with some people on that tour. I got to see a lot of places and learn a lot of things. So it was very cool. Okay.

Greg McVeigh says, ask him when he’s going to ditch the real job and mix touring acts again, delivering that with as much sarcasm as possible. His work with Counting Crows was just fantastic.

Yeah, Greg’s a great guy. And yeah, he came out. Actually, I’m trying to think; I think the last time I saw him was in San Diego, and I’d had a sandstorm console incident. There might have been another real one. We were in a poolside venue in Vegas, and a monsoon rolled in, and I ran onto the stage to save all of our guitars and left my console uncovered, and it ended up with, like, a sand drift over it. And so we went to, I think, maybe the Orange County State Fair, or somewhere near San Diego.

So anyway, I had a console full of sand, and I was swapping that out, and I think that was the last time Greg was out. So, yeah, I love doing that, but I’m really enjoying having employment through the pandemic. Sweetwater is amazing; we do super cool things here, and stability is something that I’ve never really experienced before in my professional career. The hustle of finding the next gig, finding the next record, finding whatever, is something, too, and I don’t take this for granted. I quite appreciate my opportunities here, and it’s just a great company that is endlessly growing.

And we’re doing a lot of cool things. I get to be involved with a lot of stuff, and I have no idea what the future holds for me, but right now, this is super awesome, and I’m really happy to be here.

Shawn, what’s in your work bag? Are there one or two unique pieces that you have to have with you on every show, or something interesting that might be fun to share with our listeners?

A bunch of iLoks.

Oh, really?

Okay. Cool. I don’t know. Like I mentioned, the Telefunken microphones are something that I really like to use; I do have a few of those I travel with, for, like, my drums, guitars, vocals, all this sort of stuff. So if I take a couple of things to a gig, it’s probably that. I mean, adapters and all that fun stuff; I like to have a good adapter kit. I like to have cable if I need some more options; if I’m mixing on an analog console or something, I’ll have Y cables to mult the snare and some stuff like that.

I kind of ditched headphones a long time ago; in the live realm I use in-ears for checking anything when I’m mixing. I think the isolation you get from them is really helpful. So I have a few different sets. Greg was very helpful in getting me sorted out with some Ultimate Ears UERMs back in the day, and those are really great pieces that I used and trusted to do some of the mixing and listening for the Counting Crows live stuff. I would go back to the tour bus after the show and master the show on my in-ears and a set of headphones to double check them.

Those are the kinds of things I travel with. Usually if I’m traveling somewhere, I have a Universal Audio interface and some UAD Satellites to have some of their processing for remote stuff, or if I’m working on a project, I can take it with me. A laptop. Usually if I’m going somewhere, I’m like, oh, yeah, I need all this stuff, and then I don’t use any of it. I drag around a lot of things. I toured for years with too much: hard drives and all these adapters and stuff.

And I mean, the thing these days is that my laptop now has USB-C, and I need adapters to get anything plugged into it. So it’s the adapter-farm situation. But, yeah, it’s constantly evolving. I don’t have a set gig bag these days; it’s just sort of what I need for whatever I’ve got going on. If I’m coming to and from work, usually I’m taking my iLoks home and then bringing them back to work, and that’s about all that

I move back and forth, which is nice.

Shawn, what’s one book that has been really helpful for you?

Well, I don’t know. I appreciate a lot of different books; I’ve been reading more lately. One of the books that I found had the most in-depth information about audio engineering is a book called Recording The Beatles that I got probably ten or twelve years ago. An acquaintance of mine, Brian Kehew, works with The Who as their keyboard tech, but also as a studio engineer, remixing B-sides and outtakes; he goes through the archives at different record labels and does these weird releases. He put together this, like, 20-year project of researching every single recording The Beatles ever did, and what gear they used and how they bounced the tape down, and all the gear from Abbey Road.

And it’s like this Bible. It comes in, like, a sleeve from an old tape reel. That’s one of my prized possessions, and it’s an amazing book. If anybody’s ever geeking out about something, I’m like, here, check this book out. Supposedly they’re worth a ton of money now, because they stopped printing them, and they were kind of pricey when they came out. But that’s one of the coolest books I’ve ever seen. And I loved reading Chairman at the Board, which is Bill Schnee’s autobiography; he’s a really famous producer and engineer.

I don’t know. I try and dig into some audio-related books, and some self-help and leadership books and stuff like that if I’m trying to get motivated to do something. But other than that, if I see something that comes out, or if I hear someone talk, I’ll sort of chase down what they’re talking about and then read about it, usually.

Okay. Cool.

Shawn, do you listen to podcasts?

I do.

Okay. So I want to know: what are the one or two that you have to listen to every time a new one comes out, in the audio field?

Working Class Audio, a podcast that interviews mostly studio engineers, but it’s super interesting because it removes a little bit of the technical side of things and talks about how people have navigated their career paths, more like, how did you get through this financial situation? How do you deal with having a job and doing this? It really is an interesting perspective. I mean, obviously I love gear and I love geeking out about stuff, but it’s also nice to hear how other people survive in this industry, how they work in this industry, and how they get work.

So I’m usually pretty religious about listening to that one. And then Russell Brand has a really interesting podcast called Under the Skin, and that’s a weekly thing for me. I usually get into that. Man, it’s all over the map who he interviews on that, and the topics could be anything. So it’s super interesting. That one I listen to every week as well.

Maybe there’s something you can help me with.

Yeah.

When I listen to Working Class Audio, one thing that sticks out to me, or struck me compared to my own podcast and everyone else’s, is how he manages to have really honest discussions about money. And as soon as I heard that the very first time, I was like, oh, I want to do that in my podcast, but I’ve never had the balls to do it. I don’t know if I just can’t figure out the right language, or I just come from a background where I have a tough time talking about that.

But I don’t mind asking you, like, how did you get that job, or how did that thing work out for you in your life? But to say, can you tell me about the economics of being Shawn Dealey? I’ve never really figured out how to ask that well. So do you have any ideas? How could I ask live sound engineers, in a way you would appreciate, to start a conversation about how you put together your financial life so that you can survive and have the things that you want?

Yeah. How do you have somewhere to live and eat on a monthly basis?

I want to say, where do you get money? But this is really interesting for people, right? Because we all put our careers together in different ways, and that’s actually a really interesting topic. Some people have other jobs, or they’re landlords, or they’re selling stuff on the side or whatever. And that’s all really interesting. So it’s like, how much of your life is getting checks from doing an audio gig versus something else?

Yeah. Man, to engage with that subject, you kind of have to have an idea of what you’re getting into. I feel like on that podcast he does a good job of sort of dancing around it while also engaging with it. So it’s not, well, how much money did you make on this? It’s, were you successful in this, and how did it come together? And I mean, I have never really worked for a company before this. I’ve been my own company. I’ve been an independent contractor. I’ve always done stuff where I’ve had to negotiate and ask people for money and chase them down for invoices and all that stuff.

So this is my first time having a job where I get a paycheck every two weeks, and trust me, it’s awesome. But that being said, the full-time positions in the audio engineering field that are consistent salaried positions with health benefits and all that stuff are few and far between. So it’s a touchy subject, because some people really have to grind and grind and grind to make ends meet, and some people have some success. And I think that’s where the relationship aspect of all of this comes into play: who do you know that can get you your next gig?

How does that sort of play into all of it? Without actually asking about money, I think those are the kinds of questions: how do you get your next gig? How do you make sure you get paid? I don't know, to me, I've always kind of been bad with that, too. I'm not a great businessman, and I'm not super inclined in that direction. So for me, the fact that I have the safety net of a company that's supporting me, behind me, and paying me, that's a really amazing thing.

Whereas when you're an independent audio engineer, you're like, okay, cool. So I was in the studio today and I made, like, $300. Then I went and mixed at a bar and I made $100. Some of it's cash, and some of it goes to my business, and all those sorts of things. The accounting for being an audio engineer is a mess. It's tricky. You try and buy gear, you try and write stuff off, you sell something, all these sorts of things that kind of come together to allow you to live and do what you like.

It's a fine balance, but, yeah, I don't know how to broach that subject in general. Some people are super transparent about it; some people don't like talking about it. But I think it's a struggle. And that's something I see, too, when I speak at a couple of local colleges where people are in audio engineering programs. It's like, if you're not already hustling, you've got to start hustling, or this is not going to work.

You're not going to get work in this industry otherwise. It's not an easy field to get into. There's a lot of competition; there are a lot of people that want in on this, doing what I do. I think it's amazing. I love my job. I can't think of anything else I'd rather be doing than what I'm doing right now. So that being said, I'll fight for my gig. I'll fight for what I need to, because people want to do this, and people are competitive with it.

I think a big part of it is the journey, and maybe the identification with: you've gone through the same struggle that I have. So maybe I could approach it by asking people about their business journey or their financial journey, because I know that for me, I spent most of my life going through cycles of going broke and living from paycheck to paycheck. It was only in the last few years that I started getting enough money coming in that I'm not constantly worried about money.

And so maybe I could approach it that way and ask people: tell me about your financial journey. Because this is not a job where you can survive without understanding the business side of it, without understanding the economics of touring and the economics of shows. You have to have a grasp of that. And I think there are other jobs where you might go through your entire career and never really understand how the company makes money.

You just understand that you do your job and you get a paycheck, but it doesn't really work that way for us. So you're giving me some good ideas. I think that would be a thing to ask. Maybe we're already going pretty long in this interview, but if you just had, like, a short answer: could you tell me about a time in your life when you were starting out, and there was probably a period where you were just living from paycheck to paycheck, wondering, will I have enough money to buy food and pay the rent?

Was there a transition where you had enough work coming in, or enough money saved, that that wasn't the dominating fear in your mind?

I’m still waiting for that to happen.

This is where we introduced the GoFundMe for Shawnee.

No, I'm good. But that being said, there was always a motivating factor for finding work. In my touring career, I don't think I ever regressed in my pay scale. Every tour that I moved to, I was making more money. So I was growing my value, and people were recognizing that. At least my touring world was consistently growing. Now, the recording side of things was always an uphill battle of figuring out how much money people had and what I could get them to pay me to work for them.

And that was always tougher to negotiate. At least with a tour, I could be like, hey, I'm going to work for you guys, what is the pay? We'd negotiate a price, and then I'm locked in with that. And depending on what happened, I got pay raises and stuff like that. So that was always cool. But my driving factor was always like, okay, cool. Well, I need to pay rent at the studio. I need to pay rent at home.

I need to eat. What am I going to do? I've got to find work. And that was the thing that drove me. It's like, I've got to buy gear. Okay? I bought more gear than I should have, and I need to eat, and I still need to pay rent. So what am I going to do? Those sorts of things always pushed me to keep going and find the next thing: okay, well, then selling gear is what we're going to do this month, and I'll move a few pieces of gear that I don't use or that I actually didn't need. That's something else I learned later on in life: you don't need to own every piece of gear on Earth.

Those are the kinds of business decisions that I should have paid attention to early on, when I was trying to build a studio, buy a bunch of gear, and own more stuff than I needed to. I could have saved money and established my future rather than putting it all into things that really didn't get me any more business. It wasn't as much about the gear you had, but more about the attitude you brought to the table.

But the thing that always drove me was that I needed to continue to live, and I needed to continue to find work. So I networked and made friends. I went to shows, found bands to record, and all that stuff sort of put me in the situation that I'm in today. I got this job at Sweetwater through a connection I made being on the road. I have a career trajectory that I think I can link to three people for 15 years of work.

I met the people from the average tour when I was doing my first tour: that tour manager tour-managed the Googles, and the production manager on that tour did the Counting Crows. Those people got me the work that lasted 15 years. Those kinds of connections, where you establish something, do the job, and keep in touch, are pretty invaluable: a few people who trust you and are willing to put their name on the line for you.

But, yeah, that's not very short, but that's kind of where I'm at.

Shaun, where's the best place for people to follow your work?

SweetwaterStudios.com, under the Team tab. Check in with me there; my email is there. The only social platform I'm active on is Instagram, at Sean Dealy, with lots of photos of beer. Yesterday it was ADATs: I was transferring ADATs and hooking them up through Dante, and it was kind of an interesting throwback. I engaged with new technology and old technology, and it was actually quite simple, so that was kind of fun. But yeah, mostly some geeky pictures of studio stuff and the people that we're working with here.

So I like to try and share what's going on.

All right. Well, Sean Daly, thank you so much for joining me on Sound Design Live.

Appreciate it. Thank you for your time.

Single channel RTA targets to improve your mixes

By Nathan Lively

How good is your live stream mix? What does it sound like on the audience end?

How do you quantify that? Loudness metrics are helpful. Like a lot of other sound engineers who have been doing more broadcast gigs during the pandemic, I discovered the Youlean Loudness Meter. I monitor the live stream on a mobile device with earbuds to experience what the target audience might be hearing.

I still found myself wishing I had another form of reference.

For live in-person events I normally leave Smaart running in two channel transfer function mode to keep track of how the sound in the room might be deviating from the mix I’m pushing out of the console. In broadcast, there is no second channel to compare.

Now that I think about it, I suppose there might be a clever way to set up a transfer function that references the console output and measures a streaming output, but would they stay in sync? Would the measurement even be coherent? Anyway, I find myself mostly looking at RTA and spectrograph graphs these days. If an annoying resonance comes up I can find it quickly, but otherwise I'm not sure how to take action on a single-channel measurement.

It occurred to me to create a target using some reference material. I can measure a quality broadcast that I enjoy, or find something recommended by the client. Since having this idea I have had time to test both. Here's a screen capture from a recent gig.

Create the target

Creating the target is relatively simple. Just play your reference material, measure it, and store it.

But if you’re like me, you enjoy a certain level of complexity. Maybe you’ve noticed.

Here’s how I did it.

  1. Record a WAV file of the reference material.
  2. Import it into Tonal Balance Control.
  3. Convert the JSON file into three separate spectrum measurements.
  4. Import them into Smaart.

Someone on FB recommended Later with Jools Holland from BBC2. I recorded some long clips of male dialogue with lavalier mics and male hip-hop with hand-held mics. You can imagine the many variations of microphone, mic placement, instrument, and style available.

Tonal Balance Control is a plugin you would typically insert on the master buss of your DAW to take a look at the average frequency spectrum of your mix and compare it to some common genre targets. It will also allow you to import an audio file to generate your own targets. Those targets are stored as JSON files on your computer.

The JSON files cannot be imported directly into Smaart. Smaart wants to see a column of frequency and magnitude values, so you'll need to reorganize the data. There aren't that many values, so you could do it manually, but I've been trying to learn MATLAB, so I decided to use that.

%% JSON to TXT
% Decode file
filename = 'your file path'; % File from Tonal Balance Control.
raw = fileread(filename); % Read contents of file as text ("raw" avoids shadowing MATLAB's text function).
S = jsondecode(raw); % Decode JSON-formatted text.
% Convert structure to table
f = struct2cell(S.frequencies_hz.Value); % Convert structure to cell array.
f(1,:) = []; f = f.'; f = cell2mat(f); % Remove the first entry, transpose, and convert to matrix.
% Pull out the three traces (high, mid, and low)
high = struct2cell(S.high_normalized_mag_dB.Value);
high(1,:) = []; high = high.'; high = cell2mat(high);
low = struct2cell(S.low_normalized_mag_dB.Value);
low(1,:) = []; low = low.'; low = cell2mat(low);
mid = struct2cell(S.normalized_mag_dB.Value);
mid(1,:) = []; mid = mid.'; mid = cell2mat(mid);
% I ran makima here to make sure the frequency resolution matches that of Smaart, but it's probably optional.
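% (Sketch) That resampling step might look like the following; fSmaart below is
% an assumed example axis, not Smaart's actual bin centers.
% fSmaart = logspace(log10(20), log10(20000), 240).';
% high = makima(f, high, fSmaart);
% low = makima(f, low, fSmaart);
% mid = makima(f, mid, fSmaart);
% f = fSmaart;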
% Create tables
highTbl = table(f,high,'VariableNames',{'frequency','magnitude'});
lowTbl = table(f,low,'VariableNames',{'frequency','magnitude'});
midTbl = table(f,mid,'VariableNames',{'frequency','magnitude'});
% Write table output
writetable(highTbl,'high.txt','Delimiter','tab');
writetable(lowTbl,'low.txt','Delimiter','tab');
writetable(midTbl,'mid.txt','Delimiter','tab');

Results

I like it! It's nice to have a second opinion. How does my mix compare to someone else's?

I know that none of the targets I’m using were created under the same circumstances, but I have used them on my last ten gigs and I’ve found them helpful. I can find something that’s bothering me or get ideas for improvements.

The most recent example was a colleague asking me if I could make the mix sound more hyped. Hyped does mean something to me, but what does it mean to them? Luckily, they sent me an example.

I measured it. I found that it was different in some specific ways. I made some changes in pursuit of a compromise. It seemed like an objective way to get more of what they wanted.

Ideas

This gave me an idea for a plugin. If we can compare a measurement against a target, the logical next step is a filter suggestion to move the measurement closer to the target. That’s what I’m doing with my eyes anyway. It would just be nice to know the exact filter gain, width, and center frequency to get there.

I was able to come up with a non-realtime function as a proof of concept. It finds the point of greatest contrast between measurement and target and then the filter that best reduces it. Sophisticated auto-EQ algorithms probably do this in a smarter way, but this seemed to work for now.

%Find a filter
% w is the frequency vector (Hz) shared by both magnitude traces.
micCompare = micMagnitude - targetMagnitude; % Find the contrast between the measurement and the target.
[pks,loc] = findpeaks(micCompare,'SortStr','descend','NPeaks',1); % Find the single highest peak.
peakFrequency = w(loc); % What frequency is it at?
startF = round(peakFrequency); % Start the center frequency at the peak.
startGain = round(pks * -1,2); % Start the gain at the inverted peak height.
startQ = 1; % Start Q at 1.
x0 = [startF,startGain,startQ]; % All starting values.
fun = @(x) paramEQmagOnly(x(1),x(2),Fs,x(3),w,targetMagnitude,micMagnitude); % Custom function to find a parametric EQ based on magnitude only.
x = fminunc(fun,x0); % Minimize the error function.
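
The paramEQmagOnly function isn't shown above, so here's a minimal sketch of what it might look like, assuming an RBJ-cookbook peaking filter and a least-squares error between the corrected measurement and the target. The name and argument order follow the call above, but the internals are my own guess.

function err = paramEQmagOnly(fc,gainDB,Fs,Q,w,targetMagnitude,micMagnitude)
% Sketch: error between target and measurement after applying a peaking EQ.
A = 10^(gainDB/40);                     % RBJ cookbook amplitude term.
w0 = 2*pi*fc/Fs;                        % Normalized center frequency in radians.
alpha = sin(w0)/(2*Q);                  % Bandwidth term.
b = [1+alpha*A, -2*cos(w0), 1-alpha*A]; % Peaking-EQ numerator coefficients.
a = [1+alpha/A, -2*cos(w0), 1-alpha/A]; % Peaking-EQ denominator coefficients.
H = freqz(b,a,w,Fs);                    % Complex response at the measurement frequencies.
filtMag = 20*log10(abs(H));             % Filter magnitude in dB.
err = sum((micMagnitude(:) + filtMag(:) - targetMagnitude(:)).^2); % Least-squares error.
end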

Here’s an example plot showing a filter inserted at 6.7kHz.

I spent a few days trying to build a plugin prototype in MATLAB, but I didn’t get very far. There are lots of examples out there of how to easily build a plugin to modify the audio passing through it, but not many to just measure the audio.

Have you tried something like this already? What were your results?

Do subwoofers need time alignment?

By Nathan Lively

It’s really important to get the low end right at live events. Research has shown that 3/4 of what people consider to be high quality sound comes from the low frequency content.

Subwoofers are a big part of that low frequency content, supporting the mains and extending the system's low frequency capability. However, subwoofers also require careful setup and alignment to ensure optimal performance.

If you’ve ever had trouble getting your low end right, then you might want to read this article. It will explain why subwoofers need to be aligned properly and how to do it.

What is subwoofer time alignment?

Subwoofer time alignment is the compensation for arrival time differences between sources at the listening position. The difference in arrival times may be caused by a physical distance offset or an electronic delay. It is not frequency dependent.

The journey of sound from transmitter to receiver is not instantaneous. If two sources are separated by any distance, their sound arrivals will also be separated. This is the common situation with mains in the air and subs on the ground. From the listener's perspective, the subwoofer is closer and must therefore be delayed (or physically moved) to be time aligned with the main.

distance offset
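
To put a number on it, here's a minimal MATLAB sketch with assumed example distances (not taken from any real system):

c = 343;                        % Speed of sound in m/s at about 20ºC.
dMain = 12;                     % Listener-to-main distance in meters (assumed value).
dSub = 9;                       % Listener-to-sub distance in meters (assumed value).
offset = dMain - dSub;          % Positive offset means the sub is closer.
subDelayMs = 1000 * offset / c; % Delay the sub by about 8.7ms to match the main.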

Does high frequency sound travel faster than low frequency sound?

In short, no.

The speed of sound in an ideal gas depends only on its temperature and composition. The speed has a weak dependence on frequency and pressure in ordinary air, deviating slightly from ideal behavior.

Wikipedia

To compare the speed of sound at different frequencies, using the Rasmussen Report PL11b values at 20ºC, 50% RH, and 101.325kPa:

20Hz: 343.987m/s

20kHz: 344.1206m/s

From 20Hz to 20kHz the speed of sound changes by only 0.1336m/s. Over a 100m throw, that works out to an arrival-time difference of roughly 0.1ms.

What causes subwoofer time misalignment?

Subwoofer time misalignment can be caused by acoustic or electrical latency. Acoustic latency occurs when two sources do not have matched distances to the listener. Electrical latency happens upstream in the signal chain, often in a digital signal processor (DSP).

Unless the receiver sits equidistant from both sources, some amount of acoustic latency will always occur. Ignoring any boundary effects, imagine a situation where the entire audience stands with ears at 1.6m height. With a subwoofer on the ground and a main speaker at 3.2m height, there is no difference in distance from each speaker to the audience, because both sources are offset 1.6m vertically from the ears. Everywhere else, there is.

Electrical latency can occur anywhere in the signal chain, but often appears when one source is processed separately and differently than the other. If two matching copies of a signal are sent to the main and sub, there is no latency difference; if the signal for the sub is processed independently through ten plugins, there will be.

How much latency or misalignment is too much?

When a main is stacked on top of a sub we don’t usually worry about the acoustic latency. When the sends for main and sub are split in a DSP we also don’t usually worry about the electrical latency.

Why is that?

Acoustical latency

The wavelengths of low frequencies are relatively large and require a big change for misalignment to bother us. For the purposes of this article I will define a significant misalignment as anything beyond 60º, or 17% of a cycle, because two equal-level sources offset by 60º sum to about 4.8dB instead of 6dB.

How far apart do our speakers need to be to create a 60º misalignment?

The operating frequency range of a Meyer Sound 750-LFC is 35-125Hz. The highest frequency has the shortest wavelength and therefore the greatest risk. The wavelength of 125Hz is 2.75m, about the height of a male ostrich. 17% of 2.75m is 0.46m, about the length of your forearm.

If we return to our example audience with a sub on the ground, the main would need to be raised to 5.15m to be 0.46m farther from the mic position than the subwoofer.

offset
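
Here is the same arithmetic as a small MATLAB sketch, using the 125Hz upper limit from above:

c = 343;                       % Speed of sound in m/s.
f = 125;                       % Highest operating frequency of the sub in Hz.
lambda = c / f;                % Wavelength: about 2.74m.
maxOffset = (60/360) * lambda; % About 0.46m before exceeding 60º of misalignment.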

Don't try to generalize this example into a rule. You could just as easily put the sub in the air with the main but 0.46m behind it to create the same misalignment, or change the microphone position.

It is difficult to generalize, unfortunately, because the relationship between source and audience will always be different. However, I can see how it is helpful to translate alignment to distance. This is why the SubAligner app includes maximum distance offset in the Limits pop-up.

limits

The opportunity here is that after you have performed an alignment for a single location, you can move out from that location in any direction while observing the change in distance offset, to find the edges of the area of positive summation (aka the coupling zone, aka <120º).

Electrical latency

Matched electrical latency is maintained by splitting the send to main and sub at the last moment necessary. This doesn’t mean you can’t mix to subs on a group in your console if you prefer, just make sure that the sends to main and sub are coming out of the console with the exact same latency. You can verify this with an audio analyzer. 

Time alignment vs Phase alignment

Subwoofer time alignment can be confused with subwoofer phase alignment because the two are interconnected. Time offset causes phase offset, but phase offset doesn’t necessarily cause time offset.

In most cases the timing is set to “align” two (or more) signal sources so as to create the most transparent transition between them. The process of selecting that time value can be driven by time or phase, hence the relevant terms are “time alignment” and “phase alignment.” These are related but different concepts and have specific applications. It’s important to know which form to use to get your answers for a given application.

prosoundweb.com

Time alignment connotes a synchronicity of sources, e.g., they both arrive at 14 milliseconds (ms). Phase alignment connotes an agreement on the position in the phase cycle, e.g., they each arrive with a phase vector value of 90 degrees.

prosoundweb.com

We have already seen how acoustic and electronic latency can affect time alignment. Let’s look closer at what can affect phase alignment.
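
To make the distinction concrete, here's a small MATLAB sketch with an assumed arrival-time offset: a single time offset maps to a different phase offset at every frequency, which is why the two forms of alignment are related but not interchangeable.

tOffset = 0.002;                       % 2ms arrival-time difference (assumed example value).
f = [40 80 125];                       % Example frequencies in Hz.
phaseDeg = mod(360 * f * tOffset, 360) % 28.8, 57.6, and 90 degrees respectively.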

What is subwoofer phase alignment?

Phase alignment is the process of matching phase at a frequency and location.

If a sine wave is generated starting at the 0º position of its cycle and then fed into a subwoofer, will it come out at 0º?

That will only tell us the story at one frequency, though. How can we look at the story of the entire operating range?

What does sound look like before it goes into a subwoofer?

This video compares the input and output of a microphone cable passing sine waves at 41, 73, and 130Hz with an oscilloscope. With the signal traveling at nearly the speed of light, the mic cable appears to create no time offset.

I could insert a video comparing the input and output of a microphone cable with an impulse response, but without anything in line, they look the same. I added a 1ms delay to put the IR in the middle of the graph.

This image shows the transfer function of a microphone cable with a magnitude and phase graph. The magnitude and phase traces are effectively flat: exactly what we want from a cable.

What does sound look like when it comes out of a subwoofer?

This video compares the input and output of a subwoofer passing sine waves at 41, 73, and 130Hz with an oscilloscope. I have removed any latency so that we can focus on phase shift created by the sub.

This video compares the input and output of a subwoofer with an impulse response (IR). The IR appears stretched out because the amount of phase shift changes over frequency. This is the normal behavior of a transducer whose group delay, and therefore phase shift, is variable; it cannot reproduce every frequency at the same time through the operating range.

This video compares the input and output of a subwoofer with a magnitude and phase graph. Unlike most full-range speakers, the phase response of a sub never flattens out. It’s a moving target.

Do all subwoofers have the same phase response?

A subwoofer’s response will change with its mechanical and electrical design. Matching drivers in different boxes may have quite different responses. Even the same combination of driver and box might have a small contrast in response because a typical manufacturing tolerance is ±5dB.

For this reason it is important to avoid making assumptions based on a manufacturer's spec sheet; instead, measure the final product and prove it to yourself.

Does the phase response of a subwoofer change with level?

A cold subwoofer operating within its nominal range should maintain a steady phase response against any change in level. But as a sub approaches maximum SPL or begins to heat up, its response may become non-linear. This behavior varies from subwoofer to subwoofer, so it's important to avoid driving two different subwoofers with the same channel.

Unfortunately, I don't know a rule of thumb to guide you, but it would make sense to compare the response of a subwoofer when it's cold to when it's hot. When I worked at Amex in Slovakia and we were setting up a new system, Igor would punish it outside with loud music for a few hours and listen to it afterwards.

Of course you can measure this change with your audio analyzer, but another fun test is to push on the driver with your hand when it's cold to feel how rigid it is. Run it at maximum level for two hours, then push on it again and feel how it has become less rigid (increased compliance).

Here is a graph from Heat Dissipation and Power Compression in Loudspeakers by Douglas J. Button showing a sample loudspeaker before and after full-term power compression. The solid line is the one with more heat and a worse quality rating.

Does the phase response of a subwoofer change over distance through the air?

…allow me to remind you that the loudspeaker’s phase response, within its intended coverage, typically doesn’t change over distance, unless you actually did something to the loudspeaker that invokes actual phase shift, i.e., applying filters of some sort which you should be able to rule out!

merlijnvanveen.nl

Here is the magnitude and phase response of a subwoofer measured at 1m and 100m. The only thing that has changed is the level due to the inverse square law.

1mV100m

Room interaction however, will make it appear like the loudspeaker’s phase response is changing over distance because the room makes the traces go FUBAR.

merlijnvanveen.nl

Here's what the above measurement looks like if I enable four boundaries. At 100m the reflections have transformed the phase trace (blue).

1mV100mWithBoundaries

Where is the acoustic center of a subwoofer?

Why does distance offset not correspond exactly with phase offset?

All other things being equal, the distance offset measured from your microphone to your subwoofer may not exactly correspond to the measured phase offset in your audio analyzer. This is due to an interesting acoustical phenomenon documented by John Vanderkooy.

As a useful general rule, for a loudspeaker in a cabinet, the acoustic centre will lie in front of the diaphragm by a distance approximately equal to the radius of the driver.

J. Vanderkooy, "The Low-Frequency Acoustic Center: Measurement, Theory, and Application," AES Paper 7992 (2010 May)

This fact becomes important when estimating delay times for subwoofer arrays where a small distance in the wrong direction could compromise the results. It may also be important if you are attempting to estimate subwoofer phase delay from far away without prior access to its native response.

What is the subwoofer crossover frequency?

A subwoofer's recommended crossover frequency may exist on its spec sheet, but when it comes to subwoofer alignment in the field we must look beyond a single frequency to the entire crossover region affected by the alignment. To make an exaggerated theoretical example: if you turn the subwoofer up by 100dB, the crossover region where the two sources interact will also move up in frequency.

crossover
crossover+50

The crossover region is commonly defined as the range where the two magnitude traces are within 10dB of each other, because that is where you have the highest risk of cancellation and the highest reward of summation. To find this region in your audio analyzer, insert a 10dB offset and find the magnitude intersection. Some audio analyzers offer other tools, like cursors.
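
If you have exported magnitude traces for main and sub on a shared frequency axis, finding that region could look like this MATLAB sketch (magMain, magSub, and freqs are assumed variable names, not analyzer exports):

inRegion = abs(magMain - magSub) <= 10; % Mask where the two traces are within 10dB of each other.
crossoverRegion = freqs(inRegion);      % Frequencies inside the crossover region.
fprintf('Crossover region: %.0f to %.0f Hz\n', min(crossoverRegion), max(crossoverRegion));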

What causes subwoofer phase misalignment?

The most common cause of subwoofer phase misalignment is user error. This may seem like a bold or aggressive claim, but manufacturers have historically placed this responsibility on their customers.

There are many subwoofers in the world and only a small number of them have detailed instructions on phase alignment within a narrow set of limitations. The rest require the user to discover an optimal alignment for themselves. This is further complicated by the fact that reflections can make measurement and listening tests misleading or impossible when performed under typical field conditions.

We saw above that what comes out of a subwoofer is not what goes in due to system latency and phase shift. Some products take this fact into account and are specifically designed to work together and are phase aligned when equidistant, therefore only requiring compensation for any distance offset. Other products are designed to work together, but are not phase aligned when equidistant. The third, and most common, scenario is that sound engineers like me and you end up combining products from different generations, families, and manufacturers that were never designed to work together.

I should pause here for a moment to say that I'm not passing judgment or pointing a finger. I'm not aware enough of all the conditions to say why things are this way, just that the complications exist. And honestly, I enjoy the puzzle. See any of the videos on my YouTube channel from the past couple of years for evidence. 🙂

What are the consequences of subwoofer phase misalignment?

Let’s ask Nexo.

Consequences of badly aligned systems
Mis-aligned systems have less efficiency: i.e. for the same SPL you will be obliged to drive the system harder, causing displacement & temperature protection at lower SPL than a properly aligned system. The sound quality will decrease. The reliability will decrease as the system is driven harder to achieve the same levels. In certain situations you may even need more speakers to do the same job.

NXAMP4x1 User Manual v3.1

Do subwoofers need time alignment?

Yes, subwoofers need time alignment any time there is a distance offset creating acoustic latency. They also need phase alignment in any event where they are combined with another source that is not already phase aligned when equidistant.

Do not assume that your main and sub are phase aligned when equidistant just because they came from the same manufacturer. You have a 33% chance of creating cancellation instead of summation.

How do you time and phase align a subwoofer?

Although there seem to be many methods, I have only ever found one that works reliably and has all three unobtainable characteristics: fast, cheap, and good. It may sound like I'm about to go into some wild conspiracy theory you've never heard of, but the method I use is also recommended by L-Acoustics, d&b audiotechnik, RCF, and Coda Audio (and probably more). It involves two steps: first in the phase domain and then in the time domain.

  1. Create a relative equidistant alignment preset using filters, delay, polarity, etc. (this is the fun part)
  2. Modify that preset in the field using the speaker’s absolute distance offset, by adjusting the output delay time or physical placement (see the sketch after this list).
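
As a minimal sketch of how the two steps combine, with all values assumed for illustration (the preset delay would come from your own equidistant alignment work):

c = 343;                                       % Speed of sound in m/s.
presetDelayMs = 4.2;                           % Relative preset delay from step 1 (assumed value).
dMain = 20;                                    % Measured listener-to-main distance in meters (assumed).
dSub = 17;                                     % Measured listener-to-sub distance in meters (assumed).
absoluteMs = 1000 * (dMain - dSub) / c;        % Step 2: compensate the distance offset, about 8.7ms.
subOutputDelayMs = presetDelayMs + absoluteMs; % Final delay on the sub output.

If the main turns out to be the closer source, the offset flips sign and the extra delay lands on the main output instead.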

The method goes by various names, but I'll give Merlijn van Veen the credit for the Relative Absolute Method since he introduced the idea to me. I then packaged the idea into an app called SubAligner, which not only includes alignments for many major brands, but a total of 39,183 possible combinations between different brands.

How do you verify subwoofer alignment?

How do you know if you’ve done it correctly?

A listening test should reveal higher SPL and a tighter response around the crossover region. SubAligner offers a black and red pulse to focus your ears in the right area.

An audio analyzer should show matching phase response between each speaker and expected summation in the magnitude response through the crossover frequency range. Appropriately filtered IR peaks should be aligned.

All of these methods should work, but they can be ruined by reflections. In these worst-case scenarios, I still rely on the Relative Absolute Method because I'd rather use something I know to be true than speculate on what might be true. I have written more about this in Don't Align Your Subwoofer to a Room Reflection and Can you remove reflections from live measurements for more accurate alignments?.

Have you tried this method? What were your results?

Acknowledgements

I want to thank Francisco Monteiro for the feedback and patience with my many questions and misunderstandings.
