Sound Design Live

Build Your Career As A Sound Engineer


Nothing in an audio analyzer tells you how good it sounds

By Nathan Lively

Subscribe on iTunes, SoundCloud, Google Play or Stitcher.

Support Sound Design Live on Patreon.

In this episode of Sound Design Live I’m joined by Advanced Systems Specialist at d&b audiotechnik, Nick Malgieri. We discuss the self-aware PA system and the future of live sound, cardioid subs, and why there’s no polarity switch in d&b amps.

I would like our d&b users to be thinking more about the artistic goal and making adjustments based on what they’re hearing and not getting lost in the science and the measurement and the verification. We’re trying to build a platform that doesn’t require that, and we can just focus on mixing our show.

Nick Malgieri

I ask:

  • What are some of the biggest mistakes you see people making who are new to sound system design?
  • The self-aware PA system and the future of live sound
    • If the most destructive part of the signal chain is between the loudspeaker and the listener, then what is the most powerful tool we have to deal with this destruction?
    • What are some specific ways that d&b helps us with directivity?
    • Array processing reduces or eliminates the need to measure the PA on site.
    • Chris Medders: I’d be curious to hear how accurate he feels the phase prediction feature is when measurement values are precise in the field, and how effective that is for eliminating the need for TF measurements in varying sized rooms.
  • From FB
    • Chris Tsanjoures What is the best theme for a bar, and why is it Tiki?
    • Chris Tsanjoures You seem to do a lot of traveling and consulting. If you think going a different direction than a client’s current plan would be best for their situation, what are some of the things that you are able to identify a client needs, without them realizing they need it?
    • Christopher Patrick Pou What does a “typical” mix bus section on any given mixing desk look like in an object-based mixing environment?
    • Gabriel Figueroa I’d also like to know why some deployments are not using the desk as the control of the objects and what the pros and cons are of this approach.
    • Peter Jørgensen What happens when you build an end-fire array with a cardioid subwoofer like the SL Sub?
    • Johannes Hofmann What’s the minimum distance of a cardioid sub to reflecting surfaces behind the sub to avoid cancellation in the low end?
    • Istvan Kroki Krokavecz When will games be available on D40 amps?
    • Tomasz Mularczyk What are the highest scores in the D80 games?
    • Benjamin Tan “How does engaging Array Processing change your tuning approach?”
    • Michel Harruch: Is there any plan to incorporate polarity inversion for the design of complex subwoofer arrays like gradient or end fire on ArrayCalc?
    • alexdanielewicz Why can’t you flip polarity in the d&b ecosystem?
    • Robert Kozyra How to identify the problem speaker(s) in a large array hang.
    • Daniel Brchrt How do I combine speakers from different series with unmatched phase response, like the T10 and Y7P?
    • Sunny Side Up Why have external amplification rather than built in amps?
    • Steve Knots What do you think about renting cranes to hang PAs rather than rigging them from truss? I’ve seen photos of big festivals where it’s being done already, so I’m curious about the whole thing — safety, rigging for crane lift, stabilizing and aiming the array, and of course security around the crane base to make an un-climbable fence wall type deal. Seems innovative.
    • Wessley Stern What is their philosophy with the sub/main crossover? It seems to me that they let their subs’ LPF be much higher than other companies’, well above where the main cabs’ HPF is in most cases, resulting in a lot of low-mid summation. I really enjoy their systems and the perception this results in.
    • Vladimir Jovanovic Subwoofer driver sizes and uses: is there a trend of releasing 21″ subs, not just from d&b, but other brands too? Did the needs of events change to drive this trend? (Pun intended, I know where the doors are.) If there is a trend at all.

Notes

  1. Quotes
    1. Nothing in an audio analyzer tells you how good it sounds.
    2. I have never found a discrepancy in what the alignment says in ArrayCalc versus what I found on-site.
    3. Our whole design ethos is little, light, and loud.
    4. If you’ve done all the alignment in ArrayCalc, we don’t need Smaart.
    5. We’re trying to do as much of the science as possible for you ahead of time, so that when you get onsite, you can focus on your show and for the vast majority of applications, there’s absolutely no need for a polarity button because we already have cardioid subs.

Transcript

This transcript was automatically generated. Please let me know if you discover any errors.

I’m Nathan Lively, and today I’m joined by Advanced Systems Specialist at d&b audiotechnik, Nick Malgieri. Welcome to Sound Design Live.

How’s it going, Nathan?

Going good. Just for you. I see that there are some special tools here that I have that would really welcome you. That’s the last time I’m going to do that.

Do we need a director to call sound effects on the show or what?

Stage manager. All right, Nick, I definitely want to talk to you about object-based mixing and end-fire arrays and combining speakers from different families. But before I do that: after you get a sound system set up, what’s one of your favorite pieces of music to play through it to get familiar with it?

Actually, the first thing I usually play isn’t music at all. It’s a very simple recording, a very dry recording of a simple snare drum. For me, that’s a great way to check system timing. And when we play with Soundscape systems, with emulated room acoustics, it’s a good way to hear the nuances of reverb tails and stuff like that.

Cool. I would actually like to add that to my list of things. Will you send me a link to your snare drum sounds?

There’s a couple of things. I’ll send you a couple of things. Great.

Okay, so we had a lot of questions come in, so we’ve got a lot of technical topics that people want us to hit. But before we do that, we should talk about career and business stuff for a minute. I wondered if you could take a look at your career so far, Nick, and pull out some lessons that you’ve learned that have helped you find more of the work that you really love. What are some of the ideas that you can share with people that might help them look beyond some of the typical front-of-house mix positions that people think of, and maybe some career advice that you have found over the years?

I think probably the first thing I like to tell people is never in my career did I get hired off of a resume submission.

You’re saying that my plan to just make a beautiful-looking resume and send it out to everyone and then do no follow-up is not a great plan?

Correct. Yeah, not recommended. Every single job offer I got was a verbal offer from someone that I knew or met, or we knew someone in common who acted as a reference or something. So I’d say as general career advice, just be around people, make friends with people, make connections, find an excuse to visit a company, find an excuse to visit a show site. Maybe you have a friend with an in somewhere. Shake hands, make your smiling face known, and just be the person who is at the forefront of their mind when they’re in a last-minute scramble and need somebody.

Yeah, that’s a great point. If the audio industry is based on personal referral, that’s such a great point about staying top of mind. How can you do that in a non-manipulative and fun way? Showing up, being places. Yeah, that’s great. It’s not a recipe, but it is probably the opposite of just me sitting here at home waiting for the phone to ring.

Yeah, absolutely. And don’t forget, there’s a lot of markets in audio other than like, touring front of house engineers.

Tell me about it. What are some things I may have forgotten?

Let’s not forget about the in-house gigs, right? In your hometown, there’s a lot of performing arts centers, clubs, all that kind of stuff. And there’s a whole other world of audio called installation, which, by the way, was largely unaffected by COVID. A lot of people from the touring world just segued right over to installation, and only some of them are going back out on the road now that they’ve gotten used to spending their evenings at home with their families.

Now, is installation a place where I could continue working as a freelancer, or is it mostly employees? And so should I be going and looking at job boards or looking at their websites for openings? How do you recommend I get started with that?

The installation companies are probably more likely to accept, like, a cold-call resume if they have an opening, but knowing someone there is still going to be the inside track. And in my perception, there are two kinds of audio installation companies. You have ones that maybe also have a touring division and really specialize in performance audio, and they have staff on hand that are audio ninjas, able to really do high-end systems. Then there’s a lot of installation companies that are really just responding to bid requests, and they’ve got the labor for the physical installation, the rigging and the wire termination and all of that stuff. But they might not do performance audio systems frequently enough to have an audio ninja on staff. And a lot of those companies are either leaning on manufacturer people like me to come help commission it, or a freelancer to come in and be their ninja for that one-off gig, because the other five gigs are going to be like low-voltage alarm systems and camera systems and stuff like that.

Maybe doing a little bit of research could help, or at least knowing going in: oh, this is a place that focuses on performance audio, or this is not. And then coming into that conversation intelligently. Hey, I know that you guys don’t focus on this, and so I could really bring that to the table and be helpful in that way.

Yeah, that’s right. You can’t just ask for a job. You have to propose your value to somebody. So figure out what they’re missing and what you can provide for that.

Now, when you proposed to your wife, was it similar? You’re like, here is the value I bring: this cow. I have a car.

No, something like that. Right.

Okay, let’s talk about technical stuff. What are some of the biggest mistakes you see people making who are new to sound system design?

I think the most common mistake always happens on show site, and it’s just poor prioritization on how to manage your day.

Like what?

Like spending too much time thinking about what’s happening on a Smaart trace and not enough time thinking about just having a good physical layout of speakers. Or maybe this isn’t a great time to make noise because I’m pissing off other people who are working in the room. Hey, I’ve got a rigger in the air. I probably shouldn’t be blasting a speaker next to them. Or just spending too much time tuning a PA and not actually getting to sound check, which ultimately is just as important as tuning the PA. Let’s just get it most of the way there, and if we find some time later, we go back and do some touch-up.

What is the bad thing that’s going to happen if I don’t prioritize my time correctly?

Yeah. First of all, if you can’t prioritize your time and manage it, you’re going to end up missing meals or something, and that makes a long day extra hard and unhealthy. So let’s take care of ourselves at some point during the day. Also, you need to be thinking about what’s the content for the show. How am I going to mix it? How am I going to route it, all of this kind of stuff. What is the artistic priority, as opposed to trying to make a PA perform quote-unquote perfectly on a screen?

Now I’m remembering back. I remember you have a pretty good story about prioritization and its relationship with Smaart. Do you know what story I’m referring to?

Yeah.

Can you tell us about that?

Yeah, I love this story. We’re good friends with the folks over at Rational Acoustics that make the Smaart software, and I had a really fun experience when they got a d&b PA and I was going to come out and help them tune it. First of all, there’s a little bit of pressure, because I use Smaart sometimes for tuning, and the last thing I want to do is get caught using it wrong by the people that make the software. And I’m like, you guys are going to provide the Smaart rig, right? So it showed up, and we had two days allocated for commissioning and training and all of this kind of stuff. Because of scheduling and travel conflicts, on day one I was working with Jamie Anderson from Rational, and on day two it was Chris Andrews from Rational. So over two days we got to tune a PA with two separate people that work for the company that makes the tuning software. And I showed up to a very well-installed PA. The first thing I did was make noise, hear it come out of all the right places, and verify that it’s wired correctly and stuff. And then we had the standoff moment with each other where it was like, so how do you want to do this?

And they looked at me: how do you normally approach this PA? So we started to get into it, walking the room, making some changes by ear, showing them how array processing works and this kind of stuff. And at the end of day one, they were like, this is great, let’s go to dinner. And I realized never once had we looked at Smaart. Well, it was sitting there running. The mic was there. We just never placed the mic. We never did anything. We never heard something that we needed visual feedback to correct. Then Jamie leaves, and on day two Chris shows up, and Chris is like, I don’t care what you did with Jamie yesterday. Let’s reset it and do it again. And so we tuned the whole PA differently, but in a similar style, using our ears, walking the room, and never once looked at Smaart. And it was a really reaffirming moment for me that even the people that design it like to say there’s no such thing as “Smaarting” a PA. It’s a tool. Use your judgment. Be influenced by the artistic goals of the show and the logistic constraints of your venue.

And Smaart is there if you need it.

Yeah, that makes me think that maybe I should be thinking about the audio analyzer more as a verification tool, as a problem-solving tool, and less as a qualitative tool that says, I’m going to tell you what’s good, Nathan. And I’m like, okay, all right, audio analyzer, you tell me what’s up and I’ll do it.

Yeah. I’ve heard really well-measuring PAs that I don’t like the sound of, and I’ve heard very poorly measuring PAs where the second I push up some channels on my mixer, I’m like, this is great, I love this. So, yeah, nothing in an analyzer tells you how good it sounds.

Okay. During the last Live Sound Summit, you gave a great presentation called The Self-Aware PA System and the Future of Live Sound, and if people want to listen to that, they can find Live Sound Summit 2021 at sounddesignlive.com. But a couple of follow-up questions about that presentation. If you say that the most destructive part of the signal chain is between the loudspeaker and the listener, then what is the most powerful tool we have to deal with this destruction?

Yeah. And just to clarify: these days, in the year 2022, we have these pristine signal chains, all digital, high bit rate, low noise floor, virtually zero crosstalk. And then the sound leaves the speaker, and we’re still subject to the same pesky physics that we’ve always been subject to, and we can only control so much of that. That’s what’s turning into feedback. That’s what’s turning into lost intelligibility, lack of impact, all of that stuff. The best tool for us to avoid this primary source of degradation is directivity. The more that our loudspeaker can focus on where we want it and avoid all other directions, reflective surfaces, and open microphones, the better the PA is going to sound before we’ve touched an adjustment at all.

Wow.

Okay.

And so what are some of the specific ways that d&b helps us with directivity?

On the subwoofer side, it’s all about cardioid subs, not just to cancel sound on the rear at some frequencies, but equally across all frequencies, so that even if you are on the back side of the sub, you’re still getting a proper representation of the frequency response, just quieter. Then we have the SL series line array cabinets, which have side-firing low frequency drivers that not only add more energy to the front but cancel on the back, which is great: as you walk off axis of one of those arrays, all frequencies get attenuated evenly. And then even on point source cabinets, we rely a lot on what we refer to as a dipole, which is two smaller low frequency drivers instead of one larger low frequency driver. Those two smaller drivers are spaced out in the cabinet so that they create summation directly on axis, but cancellation in other directions. So not only do we get good directivity out of the frequencies coming out of the horn, but we get added directivity at lower frequencies as well.
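
The dipole behavior Nick describes falls out of simple two-source interference: spaced coherent drivers add on axis and cancel where the off-axis path difference approaches half a wavelength. Here is a minimal sketch of that textbook sum; the spacing and frequency are made-up illustration values, not d&b cabinet data.

```python
import numpy as np

C = 343.0   # speed of sound, m/s
F = 680.0   # evaluation frequency, Hz (made-up)
D = 0.25    # hypothetical driver spacing, m (about half a wavelength at 680 Hz)
k = 2 * np.pi * F / C

# Far-field sum of two coherent drivers spaced D apart, versus angle
for deg in range(0, 181, 30):
    delta = D * np.sin(np.radians(deg))            # path-length difference
    level = 20 * np.log10(abs(1 + np.exp(-1j * k * delta)) / 2)
    print(f"{deg:3d} deg off axis: {level:6.1f} dB re on-axis")
```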

Okay, so another thing that you said during that presentation is that array processing reduces or eliminates the need to measure the PA on site. And that connects with one of the questions that came in from Chris Medders, who says, I’d be curious to hear how accurate he feels the phase prediction feature is when measurement values are precise in the field, and how effective that is for eliminating the need for TF measurements in varying sized rooms. Funny way to say that. And the subject he didn’t mention there, I think he’s referring to ArrayCalc. Would you agree?

Yes. So it sounds like there are two questions there. One is about array processing, and let’s put a pin in that for a moment. The other one is about the ability to tune your PA quickly and accurately within the software before you’re on site. And to answer the question simply, I have never found a discrepancy in what the alignment says in ArrayCalc versus what I found on site. Even when I put up a mic to verify, it’s within ten degrees of phase between the subs and the tops, and it couldn’t be much more perfect. And why would I want it to be more perfect at one specific location anyway? The idea of alignment is to make it work for as large a portion of the audience as possible. And one of the main benefits of using the software to do this is you can very quickly, with a couple of mouse clicks, pick multiple points for your measurement microphone and verify that the timing decision you’ve made translates not only to the 100 level, but also to the 200 level and the 300 level. Whereas if you’re on site with a microphone, that just turned into a 45-minute process just to get the mic from the 100 level to the 200 level to the 300 level, and who’s got time to do that?

When you’re tuning, you load in at eight and sound check is at noon or something. So it allows you to be more informed from the comfort of your home. And as long as your file is accurate to the way the PA is deployed on site, you just push those settings to the amps and then bust out Smaart if you find yourself with some extra time and energy that day. Now, array processing is very similar. For anybody who doesn’t know, array processing is our technology where each cabinet within the array requires its own DSP path and amp channel. This allows every cabinet within the line array to have a different signal sent to it, so that the behavior of the array as a whole matches the geometry of our venue better than we could with just mechanical splay. So this means we need to have an accurately represented array: proper height, proper splay angles, all that kind of stuff. And within ArrayCalc, we need to make sure we have accurate venue geometry. Then the software can say, okay, now I know the relationship between the PA and your audience areas. Let me optimize myself for perfect spectral response from the front row to the back row.

This does a couple of things. One, it corrects for weird HF peaks and dips and all this stuff. It fixes far-field HF reduction from air absorption, and it makes the PA hit a target curve at the listener positions, so it will hit the same target curve in the front row as it does at front of house and at the back row. And if you have array processing enabled on other parts of your PA, like delays and sides and 270s, all of those parts of the PA are now hitting the same target curve at their respective audience positions. So now you don’t have to worry about level matching and spectral matching different parts of your PA, which is the biggest part of measuring the PA. You can just say, oh, the whole thing is too much low-mid, I’m going to pull out some 250 Hz, and apply it to all parts of the PA. And they’re all going to respond much more similarly than they would without array processing.
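
The geometry behind those quick multi-position alignment checks is easy to sketch. This hypothetical example picks a sub-to-main delay at one seat and reports the residual time and phase offset at the others, which is the comparison ArrayCalc is automating; the positions and the 90 Hz crossover are invented for illustration, not from any real venue file.

```python
import math

C = 343.0  # speed of sound, m/s

# Hypothetical 2D positions (x, y) in meters
sub = (0.0, 0.0)                 # ground-stacked sub
top = (0.0, 6.0)                 # flown main above it
seats = {
    "100 level": (12.0, 1.5),
    "200 level": (22.0, 5.0),
    "300 level": (30.0, 9.0),
}

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Choose the sub/main time offset at one reference seat...
ref = "100 level"
delay = (dist(sub, seats[ref]) - dist(top, seats[ref])) / C

# ...then see how well that one choice holds elsewhere
for name, p in seats.items():
    residual = (dist(sub, p) - dist(top, p)) / C - delay
    phase = residual * 90.0 * 360.0  # residual as degrees at a 90 Hz crossover
    print(f"{name}: residual {residual * 1000:+6.2f} ms -> {phase:+7.1f} deg at 90 Hz")
```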

That’s so cool. And I’ll just add that it is really fun and so powerful to be able to check all those different alignment positions really quickly. If you’re like me and you want to try to calculate the best alignment position ahead of time, then you do that, and however you do that, you just have to accept, okay, this is going to work. It’s really nice that in ArrayCalc you can then verify: oh yeah, this is the right one. Okay, great. I like that tool a lot.

Yeah. There’s only one gig I’ve ever had where I really cared about making the whole system align at one specific mic position. In a previous lifetime, I worked for a rental company out in California that Meyer Sound used to hire for their internal events, like their parties and stuff like that. And my question was always, where does John Meyer sit? He’s the one whose name is on the check. He’s the one that can hear the difference. Let me make it align there, and everybody else can just deal with it.

Tell us about the biggest or maybe most painful mistake you’ve made on the job and what happened afterwards.

What’s the old joke in our business? I’ve screwed up bigger gigs than this one. What’s the saying? Wisdom comes from experience, and experience comes from failure. We can do these one-liners all day long, but it’s true. Making the mistake is the way to learn and be a better person. And we’ve all done them. I was working a show. We were loading it in. I was working in the doghouse of an analog console, if anybody remembers what those are. And I was pretty pissy. It was a rough day. It was a gig at a winery where we loaded in on grass, and I was trying to figure out how to make this console sit level on a grass embankment next to the stage. It was hot, there were mosquitoes, I’m just pissy. And there’s a voice behind me, someone on the stage. It’s empty, no audience, there’s no artist yet or anything, but this voice is like, how are you? And I was like, this is fucked up. And I just totally went off, just verbal diarrhea on how I was feeling, and turned around. And it was the headlining artist, the singer of the show, and I was like, oh, God, what have I done?

And he ended up being really cool. I hear you, brother. This is a hard work environment. Just keep going. I really appreciated, when I turned around, that he would say something so nice.

He could have been like, who is this guy? Get him out of here.

Totally. And that’s all it takes. Just rub someone the wrong way when he’s thinking about his pressures of performing, and he doesn’t want my negativity involved. And I’m the monitor engineer, so if I’m going to be like that during rehearsal, it’s going to ruin his vibe. So he could have just said, yeah, get him out of here. And then that’s it, I’m fired. And once that happens, you never get that gig back.

I don’t know if this is great career advice, but a friend and student of mine got a new job once, and it was a really important one for a big, well-known company. And I said, hey, I think one of the best things you can do, from my experience, is to figure out as quickly as possible what things are going to push your buttons and then figure out how to deal with that. Because the worst thing is when it becomes a surprise. That’s when it’s really painful: yes, all these circumstances, yes, all this pressure and stress, and then also a surprise, like something falls on your foot or something is late or whatever, things go wrong. And so if you can sort of get ahead of that somehow, man, it can really help, because that’s the difference between saying something you really regret to a manager or something, and then you have a whole thing to deal with.

Totally. I feel like one of the best professional advances I’ve made came as a byproduct of moving to the Southern US, where I just had to learn how to keep my mouth shut more than I’m used to. I think people in the South tend to be a little bit more cordial, a little more polite, and they complain in a different way. And that’s been a good career and life skill for me.

Chris Tsanjoures says, what is the best theme for a bar, and why is it Tiki?

Oh, yeah.

What are you talking about?

Yeah, one of the things I missed most during the COVID era is hanging out in some town where there’s some trade show, like InfoComm or NAMM or something, and ending up at tiki bars with the Rational Acoustics guys. They love a tiki bar. I love a tiki bar. And we need to get back to these trade shows just for the tiki bars. I couldn’t care less about the trade show. As soon as 5:00 hits, we’re all looking at each other: am I going to get a blue drink or a green drink? That’s what that week is all about.

So Chris also says that you seem to do a lot of traveling and consulting, and then asks this question that I’m going to paraphrase, which is basically: how do you handle situations where the client says, this is the result that I want and here’s how I want to do it, but you know that’s not going to get them the result?

Yeah, that’s the hardest thing about audio, right? Human beings are visual thinkers, and audio is invisible. So everybody has an idea of how to do it, and there’s no real way to prove it. And even your average person might not know how to listen to the PA to know if it was achieved or not. So it’s all about being a bartender and playing psychologist and just having good verbal interactions. There’s a way to advocate for what you think is the right decision without knocking down a client’s request. Just don’t be the annoying IT guy who’s just, no, that’s not how it works, you don’t know what you’re talking about. No one wants that kind of audio person. Just speak normally with them and say, so what I’m hearing is... and repeat what they’re saying. It makes them feel heard. Then say, how about this? What if we tried an approach to do this? And explain in simple terms why you want that approach. I find it’s really hard for a client to argue with that. It almost makes it feel like it was their idea to approach it the way you want to approach it.

And you’ve told me in the past that ArrayCalc can be a tool to facilitate these discussions. Sometimes it really helps to have a visual element. This is what you want. Here’s how we can do it. What about this? What about that?

Small churches and clubs and venues that want a line array, but it’s too small of a room for a line array? Let’s look at it in ArrayCalc. Let me show you how a line array performs versus a point source, and it will be immediately apparent that there’s a really good discussion there. And if in the end you want a line array, whatever, it’s your PA, you can buy whatever you want. But at least I advocate for what I think is best.

There’s a bad movie podcast called How Did This Get Made? I don’t listen to it that often, but it comes to mind in this moment, because we’ve all been in music venues all over the world, and even here in Minneapolis I’ve been in several music venues where the PA does not fit the room. And you’re like, how did this get made? These two big arrays, half of it’s just playing into a balcony and a wall, and it doesn’t seem to fit.

It’s funny, this is the number one theme of being a support person for d&b audiotechnik, because our whole goal, our whole design ethos, is little, light, and loud. How do we get very high directivity, high bandwidth, and high output out of the smallest cabinet possible? And our clever Germans do a pretty good job. Meanwhile, we have people coming and saying, I don’t want that speaker because I don’t think a pair of tens are big enough woofers, which used to be the simplest method of evaluating a loudspeaker. You have to explain to people: no, you don’t understand, this pair of tens has more low frequency extension than our old speaker that had a fifteen.

So you’re finding some preconceptions people have about driver size and how it relates to power and quality. Okay. Christopher Patrick Pou asks, what does a typical mix bus section on any given mixing desk look like in an object-based mixing environment?

So let’s be clear. When you’re using Soundscape and object-based mixing, there is no master bus in your console. We need different performers, different types of signals, to hit that processor. The processor works like a summing matrix with the spatialization data and renders that to the PA using delay and level distribution. So then this is a great question: how do you feed the processor from the console? The short answer is there’s no one way. There’s no one way to Soundscape. But I can give you a very simple anecdote that represents a lot of projects that I work on. Let’s say it’s a typical band. Maybe kick, snare, and hat come out of a mono bus and get sent to the processor, where we can place kick, snare, and hat within our mix using a sound object. Then maybe a stereo mix that has all of the toms and other drums. Those come in as two sound objects, and we can make those toms big and wide, or accurate and sounding like they’re coming from the drum set. And then maybe another stereo bus for overheads and chimes and percussion stuff that maybe wants to go wider than the toms.

Then maybe you have a bass player who has an electric and an acoustic and a DI and a mic and, I don’t know, a foot pedal organ thing or something. All those inputs can come down to a mono bus called Bob the bass player, and then Bob the bass player’s bus comes into a sound object that we control in R1, called Bob the bass player, and we can place that where Bob is located. We might do the same thing for guitars and keyboards: bus them down, but send them to the processor in a way that represents an individual performer. And then as you get to your money channels, your lead vocals, your pastor mic, your CEO for the corporate event, those might be post-fader direct outs of the console. All your channel strip processing works, your fader affects the level, but it immediately leaves the console and gets summed in the processor, where each singer can have their own sound object. That way, when people sing together, they’re not stepping on each other in the mix. If you want to listen to the alto or the tenor, you can demask it binaurally, just like we do in the acoustic world, and retain clarity and headroom and require less processing on the channel strip to get it.
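
As a rough data-structure view of that routing idea: one console feed per performer or group, each mapped to a sound object with a position, instead of everything folding into a master bus. Every name and number here is a hypothetical illustration of the workflow described above, not d&b’s R1 or DS100 interface.

```python
from dataclasses import dataclass

@dataclass
class SoundObject:
    name: str      # label the operator sees
    x: float       # hypothetical stage position, meters from center
    spread: float  # rendered width; 0.0 behaves like a point source

# One console feed per performer or group, not per input channel
feeds = {
    "mono bus (kick, snare, hat)":        SoundObject("Drum core", 0.0, 0.0),
    "stereo bus (toms)":                  SoundObject("Toms", 0.0, 2.0),
    "stereo bus (overheads, percussion)": SoundObject("Perc wide", 0.0, 5.0),
    "mono bus (Bob's bass rig)":          SoundObject("Bob the bass player", -2.5, 0.0),
    "post-fader direct out (lead vocal)": SoundObject("Lead vocal", 1.0, 0.0),
}

for feed, obj in feeds.items():
    print(f"{feed:38s} -> '{obj.name}' at x = {obj.x:+.1f} m, spread {obj.spread:.1f} m")
```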

Related to this, Gabriel says, I’d also like to know why some deployments are not using the desk as the control of the objects and what the pros and cons are of this approach.

Yeah. So if you have a Soundscape system and you’re using an Avid S6 series console or a DiGiCo SD series console, you can control Soundscape natively from within the console. And I know it sounds awesome, and it can be: your object parameters are being saved within your scenes on the console, and that’s really nice. But for a large venue, we might have 100 feet of travel where the sound object could be, through the mains and maybe the sides if you have them. And that 100 feet is now represented by a three-inch by three-inch quad panner on your screen. It’s not as meaningful as you would think.

The scale is off, right?

And we can scale the stuff separately, from what the console sends to what the processor receives. But yeah, three inches to represent 100 feet is pretty coarse no matter what. So I always tell people, let’s think of it like a Waves control computer. Let’s just have R1 running on a touchscreen hovering right over your console, like your Waves screen does, and you can just touch the object to move it. You get a full-size screen, you can visualize the room better, you can put in a seating chart. So when you’re placing a sound object, you really know exactly where it is, instead of just placing it in this vague square on the console.

Peter Jørgensen asks, what happens when you build an end-fire array with a cardioid subwoofer like the SL sub?

Yeah, I’ve done it with the SL sub and with other manufacturers’ subs, because I’m not just a d&b guy, I’m also just a sound guy. It works well. It’s cool. You don’t have to make an end-fire out of omnidirectional subs, and you can mock this up in ArrayCalc. There’s this myth out there that you can’t do an end-fire sub array in ArrayCalc. You most surely can. It will automatically calculate your delay times for you as well. And if you want to learn more, send an email to [email protected] and we’ll show you how to do it. But to answer the question: we have some cabinets that are cardioid by themselves, and then we put them into an end-fire. Of course it depends on your spacing and the number of cabinets within the array and the delay times, et cetera, et cetera, but essentially it turns it into a hypercardioid. And I’ve done it. I do a gig every year at the Monterey Jazz Festival, where I run the main stage, and I do an end-fire of cardioid subs. And the reason is twofold. One, it’s a wooden stage that resonates.

I think it’s right at 78 Hz, and it rings pretty slowly. Years ago, the stage would hear the feedback building long before front of house did, and they would just hit the call button on comms. And if I was at front of house and saw that call button lighting up, I would just immediately pull the subs back, because I knew it was coming. So putting our subs in an end-fire array allows me to change the delay times so that I can take that 78 Hz null and point it directly at center stage, so that it’s really trying to cancel that one frequency in that one direction to stop the stage from resonating and feeding back. And the second reason I do it on that gig is because I don’t have anywhere else to put the subwoofers, so it’s a win-win. I can’t stack them high because it blocks sight lines. I can’t run them horizontally across the front because the VIP section’s knees would be touching the subs, and they wouldn’t be very happy about that. So they have to be up on the deck, but only one high.

And so then putting one in front of the other is the only way to make them fit.
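
Per the interview, ArrayCalc calculates end-fire delay times automatically, but the underlying arithmetic is short enough to sketch with made-up numbers:

```python
C = 343.0        # speed of sound, m/s
SPACING = 1.2    # hypothetical front-to-front spacing, meters
N_CABS = 3       # cabinets in a single row, firing toward the audience

# The rearmost cabinet plays first; each cabinet in front of it is delayed
# by its distance from the rear divided by the speed of sound, so all
# wavefronts stack together toward the audience.
for i in range(N_CABS):
    delay_ms = i * SPACING / C * 1000.0
    print(f"cabinet {i + 1} (counted from the rear): {delay_ms:5.2f} ms")

# Behind the array the arrivals spread apart instead of stacking,
# which is where the rear rejection comes from.
```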

All right. Johannes Hofmann asks, what’s the minimum distance of a cardioid sub to reflecting surfaces behind the sub to avoid cancellation in the low end?

Yeah, this is a really common question, and I totally get where it comes from, because when you have a speaker firing out the back of the subwoofer, it seems like it needs some breathing space. And it does, but not as much as you’d think. Actually, all the d&b cardioid subs have the casters on the back side, so you flip it up to roll it. So when it’s lying down, the casters point backwards, and I just tell people to push it all the way up until the casters touch the wall. It only needs the four to six inches that the casters represent.

Okay.

However, most people don’t realize that when you have a cardioid sub, you really need to maintain two feet of open space on either side. It actually needs more space on the sides than it does in the back. And that’s because we need the sound to wrap around the sides to interact properly between the rear driver and the front drivers. So, for example, we see people all the time that might have, like, an SL sub, but they’ve decided to stand it up on end so it’s higher. Maybe that’s because they want to put a front fill on top of it or something. And it works, you can do it, but it eliminates one path length around one side of that cabinet, because that side is now obscured by the ground. That undoes a whole bunch of the cardioid effect, and it ends up turning into kind of a loose cardioid.

We don’t want loose, we want tight.

That’s right. In ArrayCalc, you can select between an SL sub and an SL sub upright, and you can look at how that affects the rear rejection.
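
For context on why a few inches of caster clearance are enough, here is the textbook boundary calculation the question usually comes from, for an omnidirectional sub reflecting off a wall behind it; the distances are illustrative, and a cardioid sub sends far less energy at the wall to begin with.

```python
C = 343.0  # speed of sound, m/s

# For an omni source a distance d from a wall behind it, the reflection
# travels an extra 2*d, and the first cancellation lands where that equals
# half a wavelength: f = C / (4 * d). Small distances push the notch well
# above the subwoofer band.
for d_in in (6, 12, 24, 48):                 # hypothetical distances, inches
    d_m = d_in * 0.0254                      # convert to meters
    print(f"{d_in:2d} in from the wall: first notch near {C / (4 * d_m):5.0f} Hz")
```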

Okay, Istvan asks, when will games be available on the D40 amps? So this is news to me. Apparently there are games on some amps, but not on other amps. Tell me about that.

Yeah, all d&b amplifiers have games built in. And you should know that if you perform a firmware update on a d&b amplifier, it will reset all the settings, as you would expect with a firmware update, except for its IP settings, so it doesn’t reset the network card, which is very convenient. And it also does not reset your high scores in the games. Critical. Even the really old amps had simple games. Then we came out with the fancy four-channel amps with the color touchscreen, and the games got way better. And now we have this brand new amp platform that I suspect will eventually get the games. But to be honest, our software team has been working really hard making all of the audio features work correctly in the brand new amps, and I would rather they prioritize that than the games at the moment.

So Tomasz wants to know the highest scores in the D80 games, and I’m guessing these amps don’t report back to you and you don’t have a list. But I think we were talking about how it would be fun to have a leaderboard so we could see, self-reported, who has the highest scores.

Yeah, or log it within R1, since you’re already on the network with your computer, so you can have your own list and you don’t have to go back to the amp to find your high score. Or have it reported back to dbaudio.com so we can keep track of who’s winning the games. We also get a feature request quite often that people want to be able to play multiplayer games across the network on the amp front panels, so the stage right fly guy can play against the stage left fly guy during the show.

Benjamin Tan asks, how does engaging array processing change your tuning approach?

It’s all part of the PA performing nicely, with the parts behaving more like each other. So even if we have a main hang of 24 GSL and a side hang of twelve V, those are voiced to a similar target curve. So I don’t really have to worry about matching curves, even though they’re different box counts, splay angles, and box types and all that. And it’s doing things like mostly or completely fixing the kind of HF peaks you get right down in the front row underneath the line array, that kind of Fresnel effect. It gets rid of that, which, by the way, really resolves feedback issues if you have an artist that ever goes out on a thrust in front of the PA. It fixes the HF absorption issue in the back rows, so I don’t really have to worry about tuning for that. At the end of the day, I just need to voice the PA overall for whatever my mix is going for. We already have controls, like what we call a coupling filter in R1, where we can change the overall voicing of lows to highs. Do you want a flat response, or do you want the low end stacked for a lot of power?

And we can just make those broad adjustments, then maybe put in an EQ filter or two, depending on what I’m feeling and what I’m hearing, and you’re done. And if you’ve done all the alignment in ArrayCalc, we don’t need Smaart. Soundscape systems are similar. This is why I talked about the self-aware PA. On the Soundscape side, the processor knows where every loudspeaker is located and how it’s pointed, and so it times itself. You never enter a delay time into a Soundscape system. It realigns itself based on where you want the sound to come from. So I would like our d&b users to be thinking more about the artistic goal and making adjustments based on what they’re hearing, and not getting lost in the science and the measurement and the verification. We’re trying to build a platform that doesn’t require that, and we can just focus on mixing our show.

Yeah, that’s cool. It sounds like there’s this idea of letting the computer do what computers are good at, and let’s have the humans do the creative decisions that the humans are good at.

I love it.

Michel says, is there any plan to incorporate polarity inversion for the design of complex subwoofer arrays like gradient or end-fire into ArrayCalc? And he’s expressing the sort of surprise that I remember having as well the first few times I worked with d&b systems and realized, oh wait, there’s no way to insert a polarity inversion. But referencing back to the clever Germans, there must be a reason for excluding this.

Yeah, we don’t have a polarity button. The amplifiers and the filters available to you within R1 do play with polarity as needed to get the behavior we want out of the cabinet. And this is a contentious issue. We’re used to having a polarity button, and why would a high-end manufacturer like d&b just take that feature away? In general, this comes back to the ethos that I just described, where we’re trying to do as much of the science as possible for you ahead of time, so that when you get on site, you can focus on your show. And for the vast majority of applications, there’s absolutely no need for a polarity button, because we already have cardioid subs, we already have full broadband directivity, we already have all these benefits built into the PA as is. And we all know a lot of sound engineers that can dig themselves a hole pretty quick by hitting polarity buttons and not entirely knowing what they’re doing. With that being said, I do recognize there are kind of niche setups where this would be handy. And if you want this as a feature, please don’t be shy.

Send us an email at [email protected]. And what would be really helpful is if we could understand what you’re trying to achieve that requires the polarity button, because we’re really good at trying to figure out what you’re really asking for. And if there’s a setup that you want that’s common, maybe we would think about just building an amp preset or something to achieve it, so that you don’t have to know how to use the polarity buttons and it just works. But either way, we’d love to hear from you. Feature requests are always welcome at [email protected].

Robert Kozyra asks how to identify the problem speaker or speakers in a large array hang. He’s referencing a feature in the d&b amps that has some self-verification built in. And he also told me later about sometimes having trouble where he felt like some of the speakers were not making true reports, maybe because of a reflection when they were too close to the ground. But maybe you could start by just talking about this self-verification feature that is built in.

Yeah, another excuse why you might not need a measurement mic. So when we go online with our d&b system, with R1 talking to the amplifiers, or even without R1, through the front panel of the amplifier, there’s a function called system check. This will send almost inaudible low tones and completely inaudible high frequency sounds to the speakers. The amplifier then measures the return impedance and will graph out the impedance measurement of the low frequency driver, the high frequency driver, and the rear-firing driver or midrange of that cabinet, to verify that all of the drivers are operating correctly as a circuit. So this tells us that something is plugged in. It tells us if there’s a broken wire, it tells us if there’s a blown voice coil, all this kind of stuff. And it makes it very quick and easy, without making any noise, to verify that every speaker is performing electronically up to spec. Now, this doesn’t test for things like a torn cone or a cabinet rattle or that kind of stuff, but we’re going to get there once we start making noise. So we run system check, and that verifies the electronic circuits.

Then with vertical line arrays, and sometimes other types of arrays, we run a test called array verification, which is just about the most clever thing I’ve ever heard of. Because we designed the system in ArrayCalc and opened that same project in R1, R1 now knows which amp channel is supposed to be driving which cabinet within our line array, and it initiates a test process where the amp channels, one at a time, make a low-level noise. And while this is happening, it uses all of the adjacent loudspeakers within the array as microphones.

That’s cool, right?

And so by the time it runs this whole test, which takes 10 to 20 seconds for a large array, it will tell you if your line array is wired the way it expected it to be wired, based on how you built your file. And with technologies like array processing, if we had a pair of cables swapped within our fan-out, this could have horrendous and unpredictable results. So making sure that every box in the array is actually fed by the right DSP channel is crucially important. Not only will it tell you if it’s patched wrong, it will tell you how it’s patched wrong, which cables are plugged into the wrong cabinet. But what this user is referring to is we have seen times where people run this test before the PA is at trim height, when it’s floating right off the ground and some of those bottom cabinets are basically firing right into the floor. And this can create reflections, which throws off the test. In my experience, it’s only happened with J series. There’s something about the LF sensitivities of that box that makes it have this issue. And as soon as you take it more than six feet off the ground, you can run the test without that reflective floor being an issue.

Daniel says, how do I combine speakers from different series with unmatched phase response, like the T10 and the Y7P? And he sent me a couple of measurements, and I was like, I wonder if those are correct. And I looked them up on the d&b site, and they were. Yeah, talk about combining speakers from different families and different series.

Yeah. There are manufacturers that, when they come out with a new generation of loudspeaker, adopt a new phase profile, and this makes it hard to incorporate newer systems and legacy systems into the same PA. Our approach is to try to keep that phase plot as consistent as possible over the years. Even when we came out with newer amps that are more highly capable processing-wise, we didn’t take that opportunity to just change the phase response of existing speakers. We wanted a J series on a D80, the new fancy amp, to be exactly the same as a J series on the old two-channel amps. We lock in that performance and make it consistent across the world and across the decades. And mixing most d&b loudspeakers works really well right out of the box, with complementary phase profiles. Now, there are exceptions, and the T series is a great one. The T series has a very unique acoustic mechanism that affects its phase profile, and here’s how this works. The T series, for everybody who doesn’t know, is a small speaker, and it’s convertible between a point source box and a line array box. And it has a rotatable horn that doesn’t just turn the dispersion on its side.

It actually changes the way the horn interacts with a secondary acoustic lens, which you can see on the front grille. You see these kind of stripes, this different perforation hole pattern on the front grille. Behind that front grille is a multilayered grille, and this multilayered perforated metal actually affects the path length of high frequencies. So when we turn the horn, it changes the way the HF dispersion interacts with the secondary perforated metal mechanism. It changes the path length of the high frequencies and changes the curvature of the wavefront. A point source speaker radiates an outward rounded wavefront, and when a T series is in point source mode, it’s 90 by 50, I think. And then when we turn the horn and turn the cabinet, it’s now 105 degrees wide by a proportional vertical directivity, with a flattened wavefront appropriate for a line source. And the way this works is this perforated metal slowing down high frequencies by extending their path length, which is why the HF phase profile of a T series changes depending on the mode it’s in, as a byproduct of this mechanical system. And yes, we do have the ability to change it with the fancy technology that’s in all these amplifiers: apply some FIR filters, all-pass filters, all this stuff.

But it would incur latency. So now we’d have part of our PA at a different latency than the rest of the PA. And it would make T series on new amps different from T series on old amps, which is not something that we want to introduce to our users. So people ask me all the time, this is such a cool thing, how come you don’t do this T series rotating horn, perforated metal thing on all the speakers? And now you know why. There is a downside, and it works well for a small speaker like a T series.

But that’s not something we want in our stadium PA. And I remember you saying that on the rare occasion that you would need to combine these two speakers, you just need to make a choice, right?

Yeah. So in what part of the frequency bandwidth do you want it to be aligned? Do you want good LF steering, where the low mids and lows want to be perfectly aligned? Or is the T series there for intelligibility? People commonly use a single T series in the line array mode as a high-powered front fill, and in that case, we really care about the HF. So let’s make the HF part of the frequency response align better with our main system. So yeah, you make a choice. There’s no such thing as a free lunch in audio. And if you want a cool feature like point source to line array convertibility, which is highly valuable for small to mid-sized rental companies, then you’ve got to give something else up on the other end. In this case, it’s a non-complementary phase profile.

Yeah, and I’m sure there were conversations on the production side before anything ever happened where they’re like, okay, if we do this, then we’ll have this consequence. And they said it’ll be worth it.

And that’s just another reason why d&b makes 100 different models of speakers, so that you can pick and choose these trade-offs as needed for your application.

Sunny asks, why have external amplification rather than built-in amps?

Sure, the timeless debate. I see strengths both ways. I used to work for a rental company, a couple, actually, that only had self-powered speakers. And from an inventory management point of view, it’s perfect, because you never have to think about, I’m sending this many speakers, so how many amps do I need? Every speaker has an amp, so problem solved. Send them out, don’t have to think about it. On the other hand, if you’re a rental company, it’s a lot more expensive to have an amplifier for every speaker, whereas a lot of rental companies have enough amps to run the A system or the B system, but they never have to run them at the same time, so they can buy half as many amps. So there’s that stuff on the commercial side. Then from the technical side, of course, having an amp in a speaker makes it weigh more. And the question is, do you want that weight in the air or do you want it on the ground? And amps do fail from time to time. When that failure happens, do you want it in the air or do you want it on the ground?

Being able to hot-swap an amp without having to bring in a rigger or a lift is pretty valuable. So there are positives and negatives both ways. I like having one type of cable go up to the array instead of signal and power. I like having the electronics down on the ground where I can monitor them more easily and troubleshoot them more easily. I like having a lighter array so I can get away with using less rigging and all of that stuff. The roof can only support so much, or whatever. So having a light array allows me to use the array I want, not just the array I can hang. But that’s easy for me to say. I work for d&b.

And one interesting point I hadn’t thought of before, that I remember you telling me about, is that if the amp weighs more, the rigging is also going to weigh more, because it has to be rated higher to carry the heavier weight. So it’s not just the one increase in weight; the whole thing goes up.

Let’s say we have a really big line array, a maximum hang of 24 boxes, and Germany decides, actually, for this crossover, we have to use this coil of wire instead of that coil of wire. And the coil of wire they want to use is two pounds heavier. Not only is the box two pounds heavier, but the array is now 48 pounds heavier. And because the array is 48 pounds heavier, the rigging has to be upsized to hold 48 more pounds. But not just the rigging at the top box where the extra 48 pounds lands; every box has the same rigging, so every box has to have upsized rigging to hold 48 more pounds. That upsized rigging now also added more weight, which means the rigging has to be upsized again to hold the additional load. Everything is interconnected. So literally every ounce we can shave off of a speaker means 100 pounds in the end, or something. Maybe that’s exaggerated, but it’s not just an individual box. It adds up to quite a lot, and an amp at an additional 20 pounds per box is a pretty massive hurdle.
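
The spiral Nick describes is a geometric series, sketched below with his 2 lb and 24-box figures plus an invented rigging factor, since no real rigging ratio is given in the interview.

```python
# The weight spiral as arithmetic: add weight to every box, upsize the
# rigging to carry it, and the rigging itself adds weight again.
BOXES = 24
ADDED_PER_BOX = 2.0      # pounds added to each cabinet (from the interview)
RIGGING_FACTOR = 0.10    # assume rigging gains 0.1 lb per extra lb it carries

extra = BOXES * ADDED_PER_BOX   # first round: 48 lbs of cabinet weight
total = 0.0
while extra > 0.01:             # each round of upsizing adds a smaller round
    total += extra
    extra *= RIGGING_FACTOR
print(f"total added load: {total:.1f} lbs")  # about 53 lbs under these assumptions
```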

So my friend Steve Knott says, what do you think about renting cranes to hang PAs rather than rigging them from truss? And I said, what specifically do you want to know about? And he said, well, I’ve seen photos of big festivals where it’s being done already, so I’m curious about the whole thing: safety, rigging for crane lift, stabilizing and aiming the array, and of course security around the crane base to make an un-climbable fence wall type deal. Seems innovative.

I love it. It’s not new, either. People were doing this for years, even before line arrays. Like all rigging, as long as it’s done safely by a qualified and experienced professional, I think it’s wonderful. Personally, I think cranes are a little ugly, so the aesthetic of a giant yellow tractor isn’t my favorite show business aesthetic, but it certainly has logistical benefits. It’s a lot cheaper than paying a crew to come build a tower. I’ve done a lot of outdoor shows where the PA really needed to be in a place that was not conducive to rigging, like on a slope. And with a crane, you can rig it and then drive the crane into position or turn the crane into position. So that’s a huge benefit, and it can be totally safe. I strongly suggest that at night, between days on site, you bring it in and touch the PA to the ground, just in case there’s a hydraulic failure at some point when you’re not there. A lot of times these hydraulic systems can have a very slow leak, and a regular operator wouldn’t notice, because a regular operator doesn’t use the crane to just hold something in the air for four days straight. But it can slowly droop.

So let’s be aware of some things like that. But yeah, have a great time. Also, driving cranes and forklifts and lifts is just super fun.

Speaking of driving forklifts, I know you have used an MSL-5, I believe. Can you talk about that for a second?

The MSL-10.

MSL-10. These giant Meyer Sound speakers.

Yeah, I don’t know. Meyer’s an old company, so I don’t even know if I’d call it an early Meyer speaker, but they’re long gone at this point. They were so large: a single MSL-10 barely fits into a 53-foot truck. It clears with a couple inches on either side. That’s how large this giant array speaker is. And it was brilliant in that they built slots for the forks of a forklift into the speaker. So you drive the forks into the speaker, it’s now rigid on the forks, you pull it out of the truck, you drive it into position, you take it up in the air, and you turn off the forklift. Congratulations, your array is hung. From a logistics point of view, it was amazing. The sound quality could probably be debated, but it was still innovative for the time.

Believe it or not, the first place that I worked for when I moved to the Bay Area had some. They got them second hand somewhere from someone else.

Right. Good times. The last time I was using them was as the amplification for NASCAR, where it’s really about vocal-band, blunt-force SPL. It’s not exactly a nuanced show, and they want it cheap, so being able to rig it without a single stagehand or crew person helps that be a cheaper installation. It was a great fit for that.

Okay. Wesley Stern asks, what is their philosophy with the main-sub crossover? It seems to me that they let their subs’ low-pass filter be much higher than other companies, well above where the main cabs’ high-pass filter is, in most cases resulting in a lot of low-mid summation. I really enjoy their systems and the perception this results in.

So he likes that bump in the crossover range. It’s a bit of a misconception out there that d&b doesn’t allow you to mess with the crossover. We do, but in limited ways. We don’t allow you to actually visualize or adjust the slopes, but we give you buttons that allow you to tailor the crossover point. And this user is right in that the subs generally go higher in frequency than most users prefer. We leave it available to you if that’s your approach. But depending on the subwoofer model, it will either have a button called 100 Hz or a button called Infra. Both of these lower that low-pass filter to cut out some of the upper bass. 100 Hz is approximately 100 Hz. Infra is closer to 70 Hz, but changes based on the capabilities of that subwoofer, so that you can throttle down the frequency response of that sub and let it focus on the real low stuff, which is more common these days. And conversely, all of the high-mid boxes have a button called Cut, which is a low cut: it moves up the high-pass to cut out some of the low-end response at the bottom.

And between these two buttons, we have four options on how to run this crossover. We can have summation in the crossover point for additional power, or we can carve it out to have a little less magnitude in the crossover point, because maybe we just feel like it’s muddy in that room or with that mix, or any combination thereof, and we just toggle the buttons until we like how it sounds. And we have confidence that we haven’t skewed the phase response or made some other compromise, because these are predetermined, friendly buttons that stay compatible, and you don’t have to think about it.
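To get a feel for what toggling those buttons does, here is a minimal sketch. The filter types and corner frequencies are placeholders I made up for illustration, not d&b’s actual presets; it just compares the summed level through the crossover region when the sub’s low-pass is left high versus pulled down.

```python
import numpy as np
from scipy.signal import butter, sosfreqz

fs = 48000
f = np.geomspace(20, 2000, 512)                      # analysis frequencies (Hz)

def response(sos):
    """Complex frequency response of a filter at the analysis frequencies."""
    _, h = sosfreqz(sos, worN=f, fs=fs)
    return h

top_hp = butter(4, 90, "highpass", fs=fs, output="sos")   # hypothetical main high-pass
sub_hi = butter(4, 140, "lowpass", fs=fs, output="sos")   # sub low-pass left high
sub_lo = butter(4, 100, "lowpass", fs=fs, output="sos")   # "100 Hz" button engaged

i100 = np.argmin(np.abs(f - 100))                    # index nearest 100 Hz
for name, sub in [("LPF left high", sub_hi), ("100 Hz button", sub_lo)]:
    total = 20 * np.log10(np.abs(response(top_hp) + response(sub)))
    print(f"{name}: {total[i100]:+.1f} dB at 100 Hz")   # compare crossover-region level
```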

Vladimir asks about subwoofer driver sizes and uses: is there a trend of releasing 21-inch subs, not just from d&b, but other brands, too? Did the needs of events change to drive this trend?

I don’t think the needs of the events have changed, but d&b has gone to generally larger drivers than we did in the past, and I think it’s less about the needs of the act and more about the capabilities of the speakers. That’s the thing that’s changed. When we had the J-Series, the kind of gold-standard d&b large-format PA, the tops could go down to, I think it was like 90 Hz or something. Then we had a J-SUB that was three 18s and a J-INFRA that was three 21s. A lot of people ran the systems without the 21s because the three 18s had enough low end. Personally, I think once you hear one of these big PAs with even just a single INFRA, it’s hard to go without, because that extra low stuff really feels good. But the reason there were two models of subs was that the 18-inch drivers could go fast and be high-impact, but they couldn’t go very low, whereas the 21s could go really low, but they couldn’t go fast and be high-impact. And what’s changed is voice-coil technology, particularly with the SL-Series.

That whole voice-coil magnet structure has really been re-engineered and requires a higher voltage to the voice coil, which the d&b amps are capable of providing. And all of this, in turn, allows a main speaker that goes down to 45 Hz. So we got the upper-bass requirements out of the subwoofer and allowed a 21-inch driver that now has full power even at full excursion, which means as that speaker pushes out, it still has full power to get pulled back to its neutral position as quickly as possible. So now the 21-inch driver can go fast like an 18, with higher impact, which allows us to say: oh, the 21 can now do the upper bass and the lower bass with more impact than the J-Series could do in total. This is a huge win. Let’s go with the 21s. So now that SL sub with three 21s not only has the same frequency response as a J-SUB and a J-INFRA put together, but has almost identical SPL output to a J-SUB and a J-INFRA put together, and weighs less than a J-INFRA by itself.

Okay, so there were some rumblings on Facebook. It seemed like there were a couple of people saying they don’t like d&b’s phase response, and something about it makes them upset. And our assessment is that this maybe comes from a trend in the market toward flat-line magnitude and phase response. So I just wanted to give you the floor on that for a minute to maybe address what you think are some of these preconceptions.

Yeah, I think we’ve seen a big marketing push from some manufacturers who are making their phase response, quote, more linear; that is, more of a flat line without wraps in the phase response. And d&b is not doing this. We’re not into it. We don’t like it. The reason is we don’t really believe that you’re hearing much of a difference in the end. We think it’s more of a visual improvement than a sonic improvement. And there’s no such thing as a free lunch in audio. So just because we can preemptively mess up the signal in exactly the opposite way that the speaker is going to mess it up doesn’t mean we get that for free, and doesn’t mean we don’t incur other side effects in the process. And the main obvious one when it comes to fixing phase response is latency. I think Meyer has a really cool product called the Bluehorn that has a very flat phase response down to like 50 Hz or something. And it’s very cool. But as a necessary compromise, that speaker takes 50 milliseconds for sound to come out. 50, right? Chris from Rational Acoustics says, yeah, if you want the bass to come out at the same time as the high frequency, you need to think of it like a restaurant.

If the high frequency is your entree, the midrange is your appetizer, and the bass is your cocktail, you can have them all at once; you just need the kitchen to hold your cocktail and hold your appetizer until the entree is ready. And it’s the same thing with FIR filters and fixing phase, right? We need to make the high frequency wait, and then we need to make that mid frequency wait until the low frequency is ready to come out, and then we can align it. And then you end up with 50 milliseconds of latency, which for Bluehorn is totally fine, because that is a post-production studio product where latency isn’t an issue, because it’s all playback. A concert, on the other hand, is a different story. That snare drum has already stopped by the time 50 milliseconds goes by. Maybe there are situations where you could argue that’s okay and that latency is still fine, but it does come back to my earlier point: the d&b amps have the ability to make flat phase response right now, as is, and we could fix it. It takes one of our DSP people like 5 minutes.

It’s not hard. But then that speaker on a D80 would sound different than the same speaker on an old D12, and that’s a world of change. And in the end, we think that if we did two versions of the same speaker and A/B’d them, one with flat phase response and one without, you wouldn’t pick the right one if asked to in a blind test.
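The latency math behind that restaurant analogy is easy to sketch. Assuming a linear-phase FIR correction filter whose taps span a few periods of the lowest frequency it controls (the five-period margin here is my own placeholder, not a figure from Meyer or d&b), the group delay falls straight out of the filter length:

```python
from scipy.signal import firwin

fs = 48000
f_low = 50                                # lowest frequency to linearize (Hz)
periods = 5                               # hypothetical design margin
n_taps = int(periods * fs / f_low) | 1    # force odd length (Type I linear phase)

h = firwin(n_taps, f_low, fs=fs, pass_zero=False)  # placeholder filter design
latency_ms = (n_taps - 1) / 2 / fs * 1000          # group delay of a linear-phase FIR
print(f"{n_taps} taps -> {latency_ms:.0f} ms")     # ~50 ms, in line with the Bluehorn figure
```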

Nick, where is the best place for people to keep up with you and follow your work?

You can find me on social: Nick Makes It Louder on Instagram, where you can see some pictures of d&b rigs and a whole bunch of Soundscape systems. Otherwise, feel free to send me an email. You can send an email to [email protected] and just say, hey Nick, I had a question about that thing you were talking about, or tell me more about this. Anybody anywhere in the world can send an email to [email protected]; tell them where you live, and that email will get sent to your local support team, in your time zone, in your native language. Also, we have a ton of tutorial [email protected], everything from software use to rigging. And hey, come say hi, see me at a trade show if those ever start up post-COVID; otherwise, I’ll see you on the internet. Yeah, if you bring up the Tiki bar thing to me at a trade show, there’s a good chance you’ll end up drinking Tiki drinks on the d&b credit card.

Well, Nick Malgieri, thank you so much for joining me on Sound Design Live.

Thanks, Nathan. So much fun.

That moment when you realize monkeys are trying to steal your gear

By Nathan Lively

Subscribe on iTunes, SoundCloud, Google Play or Stitcher.

Support Sound Design Live on Patreon.

In this episode of Sound Design Live I’m joined by the Chief Engineer at B2B Podcast Agency Pikkal & Co, K Bharath. We discuss podcast production and the dangers of outdoor production in Thailand.

I ask:

  • What are some of the biggest mistakes you see people making who are new to podcast production?
  • What are the most important things to get right to make a remote interview go well?
  • Tell us about the biggest or maybe most painful mistake you’ve made on the job and how you recovered.

Notes

  1. Podcasts: Beyond Markets, The Lux Travel Podcast, Insight India, Bring Back the Bronco, Real Narcos, Darknet Diaries
  2. Intro to the Phase Graph

Transcript

This transcript was automatically generated. Please let me know if you discover any errors.

Welcome to Sound Design Live, the home of the world’s best online training in sound system tuning that you can do at your own pace from anywhere in the world. I’m Nathan Lively. And today I’m joined by the chief engineer at B2B podcast agency Pikkal & Co. Welcome to Sound Design Live.

Thank you, Nathan. Thank you for inviting me to the show.

You have made some special time in your schedule to talk with me today. So where are you in the world today?

So today I’m calling in from Singapore, my home base.

And what time is it there? Isn’t it like 10:00 p.m.?

It’s 10:06, to be exact.

Wow.

So we’re like ten or 11 hours apart. That’s amazing.

Yeah. Wonderful.

All right. Thanks for staying up with me. I appreciate it.

It’s a pleasure. It’s a pleasure to be on your show, Nathan.

So, Bharath, I definitely want to talk to you about podcast production and conducting remote interviews. I know you do a lot of that. But before we do that, what do you like to use as a reference for podcast sound quality? Do you have a favorite podcast or an episode that you think represents the height of podcast production for you?

So I would go back to this very famous podcast. He’s got probably one of the best setups. We had a studio last time, and we modeled the studio based on his microphones, the Shure SM7Bs, and mixers and all that. It was one of the best setups, in my opinion. However, I was focused more on the outdoor recordings, and one particular episode which I listened to really got my attention; the audio quality of it really got my attention. That was This American Life, the 24 Hours at the Golden Apple episode. They were recording it from the Golden Apple diner, and the quality of that, capturing all the soundscapes, capturing all the noises and everything, I thought that was good. Just at that moment:

A man in a Hawaiian print shirt and khaki pants walks by their table.

He hears the word employer, mistakes it for the word lawyer.

And then turns to Tom.

Are you a lawyer? No. Do you want to be?

And I thought that was how an outdoor recording should sound, in my opinion. So I modeled a lot of it based on that. And that episode was done in 2000, if I’m not wrong, 2000. So I searched online, and I came across an industry-changing mixer, the Rodecaster Pro, at that point in time. And I paired that with the Audio-Technica BPHS1, that’s the headset. So that’s my outdoor go-to setup, and that is probably, I would say, the best audio quality I could get, the closest I could get to the This American Life podcast, which to me is my reference point for an outdoor podcast. Okay.

So we’re going to get some more into the technical side of recording these interviews and podcasting in a little bit. But first, let’s go back in time and tell me how you got your first job in audio. What was your first paying gig?

I was just a fresh grad, to be very honest. I was just a fresh grad, like a normal uni grad looking for a job. And I stumbled upon this internship, I should say, and I got the internship. I didn’t have a formal education in, like, sound engineering or anything. Everything I learned through my job, everything I learned through online courses; the best lessons actually come when you’re doing it on the spot, making mistakes and learning from them. So on my first show, my colleague taught me everything. And I would say the first paying gig was the first day on my job.

And was this in radio broadcast or was it like a podcast studio or recording studio?

It was just a podcast studio. So it was basically the Yamaha MG10XU, if I’m not wrong, and a couple of AT2020s. That was the first setup.

Okay. Lots of things have happened since then, but I was wondering if we could zoom in on one point in your career where you felt like maybe something changed: maybe you made a commitment, maybe you decided to stop doing one thing and start doing another. Was there a decision that you made to get more of the work that you really love?

I was one week into my job, into my role with Pikkal. And to me, I had to make a very crucial decision there, because if I had left, I would have missed all the opportunities which I’ve gotten over the three or four years, which I really do not regret. And I think that was the best decision: I didn’t leave. I decided to forgo that other job opportunity and grow as an individual and grow with the company to where I am now, or to where we are now, actually.

Well, that’s great. I like that, because most of the time when I ask someone that question, they think about a time in their life when they said, I can’t stay in this job anymore, I have to go somewhere else, or I have to take this new opportunity. It’s rare that you are in a place, you have another opportunity, and you say no to it: you say, you know what, I want to commit to this thing and see it through. So I think it’s interesting that you decided to stay, and you’re happy with that choice. And here we are today, and you’ve had a bunch of experiences.

Exactly.

All right, so let’s get into talking about podcast production. Why don’t we start off with some mistakes? You’ve made a lot of mistakes yourself, and now you’re working with other people and seeing people starting out who are making some of the same mistakes. So what do you think are some of the biggest mistakes people make who are new to podcast production?

I got an idea of what kind of mistakes people make from the show that I did recently, Magic Mike. So I was the show host for Magic Mike, and I was speaking to other podcasters.

By the way, let me just interrupt you, because for people listening now, think about what you think this show is about, Magic Mike. At first you might think, oh, Magic Mike XXL, maybe it’s about stripping, maybe it’s about beefcakes. And then you might think, no, maybe it’s about microphones, Magic Mics or Magic Microphone, but it’s not either of those. So tell us what this show is about.

Basically, it’s where I speak to other podcasters to learn their journey and to actually understand what kind of software they’re using to make their podcasting journey easier. They shared with me some of the mistakes they made, and I also learned what mistakes people make in general. Firstly, microphone technique. That’s the biggest killer, in my opinion.

So how do you do microphone technique wrong? So I have a microphone here. So what could I be doing better? What could I be doing wrong? What do you see people doing wrong?

I guess, firstly, they have it on a desktop stand on their desk, and they just speak very far away from it. I’m just going to show you an example; it’s something like this. Yeah. They think it’s like a webinar. They have an expensive microphone, a good microphone, and they just stay far away from it, without realizing they should actually be like a clenched fist away. Or, like I say, eat it like an ice cream. Usually with these microphones, you’ve got to eat it like an ice cream, because that’s when you really get great audio. Another one is when they start off their podcast, they buy an expensive microphone, and after six episodes they realize they’re not really into it. So therefore podfade happens, and then the microphone collects dust. That’s another mistake I noticed. Finally, for me, and I completely hate this: podcasters using desktop audio to podcast.

Okay. You mean the microphone built into the laptop?

Built into the laptop. That is a no-go for me, because the least someone can do is get a wired earpiece that has a microphone. It comes with a headphone; plug it in, and they can use that. Apple EarPods are not the best, but I think to kick-start your podcasting journey, these are good enough. These are really good enough.

Yeah, I have heard people’s videos and podcasts recorded with just the microphone built into their Apple headphones or something, the wired ones normally. And I feel like you can get by with that; it’s definitely better than whatever is built in. So that makes me want to ask you about how you work on this or improve this with guests that are not at your studio. Because the problem that I run into a lot, and that I’ve talked to other podcasters about a lot, is that as much as we try to get our guests to do all the right things, like use headphones, be in a quiet room, use the best microphone that you have, or at least something that is not some really crappy Bluetooth headphones or desktop audio, as you mentioned, somehow it never works 100%. Somehow my guest always shows up with something slightly wrong. Either the room is too loud, or they have a microphone but they’re not actually recording with that microphone. And I just don’t know if there’s any way to overcome that without me being there.

So I’m curious how you handle that. Do you do some kind of pre production work where you meet with them and you check all these things the day before? Yeah. So tell me a little bit about that. How do you overcome some of these issues of working with remote guests who have completely different conditions that are not in your studio?

So that is the key: work before the recording. One thing I learned the hard way was you need to analyze where they are seated, where they are located, because I do mic tests with all the guests who come on the show. Even if I’ve done a mic test with them, say, two weeks ago and they’re back on the show, I still do it again, because the conditions change, the location changes, everything changes. I don’t want to risk it. So what I do is a mic test at least three days before, because then if the earpiece is not good, or sometimes the earpiece microphone does not connect well, at least they have enough time to get something new before the recording. That is one. And also always checking with them: are you recording from the same location? Because what tends to happen is sometimes they have an office space, and then next thing they are recording from a meeting room, and that meeting room is probably just filled with glass panels. I’ve come across situations like that, and it was terrible. The audio sounded like it was in a bathroom.

So that was painful to hear. But during my test, I’m ensuring they are recording from the same location, and just making sure that they have a good experience in the mic test, so that in the podcast they are upbeat and looking forward to it.

Yeah, that’s really smart, because if you’ve taken care of some of those technical issues ahead of time, then you’re not worrying about them going into it; you’re focusing more on the content and the questions. I don’t do that with this show, and it is a little bit of stress, because, for example, you and I didn’t meet ahead of time, and we’re just showing up here. I’m at home in my studio, and I’m pretty confident with this setup that I’ve used a few times already, but it’s still a thing that’s distracting me a little bit from just having a conversation with you. If we had done that ahead of time, then maybe I would worry less about it. Or, the thing that I find happens often is that I’ll get on the meeting with someone, they’ll start talking, and I can tell that there’s a problem: either their microphone is bumping into something or there’s too much background noise. But because I am now in interview mode, I become the interviewer and not the sound engineer, and I can’t be bothered with that stuff anymore. So it’s almost like if you’re going to wear multiple hats, you really have to separate them and figure out, okay, I’ll do the sound engineering stuff on this day, and then when I come in on the next day, I’ll be ready to just be the interviewer.

Yes, exactly, exactly. And this is something I learned through Magic Mike as well, because I tended to just dive straight into the show and not care much about the microphone, because I was bringing out their journeys. However, one thing I learned is that actually doing a pre-call with them helps, because then what you’re doing is, A, picking up how the audio sounds, and B, just running them through the questions and all that. But of course, this depends on how much time we have to allocate for that particular show.

Great tips here so far. Bharath, maybe let’s talk about something that’s really gone wrong for you. You talked a little bit about mistakes people make, and we talked about working with remote guests. Can you share a story with us about something that has gone really wrong for you? Maybe it ruined an interview, maybe you lost some recordings that you had worked on, or something like that. What’s one of the biggest mistakes that you’ve made on the job?

I can vividly remember two. This one still sticks out in my mind; it never goes away. This was, I think, the second month into my Pikkal journey, so that’s when I was a bit more trained in live streaming and podcasting. And what I did was I confidently went and switched on the mixer. Everything was set up. There were a couple of live shows, and what I didn’t have was somebody watching me, monitoring in the background. So the live show went on and everything, and I went back home. Next morning I came in and I listened back to the audio, and I realized, hey, this is not coming from the microphones; this is coming from the webcam.

Oh, no.

So basically, I did two live shows with the audio coming out of the webcam, because I forgot to switch the microphone input on the mixer, which was the MG10XU. And I was like...

Damn, wow, what happened? I guess you had the recording and you still had to produce it.

So luckily, I had a backup, because what I tend to do is back up the audio-only feed onto another laptop. So I recorded on the side, and luckily I had an audio backup and was able to swap it in.

I see, so on the second computer you did have the correct input selected.

Yes, but it was a painful process, because I had to lip-sync the video and audio. I had to make a lot of changes. So it was painful, but it was a very good learning point, I should say, because every mistake is a learning point.

Right. You really need a checklist to force yourself to check those inputs, even if you assume that they are correct.

Yeah. I would say being an audio engineer, in my case, is like being a pilot. You need those pre-flight checklists and all of that to make sure that everything goes smoothly. And then while the show is going, it’s on autopilot. The most important times of a show are the start and the end.

Sure.

That’s a good point. If we didn’t start recording, we wouldn’t have a recording. Same if we didn’t stop it and save it.

Those are the two important moments. And my second story: this was an event, and the event was nearing its end. Everybody was packing up, and we had this very important guest who just came by and decided to do an interview with us. I had the Rodecaster, the Rodecaster Pro. It needs power, am I right? So I powered up everything, with the Audio-Technica BPHS1s, and the recording was going well. All of a sudden, the Rodecaster Pro’s power went off, and I was like, wait, what happened? I just stopped the interview, and I realized one of the electricians was actually taking out all the electricity and cutting power to everything while we were in the middle of the recording. Yeah. At that point, luckily, I did not panic, and I had a backup plan. I took out my Zoom H6, plugged in the Audio-Technica, and the interview still went on. However, I lost the first part of the interview, because the Rodecaster does not save when it’s shut off abruptly. But that was an experience, because another key trait I think an audio engineer should have is that when something screws up, something goes south...

We should not panic; we should actually just aim for the solution. Yes.

Rather than throw your hands up in the air and just walk out.

That comes with, I would say, experience and how many shows you do, because being there teaches you a lot, I should say.

If you are a fan of audio analyzers, then you may be interested in my next workshop on phase. It’s called Intro to the Phase Graph, and it is all about making quick decisions without a PA. It’s appropriate for a beginner level. So if you’ve just started using an audio analyzer, or if you’ve been using one for a few years but you’ve always been a little intimidated by the phase graph and want to feel more comfortable with it and learn how to use it, then this could be for you. The first class is on March 5, so it’s coming up pretty quickly, and it’s going to happen over three sessions. I found that it’s not a great experience for everyone when I try to cram everything into one day and make it long and painful, so I’m going to spread it out. It’ll be three one-hour sessions, so that we have a little bit of time in between to digest, think about what questions we have, actually try some of this stuff out, and see how it works. Questions we will answer: What are the optimal settings for the phase graph?

How do I practice when I don’t have a PA? How do I convert phase to time and time to phase? Plus, we’re going to talk about how to get proper, valid data to begin with. Actionable data: what does it look like? How do we know if we have good data? So if you’re interested in that, it’s sounddesignlive.com/introphasegraph, or you can just look at the link in the show notes for this episode.
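That phase-to-time conversion is worth a quick sketch. A pure time offset produces phase that grows linearly with frequency, phi = 360 · f · t in degrees, so reading the slope of the phase trace gives you time back:

```python
def phase_deg(f_hz: float, t_s: float) -> float:
    """Phase shift in degrees produced by a pure time offset."""
    return 360.0 * f_hz * t_s

def time_from_phase(delta_phi_deg: float, delta_f_hz: float) -> float:
    """Time offset implied by a phase slope (degrees over a span of Hz)."""
    return delta_phi_deg / (360.0 * delta_f_hz)

# 1 ms of delay reads as 36 degrees at 100 Hz and 360 degrees at 1 kHz:
print(phase_deg(100, 0.001), phase_deg(1000, 0.001))   # 36.0 360.0
# Reading the slope back: 324 degrees across 900 Hz -> 0.001 s
print(time_from_phase(324.0, 900.0))
```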

All right, thanks.

The place was in Bali, a coworking space called Hubud, right smack in the farm. And what happened was we went there about 07:00 a.m. We were recording a show for one of the airlines, interviewing people, and we just sat back, trying to have our coffee. Next thing you know, the monkeys were coming around and trying to actually steal our gear.

Okay, because they got inside somehow? Because it’s just open?

It’s just open-air.

Wow. So you looked over and they were going through your bag and they were about to take things almost.

Yeah. If I’m not wrong, we actually had to stop our recording because they were going to steal our gear.

Wow. So were you able to complete the interview, or could you not anymore because it was too insecure? Wow, that’s hilarious. I haven’t been to Thailand, but I did spend some time in Nairobi, and we did go out and do some camping. So I have been in a situation where someone actually has to stay at the camp at all times, because if you don’t, yes, the monkeys will come and steal your stuff.

That was one of the best moments.

I should say one of the best moments. But you didn’t even complete the interview. It was really funny.

Okay, that’s funny.

All right, Bharath, I want to ask you about a book recommendation, and it doesn’t have to be audio, necessarily. What is one book that has been immensely helpful to you?

The Bhagavad Gita. I’ve not been someone who tends to read a lot; I’m more of a visual person. So I’ve picked up reading, and something really struck me: like I mentioned earlier, in a critical situation an audio engineer needs to be calm, especially when there’s a solution. And that’s what I learned through the Bhagavad Gita: to focus on the solutions and not on the problems.

Sure, yeah. And that’ll help you in any job, not just in audio.

Yeah.

So, Bharath, what podcasts do you listen to regularly?

Firstly, I have to listen to the podcasts that we do with our clients, and that’s been immensely rewarding, because I’m getting insights into the stock markets, for example, and the travel industry. So there are a couple of podcasts like Beyond Markets by Julius Baer, The Lux Travel Podcast by The Lux Collective, and Insight India. These are very interesting shows, and the stories are pretty captivating, I should say. Some other shows I go to are more US-based: Bring Back the Bronco, Real Narcos. Another one I picked up recently was Darknet Diaries. I’m not sure if you’ve heard of that.

No. What’s that about?

So Darknet Diaries talks about what goes on behind, say, scams or true crime, so I started picking up on that. I found it quite interesting, especially as a true crime podcast. I’m starting to show a bit more interest in that. Yeah.

Super popular right now.

Yeah.

Where is the best place for people to follow your work? I guess it would be your podcast, Magic Mike.

Magic Mike is one. They can find me on B2B FM, KB. That’s where all my work is; that’s where all the podcasts will be.

Like K and B.

Just K and B. And you can connect with me via LinkedIn, to actually find out more about outdoor studios or other recordings, I should say.

Awesome. Thank you so much for joining me on Sound Design Live.

Thank you, Nathan. Thank you for having me on the show.

When will this ‘immersive’ fad be over?

By Nathan Lively

Subscribe on iTunes, SoundCloud, Google Play or Stitcher.

Support Sound Design Live on Patreon.

In this episode of Sound Design Live my guests are the Director of System Optimization and Senior Technical Support Specialist at Meyer Sound, Bob McCarthy and Josh Dorn-Fehrmann. We discuss the perception of immersive sound systems, from marketing nonsense to powerful system design tool.

I ask:

  • When is this fad going to go away?
  • How is it possible for each audience member to receive uncorrelated signals? If every array source is covering the entire audience, won’t every audience member experience a 5x comb filter?
  • From FB:
    • Robert Scovill: Is Galaxy, when it is used in immersive systems, considered a “spatializer” by a given definition? I know Meyer are incorporating delay matrixing within the unit to achieve the spatial aspects of their Spacemap Go application, but I’m curious if units like Astro Spatial, L-ISA, TiMax, etc., are functionally – or mathematically – different than what Galaxy has to offer. How does Meyer define an “object” – is it a speaker output? Or an input source to the spatializing device?
    • Aleš Štefančič: I was wondering how far into the audience the immersive experience can be achieved before all those separated signals become combined, and does that then cause cancellations in the back of the room?
    • Lou Kohley: When will this fad pass? 😉 Seriously though, does Meyer see immersive being commonplace or as a special thing for specific spaces?
    • Gabriel Figueroa: What do you see as the pros and cons of immersive in theaters that cater to both music and spoken word? Especially rooms with difficult dimensions, where traditionally you would add a speaker zone/delay, but now you could theoretically add not just coverage but imaging as well!
    • Robert McGarrity: Total novice for immersive programming, but where do you delay to? Is there a 0 point?
    • Angelo Williams: Where do we place audience mics in the room for capture as objects?
    • Lloyd Gibson: I thought 6o6 was against stereo imaging in live sound because of the psychoacoustics and delay/magnitude discrepancies seat to seat. Does this not apply here or is there a size threshold where it can be successful?
    • Sumeet Bhagat: How can we create a good immersive audio experience in venues with low ceiling heights?

It’s a totally different experience mixing in a wire versus mixing in the air. That’s the beauty of immersion, but you have to be able to pull it off.

Bob McCarthy

Notes

  1. All music in this episode by LXIV 64.
  2. Spacemap Go
  3. Quotes
    1. Noise is the number one complaint at restaurants.
    2. There’s no upside to unintelligibility, but…intelligibility isn’t the only thing. We’re willing to give up some of that approach [of mono center clusters] in order to get some horizontal spread. People are willing to give up perfection and intelligibility in order to get that horizontal experience.
    3. Spacemap is a custom panner, basically.
    4. Can I use smaller arrays if I use more of them? The answer is yes. Consider the Fender Twin Reverb. It does only one thing: reproduce the guitar, and it can ruin the experience for everybody because it’s so freakin’ loud. So how do those two twelve-inch speakers out-do our whole $100,000 PA? It’s an object device that only streams a single channel, while [the sound system] is reproducing 32 channels or something like that.
    5. Time doesn’t scale.
    6. It’s a totally different experience mixing in a wire vs mixing in the air. That’s the beauty of immersion, but you have to be able to pull it off.
    7. One place I throw up a big red flag is people wanting to play matrix games with their under balconies and front-fills. It’s like, stop it stop it stop it.

Transcript

This transcript was automatically generated. Please let me know if you discover any errors.

Welcome to Sound Design Live, the home of the world’s best online training in sound system tuning that you can do at your own pace from anywhere in the world. I’m Nathan Lively. And today I’m joined by the Director of System Optimization and Senior Technical Support Specialist at Meyer Sound, Bob McCarthy and Josh Dorn-Fehrmann. Bob and Josh, welcome to Sound Design Live.

Hi, Nathan. Welcome. Thanks for welcoming us, I guess.

Yeah, good to be here.

Okay. So I definitely want to talk to both of you about immersive system design. That’s what we’re here to talk about. A lot of people sent in questions. It is an exciting, or polarizing, topic, depending on how you look at it right now. But I hope by the end of today’s conversation you may have some more information about it, and you may feel differently about it. We’ll see; I may feel differently about it. But before we do that, I would like to know from each of you: what was the very first concert you ever attended?

Can you remember? Whoever can remember first goes first, for me.

That’s easy, if you consider a concert at my elementary school gymnasium. That was Charlie Fer and his band, and they played “I don’t give a (blank) about a greenback dollar.” And they literally did that, because it was in the Catholic school auditorium, so they couldn’t say “damn.” And I thought, wow, this is really cool. We’re all at this concert together and everybody’s cheering and they’re playing Peter, Paul and Mary songs just like my records. And I didn’t even know such a thing was possible. This is really cool.

Oh, man, that’s way better than mine. Mine was a Christian artist of some sort. I was really involved in the church when I was a kid, and I think it was Rebecca St. James. Maybe. A Christian concert was, like, what I did first.

And then you were both steeped in religion from a young age.

Yes. I grew up in Louisiana, and I moved to Texas.

The thing about my first concert was that it was people that I knew; I went to school with their younger brother. So it was, like, real people. So that set the seed in me that real people can play music for an audience. And that was like, okay, this is awesome, I want to be part of this. And there you go. Right there.

Then, of course, my first... go ahead.

My first big rock show was Grand Funk Railroad.

Oh, yeah.

I’m your captain.

I’m glad you pointed that out, because I think it seems like a magic trick for a long time. Right? There are, like, these magic things that are happening on stage that are making us feel feelings. And it kind of seems distant. We put the artist up higher; we’re down lower. We’re disconnected from them in a way. So when you start to meet those people and see that they were once like you, and also maybe knew nothing about music or how to play music or audio or physics or anything.

And then they learned that stuff, then your brain starts to see, like, oh, maybe I can get involved.

Yeah.

Totally. I also got into theater really young. I remember watching shows at high school productions, and being in elementary school and going to see Anne Frank or whatever. And it was funny. We saw Anne Frank in Lafayette, Louisiana, at the big performing arts center. And at the end of the show, we got on the school buses, and the person playing Anne Frank was smoking a cigarette outside, and it totally ruined the magic and the spectacle. And that was probably the first memory I have of, oh, this is something that people actually do, that are human beings.

It’s very interesting.

There’s a person inside that mouse costume.

Yeah.

Well, another seminal event like that for me was when John Huntington wrote his book on control networks and control systems. Exactly. And it’s like, I had known John for ten years, and it’s like, well, gee whiz, if John Huntington can write a book, I can write a book. Seriously, it was that much of a knock on the head. And that was a big piece of pushing me forward to write. That’s fun. Yeah.

It’s really helpful when we see our colleagues doing something like, oh, this person can do it. I can do it.

You don’t have to be a college professor or have mixed the Beatles’ albums to write a book about audio, or be Harry Olson. You can write if you’ve got something to say.

Yeah.

And I remember when he then went on to self-publish a later edition. So he’s been a good role model for a lot of us who want to publish and stuff.

Exactly.

So, Josh, when are you coming out with your book?

Oh, man. I wrote a thesis for graduate school while I was on tour, and that was hard enough. And that was about 50, 75 pages. It’s on restaurant sound design. So it was a great excuse to tour around the country, eat at great restaurants, and talk about noise and how to elevate the dining experience.

Would you mind sharing a couple of pieces from that? Like, what was one of your biggest takeaways from looking into a lot of restaurant sound design?

Well, yeah. So noise is the number one complaint in restaurants. Right.

And they tend to just make that worse by putting sound into the space.

Oh, yeah. And it comes down to this; we deal with it all the time in installs, churches, theaters, wherever, but the same thing happens with restaurants. One of the interesting things about noise is that at a certain SPL it starts activating your fight-or-flight mentality. And they see that as things get louder, the rate of consumption of food and drink actually goes up. And I think it’s somewhere near 20% in some of the studies I was looking at.

That’s actually good for the bottom line.

So imagine something like a Chipotle.

People are stressed out.

Yeah. That’s why you go into a Chipotle, and it’s just concrete walls and glass everywhere.

Really. They did that on purpose.

Partially. I don’t know. You can walk into these fast-casual restaurants and that’s the architecture, and then that architecture trend has carried over. And so there’s all sorts of synesthesia-type research going on, on how frequencies affect taste and all sorts of different things. It was a very interesting thesis. I went to grad school at UC Irvine in California for sound design for theater. But, yeah, I was very interested in that. And then it sort of all came together right before the pandemic, at a restaurant called Verse in Los Angeles.

Manny Marroquin, who owns Larrabee Studios, a very famous mix engineer, took over a restaurant space right next to his recording studio, and we put a Constellation system in there for full acoustics. You can use Spacemap Go, and you can also have a PA; there’s an Ultra-X40 system of PA on sticks. It’s basically my thesis in a restaurant, and it actually exists now. They actually have a fiber line connecting to the recording studio, and the RT60 of the room is like 0.5 seconds. So it’s like a studio inside the restaurant.

And we adjust the acoustics for whatever bands are playing. And then we also use Constellation technology for what we call voice masking, so we can sort of isolate the tables. That way, you’re having a nice conversation with someone, and you don’t have to yell across the room or hear other people’s conversations.

I feel like we should do a whole other podcast on this, because now I’m wondering: I was thinking that I can sort of pitch customers and clients on my work, and on sound systems in general, by saying, hey, the better the sound, the more money you’ll make. But it sounds like that’s not always true. So really, we should make the sound worse to help them make more money? But then their customers are also going to be stressed out. Where’s the connection there?

Yeah, I think it depends on the goal of the restaurant. It’s like any good system design: what are your goals and what are you trying to accomplish? And then physics gets involved as well. Everyone will love a better acoustic up to a certain point in a room. If your room sounds too dead and you don’t energize it with reverb, and it sounds almost anechoic, like you walk into a cinema and you’re eating, that’s not a good dining experience. But if it’s got a little bit of an uplift, where it elevates and has a little longer reverb time and more early reflections, then you have an energetic room.

The problem with restaurants is you have too many reflections, you have so many hard surfaces, so many things. And then you have people that start trying to talk over each other.

Got it.

And it just creates white noise. So, yeah, there is a balance and you have to find it. That’s the job of the architect and acoustician. The cool thing about Constellation is you can build a dead room, and then we can make the room whatever you want it to be and change it at the push of a button. So you can tune it for the tables when people are dining, and at the push of a button, when the band gets on stage, it can now be a concert hall or a theater or whatever you want it to be.

And Bob, would you agree that there is this balance, where the sound needs to be good enough in a commercial or restaurant space that you feel safe and want to stay there, but not so dead that you’re not interested in being there and don’t want to, I guess, drink and eat? Have you seen that in the wild?

Well, I think that if a restaurant has overly absorbent acoustics, which is so rare; where do you find that? Maybe the old-school steakhouse with the furry booths kind of thing. So if it’s really dead, you’ve created an environment where you’d better have people far apart, because if it’s completely dead and people are close, then you’re literally hearing everything, exactly and clearly, that everybody else is saying. So the dead restaurant and the booth, those sort of go together, because you’ve got separation then.

But what I find is that you have this situation where background music tries to fill up that void, to make people feel like they are not alone, that the place is alive. But some of these places will have these sensor mechanisms that raise the background music to make sure that it’s over the talking, and that’s, of course, in my mind, reversed. It should go down. If the place is already so full of people talking and having a good time, don’t send the music up, because they’re already having a good time. Just bring that thing down and de-escalate, so that people don’t have to have the shouting experience and the “what? what?”

And that feeling where you’re with a party of eight and you really are only able to talk to the person on your left or on your right. And that’s really why.

We’re talking about immersive experiences, and a restaurant experience is an immersive experience. You’re surrounded by people, you’re dealing with the various acoustics in the room. Verse restaurant in LA and a couple of other restaurants have a ton of speakers, and they do a lot of other crazy things with them. But the experience of being surrounded and experiencing what’s happening in the restaurant is key. And so we do things like raising the acoustic to make it a little more vibrant and energetic in the bar area. So it’ll be more vibrant by the bar.

And then the rest of the restaurant will be a little more quiet, less reverb, so that people can have a better conversation. And you can sculpt all of this with Constellation technology. But that’s one of the many tools of immersive audio. And I think reverb, reverberation, and room acoustics is a side of immersive audio that people are starting to get into more and more. But then you have the other side, which is more physical: speakers across the stage, speakers all around you, moving sounds around and doing things like that.

And so what is immersive audio? That’s a big question. To me, it’s a marketing term, and whatever term you use, whether it’s hyperreal, immersive, or whatever, it all goes into the same bucket: it’s an experience for people, live and in the real world.

So for me, the ultimate restaurant-plus-immersive-audio experience has got to be Chuck E. Cheese, man.

Exactly.

With those animatronic cheese balls on the stage.

There you are, so in it. There are no windows; it’s just a warehouse, everything’s blacked out. So it’s just this experience they’ve created.

That’s probably my first concert experience, actually robots at Chuck E. Cheese.

Okay, Josh, thank you very much. You’re a great co-host. We needed a transition from restaurants into immersive. And the first thing we need to talk about is sort of the tough stuff, because there are a lot of people listening right now, like me, who are thinking, when is this fad going away? Why do I need to care about this? And people like me typically try to ignore things until it’s something they have to know tomorrow. So I’m not going to look up the directions to the airport until I have a ticket to leave tomorrow.

And so I’ve been ignoring all this stuff about immersive for years. A couple of years ago, I was in Orlando for a conference whose name I can’t remember, and everyone was showing off their immersive systems, and I thought, this is really fun, but I don’t need to worry about it, because this will never be a part of my life; it’s so escalated in terms of complexity and expense that I’m never going to work on something like that. Fast forward to this year’s Live Sound Summit.

And we’ve got Robert Scovill presenting about why he thinks immersive systems are so cool, and why he’s pitching them to event producers for tours that he has coming up. And it turned into kind of a polarizing thing, where it felt like we had people who had drunk the Kool-Aid, or were on that side of the fence, saying, this is so cool. And then people like me, who are still kind of on the other side of the fence, or on the fence, saying, but wait, is this just marketers trying to sell me more speakers?

So we’re all friends here, and I know you guys don’t take any offense to me saying things like that, but I feel like we kind of need to go through this conversation before we get into some more of the fun system design stuff. So I want to give each of you a chance to say what excites you about this idea of immersive sound, and what we can do to allay people’s fears, to take away the fear that this is something that is going to be forced on people.

Is that a weird thing to say?

No, I’m there with you, man. And I was there with you up until about two years ago. What’s interesting is, from our company’s perspective, we’ve been doing immersive audio for 30 years. One of the first products John Meyer made was a subwoofer for the touring production of Apocalypse Now in quadraphonic. The cinema world has been doing it for a long time. Theatrical sound designers have been doing it for years and years and years. And so this live audio world, where we have a stereo environment, or even a mixed mono environment, and now we’re moving into something different.

It’s scary, and I think people have every right to be scared. But before I talk more, let’s bring it to Bob, because he’s been working in stereo and mono systems for all of his career. So Bob, go for it.

I am not an immersive evangelist. It’s not my role. What I do is try to give you guidelines so that if you’re going to make an immersive system, you make one that’s going to work and achieve your goals without ruining your other goals. So for me, the laws of physics still apply. The laws of human perception still apply, the acoustic realities, the interactions between speakers. All those things still apply. So now you’ve decided you’re going to be immersive. So here are the rules that you have to go by, or the guidelines.

I more likely look at them as guidelines than rules, because nobody wants rules in this whole trade of cowboys. And so what you want to do now is, if you’re going to do this, here are some guidelines to help you succeed. So to me, I go back to the easiest thing to think of: okay, if we’re going to make a successful system, the first thing it has to be is intelligible enough for people to understand the material. In the world of theater, they have to understand the words. In the world of house of worship...

They have to understand the words. In the world of rock and roll, it’s pretty helpful to understand the words, although a lot of times it’s not sung with that kind of clarity, so you can bend on that. But it is really helpful to have. There’s no upside to unintelligibility. But if you look at why we don’t have mono center clusters all around the world doing all of our shows, it’s because intelligibility isn’t the only thing. We’re willing to give up some of that perfection.

And the absolute bestness of that approach, in order to get some horizontal spread. And I think a big piece of it is that most people are born with two functioning ears, and you want to hear a horizontal panoramic spread, because having things spread over a horizon tickles your brain in a really positive and engaging way. So left and right is here to stay. It’s not going away. And people are willing to give up perfection and intelligibility to get that horizontal experience. And then that brings you to the next big chunk: going to three channels.

LCR. The world of cinema crossed that road a long time ago, and they were very troubled by the fact that if you just go left and right, as soon as you’re one seat off the center, anything that’s panned to the center images to that side. And it’s an insolvable equation, no matter how much somebody tells you they’ve just invented a new magic filter that time-smears and blah, blah, blah. I don’t want to hear about it. You sit one seat off the center in an arena, and everything mixed mono is on the left side.
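To put a number on how quickly that image collapses, here is a small sketch; the speaker and seat positions are made-up illustrative values, not from the conversation. Once the near speaker’s arrival leads by more than about a millisecond, the precedence effect localizes a center-panned signal to that side:

```python
import numpy as np

c = 343.0                                  # speed of sound, m/s
left = np.array([-6.0, 10.0])              # hypothetical L speaker position (m)
right = np.array([6.0, 10.0])              # hypothetical R speaker position (m)
seat = np.array([-1.0, 0.0])               # a listener one meter left of center

dt_ms = (np.linalg.norm(seat - right) - np.linalg.norm(seat - left)) / c * 1e3
print(f"right speaker arrives {dt_ms:.1f} ms late")   # ~3 ms: the image snaps left
```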

Okay, everybody knows this. We don’t want to admit it, but everybody knows. So the deal is, if you want the vocal or some center image to stay in the center, you need a center channel. That’s why you have a dialogue channel. But you now have to not go and put everything in all three channels. You can sort of put a lot of things in left and right, but when you start putting things in left, center, and right, now you’ve got a problem, because they are going to have all sorts of fights.

The correlated comb-filter fights that we all know about; my life’s work is screaming and putting up flags about this subject. Once you go to this, you’ve crossed the line, and now you need to take a decorrelation approach. That is, I’ve got to put different things in the center than I put in left and right. And if I’m going to take that approach, that center channel has to reach all the seats if it’s going to carry the big voice, the big star, the lead of the show; if it’s going to be theatrical, it’s going to carry the vocal content of the show.

It can’t just be a 90-degree speaker that covers one half of the room, which you can get away with on your left and on your right. So now you have a pretty hard and fast rule that if you’re going to make a channel as a standalone to cover the whole room, it has to actually cover the whole room. And that is the key thing once you’ve crossed to three and you’ve got left, center, and right. Well, now, crossing over to adding surrounds on your sides and on your rears and overhead...

Those are just more versions that follow a similar set of guidelines.

Yeah. Bob and I joke a lot about, okay, you’ve spent all of your career separating coverage and making sure that everything is separate but equal, and now we’re doing the exact opposite and just overlapping everything. And people are like, well, what about the comb filter? And that’s where the processors are doing all of the magic. So, on your question: is this a fad? I think it’s a tool. It’s not the right tool for everything. It’s not the right tool for every situation, for the exact reasons that you laid out. Cost sometimes is prohibitive.

There are arguments from different manufacturers about why one is better than the other and how you can save money. Some people say you can have a smaller line array. Some people say, since your headroom is spread out amongst your five arrays across the front or whatever, you can use smaller speakers, because you’re distributing that through multiple loudspeakers. And there is snake oil in the industry. As a mentor once told me, audio is a series of compromises and snake-oil salesmen, and you have to figure out what is true and what isn’t.

And there’s a lot of snake oil in our industry, from gold and platinum power cables to all sorts of other things. And marketing is a thing. People are trying to sell speakers with this, and I don’t think they’re being honest if they say they aren’t trying to do that. But with immersive audio systems, what we did with Spacemap Go, a technology that’s been around for almost 20, 25 years, was say, okay, let’s just make it free. So it’s a free update to your Galaxy. And when we get into system design, what’s really happened from a marketing perspective is that these new up-and-coming immersive systems require you to have a lot of fixed loudspeaker locations, and they say you must have five across the front.

You can have seven across the front, you can have this many on the sides, you can have this many above you. Dolby has a spec on how to design sound systems for cinema. And so people are used to these rules being static: I have to do this, and I have to have this many speakers in order for this to work.

All right, let me pause you, Josh, because we’re about to bust a myth. So let me introduce the myth, which is something that I believed until a couple of months ago: that immersive meant five times the expense and five times the complexity, because you take your normal mono system, and when you upgrade to immersive, everything gets multiplied by five. And that makes it really easy for me to ignore it and say, oh, this is a fad, because no one can actually support this kind of expense and complexity.

We can barely get mono and stereo systems right. How can we do this? And so you have been a big proponent of pointing out to people how flexible this is, and that it's a container for new system locations and system designs, not rules. Okay, so continue.

Yeah, not rules. The only rules that we like are physics. And those physics rules still apply. Pick the right speaker, put it in the right place, and point it in the right direction. Now, that's different for mixed mono and stereo systems than it is for immersive systems, but those are the only three rules: pick the right speaker, put it in the right place, point it in the right direction. Now, we at Meyer Sound with Space Map Go have a lot more flexibility in terms of what you can make an immersive system out of, because of our algorithm, and we can get into the weeds about this.

But the space map algorithm, and what a space map is, is a custom panner, basically. So you can make a space map system out of one speaker, and that's a panner that you make. And the difference between Space Map and what everyone else is doing is that we allow you to make the panner. So let's say you have a theater and you spend a ton of money on a five-across-the-front system with speakers on the sides and around you, a full 360 degree shell of loudspeakers. Where most of these immersive systems are failing is that they only let you drive that system one way.

So if I use their GUI and in my object panner I move the object of my guitar to the top left corner of that panner, it's only going to come out of the top left side of the sound system. The difference between that and Space Map is that we say: draw the space map to control the loudspeakers however you want. Otherwise it's like having a Ferrari and driving it like a Prius, because you've spent all this money on loudspeakers, but then you're only allowed to move sound around in very certain ways.

Whereas if I can draw a space map for that room, I can have a sound zigzag and zip around every other loudspeaker, send to all loudspeakers, send to just the vertical and then crossfade down to the sides. You can do some incredible things with the Space Map technology, because space maps are abstractions of loudspeaker layouts that you draw. So instead of having one fixed layout, you can draw the space map to be whatever you want it to be, which is very different from everything else out there.

But ultimately, the technology that all of these companies are using, including us, is a big crosspoint matrix, and they're driving it with level and delay together, just level, or just delay. And then there are all sorts of other algorithms that people do and do not tell you about. Most companies don't show you what's going on under the hood, whereas you can see the matrix values in the Galaxy while this is happening, to see what math is actually going on. So, yeah, this is something we can get into, but we can make a space map system, an immersive system, out of three speakers. Put them in a triangle.
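To make the crosspoint matrix idea concrete, here is a toy, level-only version in Python. It is an illustration of the general mechanism Josh describes, not Meyer's actual implementation: every object is a row of gains across the outputs, and "moving" an object means rewriting its row.

```python
import numpy as np

# Toy level-only crosspoint matrix: rows are input objects, columns are
# loudspeaker outputs, and each cell is a linear gain.
n_inputs, n_outputs = 2, 4
gains = np.zeros((n_inputs, n_outputs))

gains[0] = [0.9, 0.45, 0.0, 0.0]   # object 0: panned toward outputs 0 and 1
gains[1] = [0.5, 0.5, 0.5, 0.5]    # object 1: sent equally to all outputs

# One block of input audio, shape (n_inputs, n_samples).
block = np.random.randn(n_inputs, 512)

# The mix is just a matrix product: each output is the gain-weighted sum
# of all inputs. Panning an object over time means smoothly rewriting
# its row of `gains` between blocks.
outputs = gains.T @ block           # shape (n_outputs, n_samples)
```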

And if you're in the middle of that, and those speakers can be on sticks, you can pan sound around those three speakers. It's like a sandbox of system design compared to the others. And the reason for that is very particular, because when Space Map first got started, it was designed in a geodesic dome. Back in 1979, Steve Ellison was in Australia, and he had to work on an Apple II computer. There were speakers all along this geodesic dome, and he had to figure out a way to mathematically move a sound around to each one of these speakers.

And it was inspired by the geodesic dome. And then some years later, he and Jonathan Deans started a company called Level Control Systems. And the first show that the technology got deployed on was an arena touring show, the George Lucas Super Live Adventure. Yeah. And so there were over 15,000 people in the audience for the first show that they deployed this technology on. That was in the early 90s. And so since then, what we've done is work with sound designers, really in theater and big spectacles, and started adding to the tool set that's needed. Again, audio is a series of compromises, and live sound is hard.

What we do as live sound practitioners is incredibly difficult. And so we need to have a system that is flexible enough to overcome the challenges that we face on a day to day basis. Oh, I can't put my speaker there because there's a wall. Okay, well, just draw a virtual node in Space Map and make a virtual speaker there. So all of these tools that have been added to Space Map over the years have really evolved with the mindset that it's a live sound tool.

It needs to be flexible and scalable and easy to deploy. What we didn't do for years and years and years was make it easy and accessible to use. It was very expensive. And some of the new immersive processors out there from other companies are incredibly expensive, and they require you to have two, and they almost handcuff you. So you buy this Ferrari's worth of loudspeakers for your room, and you buy this processor, and then you can only drive it like a Prius, because they only let you move sound around in the way the room looks.

You’re getting all worked up.

I know. It's just frustrating, because the rules are a marketing thing that these companies are pushing. These companies, these companies... it's just marketing.

You want to make something that people can reliably make work. So you put some guardrails on it, and their approach is to make a thing that shoots straight down the middle of the road, and it works in middle-of-the-road applications, and it's repeatable, and it stays in this safe, repeatable zone. What we have done, because it goes back to the start of this as a creative place, is to make a non-guardrail version, but it comes in a kit form that you have to assemble yourself.

So you have to say, okay, here it is. There's a pile of stuff on the floor. It's like a bunch of Legos. You can build it into anything, but you have to build it. You have to conceive of the sound design. So it's not something that just pops up into your brain. And as far as that one-size-fits-all sort of mentality, that runs into realities such as the shape of the physical room and where you can put speakers. So if you make it so that it's always just for a standard arena shape,

okay, there you go. But we have made a thing that is ready to go in whatever shape you're in. My first one was in a literal planetarium. It was Under the Sea, The Little Mermaid. And we had speakers around the circle, 360 degrees of laterals. And we had speakers in the center and speakers up in the dome. And the Mermaid flew up and down, swam up and down, and the sound image came up with it as you turned on the lower speakers or the upper speakers.

And the characters all ran and swam around the dome. You could image to these things. And this was in 2001 at Tokyo DisneySea, and we literally built that thing for that place. And those trajectories are only for that application. So it's not universal. It's a custom fit. I don't want to take the approach of disparaging other platforms. My thing is that we have a platform that can make a five-channel with laterals, and things that can also make six channels or four channels or two mains and 19 surrounds or whatever it is.

We're ready to go. Give me an application. I'll bet we can do what you're looking for. That's what I have to say. I bet we can do it. It just might take a little time, but we can build something to that shape.

Yeah, and we can shape the Play-Doh however we need to. If we need to make the panner look like the room and behave the way all of these other panners behave, then we can do that. But that's just a fraction of what a space map can do. And it's really about creative imagination. The other day we had someone come up to us and talk about a need for an escape room and a maze, to sort of guide people along. It's a very intricate, zippering-around type of room with loudspeakers everywhere.

But the way you would do that with most panners is very difficult. Well, with Space Map, since they're abstractions, we drew the layout as it would look with loudspeaker nodes. But then we used what are called virtual nodes, and we just made a linear fader. So as you drag your finger across the bottom of the space map, it activates the speakers in the linear order that you want the user to experience as they're walking around that room. So this abstraction is really cool, because you can move beyond just the plan view 2D representation of the loudspeakers that these other products have.

You could still do that. It's fine, and it's totally useful, especially when you're first grasping how to deal with immersive systems. But then you can do things like: I want this sound to play out of the speaker in front, and then the speaker completely behind me, and then above me, and then zigzag. And you can make these really fun space maps. I have one that's called a randomizer that I show in some of our work. The randomizer was designed to emulate crowd noise in a stadium during the pandemic.

And what it does is randomly send level to about six loudspeaker locations, adding random level changes to an existing room. In this case, we used it to represent stadium audience sound with a mono signal.

We made it sound like it was all surrounding you, coming from everywhere, 100,000 people.
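As a rough sketch of what a randomizer like that might be doing (my guess at the behavior, not the actual Space Map Go feature), you only need random gains refreshed over time:

```python
import numpy as np

rng = np.random.default_rng()

def randomizer_gains(n_outputs: int = 6, min_db: float = -12.0,
                     max_db: float = 0.0) -> np.ndarray:
    """One refresh of a crowd-noise randomizer: a random level in dB for
    each loudspeaker location, returned as linear gains. Re-running this
    at random intervals on a mono crowd signal decorrelates the levels
    around the room, so one signal reads as coming from everywhere."""
    levels_db = rng.uniform(min_db, max_db, n_outputs)
    return 10.0 ** (levels_db / 20.0)

print(randomizer_gains())
```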

That’s cool.

I want to address for one moment: can I use smaller arrays if I use more of them? And the answer is yes. Think about that. If you want an object lesson in that, consider the Fender Twin Reverb. The Twin Reverb has only one thing: it reproduces the guitar, and it can ruin the experience for everybody because it's so freaking loud. Okay, so how is it that just two 12 inch speakers can outdo our whole giant $100,000 PA? Because it's an object device that's only streaming one single channel, and we are reproducing 32 channels or something.

Okay, so if you go to five mains and you partition your band into fifths, well, okay, now each of those has headroom available because of the decreased complexity and density of the waveforms that they're reproducing. And I can tell you from experience, going back to 1974 and listening to the Grateful Dead Wall of Sound, which was truly an object-based sound system: each instrument had separate columns of speakers, and if you had put them all together, it would have been a big giant blur.

But as separate events, blended and mixed now in the air instead of mixed in the wire, there you go. Now you have the ability to spatialize, and you can still fill the same amount of acoustic energy into the space. But of course, when you scale the thing and get too big and too far apart, now you've started to offset time, and you have a band where the guitar is 100 milliseconds away from the piano. Now you're starting to get the experience of listening to the marching band at halftime at the football game, which, let's face it, is not tight.

A marching band is not tight. So the thing about scale is that time doesn't scale. You get this thing overly large, you get it into stadiums and things, and time doesn't go proportional. It goes in milliseconds, and hello... hello. It's a real issue.
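The arithmetic behind "time doesn't scale" is simple: sound travels at roughly 343 m/s, so distance between sources turns directly into arrival-time offset.

```python
SPEED_OF_SOUND = 343.0  # m/s, approximate at room temperature

def offset_ms(distance_m: float) -> float:
    """Milliseconds of delay accumulated over a given path length."""
    return distance_m / SPEED_OF_SOUND * 1000.0

print(round(offset_ms(3.4), 1))   # ~9.9 ms: club-stage spacing, still tight
print(round(offset_ms(34.0), 1))  # ~99.1 ms: stadium spacing, marching-band land
```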

Yeah.

I was watching something with Robert Scovill, actually, talking about when he first did Rush in quadraphonic, and he tried moving Neil Peart's drum kit around the room in the arena. And he said Neil stopped. Neil Peart stopped. And Robert will have to tell you the story, but he said he stopped and was like, what is that? And it was the propagation time of the cymbals or whatever coming back through the arena. And of course, he said that Neil Peart was good enough that he had figured out the time offset and adjusted his drumming to match what was coming back from the other side of the arena, which is amazing.

Yeah.

Thanks for bringing that up, Bob. I think that speaks to my question of: isn't this just a 5x in expense, or is it more just redistributing complexity and expense? Maybe there are some examples that each of you could share, because I think the application for sound design when it comes to theater and circus events is really clear. The sound designer says, I want this, this, and this to happen, or it's in the script: it says this happens and the sound moves around. But have you seen successful applications elsewhere?

Are there interesting applications for concert, corporate, some of these other places that a lot of us work in? We might be wondering, is there an application that I should know about as an option for me as a sound designer or system designer? Have you guys seen that? Could it be a tool in those environments?

Oh, yeah, absolutely. We just did the AES Nashville event, and there was a spring training event. And one of the experiments that I personally wanted to perform was to take someone who's worked in stereo most of their life, give them as minimal training as possible, put them in front of a fully immersive system, and see how easy it is for them to work. And so we invited Pooch to come in and work on it. And we did five across the front.

There were also existing line arrays, so we tied into those as well. We did a full surround system running on only two Galaxies, so processing-wise it was two Galaxies' worth of outputs, 32 outputs, I think, and speakers. And that experiment seemed to work pretty well. What Pooch found was that he had to reduce the amount of dynamics and compression he was putting on things, he had to use less EQ, and he could space things out the way he wanted. One thing that also came from that was, instead of using five across the front, we found ourselves wanting a little more width on the outside of the stage, and so we could easily have done a left-center-right and then had two sort of mid hangs to really bring out the width of the image.

To understand how all of these systems work, let's talk about what we've done in stereo, which is: we've had inputs. Those inputs have gone into something like a console. And then out of the console, we've always had either one or two channels, stereo or mono or mixed mono. And those have then been distributed to loudspeakers, amplifiers, whatever, across the stage. Now, with immersive systems, what's happening is you have your inputs, they go into a console still, but then out of the console, instead of having one channel or two channels, you now have 32 channels, sometimes 96 channels' worth of outputs, whether it's buses or auxes, you decide.

And so all of those new channels can be sent different things. So you now have 32 pipes that are going into the loudspeakers, 32 separate pathways in the instance of Space Map Go. So my drum kit could be on three channels. Maybe my kick and snare are one, maybe my overheads are a stereo channel. And now I can move my drum kit around as a group of things. While that sound is moving around those pipes, what's happening is that these immersive processors are adjusting the level of a matrix row, and sometimes they're adjusting the delay of a matrix row as well.

That's what's called crossfading delay. So that's sort of the basics of how immersive audio works. And then everyone's got their marketing term and secret sauce for what math they're using to do it. You'll hear terms like VBAP, DBAP, wave field synthesis. We're using Space Map, which is manifold-based amplitude panning and barycentric panning. Manifold-based, yeah. A manifold is a map, and you can actually look this up. There's an AES white paper on it about manifold-based amplitude panning.
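For a sense of what barycentric panning means, here is a minimal sketch for three loudspeakers in a triangle: the object's barycentric coordinates inside the triangle become the speaker weights. This illustrates the general idea only; Meyer's published math surely differs in detail.

```python
import numpy as np

def barycentric_gains(p, a, b, c):
    """Equal-power gains for three loudspeakers at 2D positions a, b, c
    for an object at position p inside the triangle they form."""
    # Solve p = w0*a + w1*b + w2*c subject to w0 + w1 + w2 = 1.
    m = np.array([[a[0], b[0], c[0]],
                  [a[1], b[1], c[1]],
                  [1.0,  1.0,  1.0]])
    w = np.linalg.solve(m, np.array([p[0], p[1], 1.0]))
    w = np.clip(w, 0.0, None)       # clamp if p drifts outside the triangle
    return np.sqrt(w / w.sum())     # square root keeps total power constant

# An object near the middle of the triangle gets roughly equal gains:
print(barycentric_gains((0.0, 0.3), (-1.0, 0.0), (1.0, 0.0), (0.0, 1.0)))
```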

So if you think about what a space map is, Bob, it's a map of the room. A manifold is technically a map, and the math that goes behind that is all there. All of that is to say that I think the expense of this is really in the processor. And the expense then carries over to other things. You're now dealing with an amplifier channel per output. So in a system with amplifiers that are not built into the speakers, you then have to have a lot more speaker cable, which is way more expensive than XLR, up to each line array.

Well, you have to have separate channels. If you're doing side surrounds, say there are six surrounds going along the wall. If it's cinema style, old school, that can be run at three speakers per output, on two channels, one two-channel amplifier. But if you're going to do a full immersive, you've got six channels. It's going to take you three times as many amplifiers, and there's no jumpering of speakers and cables to the next box. It's all home runs. It's all individual channels.

If it's a two-way, now there's a crossover involved. All of those things add up. So if you want to make things move all around, it's going to cost you channels to do it. You have to have a discrete audio location.

Yeah. And there's this other concept in immersive audio that Bob and I talk about a lot: granular movement versus sort of wider movement. And the way to think about this: let's say you have four speakers and you put one in each corner of the room you're in. If I want a sound to move around, it will move around. But depending on how far my speakers are spread apart, or how close together they are, my ear-brain mechanism and the internal FFT transfer functions that are happening will determine where that sound is.

And we have some fudge factor; the audiologists call it the cone of confusion. Right in front of you, you can locate to about one degree, but out toward your peripheral vision it's more like 15 degrees, and behind you it's worse still. We as mammals, basically, can really only localize on the horizontal. But anyway, that's the sidebar. So you have four speakers and you move the sound around. Now, let's add three speakers on each side.

That is more granular. And if I move the sound around, I can locate a lot more easily where that sound is coming from.

It seems like you're going from coarse to fine.

Yeah, coarse to fine grain.

I kind of look at it as: do you have hours on the clock? Do you have minutes on the clock? Or do you just have the cardinal directions? Is it just east, north, south, west, that sort of thing? Look at your basic old school cinema surrounds, your 5.1. That's just the cardinal directions. There's left surround, right surround, rear surround: north, south, west. And then there's the front, which is three channels. So the front is more granular, but the sides are not. Whereas as you break into more discrete channels, you increase the granularity and your ability to move and locate things individually and to have separation.

I think it's a really important thing to consider just from a creative point of view: what are we trying to do? Because that was what Nathan's question originally was here. In order for this not to just be a fad, what's the creative drive behind this? And so one thing is the ability to place audio content in locations. And those can be static, so you can separate things out. You can go five mains across the front and separate out the band. You can hear a bluegrass band, and you can hear them all separated and then mixed and blended in the room, very much like a magnified version of what you would experience if you were standing with those musicians in your living room: enhanced realism.

But I don't need that mandolin player to be running around on the ceiling. That's not really part of the creative event. Okay, so there's moving things, and then there's static separation. And left and right is not enough, because we end up with that perpetual problem: as soon as you're off the center, everybody hears the panning differently. So the panned things only work for somebody that's exactly on the center, and everybody else is governed by the physics of their binaural listening system, and it's never going to be solved, no matter how much somebody tells you they've solved it.

So then when it comes to motion, there's a whole lot of stuff, but you're getting into creative content and special effects. There can be things like in theme parks: stunt shows, or something like Pirates of the Caribbean, where it's basically this gigantic projection screen in front of you that's 360 degrees, a full planetarium dome that you're in with your little boat. Well, you can place the sound image all along that dome, and there's video flying across.

You can make that movement of the cannonball coming. All of that is fantastic usage of this medium to make motion link up to video. Now we're asking, is this all just a fad? I've concluded, in terms of the five times expense, that it's video that's the fad. And once video is done and people are tired of it, they're going to give all that money the video people normally had to us, and then we can do our five times. All right, that's the dream. But seriously, you have this capability to move things.

Now, what are you going to do with that? You have to have something that makes sense. If you're doing a classical music concert, moving things is stupid. But spreading them out is fantastic, because when you listen to a real symphony, it isn't all of the violins coming together with the oboe out of the same place. It's not that way. They are coming from separate locations. So it's a beautiful thing to hear. I can tell you one of my most truly exciting immersive experiences,

I'm talking full goosebump experience, was at Natasha, Pierre & The Great Comet of 1812, a theater production that ran on Broadway and had ten sound systems distributed through the room, each of them capable of covering the whole room. And the actors would come out not only from the stage; they would actually have parts where they were on the balcony and singing to you from the balcony. Well, there's this great wedding scene, and everybody, the whole cast of 36 or whatever, is spread out over the room.

And then they sing this song together. And it's this very gospel kind of chant thing, and it's coming out of all ten sound systems, but it's a choral blend that's not all collapsed down into left and right or down into one pipe. It's literally blended in the room, which is what you get when you stand in a church with a choir. And it was just head-blown. And that's through a sound system. That's the thing: it's using the ability to mix in the space, because mixing in the air is a totally different experience from mixing in a wire.

That's the beauty of immersion. But you have to be able to pull it off and have the things scale, right? Yeah. As a choral blend with long sustains, it's a beautiful thing. The same thing wouldn't work if they tried to do a super tight, intelligible hip-hop Hamilton rap coming from ten sound systems spread all over the room.

What do you say about the corporate and the church markets? You mentioned those two examples for why this tool could be important.

I think corporate is very useful. We have a couple of examples; the Audi Experience Center, I think, is one that just opened up, and sound art museums. But let's think about these corporate car shows. That's a great place for this, when you have your CEO that's about to walk out and they need a spectacle of sound and movement and stuff. That's great. But when they start speaking, we need everyone to focus on the presenter.

Let's say you're doing a big presentation of a product, and your CEO is on a microphone and walking around the stage. Well, you could put them on a tracker and have the sound follow them around. How distracting that is depends on how you feel about it. I find it extremely distracting sometimes when the sound is moving as the person is walking around, but it's totally possible. But if there's a band on stage, we can spread the band out and make them sound like where they're coming from, and it can be very realistic and just add depth to the feeling. Corporate, that's one way. And the same sort of rules apply for churches.

It helps out in houses of worship to really bring the focus to the pastor, wherever the pastor is. And then during worship, during the service, spreading out that music, spreading out where the choir is and where the drummer is and where the bass player is, it really just helps immerse. And then on the other side of that, with things like Space Map, if you have a couple of lateral speakers that are out in the room, you can goose in some reverb from your console there.

And now you're enveloping, using the reverb on the outer speakers and the dry on the inner. And you can really start mixing the room as a room. And that's the thing: we've been putting things down these one or two very large pipes for so long, and those pipes are great. Stereo systems are great. Mono systems are even better in most live sound applications. But those pipes can only be so big. And what our whole careers as mix engineers have been about is carving out space in the limited frequency spectrum that we have for every single instrument.

And so what's cool about this is that you don't have to do that as much, because now you have 32 pipes instead of two pipes. And now you can sculpt just by separating the pathways into the loudspeakers. And that's the most important thing: you're no longer frequency masking. What you're doing is overlapping your speakers and separating your signals.

Mixing in the air. One thing I just want to mention about the houses of worship is that we need to talk to architects, because they love that fan shape, that super wide fan-shaped room, and then they close the volume down with a fairly low ceiling. Put those two things together, and then you want an immersive experience? Well, how are you going to do that? It's a shape that really defies immersion, because your audience is spread across this super wide thing.

You've got 160 degrees of audience, and to get from the far left all the way to the far right, you've got to go across the whole middle. And it's a really difficult thing. So you have to be realistic and calibrate your expectations. Balconies create a real issue too. And then there's the other really important thing. Let's say, okay, they say you've got the budget for five mains, except here's this one little proviso: they have to have clear sight lines. You used to be able to have your left and right down nice and sweet in the right place in the room.

Now you're going to have five mains, all at the same height as you would hang a center cluster, so that the people on the third balcony don't have their sight lines blocked. Right. And so now everything is 100 feet tall. And to me, that's a trade-off that is really... you'd have a hard time telling me that that is a good trade-off, because you're so disconnected from the show. You can't beat the physics: you're late. You are late to the floor, where all your prime seats are.

The sound system is arriving tomorrow with today’s newspaper.

Well, that's a great transition, and maybe we can look at Gabriel Fiero's question. His question is a little bit long, but basically he's working in a church and he's wondering, is this an opportunity for immersive? He says right now there are only two arrays and a couple of side fills and some balcony fills, no delays or proper center coverage. So: I'm looking at the differences between a new, correctly deployed system versus immersive for our next PA. Now, I should point out that immersive also has to be correctly deployed, but I actually have some pictures of his space, and I can send them to you if that would be helpful for you to talk about this.

But the important thing that you just mentioned is that the ceilings get lower and lower as you get toward the back. And so to me, it seems like that's probably not going to work for them, or at least not for those people in the back; they don't have a good space to consider immersive, right?

When you have a low ceiling in the back, you have to take an inverted delay approach. So you basically have little speakers in the back that take a non-granular approach, the more traditional surround approach, and then maybe six rows forward you mount a larger speaker that's high enough to make a granular surround to cover the main part of the room. I'd have to look at the exact physics of the room, but essentially those get linked together by the space map as derived nodes, as linked signals, so that you could pan the signal around and it would light up the big surrounds in a granular way, and then the ones that are on the outside perimeter.

Those light up as groups, so they perform just sort of an overall rear rather than a granular one, whereas the people in the center get the full thing. I did a church design recently. It's a fan-shaped church, a very popular shape, with a fairly flat floor. But it has these ramps on the side that go up, and then there's a balcony over the 160 degrees. Okay, so there you go. What you're left with (there's the dog) is the ability to do a full granular surround on the floor center, and then non-granular cardinal directions on the upper balcony and on the ramps on the side.

You're forced into that by the physics. You'd have to kill the people in the rear to get those speakers to fire all the way to the front. And the complaints alone are going to stop your surround fantasies.

So, Gabriel, I think you should definitely take a look at the three recent videos on the Meyer Sound YouTube channel about system design, with Josh and Bob going through some of this stuff, and that should answer some of your questions. Because, as Bob's talking about here, you'll see that all of the sources need to cover all of the audience, and if they can't, and they have blocked sight lines, as is the case with your people going deep under that balcony with the ceiling getting lower, then there's going to need to be some reinforcement somehow.

And so, as you'll see in these videos, Josh and Bob explain how you solve all these problems. But it does start to generate some complexity when you have blocked sight lines and portions of the audience that are not visible to all sources.

Yeah, one thing about the church market: under-balconies are another big thing, but there are tools to deal with these built into most immersive sound systems. I agree; I feel like five across the front in a fan-shaped room is almost a marketing dream, especially on those extreme sides. But there's a way to do it in Space Map: have a left-center-right across each section of seating and then control each section as a left-center-right together from our front-of-house perspective. Or even just stereo systems, but not stereo in the traditional way.

Stereo here is where the left and the right covering one section of that fan are overlapped. The one cool thing about this is, let's say you do have a smaller budget, something like one Galaxy. You have 16 available outputs. So if you did a traditional PA up front like you normally would, and then for your Christmas spectacular production you brought in a couple of extra loudspeakers, well, if you have extra outputs on your Galaxy, just plug those XLRs into those speakers. Now you can use them to send some sound around for your special Christmas sound effects, while still maintaining the mix that you're using.

So yeah, there are tons of options. It really depends, back to this coarse versus granular thing, on what you want to do. What is the goal and the intent of the sound system?

Okay, so let's get into some of these questions and see how far we can get, and then maybe we'll even come back to some of my own. People were so nice to send in questions that I want to make sure we get to those. So, Robert Scovill. I asked him, what do you want me to ask them about their system? And he said: Is Galaxy, when it is used in immersive systems, considered a spatializer by a given definition? And he doesn't give the definition, so I'm hoping someone can say something about what a spatializer is.

He says: I know Meyer incorporates delay matrixing within the unit to achieve the spatial aspects of their Space Map application, but I'm curious if units like Astro Spatial, L-ISA, TiMax, et cetera, are functionally or mathematically different than what Galaxy has to offer.

Hey, Robert. I hope you're doing well. First question: spatializer. Space Map Go, and the Galaxy itself, is a loudspeaker processor. So the cool thing is, the Galaxy will still tune your PA and do all of the things that Galaxy has done for years. Now, with the free update to Space Map Go, you can also use level changes. I don't know what the definition of spatializer would be, but it is an immersive audio platform like all of these others, in addition to being a loudspeaker processor. And we're not using delay. Galaxy does have a delay matrix, and you can set static delay times on a cue basis or snapshot basis.

But we're using level-based panning, very similar to what all these other companies are doing. And the difference between the three companies that you mentioned is, yes, their math is different. They're not talking about what math they're using. TiMax is delay-based. L-ISA, I believe, is only level-based with a little bit of delay. And Astro Spatial I don't know enough about; I think it does both delay- and level-based. But there are ways around it. I'm working on a project right now that is going to use a sort of static crossfade delay matrix to move someone from an A stage to a B stage, so as they move, the delay times change in steps for the outputs.

But Space Map Go is level-based. It's not controlling the delay matrix of a Galaxy. You can still control that with Compass.

I hope that answered that. I can't comment on Astro Spatial because I got Pfizer.

Okay, Robert says: Secondly, ask him how Meyer defines an object. Is it a speaker output or an input source to the spatializing device?

Yes. So an object, in general terms, represents a channel. If I have an output or a bus from my console that feeds a Galaxy, so an XLR cable from my console plugs into an input of the Galaxy, then the object is that input. So that object moves around the space map, and the space map is the custom panner that you design. So if I have 30 loudspeakers, I can have a space map that has all 30 in it.

Or I can have a space map that has just four of those 30 loudspeakers in it, and you draw the space map. Then on top of that, we have what are called trajectories, and trajectories are pathways that you draw, and they control the objects automatically. So you can have tap tempo. If I want to move a trajectory around at a certain BPM, let's say I want the sound of my drum kit to go side to side, my cymbals need to move in time with the music.

I just tap in that tempo, and there we go, it's moving left and right. And you can draw them to be as complex or as fun as you want. For example, I show an example all of the time where my wife basically just drew a T. rex as a trajectory, and I can load that on a channel and it controls the object and moves that sound around, whatever it is, in the shape of the T. rex. And since it's on an iPad, you can expand it, contract it.

And this all happens in real time. That's one thing that no one else in the industry can do right now, and it makes Space Map really fun. So that's an object. An object is an input.
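A trajectory with tap tempo can be pictured as an index sweeping along the drawn path in time with the beat. Here is a hypothetical sketch, with `beats_per_lap` as an assumed parameter rather than anything from Space Map Go itself:

```python
import numpy as np

def trajectory_position(path, bpm: float, t_seconds: float,
                        beats_per_lap: int = 8) -> np.ndarray:
    """Position of an object along a drawn path at time t, assuming the
    trajectory completes one lap every `beats_per_lap` beats at the
    tapped tempo. `path` is an (N, 2) sequence of drawn points."""
    path = np.asarray(path, dtype=float)
    beats = bpm / 60.0 * t_seconds          # beats elapsed so far
    frac = (beats / beats_per_lap) % 1.0    # fraction of the current lap
    i = frac * (len(path) - 1)              # fractional index into the path
    lo = int(i)
    hi = min(lo + 1, len(path) - 1)
    return path[lo] + (i - lo) * (path[hi] - path[lo])  # linear interpolation

square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
print(trajectory_position(square, bpm=120, t_seconds=1.0))  # a quarter lap in
```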

Okay.

So Alice Defancies has this question that I think will need a little bit of unraveling, because I think it expresses some assumptions, but it's good to get into, because they're probably assumptions that a lot of people have. Most people are familiar with the phenomenon that as you move farther away from a speaker, its coverage seems to get wider, unless you have an asymmetrical horn. So I think this is the thinking behind his question. He says: I'm wondering how far into the audience the immersive experience can be achieved before all those separated signals become combined.

And does that then cause cancellations in the back of the room? Now, we've already talked a little bit in this conversation about system design, and about how we actually want all of our sources to cover the entire audience. So I think he's assuming that we want them all to be separate signals. So, Josh or Bob, do you want to try to speak to this question?

Well, of course, a speaker's coverage from an angular point of view stays constant over distance, but as a width in terms of meters or feet or whatever, it's getting wider. That's the simple physics of it. So when you're too close in, you're going to find that you simply are prohibitively close to something, because the inverse square law is going to get you: you're just too close. If you get up on a ladder and stand next to the side surround, yeah, you're not going to have an immersive experience.
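Putting numbers on that: for a point source, the inverse square law gives 6 dB of drop per doubling of distance, so the nearest speaker swamps the distant ones long before you reach the perimeter.

```python
import math

def level_difference_db(d_near: float, d_far: float) -> float:
    """Inverse-square (point source) level difference, in dB, between a
    speaker at distance d_near and one at distance d_far."""
    return 20.0 * math.log10(d_far / d_near)

# Standing 2 m from the nearest lateral and 20 m from the far side:
print(level_difference_db(2.0, 20.0))   # 20 dB: the near speaker dominates
```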

What we do is define the room, sort of, from a design point of view. We have this thing called the go zone, and that gives you a fairly good guideline of where you're going to have a 100% immersive experience: inside of that go zone. And from there, it's a gradual progression out of full immersion. There isn't a place where it suddenly just locks in. As you get closer to the perimeters, you're necessarily getting closer to those laterals and farther away from the others.

And simply put, the physics are going to catch up with you eventually. As for the signals themselves, the more that a signal is individuated, the more everybody, if they were all blindfolded, would point to the same sound source. Where is the frog coming from? And everybody would point in the same direction, to the frog location where you placed it in the space map. And that's the key thing: are people consistently experiencing the same localization of the content? And if you then map things out so that you have immersed them in a swamp full of frogs and cicadas and all these things around them, then everybody could point to this one and that one at the right locations. That's really the goal, and the more that you are toward the center, the more sure that experience is going to be.

Yeah.

And I would also say, one thing that people get wrapped up on is, okay, well, what do I do about fills? What do I do about all my subsystems? Five across the front in a lot of rooms won't cover the entire room, regardless of how pretty it looks in the prediction software, and the subsystems are still real. So if I am sitting under a balcony and, for whatever reason, my five across the front are very high up, I could maybe have two front fills in front of me and, using what are called derived nodes,

do a stereo mixdown of what's happening up above me. One thing that I use derived nodes for a lot is, let's say, for whatever reason, you can't have your console in the room. So what you do is set up a 5.1 surround system on sticks in the booth where your mix console is, using derived nodes. And whatever happens out in the room translates and mixes down to a 5.1 mix for your booth. We do that with under-balconies as well.
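One plausible way to picture a derived node (an illustrative guess, not Meyer's definition): its feed is a fixed mixdown of the gains the panner computes for the main-system nodes around it, so an object's level in the fill tracks its position in the main space map.

```python
import numpy as np

# Gains the panner has computed for one object across five main nodes:
main_node_gains = np.array([0.8, 0.2, 0.0, 0.0, 0.1])

# Fixed weights describing how much each main node contributes to this
# derived node (for example, a front fill or a booth monitor feed):
mixdown_weights = np.array([0.7, 0.7, 0.0, 0.0, 0.3])

fill_gain = float(mixdown_weights @ main_node_gains)  # weighted sum
print(fill_gain)   # updates automatically as the object moves in the map
```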

And so you'll have this sort of main system that is covering as much of the room as possible, but then you'll have these subsystems that are doing immersive mixdowns, whether it's down to mono, stereo, or whatever. And a lot of the time that's very helpful, especially when these speakers have to get hung so high across the proscenium. Front fills become really important for imaging, just imaging that voice down.

I'm just going to mention, though, that you can't get stupid about these things. Okay? Front fills are only going to cover two rows and about five people wide. Right. So they are not part of your spatialization system. You're not going to be zinging things around the front fills and have everybody go, wow, look at it go across the front. That's not going to happen. That's not going to happen in your under-balcony speakers either, because if the under-balcony speakers are designed correctly at all, they are designed for correlated signal with minimal overlap, because their job is to bring up intelligibility.

They have a very clear mission. Do not go and start screwing with that. One of the places I really throw up a big red flag is people wanting to play matrix games, with matrix delays and silliness, under balconies and in front fills. It's like: stop it, stop it, stop it. Those things are combat audio. You must make them simple and intelligible. Let them do their job and don't screw them up. Yeah.

And now, with the 32 pathways, what's cool is that speaker becomes a multi-use tool. It could be that delay doing correlated signal for the mains, but it can also be used in a separate pathway for some version of a mixdown.

It can become an overhead, and suddenly people are looking up, because now it's not merging with what's coming from the front. It's suddenly all by itself, a Peter Pan over your head saying, look at me. Yeah.

And so under balconies, of course, and above on upper balconies, you're going to have less of a granular immersive experience, but you can still design the system to give an immersive experience.

Okay, cool. Let's get to Robert McGarry's question. He says: Total novice for immersive programming. Where do you delay to? Is there a zero point? And just for some context, I'm going to make an assumption here about what Robert's talking about. I think he's thinking of a practice in theater where we might have a center point on stage that we want our vocals to sonically image back to, or we may have a concert stage where we kind of time back to the drums.

I think that's what he's thinking of. As I'm learning more from you both about immersive systems, I'm thinking that this question may actually not be applicable, but what do you have to say about where the zero point is?

It's the same as it would be if this were a left-right system or five systems across. If you want to make it timed to events on stage, and you don't already have too much delay, because you've got a digital console and a digital this and a digital that that have already stacked up your latency, so if you're actually going to add a little bit more, then sure, the drum kit is a usable place. Or if it's theatrical, you can time to a point on stage. But those become, essentially in our world, a static event, or they can be set up through that delay matrix as a set of presets.

If you wanted to make it so that you had a moment where an actor was downstage left for some dramatic moment, you could have a separate delay matrix timing for that, but that's a static part of the tuning process, and then the immersive movement would come on top of that.

Yeah. In immersive systems, I think of two different delay types. There's system delay, which is what we need for time alignment of systems, whether that's the main-sub relationship or the front fill relationship; that all gets handled with the delay matrix on the Galaxy, or the outputs for each speaker on the Galaxy. And then there's artistic delay. So if I have an actor moving from downstage at the proscenium to upstage, I can fire a snapshot that changes that input's delay time, or I could just do it on the console: have a snapshot on my console and adjust their input when they're not singing, which instantly swaps their delay to a new zone.

This is very typical of what we would do in musical theater, having three or four zones across the stage. There are fancy devices that are very expensive that do that automatically for you, but with a Galaxy it's a free update, and you can just do a snapshot change.
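The zone delays themselves are just distance converted to time, and a snapshot then swaps an input between precomputed values like these. A simplified sketch of the idea, not a tuning recipe:

```python
SPEED_OF_SOUND = 343.0  # m/s

def zone_delay_ms(zone_to_pa_m: float) -> float:
    """Artistic delay for a stage zone: delay the input so the PA
    'waits' for sound arriving acoustically from that zone, keeping
    the image on the performer."""
    return zone_to_pa_m / SPEED_OF_SOUND * 1000.0

# Three zones across the stage depth, measured to the main arrays:
for zone, dist_m in {"downstage": 2.0, "midstage": 6.0, "upstage": 10.0}.items():
    print(zone, round(zone_delay_ms(dist_m), 1), "ms")
```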

Cool. Let's try to squeeze in two more questions here, and then we'll start to wrap up. I don't know if you have anything to say about this, but Angela Williams asks: Where do you place audience mics in the room for capture as objects?

I kind of don't understand the question, but let's talk about how we capture surround information that's happening in a room. There are two different scenarios. One scenario is I have an artist making a recording, or I just want to have some microphones laid out to capture the audience noise and send it back in. Those could be wherever you want. And if you wanted to, you could put them on a space map and then send them to all loudspeakers or just some loudspeakers, the laterals. You can make them an object and move that audience sound around.

I think the question is about analyzing the object placement. That's the impression I got of that question.

Yeah, it could be. I also don't totally understand it, and I'm realizing now I should have asked them to clarify a little bit. But it did make me think about mixing those in, though I don't know how you would mix those in. So yeah, do you want to say something about that, Bob?

If it's mixing things in, my answer is no, I don't do that. That's Constellation's job, and that's what you're getting into if you want to start recirculating audience mics, ambient mics, back in. That's a whole other thing. But if it was in order to analyze, if you place a virtual mic or a real mic to analyze the localization, my answer would be: anywhere you want, anywhere you want to know the answer.

Yeah. And there are tons of different mic styles to do that. You could do it with a binaural microphone headset. You can do it with an Ambisonics microphone. Whatever, if that's just for capturing a recording of what's happening in the room.

In MAPP 3D, we do it through virtual SIM mics. And I do that as part of the analysis. I'll go and place a mic when I'm designing in MAPP 3D, and then I'll run the different speakers, and I'll be able to see as I lay one trace over the others: okay, I'm consistently seeing all of my laterals reaching within 3 dB in this location. Okay, that's cool. I know that this has really consistent spatialization there.

I wanted to get to Lloyd Gibson's question, because even though we've already talked about this some in the first part of the interview, I wanted to do it again to make sure it's clear; there are probably other people out there who have this question. And I want to give Bob a chance to maybe correct some misunderstandings about his own teaching. So Lloyd Gibson says: I thought Bob was against stereo imaging in live sound because of the psychoacoustics and delay magnitude discrepancies seat to seat.

Does this not apply here, or is there a size threshold where it can be successful?

Okay, so stereo in live applications, let's get into the semantics. There's a left main and a right main. You can call that stereo. I call that left-right, because stereo is something that happens when you put on headphones, or when you sit there in your living room, because you're inside of the five milliseconds that you have to play with in the world of the physics of your brain and its ability to make a panoramic stereo image. There's very little of the room that is inside that five millisecond window.
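As a quick sanity check on that number: five milliseconds of arrival-time difference corresponds to only about 1.7 meters of extra path to one speaker.

```python
SPEED_OF_SOUND = 343.0  # m/s

window_ms = 5.0
max_path_difference_m = SPEED_OF_SOUND * window_ms / 1000.0
print(round(max_path_difference_m, 2))  # ~1.72 m closer to one main than the other
```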

In our world of PA, and it doesn't have to be a big PA, doesn't have to be an arena or stadium, even in a small theater there's very little that fits inside that window. So everybody else can call it stereo all they want, but I design left and right systems, and I design them to have no more overlap between left and right than they have overlapping into the walls. That's my thing. Basically, I don't want to invite the wall into the thing any more than the virtual wall, which is the correlation point where the two speakers meet in the middle and which behaves, in all physical acoustics modeling, as a wall.

So that's where I aim systems. I don't aim your left and right deeply inward, unless you can promise me that you're going to put completely separate material in the left than in the right. If you've got left, center, and right, and they are now discrete and separate channels, now I'm going to turn that thing inward. Now I'm going to cover the whole room with left and the whole room with right and the whole room with center: left, center, and right, or 17 channels, whatever they are.

Each one is back to being the whole show. So if I'm the whole show, fine. But if we are left and right, and 99% of the material, outside of the littlest bits that get pushed this way or that, all the stuff that really matters, the fader with the big star on it, is going to be mixed center, then I'm going to make your left and right system so that it gives the best performance that it can as correlated signal. Okay, so that's my simple answer to that.

I haven't changed on that. But as soon as you go to full multichannel, and that's what happens when you add that third one, a real functioning center channel, now we can go and play at decorrelating. But a lot of times, what you really see in an LCR is that the L and R are still going to be a very L-R system. Very little gets panned out, but the center is its own thing.

So now you have a decorrelated center, but a correlated left and right. So I hope that answer wasn't too unclear.

I thought that that was clear. Yeah, that’s great.

I don’t tell people how to mix. Right.

And immersive is a new way to mix. Instead of sending things down two pipes, you now have 32 or however many channels. You no longer view it as LCR; you view it as a canvas that you can put objects on. And that's really the way you have to start viewing it: I'm looking at a stage. Okay, now I'm painting where I want to put my artists, where I want to put my objects.

Okay, Josh and Bob, thank you so much for all of your time today. And I should end by asking, where is the best place for people to go who want to learn more about Space Map Go and immersive systems?

Yeah. So meyersound.com is a great location for all information concerning Meyer Sound. We also have the Thinking Sound YouTube channel; that's our YouTube page. We've done about six hours' worth of Space Map Go content as well as MAPP 3D content. There's tons of information there. Like every other company, we participated in Webinar Wars.

I never heard it called that. That sounds so violent.

Nick from d&b called it that the other day while we were hanging out, and I thought it was hilarious. So shout out to Nick. But anyway, Webinar Wars was what happened. There's tons of content there, not just about immersive audio. And then the last resource for Space Map Go is the Space Map Go help website; that's basically the operating instructions for Space Map Go. And the cool thing is, this is all free, so you can download Compass and download Space Map Go onto your iPad and mess around with it.

Play with it. You don’t need hardware to start looking at what this can do.

I want to throw in one more thing, and I hope I don't get in trouble, but there are also some physical places where you can go to experience Space Map. There are some locations where there are, at least at the moment that we're making this recording, operating systems. There's one here in New York, and I believe there's still one in Nashville.

Nashville.

Yeah.

At our office, at Soundcheck in Nashville, and then Center Staging in Burbank.

Yeah. So we have a left, center, and right there for the United States, and I think there might be one in Europe. I think there's one in Europe.

Yeah, all across the world, really. We have a touring Space Map Go road show, which is happening across the US.

When is that coming to mine?

I don’t know, man.

I think it should be called Space Map a Go Go.

Yeah, it should be called Space Map a Go Go. But if you look at our website, there's an article about it, and you can reach out to sales@meyersound.com to find out when it's coming to a city near you. They're thinking about doing one in Europe very soon. Australia and New Zealand have been touring Space Map Go systems around for a while now.

So you can’t go to Australia.

They won't let you leave. That's exactly it.

Yeah.

And then there's our dealer and distributor network across the world; some have set up Space Map Go systems. So reach out to sales@meyersound.com if you want to hear this. You can hang out, and we're open to giving you a demo. And the New York room is really cool. And Bob might meet you there.

Oh, wow. Just throwing Bob’s hat in there. Great. Thank you.

The other thing is, we will be at InfoComm this year, and there will be a Space Map system there. We can't talk about it too much yet, but it's going to be cool. I'm excited about it.

Well, Josh and Bob, thank you so much for joining me on Sound Design Live.

EQ Your Vocal Reverb Return

By Nathan Lively

Subscribe on iTunes, SoundCloud, Google Play or Stitcher.

Support Sound Design Live on Patreon.

In this episode of Sound Design Live my guest is producer, engineer, and FOH mixer for artists such as Counting Crows, Goo Goo Dolls, and Avril Lavigne, Shawn Dealey. We discuss finding your sound, tips for mixing live vocals, and sound system calibration.

I ask:

  • What series of events lead to you mixing FOH for the Counting Crows? How did you get the gig?
  • What are some of the biggest mistakes you see people making who are new to FOH mixing?
  • Finding Your Sound in the Studio
    • “Walking into the studio isn’t the right time to decide what your record is going to sound like. I know the feeling of being a kid in a candy store is enticing, but this is the time where having a clear idea of what you want your recording to sound like is really key.”
    • Can you walk us through what a conversation like that might look like for a live event and how that would influence your decision making process related to the mix or the sound system?
  • Tips for Mixing Live Vocals
    • “A loud stage would be a more appropriate choice for a dynamic microphone.”
    • How do I set the HPF on a vocal?
    • How do I set the pre-delay on a vocal reverb?
    • How do I EQ the reverb send to help it sit well in the mix?
  • System design and calibration
    • Tell me about the system design for the Clyde Theater.
    • Is there some way to guarantee that the client will be happy? Is there some way for me to get inside their head to predict the characteristics of good sound to them and produce a result that they are happy with? Or is the only way to sit next to them in the audience and audition changes until they are happy?
    • Do you worry at all about system latency? Does total system latency influence your decision making when setting up the console and specing the sound system?
  • From FB
    • Bobby B: Best practices for building a rider
    • Gabriel P: how was it working with Avril Lavigne?
    • Greg McVeigh: Ask him when he is going to ditch the “real job” and mix touring acts again.

If I'm watching a show and I see someone play something and I can't hear it, it's like, where is that? Why is it not in the mix? What's going on?

Shawn Dealey

Notes

  1. All music in this episode by Glitch.
  2. 34 mic snare drum shootout
  3. Audio Test Kitchen
  4. Work bag: adapter kit, Ultimate Ears, Universal Audio interface and Apollo satellite
  5. Books: Recording The Beatles, Chairman of the Board
  6. Podcast: Working Class Audio, Under The Skin
  7. Quotes
    1. Play with sound every day.
    2. I’ve seen people mix a show while they’re watching Smaart and it’s like, this is not working.
  3. If I'm watching a show and I see someone play something and I can't hear it, it's like, where is that? Why is it not in the mix? What's going on?
    4. The best way to fix a problem is at the source.
    5. A condenser microphone in front of a wedge is a completely unstable situation.
    6. Raise the HPF until it sounds bad.
    7. I just didn’t want to have a PA that sounded bad for the show.
    8. I have a career trajectory that I can link to 3 people for fifteen years of work.

Transcript

This transcript was automatically generated. Please let me know if you discover any errors.

I’m Nathan Lively, and today I’m joined by producer, engineer, and front of house mixer for such artists as Counting Crows, Goo Goo Dolls, and Avril Lavigne: Shawn Dealey. Welcome to Sound Design Live.

Thanks for having me. Okay.

So, Shawn, I definitely want to talk to you about this topic of finding your sound, tips for mixing live vocals, and sound system calibration. But before I do that: after you get a sound system set up, what’s one of your favorite pieces of music to play through it to get more familiar with it?

So I have a few different pieces I listen to. Obviously I have somewhat of a tuning playlist, but there’s one specific song that I use pretty consistently that’s slightly obscure, by a band called Spymob. They’re out of Minneapolis, and the record came out in the mid-2000s. Lord-Alge mixed it, and it has an extensive amount of midrange information. So when I was mixing shows for Counting Crows, there was a lot of midrange information. Adam had a lot of midrange in his vocal. There’s three guitar players, keyboards, and so it was a really dense sort of sonic landscape.

And so this song is 2040 by Spymob, something that I would be able to listen to after we’d done some tuning on the PA, and I could throw that up and make sure that I was hearing all of the stuff that I needed to pull off a show. So that was kind of a song where I would get a lot of questions from a lot of guys that were around, or people in the venue would be like, hey, what is that song that you played? Because it’s an interesting song.

It’s pretty catchy, kind of quirky sounding, but it really kind of defined whether a PA could handle what I was about to put at it as far as definition and detail in a midrange-heavy live music mix. So that’s kind of my go-to. There’s a couple of others. There’s a Thomas Dolby song that was from, I think, Aliens Ate My Buick, that I picked up from another engineer. It was mastered in the 80s, so it’s actually quite quiet, but it has a lot of really nice high frequency information where I could sort of tell the crispness of a rig, and then some really nice sub information that’s not overwhelming, but you can really hear the definition if it goes all the way down. So specific songs for specific things.

And that’s kind of something I try and share with people, too: developing a playlist of songs that really kind of establishes what you need to get out of a PA, and listening to those. You can kind of pull out the elements that you’re looking for and understand if it’s not going to work for you and you need to go back to the drawing board, or back to Smaart and your Lake, and hack away or add. Those are a couple of songs I would dive into.

Other than that, I would play the intro of For Those About to Rock, and then I would always stop it as the song started, which would always make people quite angry that I didn’t play the whole thing through. There’s a couple of kick drum hits. It really kind of reinforced the fact that the PA can rock, and that it’s functioning. It was a test procedure for me; it wasn’t about enjoying the song, even though I do like to listen to that song quite loud on large scale PAs. But I didn’t get too deep into it.

I didn’t listen to it all day long. It was pretty quick. I could tell if I was getting what I needed from the boxes, and if I wasn’t, then it was into fix mode, and then who knows what things we would uncover at that point. But those are kind of the few songs that I would really rely on to get me to where I needed to be with the PA. Nice.

Sounds like it has some important milestones in there that you can quickly cue into and know some things, get some actionable data, like: is this going to work? Where do I need to do some more work?

Yeah. And the Spymob track, the snare drum on it has this little bit of a midrange knock, a lower midrange thing. And if it was too pronounced, I knew the boxes were going to be a little bit boxy sounding, and if it was gone, I knew that I was missing some of that low-mid information, which I like to introduce into my drums. And so there’s all these little things that I could pull out of the songs, because I was so familiar with them on so many different systems, that it would establish how much fun I would have later in the day.

Very cool, Minneapolis natives. All right. So Shawn, as I was preparing for this interview, I was looking through your Instagram feed, which is pretty fun. Lots of cool pictures in there of gear and shows and stuff. Also of you going to restaurants and what to me looked like breweries or potentially bars. And so I saw this photo that looks like a bunch of beers. And so I was going to ask you, what’s your favorite beer?

Well, I don’t drink, but I think the photo you’re referencing is from a shop down in Huntington, Indiana. I moved to Indiana a few years ago to work at Sweetwater. And in Huntington, Indiana, there is a soda shop called Antiqology, and they have 700 flavors of soda. It’s like a vintage ice cream and soda shop. I haven’t been there much, but it was an impressive wall. And, yeah, I tend to gravitate more towards soda than beer, but that was an interesting spot. Moving to Indiana, kind of getting used to the Midwest.

There’s a lot of interesting stuff. I know there are a lot of breweries around here, and there’s a lot of good food. I mean, Fort Wayne is kind of exploding. We’re growing as a city. So there’s a lot of really cool restaurants that are popping up and stuff. So, yeah, I do enjoy that. I enjoy cooking, and I enjoy eating. So, yeah, that’s one of my favorite pastimes. I mean, that’s something I missed from my touring days: being able to get out and get around and try different restaurants and stuff like that.

Tell me about a couple of things that you tried at this 700-flavor soda shop, or one of your favorite sodas. Just curious, what are your tastes?

Yeah. Big fan of black cherry. I actually had a friend that drove down to Kentucky... no, he went up to Michigan and went to some sort of cherry orchard and brought back six or eight different kinds of cherry flavored soda, ginger ale, ginger beer, all that stuff. So I got some treats from my friend Lynn Fuston, who is someone I work with here. Lynn and I work on a lot of very cool marketing projects. He is the manager of written content here. But if anyone is not familiar with Lynn Fuston, they should be.

He’s a total geek. And we do a lot of shootouts together. And so he travels a lot on the weekend. So he brought back a bunch of super cool flavored sodas, which we got to try this week. But we’re also in the middle of doing a 34-mic snare drum shootout at the moment, where we’re shooting 34 different mics. So in Studio A right now, we have a pretty interesting photo shoot going on with a lot of microphones around one drum. But we do a lot of cool stuff like that.

If you get a chance, Google him. He did a lot of engineering in Nashville for years, and he ended up here, and we do a lot of really cool content that makes it onto Sweetwater’s website.

Are you familiar with the Audio Test Kitchen?

Yes, I am. Okay. Cool.

I interviewed Alex Oana a while back, and his project was so interesting, seeing how they did all of the recording. So I’m assuming you’re having a human actually hit the snare drum in this case?

So, yeah, I think that Lynn’s ultimate goal in life is to find someone to develop the robot drummer that can actually hit a drum consistently. But yeah, we have our in-house session musician and content creator Nick D’Virgilio hitting our drums for us. We’ve done this on a lot of different things: speaker cabinets, ribbon mics, vocal mic comparisons to virtual mics, a lot of different things, and we try and keep the variables to a minimum. So everything is really consistent.

We like to use lasers, take precise measurements, calibrate things. So we try to take into account, or eliminate, as many variables as we can from any of these processes. I think we had 19 dynamic microphones around the snare, and we had twelve condensers, and so individually we aligned them all and tried to get them placed the best we could around the drum, at the same distance and the same height from the head.

And then we captured one pass of the recording of the drum. So the variable of the performance is not taken into account, but the placement is the variable that we had to kind of give in to. It should be cool. I think it’ll give people a good understanding of what different microphones sound like on the same drum. That’s something that’s going to come out in the next month or so, when Sweetwater’s drum month content comes out; there’s going to be some cool content with that.

Yeah. I would love to see what my own tastes are and do, like, a blind A/B test. And I’d also love to hear what you would end up picking if you listened to a bunch of them without knowing what they were and just picked the one that you thought sounded the best for that specific situation.

Yeah. And I think I want to do that, too. I gravitate towards things that I know and things that I trust and from repeated use and success. But that’s one of those things where you can sit back and listen. And it’s been funny the few times that I have blind taste tested microphones, I usually end up picking the ones that I go to, so I feel like I’m usually using my ears and not my eyeballs for listening, which I think is a good thing. But yeah, the snare drum thing.

I think it’ll be pretty cool, but there’s some mics I would never even think to put on the drum, or just haven’t had a chance to in a long time. So it also gives me an opportunity to refresh my memory of things I like or haven’t used in a while, or to find new things that would be a different flavor than what I’d go to consistently, because I’ve been known to be fairly set in my ways about certain things, but I’m trying to have a more open mind about gear these days.
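For anyone who wants to try the blind test Nathan describes at home, here is a minimal Python sketch of the idea: shuffle the shootout takes, audition them unlabeled, then reveal which mic you picked. The file names are hypothetical placeholders, not Sweetwater’s actual files.

```python
# Minimal sketch of a blind A/B pick: shuffle takes, audition them unlabeled,
# then reveal the choice. File names are hypothetical placeholders.
import random

takes = {
    "dynamic_57": "snare_take_57.wav",
    "dynamic_m80": "snare_take_m80.wav",
    "ribbon": "snare_take_ribbon.wav",
}

order = list(takes)
random.shuffle(order)  # hide which mic is which

for i, mic in enumerate(order, 1):
    # In a real test you would play takes[mic] here without showing its name.
    print(f"Take {i}: (play the audio, label hidden)")

choice = int(input("Which take sounded best? ")) - 1
print(f"You picked take {choice + 1}, which was {order[choice]}")
```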

So, Shawn, you worked with the Counting Crows. It seems like, at least for me and a lot of people who love the Counting Crows, in my imagination, of course, never having the experience of working with them as humans, it seems like a dream gig. So I would love to just talk about how you get work. In this specific case, how did you get the gig? What sort of relationships and series of events led you to work for the Counting Crows?

So I feel like I’m going to tell a pretty unique story about that. I started with the Counting Crows as their drum and keyboard technician after I had toured for a summer working with the Goo Goo Dolls as drum and playback technician. I really hit it off with the drummer. He loved the way I tuned the drums. Side note: I started as a drummer as a teenager and spent a lot of time playing and then a lot of time tuning drums. So I’m very much a nerd about all of that stuff.

But I started with the Counting Crows, I think in 2008, as a drum and keyboard tech. Then that developed into recording their shows, archiving them for them. And a couple of years into that, I had stepped back over to the Goo Goo Dolls and was guitar teching for them for a tour. I got a phone call from the guitar player from Counting Crows, Dave Bryson, who reached out and said, hey, we’re going to go to the studio. Would you like to come record us? And I was like, well, that seems like a great idea.

So I jumped on that. And I had a pre established relationship with the band. I got along with everyone really well. And we got into the studio and it went really well. And I had a really good handle on, I think what the band sounded like and what I felt they should sound like. And I think we gelled on that in the studio. So they really kind of trusted what I had brought to the table as far as Sonics in the studio. So that translated into an offer to mix the band live.

And so just as a quick backstory, I had dabbled in live sound my entire career, but I’d never had a large format gig mixing a band. I had always worked as a backline technician, because those are the gigs I was getting. And then when I was at home, I was mixing live sound at clubs and bars. And when I was on the road, I was mixing opening acts. And so I was getting a lot of experience. And I was always the annoying guy at front of house asking the engineers, like, hey, what’s that do?

What are you doing? I was always lurking in the audio department, even though that wasn’t my job. So I picked up a lot of that stuff along the way. So when I got the opportunity to mix the Counting Crows, obviously I jumped at that, because that was sort of where I was aiming to be, but I hadn’t had the opportunity to get there. So I had put the groundwork in to get a gig like that, but I just hadn’t been working in the field as a touring front of house engineer.

So I went from co-producing a record with them to mixing front of house after being a backline technician. So I feel like that’s a slightly strange trajectory for anybody in the touring world. Usually you sort of start somewhere and work your way up. So I landed what I feel was a dream gig in sort of an obscure way, but at the same time it was built on trust and engagement with the band, and being able to communicate with them and get them what they were looking for.

What’s interesting for me, an interesting point about this story, is that you didn’t start out in your life saying, I want to be front of house mixer for the Counting Crows. You were just working on shows, and you were a drummer. And then I guess at some point there was an opportunity for you to become a drum tech. And so it sounds like you were just sort of open to learning all things. And then as you were around and you built relationships, opportunities came up. It doesn’t sound to me like you were lobbying for any particular position.

You weren’t calling that guitar player every day saying, hey, when are you going to give me that front of house mixing gig? When are you going to give me that recording gig? And then he finally called you.

Yeah, and I think that comes with a positive attitude. I work hard. I try and stay engaged with people. And I think that being around, and obviously having a good attitude towards things, is really beneficial to that. But I jumped on the opportunities that were presented to me. Straight out of high school, I hit the road. I had a bit of a helping hand from my father, who owned a road case company. And so I had already been established working in the industry; I got a job at about the age of 16 doing backline and stuff at a local backline company.

And so I had already started engaging with people. I met a front of house engineer who took me on my first tour when I was 18. And even then, he kind of took me under his wing and showed me some stuff, and I started mixing some of the opening acts on that first tour. So I was always very interested in that. I mean, I established that I could build a career on being a backline technician, and that the paychecks would come if I would do that.

And it gave me the opportunity to really learn a lot and engage with a lot of people. My second tour was with Avril Lavigne, and I was working with her as a drum technician and taking care of the playback rig on that tour. And I met one of my mentors on that tour, Jim Yakabuski, who’s a world famous sound engineer, also another Canadian, and he was really helpful. I was really able to get a lot from him. And funny how it worked out: it went full circle on my last tour with Counting Crows. He was mixing Matchbox 20, and so we were able to mix side by side and have a fun tour out on the road.

But those opportunities that I got myself into, I tried to take advantage of and try to get as much information and get as much knowledge from the people I was working with because there’s a lot of really talented people I cross paths with. And that was something that I kind of realized early on. It’s like, these people know everything that I want to know, and if I’m nice enough to them or ask them enough questions, I’m sure they’ll share some of this knowledge. And so I was able to extract enough stuff out of that to kind of put a skill set together for myself.

That’s great. And that’s like a whole lifetime of learning. I just wonder, if I look at this, is there anything I could take away from it for my own career? So if you were my mentor and we had a mentor-mentee relationship, and I was asking you: Shawn, I want to get to a place where I can be mixing some of my favorite artists and doing these kinds of tours. Is there anything I could be doing in terms of taking action? Is there anything that I can do, or am I just kind of waiting for the phone to ring and hoping that those opportunities come up for me?

Play with sound every day. I don’t know, that’s something I feel like I don’t ever stop doing: trying to improve my skill set and trying to learn. If there’s someone that’s waiting around for a gig, I don’t think those things happen very quickly if you’re waiting on something. But they do if you’re pushing yourself to improve your skill set, to expand your horizons, to learn new things, to get engaged. I mean, the only way that you’re going to have a good handle on mixing a show in a bunch of different venues is if you’ve mixed a bunch of shows in a bunch of different venues.

So I have to say, to me, one of the most important things is to get yourself into a position where you get an opportunity to do some of the work you like to do, as many times as possible, in as many different situations as you possibly can. Because once you get into, I would say, the bigger leagues, when you get in an upside down situation and you’ve kind of painted yourself into a corner, you need to have the skills to get out of that and still put on a good show.

So the experience is really what I think establishes people that are successful because they know how to deal with all of the problems or at least have a skill set to adapt and overcome, which is something I think is necessary in our industry.

So making mistakes and having the skill set to adapt and overcome. Speaking of that, what do you think are some of the biggest mistakes you see people making who are new to front of house mixing? You’ve been around: you’ve been starting out, and then you’ve been mixing the headliner and seeing other people come up who are just getting started. And now you’re even in a position where you’re doing more education. So could you talk about maybe one or two of the most common mistakes you see people making who are getting started?

Yeah. Probably kind of a twofold answer on that. I feel like some of the skill set needed to be an audio engineer is based in science and some of it’s based in art. And I think that the blend of the two of those is really the key to success. I feel like a lot of people look at audio a lot: they rely on real time analyzers and measurement data, which, obviously, is going to tell you what’s going on, and you can trust the information as long as you know how to measure things properly.

But I’ve seen people mix a show while they’re watching Smaart, and it’s like, this is not working. And so I think that there’s a reliance on the science side of things. And then the other side of it is, you need to have an understanding of what music should sound like. I really had an uphill battle with the Counting Crows, where we had seven guys on stage, seven people singing, and everybody was playing a bunch of different instruments. And I think that my goal, whenever there was a song being played, was that I could look at the stage and hear everything that was being played.

And I think that’s one of my biggest pet peeves: if I’m watching a show and I see someone play something and I can’t hear it, where is that? Why is it not in the mix? What is going on? And that’s something that takes the art of understanding how the music should be represented, and then also knowing all the parts. Like, are you missing cues that you have? Are you not unmuting instruments? Things like that. So that kind of thing is a mistake I think people make when they’re, like, still worried about the drum sound.

And it’s like, nobody cares about how the drums sound right now if the lead vocal isn’t over the mix, or if there’s, like, a lead guitar part or some sort of something going on that’s interesting, that’s integral to the song, and you can’t hear it. The fans are used to the record. They need to have those sorts of elements in the mix so that they can enjoy the show. So I think those are a couple of things that I see that bother me when I hear people mixing, where I’m not getting engaged by everything that’s going on.

I’m like, man, I wish I could hear what he’s playing, because I can’t hear that right now. And then also the reliance on visual stimulation instead of using your ears, and kind of making that judgment call of, okay, yeah, it looks bad, but it sounds good, so we’re going to move on. That’s something I’ve seen people do in the past. And for me to enjoy a show, I don’t know, it’s kind of a bummer, because having high standards like that, it’s hard to go to a show that’s not mixed well and enjoy it.

Totally.

Okay.

So, you mentioned vocals, and we’re going to talk more about that in a second. First, let’s just talk about finding your sound. You mentioned enjoying the show, looking at the stage, things sort of making sense in the audio as they’re happening on the stage. So you wrote this article called Finding Your Sound in the Studio, and I wanted to see if maybe we could use the same topic, but for talking about live production. So I’m going to read this quote from the beginning of the article that says: Walking into the studio isn’t the right time to decide what your record is going to sound like.

I know the feeling of being a kid in a candy store is enticing, but this is the time where having a clear idea of what you want your recording to sound like is really key. And in another interview, I heard you talking about being in positions where you get to spec the sound system. And so when you get to that position where you can say: oh, these are the microphones that I want, this is the mixing console that I want, this is the sound system that I want.

That is kind of a similar thing to a recording studio, being a kid in a candy store, right? You could pick all these things. So you might just say, I want all the most expensive microphones and all the most expensive gear. It’s like you go into a restaurant and you’re like, bring me the most expensive wine. So I wondered if you could talk about how that conversation might go when you are first getting into a live production. Was there a time, for example, with some of the artists that you’ve worked with, where you had to figure out what sound quality was going to make them happy, make the show successful, and make the audience happy?

And how does that influence your decision making process? So how does that kind of conversation go?

So I feel I’m a pretty big proponent of getting the sound right at the source. And to me, that has to be a conversation with the musician involved. And that’s something that I established with the Counting Crows early on. When I was working in the studio with them, we worked on guitar tones, we worked on drum sounds. I was already working with the drummer, and he was really happy with the way I was making the drums sound. And so all of these components that I was working on with the band were building blocks for a great sounding show.

So we were working through different guitar amps to achieve different guitar tones. We tried different bass DIs; we found one we really liked, and we ended up getting some for the touring rig. And so I was able to work hand in hand with the band. I had their trust to give my input on some of the band’s equipment that would establish the way things sounded, so that my job from that point on, from the microphone out, was easier. But also, those tonal choices early on allowed me the flexibility to choose microphones where I could get the best out of the source and get what I was looking for out of it.

So there’s a lot of things that I did in the live realm that I think some people would shy away from. I used a lot of studio microphones. I used a lot of ribbons. I used a lot of vintage microphones, but I was capturing sources that I was familiar with and sounded really great. So I was using that to my advantage, so I could then take it up another level by using a microphone that I liked, and that tone imparted on that instrument would translate to my mix.

And so I think if you have the opportunity to work with an artist who trusts you, getting the right sound, getting guitar tones and drums that work for whatever style of music or whatever artist you’re working with, is going to set up everything else down the line to be more effectively mixed. And then, yeah, I was a kid in the candy store, and I did have tons of gear, and I toured with a bunch of outboard stuff, and all of it was kind of based on things that I liked to use, from a lot of experience with some of the more esoteric outboard gear in the studio.

But all of that stuff, I felt, helped translate my vision of the show to the audience. So I was trying to get an album quality mix in a live setting as best as I possibly could. The flip side of that, too, is that all of the shows I mixed with the Counting Crows got released. LiveCountingCrows dot com has all of my mixes that go out to the world and get sold. So I was kind of trying to bring as much to the table as I could from a sonic standpoint, but then establishing what I was working with and making sure that it was helping me get the results I was looking for.

Now, the flip side of that: being here at Sweetwater for the past three years, I was head of audio at the Clyde Theater, which is a venue that we own in town, and I got a chance to mix everything under the sun for a few years. That’s a lot different from having the ability to start from scratch, or choose from a library of vintage guitar amplifiers, and make sure the drummer’s using cymbals that aren’t too loud. It really gave me the opportunity to learn how to deal with anything that got thrown at me.

And so the best way to fix a problem, still to this day, is at the source. If the bass amp is too loud, the bass amp is too loud: go up to the bass player and say, hey, if you’d like your show to sound a bit better, I’d love it if you could help me out here and turn down your bass amp. And that’s something that I think is always going to help you. If you can establish a relationship with the artists, or at least get their trust, even if it’s a short term relationship, even if they’re an opening act, if you can do what you can to help them sound better on stage, then you can make a better show for them, and hopefully they can get some more fans and keep coming back.

And if they trust you and you’ve given them your best effort, hopefully that translates into a relationship that can be maybe long term, maybe not. Especially when you’re working in a venue and are able to engage with touring artists and sort of make friends, give them a good experience and show that you care. I think that really helps the final product be better.

Do you think that comes through with the artist? Do they hear you say that? And they think, oh, Shawn’s here every night. He’s been here for three years. He knows as much as anyone about this room, so I should listen to him because I want to have a great show tonight or they think, oh, no, this guy’s going to try to make me turn down. I need to fight him off somehow because I’m worried about my own performance. How does that usually play out?

Well, I mean, there’s preconceived notions of the sound man being the angry sound man coming in and kind of pushing people around. And I try and establish that I’m here to help. And that’s something that I think when you deal with professionals, I think that you can hopefully, as a musician, let your guard down a bit. But I feel like there’s a lot of musicians that have their guard up because they’ve had bad experiences in the past or people trying to control them, not maybe for making the show better, but just making their life, quote, unquote, less miserable.

And I think that really establishes, if you’re like, hey, I’m on your team: if you guys sound good, that makes me look good, and vice versa. So all this stuff hopefully works in, like, a symbiotic relationship with the artists, where if I give them my good effort, they’ll give me their good effort, and we put on a good show, and then hopefully the fans enjoy it. And then they can come back and play more shows and have a larger fan base. And I think that on a smaller scale, when you’re not working with a headliner, you put in the effort that you would if it was a headliner. Like, I had a mix template that had all of my bells and whistles.

Even if I had three inputs, I had everything ready at my fingertips on my console, so I could give them everything if we got into something that was a large scale thing; I had all this stuff there. If it was an acoustic guitar and a vocal, I still had some effects and things I could do. So I had my palette at my fingertips, and I was working on an Avid S6L, which is not a common house desk. I was pretty spoiled with that. But at the same time, mixing on an X32 or an M32, you have the ability to get something going for any of the artists where you can give them a little bit more than the bare minimum.

And I think that hopefully translates: if you show that you care, hopefully they appreciate that. And in the end, hopefully the audience appreciates that.

Okay, Shawn. So you made this great video for Sweetwater called Tips for Mixing Live Vocals, and you and I have known each other for a long time now, I think about 38 minutes, so I hope you don’t mind if we potentially disagree a little bit. One of the things that you say in this video is that a loud stage would be a more appropriate choice for a dynamic microphone. And this kind of caught my ear when I was listening to it, because a few years ago I interviewed Philip Graham from Ear Trumpet Labs, who make some really cool looking condenser microphones.

And I basically told him all of my ideas about why dynamic microphones are better for live events and concert stages. And he disagreed with me on all of those things. And so I wondered if you wouldn’t mind just sort of defending your statement here about dynamic microphones.

Yeah. Happy to do that. I feel that dynamic microphones are the go-to choice for a live situation. Now, there are situations where a condenser microphone may be appropriate. I would say 99% of the shows I’ve mixed in the past ten years have been on dynamic microphones. There is one specific occasion where I mixed a show with a condenser microphone, and it was spectacular. It was a singer and an acoustic piano on a stage, and it allowed for me to have a microphone with a more sensitive pickup on that stage, because there was no real noise floor.

It was a piano and vocal, I wasn’t mixing loud, and everything kind of fell into place with that. But dynamic microphones: if you have wedges on stage, which some people still do, and especially if you’re dealing with local artists or opening acts on tours in a house sound person environment, a condenser microphone in front of a wedge is a completely unstable situation to try and manage, especially if someone has bad mic technique. So, yeah, I would probably take it to my grave that I would put a dynamic microphone in front of a vocalist on a stage almost any day of the week.

I’m a huge fan of the Telefunken microphones; the M80 and the M81 are extremely amazing microphones, and I’ve used those for years and had amazing success. The thing that comes into play with that is that those microphones have an extremely tight polar pattern, so the pickup is very directional, so you don’t get a lot of bleed on the deck. That’s something that I fought for years, especially with the Counting Crows, having a bunch of guitar amps, seven people singing, drums and all that stuff going on.

We had a couple of people still on wedges, and so there was a lot of noise going on. So I needed a microphone that was pretty much the cleanest thing, where I could get the most direct signal without a lot of interference and bleed. Those microphones really made my job easier. But you get into situations where you have artists that are maybe using in-ears and have a strange in-ear mix, where, if you’re using a dynamic microphone compared to a condenser microphone, the ambience in a venue really changes.

I mean, it’s pretty surprising: if someone has a really loud vocal in isolating in-ears, they pick up a lot of ambience of the venue, even on a dynamic microphone. So if you get into the realm of having something even more sensitive, that picks up some of the cavernous sounds of an arena, that can put your artist in an unfamiliar place, and the performance may suffer. So, I mean, I don’t know, I’d be hard pressed to find a solution with a condenser microphone that would make sense for me in a live situation.

And I can’t even think of one that would make me happy in the audio world. So I think I’m going to hold my ground on that one.

Shawn, you’re making me realize that to pick a microphone, just talking about vocals, I guess you can’t really just audition those in a studio environment. You would really have to try them on a show, because there are so many different factors going on on a live stage. So I guess you just really have to try it on a show and see if it works. Is that kind of how you’ve picked vocal microphones over the years? Like, you tried something new, and you did a whole show, and you’re like, you know what?

For many different reasons, this really works or for many different reasons, this really doesn’t work in this situation.

Yeah. I mean, I went through a few different microphone changes with the Counting Crows. We landed on the Telefunken stuff, but we had an opportunity to do it, and it had to be something that both ends of the snake agreed on. Me and the monitor guy had to sort of be like, okay, we’re going to try this today. He would have to get it done, and he would have to be happy with it being something that he could work with. So that was as important as it was for me to have a good sound up front.

And so there were times where we disagreed. I mean, sometimes it’s hard to pry a 58 out of someone’s hands and give them something else, just because of familiarity, especially for an engineer that’s been with a band for a long time. Changing things like that is sort of like, you don’t want to take the blankie away from the singer or something like that. So I think you have to establish that there’s a reason to change, and you have an opportunity to try something in a somewhat controlled environment.

You have to know that the venue is not terrible. I’ve done it in the past, where we’re like, hey, let’s try this. We try it. It just sounds super weird. And it’s like, today is not the day to do the change. And there’s also the psychology of dealing with a musician. If a singer rolls in and he’s totally checked out, not engaging with the sound check, or just not giving it his all, and you’re switching microphones on him and he gets into a show situation, and he’s like, what is this thing?

And why does it sound so weird? You don’t want to be the one that gets blamed for that. So I think that establishing those changes has to be something that’s, like, sort of a team effort, but it has to be justified, and you have an end goal of success: it’s sounding better or working better for whatever you need. And that being said, a lot of the stuff that I would do in tuning a PA, and even implementing PA design in a tour situation, would be to keep the center vocal position as clean as possible on stage.

So there’s a lot of work that we did to kind of keep the stage as quiet as possible, to leave the least amount of low end rumble and stuff like that. That would establish sort of a consistent space for the singer to work in, too. And then the microphone reacts more efficiently, with less interference from all those other factors. So it’s a loaded question; there’s a lot of different variables that go into that.

Shawn, how do I set the high pass filter on a vocal? As high as it’ll go until it sounds bad?

Yeah, I know.

When does it sound bad?

Yeah. And I think that I would probably go higher than lower on a high pass, just to kind of keep that clarity. I mean, it’s easier to get a thinner vocal above a mix anyway. But 150 is somewhere I would hover, around between 120 and 150. And then if I needed to go further, it depended on things like: if I was pushing the gain on something, I would tighten it up. I mean, I’m not scared of using the high pass filter or lots of EQ.

There was a point in time where I was actually using a channel of a Lake processor to EQ my vocals. So I was getting, like, surgical, slicing things out and cleaning it up so that I could get the most amount of gain before feedback. And so it really depends on what you need to do and what the vocalist sounds like. And that’s sort of the thing: if they’ve got a really low voice and you high-pass too much of it, you lose all the character. It’s that balance of what works for the singer, and then what works for your mix, too.

I mean, what you want to do is hopefully have people hear the vocal. I’ve been told to turn the vocal down or bury it before, if someone’s not super competent. But for the most part, I think that people want to hear the voice, and you need to be able to get it up there. And so, yeah, I pretty much raise it till it starts sounding thin and then roll it back just a little bit, just so it has some body. But in a live situation, you wouldn’t be adding, like, 100 Hz to the vocal; any of that sort of stuff that gives you some girth in a studio recording just doesn’t need to exist.

I think, in most live situations, but obviously that’s stylistically dependent, and room and system dependent.

I mean, if there’s a bunch of wedges on stage and side fills and they’re standing near the sound system, then you have loads more low frequency buildup than if the stage is just super quiet.

The monitor guys really hate when you go to the console and roll their high pass up, too. But that is something that happens when you’ve got a bunch of big wedges on a deck and the vocals on stage aren’t high-passed enough, which is sometimes monitor engineers that are trying to get a lot of level trying to get that chesty feeling out of the wedge. The blowback from that to front of house is sometimes pretty gross, and it actually affects the phase relationship with the microphone and with the low frequency stuff that’s coming out of the wedges.

So that’s always something, too, that if you can stay friends with your monitor engineer and hopefully work together and be like, hey, man, that’s, like, really booming out here, or really thumpy, and that’s going to compete with the mix, too. So I think that, as much as you high-pass a vocal, if it’s still super chesty or super thumpy on stage and there’s wedges up there, that’s going to fight you all night long. So I think that’s something to be aware of. But, yeah, I’m not scared of getting rid of that stuff. And that kind of goes for everything: the high pass filter is your friend when you’re mixing.
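To put rough numbers on that "raise it until it sounds bad" approach, here is a minimal Python sketch of auditioning a vocal high-pass filter at a few cutoffs around the 120 to 150 Hz range Shawn mentions. The filter order and the cutoff list are illustrative assumptions, not his console settings.

```python
# Minimal sketch: audition a vocal high-pass at a few cutoffs and pick by ear.
# A 2nd-order Butterworth stands in for a console channel HPF (an assumption).
import numpy as np
from scipy.signal import butter, sosfilt

def high_pass(vocal: np.ndarray, sr: int, cutoff_hz: float) -> np.ndarray:
    sos = butter(2, cutoff_hz, btype="highpass", fs=sr, output="sos")
    return sosfilt(sos, vocal)

sr = 48000
vocal = np.random.randn(2 * sr)  # stand-in for a recorded vocal track

# Sweep upward until it starts sounding thin, then back off slightly.
for cutoff in (80, 120, 150, 180):
    filtered = high_pass(vocal, sr, cutoff)
    print(f"HPF at {cutoff} Hz ready to audition ({len(filtered)} samples)")
```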

Shawn, how do I set the pre-delay on a vocal reverb?

Well, I like to keep it fairly tight. I’m not a big fan of really upfront effects. I like to kind of make them sound like I’m creating space around a vocal, but I don’t like to hear reverb. I’m not a fan of long, sort of Lexicon-sounding things that are very apparent. So I end up using a few different reverbs sometimes, or, like, some short delays. I like to keep my pre-delay on a reverb usually under 20 milliseconds; anything longer than that sort of gets it too far out from the rest of the vocal.

And really what I’m trying to do is kind of create a sound stage with my vocal effects. I’m not creating, like, a really prominent effect, but just giving the vocal a place to sit in the mix. And that’s something that I’ve kind of been doing for a long time. I’m just either scared of, like, really loud effects, or I just tastefully don’t like them. There’s nothing wrong with that. But I like to make it sound natural, so that I’m literally just kind of pushing away some stuff so the vocal can sit in the middle of either the reverb or, like, some short delays.
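As a concrete illustration of that under-20-millisecond guideline, here is a minimal sketch that converts a reverb pre-delay from milliseconds to samples and shifts the reverb send by that amount. The 48 kHz rate and the 15 ms value are assumptions for the example.

```python
# Minimal sketch: apply a reverb pre-delay by shifting the send later in time.
import numpy as np

def apply_predelay(send: np.ndarray, sr: int, predelay_ms: float) -> np.ndarray:
    offset = int(sr * predelay_ms / 1000.0)  # 15 ms at 48 kHz -> 720 samples
    return np.concatenate([np.zeros(offset), send])

sr = 48000
send = np.random.randn(sr)                # stand-in for the vocal reverb send
delayed = apply_predelay(send, sr, 15.0)  # 15 ms, under Shawn's 20 ms ceiling
```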

So, yeah, that sounds great.

It sounds like you’re really balancing there between two sides: making an artistic choice, which would be to have some big effect that’s really visible, really audible, and then over here, where you’re doing sound reinforcement, asking what needs to happen with this effect to help the mix work.

Yeah, I take that into the studio work I do as well. I’m always hesitant to go overboard with effects. I’m not a big delay throw kind of guy, and obviously with the Counting Crows, there’s not a lot of affected vocal. It’s pretty prominently trying to hear what Adam is singing or saying, and bringing the lyrical content out. So I’m not trying to make it sound artistically affected; I’m trying to represent what’s going on. And I think intelligibility, in any kind of situation where the lyrics are important, is a really important thing, which goes back to the high pass filter: having the intelligibility of a vocal be there in the mix really is important.

Just so people that are there as fans can understand what the singer is saying. I don’t think there’s anything worse than showing up and being like, what did you just say? I can’t hear what he’s singing. So those things: not overly affecting the voice, not pushing it too far into the mix, and also keeping it clean with effective EQ, allow for the clarity and intelligibility of a vocal.

So let’s talk about clarity, intelligibility, and the reverb return. In that video about tips for mixing live vocals, you also mentioned that it’s important to EQ the reverb return so that it sits well in the mix. So can you tell me more about how to do that?

Yeah. My approach to EQing a reverb return, or any effects return, is this: depending on the preset you pulled up, there’s a certain amount of, I wouldn’t say quite digital artifacts, but, like, nonrealistic sounding space that comes back from a reverb a lot of the time. And what I try to do is usually carve out some of the harsher, higher mid frequencies. I take off some of the top end. I high-pass some of the low frequencies in order to fit that space into what I’m doing.

And I think that is another piece of the puzzle that allows me to create the space around the voice: I’m tailoring it to support the vocal, rather than just being like, hey, there’s the reverb, all of it, all the time. I usually take off anything that picks up esses, where you hear the s in the reverb. Sometimes I’ll even de-ess a reverb: I’ll put a de-esser in front of the reverb plugin to take off some of the esses on the vocal. If I’m keeping them in the actual vocal sound, then at least they’re not hitting the effect as hard.

And then when it comes back, just taming some of that high frequency information; there’s a lot of it that just doesn’t need to be there. And I think a mistake that people make is when they leave, like, a full frequency reverb in a mix, and you’re like, that’s a lot of reverb. And it’s just because it’s all of those frequencies all of the time. When you tailor that to the sound you’re looking for, I think it gives you a more natural sounding reverb. So that’s usually my approach to that.

And the same thing if you have, like, a delay: a filtered effect, where you high-pass and low-pass and find a spot where it kind of accentuates the voice and makes it something that can be a little bit ghosty in the mix. That’s a little bit more tasteful than a blasting delay that covers the full frequency range.
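Here is a minimal sketch of that return treatment: high-pass the lows, low-pass the top, and notch a harsh upper-mid band before the reverb goes back into the mix. The specific frequencies and Q are illustrative guesses, not Shawn’s settings.

```python
# Minimal sketch: EQ a reverb return -- high-pass the lows, low-pass the top,
# and notch a harsh upper-mid band. All frequencies are assumptions.
import numpy as np
from scipy.signal import butter, iirnotch, sosfilt, tf2sos

def eq_reverb_return(ret: np.ndarray, sr: int) -> np.ndarray:
    hp = butter(2, 300, btype="highpass", fs=sr, output="sos")  # clear low mud
    lp = butter(2, 8000, btype="lowpass", fs=sr, output="sos")  # tame the top
    b, a = iirnotch(3500, Q=2.0, fs=sr)                         # harsh high mids
    for sos in (hp, lp, tf2sos(b, a)):
        ret = sosfilt(sos, ret)
    return ret

sr = 48000
ret = eq_reverb_return(np.random.randn(sr), sr)  # stand-in reverb return
```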

Okay, let’s talk about the Clyde Theater. I’ve never been there; I’ve just looked at some photos online. Let’s just start with some of your favorite things. So what’s one of your favorite things about working at the Clyde Theater? And maybe what’s one of the most challenging things about working there?

Well, my favorite part about the whole Clyde project was the fact that I was involved early on, before the theater even opened up. So myself, along with the stage manager and the monitor engineer that we worked with there, Drew Consolvo, we worked together to adapt and sort of deal with any problems that we foresaw before we even opened. We weren’t part of the design process, but when we got in there to do the final install and to commission the system and get everything going, we modified a few things in order to make it so that people on the road rolled in there and were super happy with everything that was there.

We made things accessible. We made things flexible. It was clean. All of the cables and adapters were there; everything was ready to go. Everything was dialed in, and that was really kind of a nice thing. And we got a lot of feedback from a lot of different artists that rolled through, like, this is one of the most awesome venues we’ve been at. Which is cool, because we’re in the Midwest, stuck between a lot of established venues that may not be the most fun places to do shows.

And so we really kind of made it so people had a great day at the office, and that’s something that I think we’re both really proud of: having a venue where people could show up and just have an easy day. It was a ramp load-in, our truck dock was 20 feet from the stage, there were no stairs, there were no stupid things to make your day miserable. It was like we had everything we needed, and it was accessible, and it was a nice facility that sounded really good.

We had all of the things that just make a day on the road easier for someone that’s been out for six weeks or whatever, when they roll in and you can give them a bit of a rest because everything’s covered. And that’s something that we strived for. And that made, I don’t know, it made it great for doing shows. It was an awesome experience with pretty much everyone that walked in, and, yeah, I don’t think there’s really anything that I didn’t like about that.

I mean, it was a learning experience dealing with being a house guy for a change. I had a lot of great experiences mixing a bunch of random bands that I never really would have gotten a chance to mix, and having fun with that. So, yeah, it’s a great venue. Hopefully you’ll get to see it at some point.

Yeah, that’d be great. Well, I just want to say thank you, and I have so much gratitude for people who care about this stuff, because I have been on tours where we’ve showed up at tiny places where we’re figuring out how to turn the electricity on and carrying cases up tiny stairwells, and giant places where you’re pushing things up and down giant ramps and there aren’t enough forklifts, and all these problems. And then when you show up at a place that’s just easy to work at and seems like it’s designed with this kind of work in mind, oh, God, tears just start to come to your eyes.

But between Drew and myself, we’ve been to every venue on Earth that sucked. We’ve been to all the good ones. And so we were able to bring that experience. And I was advancing some shows with some people. I was like, yeah, I know. We got you. And I could tell they didn’t believe me when I said, yeah, I know we got you. We’re in Fort Wayne. It’s a hard thing to believe, but it’s like, when you tell someone, yeah, this is a great venue. It’ll be an easy day.

And they’re like, are you sure about that? And I’m like, yeah, it’s great. And so it’s nice to be able to do that, because I know how it wears on you when you’re on the road and you’re back to back in miserable venues, up the stairs and you can’t fit things, oh, there’s an elevator but it doesn’t fit any of our cases, and all these sorts of things that just pile up. So when you can roll into a venue where there’s hot water and clean showers and good food and a nice PA and a clean backstage and lots of storage, all of those things, I don’t know.

It’s just a reprieve from slugging it out. And I feel like there are more venues like that now. My understanding is that through the pandemic, a lot of venues put a little bit more effort into doing some upgrades and getting some stuff together. So hopefully, when touring is fully fired back up, which is looking like sooner rather than later, there will be more awesome venues on the circuit. And I think, in the end, the fans appreciate a band and a crew that’s happy, because they hopefully put on a better show when everyone’s in as good a mood as they can be.

Okay.

Shawn, you mentioned the final install, you mentioned having everything dialed in, and you mentioned making people happy. So let’s talk about that. Recently, a friend of mine installed and calibrated a sound system in a small church, and I actually came up to observe a little bit and do some tests of my own. So I happened to be there and could see the whole thing go down. And when I asked them about it later, it turned out that the client was unhappy with the work. Now, the client wasn’t there while I was there.

My friend just did all their work and then left. And then later on, the client listened to it. So my friend just heard secondhand that the client had asked for it to be improved. So the company sent out another tech, and my understanding is that that tech basically reset everything back to the manufacturer defaults. And the client was happy with that. So I don’t think there’s really a right and wrong here, but I do wonder if there’s some way that I could end up on the side of the client liking it more often than not.

So is there some way for me to get inside of their head to kind of predict the characteristics of what is good sound to someone and then try to produce a result that they’re happy with, or is the only way to really guarantee that to just sit with the client in the room and basically audition changes until they’re happy?

Well, I don’t know. With sound being so subjective, I feel like it is a personal decision. People can like things that sound bad to me and think they sound good. I also feel that, even on an install: when we commissioned the Clyde system specifically, we had one of the techs from JBL come out, Raul Gonzalez, who’s an awesome systems engineer, all over the world doing JBL stuff all the time. We went through it, we tuned it. He did some of his tricks to kind of get it to where he thought it was cool.

We got into it. And then, after mixing a few shows on it, I felt it needed some changes. So we kind of modified it. But I never really stopped modifying it, in a way that if an engineer came in who I could tell was a good engineer, we’d talk about the system. I’d be like, hey, what did you like about this? What didn’t you like about this? What could we improve? And we had limitations, like we had a sub cavity under the stage, and we only had so much space.

The subs specifically were always kind of a tricky thing, but we had physical limitations, so we could only do so much with those. And so with that venue, it was always a constant: can I make it half a percent better? Can I make it a little bit better? Can I tweak this out? So it was kind of a work in progress. But I can see that someone imparting a personal taste on a sound system could backfire, if it’s not something that the end user or the client is accustomed to or likes, or if someone is mixing on it and they don’t really understand what you’ve done to it, no fault of yours, just the general concept of tuning a PA or a room specifically.

And they’re just kind of not understanding what you did. I could see that going sideways. But I feel like in a venue or an install, I roll into a lot of places and they’re like, oh yeah, this is done, we had it installed, that’s fine, don’t touch it. But hold on, what is going on here? You ask a few why questions, and then you’re like, hey, can we improve on this? And it’s a never ending process. I mean, even still, in the studios I’m mixing in: I got new speaker stands last week for the studio so I could move the subs that we’re trying out underneath the speakers, so they’re in phase with everything, and it totally changed the dynamics of this room. So I’m constantly tweaking stuff, even in a studio capacity.

I’ll pull out Smaart, do some measurements, I’ll move some mics or some speakers around. And so I always feel that there’s an opportunity to improve, even on an install, even on something that someone says is amazing. With so many people having so many good ideas and so much technology to improve stuff, I don’t think any one person can get everything completely perfect. And with the subjectiveness of it, it’s like, you get someone that comes in and they’re like, well, I need more sub.

I’m mixing a DJ gig, and this just doesn’t have enough sub. We need to bring in more subs. And you’re like, that’s crazy, there’s tons of sub; but for that guy, there’s not. And so those are the kinds of things where you have to adapt. But at the same time, I always feel that there’s room for improvement, even on something that I think might have been some really good work on my end. Someone’s like, hey, this is what I’m hearing. And I’m like, oh, cool, that gives me a new perspective to listen to it, and then possibly approach either modifying or adapting what I’m doing to accommodate a fix on that. Or I could be like, you’re stupid, I don’t need to listen to you.

Let me try to hear that through your ears.

Yeah. And I think that mixing by committee a little bit on that kind of helps. I mean, not too many cooks in the kitchen, but some trusted ears. I always appreciate it when people come out to a show; it’s like, hey, what do you think? And actually, I would like to hear some feedback. It’s nice to be able to get a collaboration on things. So, that being said, about install stuff: I think there’s a lot of room for improvement in a lot of different situations, but I’m sorry that you guys didn’t win with your client there. You could have someone that has a completely different approach to understanding or enjoying sound than you do, and when they’re paying, they’re in charge.

So there’s those dynamics as well. But yeah, I don’t know. I’m always up for trying new things or trying to improve on a situation.

No, this is great. As you’re talking, I’m realizing that if you’re my client, probably a better way to go about it, instead of me saying, hey, Shawn, I’m really special and I’m really smart, and therefore I’m going to knock it out of the park on the first try, would be to say: hey, I’d like to build a relationship with you, so with this first pass I’m going to get it to a place where I feel like it’s consistent across the space. And then I want to get you in here and get you to listen to it.

So then I can make adjustments. And, I don’t know the right way to say it, but I’d love to have a relationship where I can help you improve this over the next X days or X weeks. Maybe we have a relationship where I’m going to come back in after you’ve had a few shows and take your notes and make changes. That’s probably a better way to approach it than trying to pretend I know what is best for everyone in the world.

Yeah, and everyone has a different workflow. House of worship stuff especially is tricky because you have a lot of volunteer sound people, too, that are trying to make the best of what they know how to do. So having something that's easy to control without a lot of bells and whistles is always a better approach in those sorts of situations. But yeah, checking in three months down the line, popping in and checking the mix: hey, how's this working for you? What can we do? Those sorts of improvements.

I’m always looking for that. And even like, every time I hear something, the more I listen to things and the more I experience things and the more information or knowledge I gain, I think the better equipped I am to sort of either make suggestions or, like, question, what did I do last year? Is that the end? I’ll be all of what I’ve done, or should I improve on that? Or should I look for a solution? Not that I’m trying to create work for myself, but just baby steps on stuff, but yeah, with a client relationship, like I mentioned before, dealing with artists, I think a lot of success in audio engineering is based on relationships, whether it’s with the artist, whether it’s with the venue, whether it’s with your a two, whether it’s having a team of people or having a good set of relationships with people, I think it leads to success.

So I think that shows up there as well. It's about being able to go back and continue working on something and removing a little bit of the ego that gets in the way of, hey, I did the best job I possibly could. Well, for you, it might be, but maybe in that case, what the client needed was something else.

Sure, Sean, let’s talk about System latency. For a second. I’ve had a couple of people talking to me about this recently. I’ve seen Robert Scoville talking about different things going through the console and console latency a lot recently, and at the last Lifetime Summits. Were you looking at System latency when you were thinking about what would go into the Clyde theater? Is the system latency influence your decision making when you’re setting up the console, specking the sound system, that kind of stuff.

I think the only time that I really get super concerned about it is when I start getting into, like, crazy plugin world on a digital console, and I start pushing the mix so far away from the band that they can hear it. I think that's a dangerous situation to get into, where you're inducing more latency than you need to just to throw some more bells and whistles on your left-right bus and things like that. As far as driving a PA, I try to keep things as efficient as possible.

Everything we were driving at the Clyde was AES, and so I tried to keep that as tight as we could. There wasn't a lot of latency induced besides the processing I would do. And I got into that deep when I was mixing on a VENUE and I had a ton of plugins and stuff like that. There were workarounds to try and get delay compensation to work in that. But I feel like, for the most part, with a lot of the networked audio systems, audio is moving pretty quick in the digital realm these days, so I'm not as concerned as I would be if I was just piling up plugins on the console and creating my own problem at front of house.

I think that’s where it becomes an issue for me and becomes an issue for me when I start affecting other people’s performances when I’m doing things that are like, why is this, like, I can’t hear? And we actually talked about it the other day where we’re mixing in one of the Sweetwater theaters here. There’s an older venue with a bunch of processing because it’s split out for, like, broadcast and split up for, like, hearing assist and all these different things. But then we also mix wedges off that for people on stage, and it’s like, I can’t play on it.

It’s so weird. I’m like, we have to turn all the stuff off in order for you to get sound that’s effectively quick enough for you to perform properly. And so someone brought it up the other day, and I was like, yeah, well, not always in all the rooms doing things like that, but I’m aware of that. Like, if you’re mixing monitors from a console, it’s got a ton of plugins on it. Someone’s going to be like, this is so strange. So when you’re adding 100 milliseconds of latency in processing and stuff, it’s not fun for anyone to deal with that.

But yeah, other than that, there wasn't much of a concern. Most of the system at the Clyde was spec'd before I got brought in, through Harman, with the JBL and Crown stuff that was brought in. JBL has been a big part of Sweetwater for many years, so it was the logical choice to get that rig into that room. That was my first time really working with the A12s and all of that. It was impressive. I mean, I had mixed on a lot of rigs and spent a lot of time working on them to make them sound how I was hoping they would sound, and hearing the A12 out of the box...

It was a pleasant surprise. And from there it was a really enjoyable experience. There was a little bit of learning the BSS world and some of the processing that I hadn't been so familiar with; I'd been pretty much strictly on the Lake processing and all of that sort of stuff. But with all of those things, latency wasn't ever really an issue for us. It was just a question of whether I was causing problems by trying to do too many things. Okay.

It’s fixable. You haven’t put on all the system and realize that you’re stuck now.

Yeah, there’s a few things, and I always know that there are variables that you can’t control in every situation, and I have to be accepting of those things. Otherwise, I would lose my mind trying to think of everything that could possibly not be perfect. So adapting to what those are is fine. But, yeah, the latency is my own doing for the most part, if it’s ever going on.

So, Shaun, you’ve mixed up some of the worst places in the world, some of the best places in the world. You’ve just had a lot of experience, and I’m sure made a ton of mistakes on the way that you learned a lot of lessons from. So I wondered if maybe you could pick out one of them to share with us, maybe something that was especially painful or was an especially big lesson for you and just kind of walk us through what happened.

I think probably the most embarrassing audio mistake I made was when we were doing Pinkpop, a large festival, I think in Belgium, maybe five or six years ago, and we had a decent slot. There were probably 40,000 people in front of the stage, a sizable European festival. And we had a rotating front-of-house zone, so everybody had their console on wheels, and we'd get pole position and roll the consoles into place. So I built my rig out there and got forklifted out there.

And it was like bouncing around. And I had lots of outboard gear and a whole bunch of different things. So I got it all together.

Wired it all up.

Strapped it all together with big truck straps. So I had this big rolling island of gear that just went into place. And I had a couple of friends and a couple of people from some other bands come stand by me while I was mixing. They wanted to hear the band, and I was all excited. It was getting broadcast, so I was mixing a broadcast feed as well. So the pressure was on. And I line checked everything through headphones, so I got the PFL and everything.

All my inputs were good, but I never checked anything through the PA, which was my big mistake, and which I would probably never do again. So they start a song. The guitar intro goes on for a while, and the drums kick in. I got no drums. Shit, no drums whatsoever.

So you can see a tiny person in the distance hitting things.

But you’re not playing drums? I’m like, oh, my God. Where’s the drums? And I’m looking down and I look over and I got, like, outboard compressors on the drums, and they’re both, like, dark. And I didn’t even look over there. And so I ripped the strap apart, ripped the cases open, throw all the lids, and the ICS had just fallen out of the back. And I like, plugging both back in. The drums came to life. And I think by the chorus, I had everything together, and that was kind of like one of my more embarrassing.

But also, I should have checked all of this stuff. This was my fault. And everyone was kind of impressed at how fast I moved to try and fix my own problem. But when you get into a complicated situation and you have a lot of complicated extra bells and whistles, make sure that you have them functioning, because it's not cool when someone's like, hey, why didn't that work? Well, I had all this stuff, and then it wasn't plugged in, and that's my fault. So I feel like that was probably the most embarrassing one.

And I really doubt that anyone in the audience really noticed. But it broke my rule of seeing someone play something and not hearing it through the PA, so that bummed me out. I don't know. I would say that was probably the biggest, clumsiest thing.

Why did the PFL work, but not the signal going through the system?

Well, no, I didn’t check it. I didn’t open PA up. I just played some music through it. I was like, okay, yeah, my outputs work, and then I didn’t pass signal routing through all of my messy routing and things that I was doing for fun and should have maybe been a little more straightforward. So simplicity probably wins and live sound for that. I would say that taught me some lessons and double checking or just being more simple in my process. And I think that was something that kind of opened my eyes and said, Well, I think it’s more important that everything just works rather than having all of these cool things better have a show than no show.

Shawn, I have a few questions here that came in from the Internet. Bobby B asks about best practices for building a rider. So, kind of a general question.

Are there any tips you want to give Bobby B for building a rider?

Put on the things that you can use, and then make sure you very much highlight what you're not willing to use. I think that's the biggest takeaway. I mean, I was pretty spoiled in the sense that we traveled with control all the time, so we always had a console and processing and stuff like that with us for a whole tour, for the most part. But there were some PAs that I just refused to mix on after a while of repeatedly bad shows and bad coverage. Obviously, I could mix a show on them.

But in the case of a few of them, it really gives the audience a bad experience because the coverage is so inconsistent. So that was something that I was really adamant about: if there was a speaker system that I didn't like to use... I won't name names.

Wait, no. What is it? Are you not going to tell us which speakers you're not willing to mix on? Well, I guess I'll have to do a show with you, then, and get your rider. That's how I'll find out.

Yeah. And I really feel that with the development of sound systems in the past ten years, if anyone has a PA that's 15-plus years old, it's going to be beat down. So do anything you can to make sure that you have functioning speakers: I'm not doing a show until someone signs off that all the drivers are functioning and everything's working as it should. And watch out for situations with large-format point source PAs and stuff like that where it should really be a line array.

Or like, if it’s a venue that brings in a PA, making sure that you get something that fits your needs rather than whatever the cheapest option is is always something that I think is always a fight with a promoter and a venue, too. But other than that, knowing what you need to pull off your show, I think that there’s been a large transition of people being reliant on house consoles to people traveling with the X 32 rack for ears and maybe an X 32 in front of the house.

That seems to be mid-level touring now in so many places. I see so many bands that have their own gear, all dialed in and ready to rock. So there's less of a reliance on a venue to provide consoles and things like that. But that being said, if you're really particular, then bring what you need. I think that's the takeaway about relying on other people to provide you things: if you need it, you need to bring it with you, because if you're touring the world and you need a specific piece of gear, it's going to be tough to find.

People are going to blow past that part of your rider when they're not super concerned about the weird audio gear you need to make your show happen. But a functioning PA that represents your mixes is, I think, a paramount piece of the puzzle. It's about outlining what you need to do your job and figuring out what that is. If you're mixing an acoustic act, maybe you need a DI and a 58 and some speakers that function. If you're mixing an orchestra and you need, like, 60 DPAs, then you're going to have to make sure that those people provide what you need.

So I think it's figuring out what your limitations are for providing your services or doing your job. What's the least you're able to do your job with? And then, what are you comfortable doing your job with? And making sure that your production manager, or whoever is advancing the shows, fights for that for you. Yeah, I think that kind of covers it without going into details. Really, the most important thing is I just didn't want to have a PA that sounded bad for the show. And it depends on the size of the venues you're working in, too.

I mean, all of that is scalable to some extent. Okay.

Gabriel P says, how is it working with Avril Lavigne?

It was awesome. I spent about 18 months touring with her. I think I had just turned 19. We went to 49 countries in those 18 months, and we did a six-week tour of just Japan, bounced all over the place. We pretty much did every TV show that was being broadcast at the time, all of the daytime and nighttime shows and MTV and all that stuff. So it was an amazing experience. I saw a lot of places, met a lot of really great people, and made it to a bunch of different venues and a bunch of different festivals.

And so it was a really awesome experience. The guys in the band were really great. She sort of changed up her band after the tour that we did, but the bass player, Charles Moniz, scooped a gig with Bruno Mars just after that and has been working with him since then as an engineer. I think he's got five Grammys now. So there are a few people that went on to do some great things. And that was the tour I did with Jim Yakabuski, who's obviously an industry legend. I learned a lot of things and still stay closely in touch with him.

So, yeah, it was a great experience. I learned a lot of things. There was an opening act on that tour: Butch Walker was opening up, who's a pretty successful producer and solo artist himself. But his engineer, Paul Hager, is also a super talented live sound and studio engineer. And so I learned a lot from Paul, and it was a really cool experience being around him. He would go into studios on days off, and I would tag along and stuff like that. So it was an interesting thing.

And that continued on. He was actually the front-of-house mixer for the Goo Goo Dolls when I worked with them. So I made some long-term relationships with people on that tour. I got to see a lot of places and learn a lot of things. So it was very cool. Okay.

Greg McVeigh says: ask him when he's going to ditch the real job and mix touring acts again, delivering that with as much sarcasm as possible. His work with Counting Crows was just fantastic.

Yeah. Greg’s a great guy. And yeah, he came out. Actually, I’m trying to think I think the last time I saw him was in San Diego, and I had a Sandstorm console. Incident, there might have been another real one. We were in a poolside venue in Vegas and monsoon rolled in, and I went and ran onto the stage to save all of our guitars and left my console uncovered, and it ended up with, like, a sand drift over it. And so we went to the I think maybe it was the Orange County State Fair or somewhere near San Diego, I think.

So anyway, I had a console full of sand, and I was swapping that out, and I think that was the last time Greg was out. So yeah, I love doing that. But I'm really enjoying having employment through the pandemic. Sweetwater is amazing. We do super cool things here, and stability is something that I've never really experienced before in my professional career. The hustle of finding the next gig, finding the next record, finding whatever... I don't take that for granted. I quite appreciate my opportunities here, and it's just a great company that is endlessly growing.

And we’re doing a lot of cool things. I get to be involved with a lot of stuff, and I have no idea what the future holds for me, but right now, this is super awesome, and I’m really happy to be here.

Shawn, what’s in your work bag? Are there one or two unique pieces that you have to have with you on every show or something interesting that might be fun to share with our listeners.

A bunch of iLoks.

Oh, really?

Okay. Cool. I don’t know. Like I mentioned the telephone and microphones are something that I really like to use. I do have a few of those I travel with. I, like, on my drums, guitars, vocals, all this sort of stuff. So if I take a couple of things to a gig, probably that I mean, adapters and all that fun stuff. I like to have a good adapter kit. I like to buy cable if I need some more options of a mixing on an analog console or something, I’ll have y cables to Bolt the snare and some stuff like that.

I kind of ditched headphones a long time ago in the live realm. I use in-ears for checking anything when I'm mixing; I think the isolation you get from them is really helpful. So I have a few different sets. Greg was very helpful in getting me sorted out with some Ultimate Ears UERMs back in the day, and those are really great pieces that I used and trusted to do some of the mixing and listening for the Counting Crows live stuff. Then I would go back to the tour bus after the show and master the show on my in-ears, and double-check on a set of headphones.

Those are the kinds of things I travel with. Usually if I'm traveling somewhere, I have a Universal Audio interface and some Apollo Satellites so I have some of their processing for remote stuff, or if I'm working on a project, I can take it with me. A laptop. Usually if I'm going somewhere, I'm like, oh yeah, I need all this stuff, and then I don't use any of it. I drag around a lot of things. I toured for years with too much stuff, like hard drives and all these adapters.

And I mean, the thing these days is that my laptop now has USB-C, and I need adapters to get anything plugged into it. So it's the adapter-farm situation. But yeah, it's constantly evolving. I don't have, like, a set gig bag these days. It's just whatever I need for whatever I've got going on. If I'm coming to and from work, usually I'm taking my iLoks home and then bringing them back to work, and that's about all that I move back and forth, which is nice.

Sean, what’s one book that has been really helpful for you?

Well, I don’t know. I appreciate a lot of different books. I’ve been reading more lately. One of the books that I found had the most amount of indepth information about audio engineering is a book called Recording The Beatles that I got probably ten or twelve years ago, an acquaintance of mine, Sky, Brian Kwh, who works with the who is their keyboard tech, but also as a studio engineer remixing. Besides and outtakes, he goes through the archives and different record labels and does these weird releases. But put together this, like 20 year project of researching every single recording The Beatles ever did and what gear they use and how they bounce the tape down and all the gear from Abbey Road.

And it’s like this Bible. It comes in like a sleeve of an old tape reel. And so that’s like one of my prize possessions. And it’s like an amazing book. And I’m like, if anybody’s ever geeking out about something like here, check this book out. And it’s like, supposedly they’re worth a ton of money because they stopped paying them and they’re kind of pricey when they came out. But that’s one of the coolest books I’ve ever seen. And I love that reading chairman of the board, which is Bill Schnee autobiography, really famous producer and engineer.

I don’t know. I try and dig into some audio related books and some self help and leadership books and stuff like that if I’m trying to get motivated to do something. But other than that, if I see something that comes out or if I hear someone talk sort of chase down what they’re talking about and then read about it usually. Okay. Cool.

Shawn, do you listen to podcasts?

I do. Okay.

So I want to know: what are the one or two that you have to listen to every time a new one comes out in the audio field?

Working Class Audio, a podcast that interviews a lot of mostly studio engineers, but it's super interesting because it sort of removes a little bit of the technical side of things and talks about how people have navigated their career path, more like: how did you get through this financial situation? How do you deal with having a job and doing this? It's a really interesting perspective. I mean, obviously I love gear and I love geeking out about stuff, but it's also nice to hear how other people survive in this industry, how they work in this industry, and how they get work.

So I'm usually pretty religious about listening to that one. And then Russell Brand has a really interesting podcast called Under the Skin, and that's a weekly thing for me. I usually get into that. Man, it's all over the map who he interviews on that, and the topics could be anything. So it's super interesting. That's one that I listen to every week as well.

Maybe there’s something you can help me with.

Yeah.

When I listen to Working Class Audio, one thing that sticks out to me, or struck me compared to my own podcast and everyone else's, is how he manages to have really honest discussions about money. And as soon as I heard that the very first time, I was like, oh, I want to do that in my podcast, but I've never had the balls to do it. I don't know if I just can't figure out the right language, or I just come from a background where I have a tough time talking about that.

But I don’t mind asking you, like, how did you get that job or how did that thing work out for you in your life? But to say, can you tell me about the economics of being Sean Daly? I’ve never really figured out how to ask that. Well, so do you have any ideas? Like, how could I ask live sound engineers? How would you appreciate starting a conversation saying, like, how do you put together your financial life so that you can sort of survive and have the things that you want?

Yeah. How do you have somewhere to live and eat on a monthly basis?

I want to say, where do you get money? But this is really interesting for people, right? Because we all put our careers together in different ways, and that's actually a really interesting topic. Some people have other jobs, or they're landlords, or they're selling stuff on the side or whatever. And that's all really interesting. So it's like, how much of your life is getting checks from doing an audio gig versus something else?

Yeah. Man, to engage with that subject, you kind of have to have an idea of what you're getting into. I feel like on that podcast, he does a good job of sort of dancing around it while also engaging with it. So it's not like, well, how much money did you make on this? It's more like, were you successful in this, and how did it come together? And I mean, I have never really worked for a company. I've been my own company. I've been an independent contractor. I've always done stuff where I've had to negotiate and ask people for money and chase them down for invoices and all that stuff.

And so this is my first time having a job where I get a paycheck every two weeks. And trust me, it's awesome. But that being said, the full-time positions in the audio engineering field that are consistent salaried positions with health benefits and all that stuff are few and far between. So it's a touchy subject, because I think some people really have to grind and grind and grind to make ends meet, and some people have some success. And I think that's where the relationship aspect of all of this comes into play: who do you know that can get you your next gig?

How does that play into all of it? Without actually asking about money, I think those are the kinds of things: how do you get your next gig? How do you make sure you get paid? Those sorts of things. I don't know. Me, I've always kind of been bad with that, too. I'm not a great businessman, and I'm not super inclined in that direction. So for me, having the safety net of a company that's supporting me and behind me and paying me, that's a really amazing thing.

Whereas when you’re an independent audio engineer and you’re like, okay, cool. So I was in the studio today. I made, like, $300. And then I went and mixed at a bar, and I made $100 and some of its cash. And some of it goes to my business and all those sort of things, like the accounting for being an audio engineer is a mess. It’s tricky. You try and buy gear, you try and write stuff off, you sell something, all these sort of things that kind of come together to allow you to live and do what you like.

I think it’s a fine balance, but, yeah, I don’t know how to breach that subject in general capacities. I mean, some people are super transparent about it. Some people don’t like talking about it, but I think it’s a struggle. And I think that that’s something, too, that I see it’s difficult for me to speak at a couple of local colleges and stuff where people are in audio engineering programs. And it’s like, if you’re not already hustling, you got to start hustling, this is not going to work.

You’re not going to get work in this industry. And I think that that’s something that it’s not an easy field to get into. There’s a lot of competition. There’s a lot of people that want in on this and doing what I do. I think that it’s amazing. I love my job. There’s nothing I would I can’t think of anything else I’d rather be doing than what I’m doing right now. So that being said, I’ll fight for my gig. I’ll fight for what I need to, but people want to do this, and people are competitive with it.

I think a big part of it is the journey, and maybe the identification with: you've gone through the same struggle that I have. So maybe I could approach it by asking people about their business journey or their financial journey, because I know that for me, I spent most of my life going through cycles of going broke and just living from paycheck to paycheck. It was only in the last few years that I started getting enough money coming in that I'm not constantly worried about money.

And so maybe I could approach it that way and ask people: tell me about your financial journey. Because this is not a job where you can survive without understanding the business side of it, without understanding the economics of touring and the economics of shows. You have to have a grasp of that. And I think there are other jobs where you might go through your entire career and never really understand how the company makes money.

You just understand that you do your job and you get a paycheck, but it doesn't really work that way for us. So you're giving me some good ideas. I think that would be a thing to ask. And maybe we're already going pretty long in this interview, but if you just had, like, a short answer: could you tell me about a time in your life when you started out, and there was probably a period where you were just living from paycheck to paycheck, like, will I have enough money to buy food and pay the rent?

Was there a transition where you just had enough work coming in, or enough money saved, that having enough money wasn't the dominating fear in your mind?

I’m still waiting for that to happen.

This is where we introduce the GoFundMe for Shawn.

No, I’m good. But that being said, there was always a motivating factor for finding work, my touring career. I don’t think I ever regressed in my pay scale. So every tour that I moved to, I was making more money. So I was, like, growing my value, and people were recognizing that. So at least my touring world was consistently growing. Now, in the recording side of things, I was always an uphill battle of figuring out how much money people had and what I could get them to pay me to work for them.

And that was always tougher to negotiate. At least with a tour, I could be like, hey, I'm going to work for you guys, what is the pay? We'd negotiate a price, and then I'm locked in with that. And depending on what happened, I got pay raises and stuff like that. So that was always cool. But my driving factor was always, okay, cool, well, I need to pay rent at the studio, I need to pay rent at home.

I need to eat. What am I going to do? I've got to find work. And that was the thing that drove me. It's like, I've got to buy gear. Okay, I bought more gear than I should have, and I need to eat, and I still need to pay rent. So what am I going to do? Those things always pushed me to keep going and keep trying to find the next thing, like, okay, well, then selling gear is what we're going to do this month, and move a few pieces of gear that I don't use or that I actually didn't need. That's something else I learned later on in life: you don't need to own every piece of gear on Earth.

But those are the kinds of business decisions that I should have paid attention to early on, when I was trying to build a studio and buy a bunch of gear and own more stuff than I needed to. I could have saved money and established my future rather than putting it all into things that really didn't get me any more business. It wasn't as much about the gear you had as the attitude you brought to the table; that's something I learned later on in life.

But that was always the thing that drove me: I needed to continue to live, and I needed to continue to find work. And so I networked and made friends. I went to shows, found bands to record, and all of that sort of put me in the situation that I am in today. I got this job at Sweetwater through a connection that I made on the road. I have a career trajectory that I think I can link to three people for 15 years of work.

I met the people from the Avril tour when I was doing my first tour; that tour manager tour-managed the Goo Goo Dolls, and the production manager on the Avril tour did the Counting Crows. Those people got me work that lasted 15 years. Those kinds of connections, where you establish something, do a good job, and keep in touch, are pretty invaluable: a few connections who trust you and are willing to put their name on the line for you.

But yeah, that was not very short, but that's kind of where I'm at.

Shaun. Where’s the best place for people to follow your work?

SweetwaterStudios.com, under the Team tab. Check in with me there; my email is there. And the only social platform I'm active on is Instagram, at Shawn Dealy, with lots of photos of beer. Yesterday it was ADATs: I was transferring ADATs and hooking them up through Dante, which was kind of an interesting throwback, engaging with new technology and old technology at once, and it was actually quite simple. So that was kind of fun. But yeah, mostly some geeky pictures of studio stuff and the people that we're working with here.

So I like to try and share what's going on.

All right. Well, Shawn Dealy, thank you so much for joining me on Sound Design Live.

Appreciate it. Thank you for your time.

Do subwoofers need time alignment?

By Nathan Lively

It’s really important to get the low end right at live events. Research has shown that 3/4 of what people consider to be high quality sound comes from the low frequency content.

Subwoofers are a big part of that low-frequency content, supporting the main system and extending its capabilities. However, subwoofers also require careful setup and alignment to ensure optimal performance.

If you’ve ever had trouble getting your low end right, then you might want to read this article. It will explain why subwoofers need to be aligned properly and how to do it.

What is subwoofer time alignment?

Subwoofer time alignment is the compensation for arrival time differences between sources at the listening position. The difference in arrival times may be caused by a physical distance offset or an electronic delay. It is not frequency dependent.

The journey of sound from transmitter to receiver is not instantaneous. If two sources are separated by any distance, then their sound arrivals will also be separated. This is the common situation with mains in the air and subs on the ground. From the listener's perspective the subwoofer is closer and must therefore be delayed (or physically moved) to be time aligned with the main.

distance offset
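
To make the arithmetic concrete, here is a minimal sketch of the delay calculation in Python, assuming 343m/s for the speed of sound; the distances are hypothetical examples where your own measurements would go:

    # Sketch: delay the closer source (the sub) so both arrivals match.
    SPEED_OF_SOUND = 343.0  # m/s, air at roughly 20ºC (assumed)

    def arrival_time_ms(distance_m):
        """Travel time for sound over distance_m, in milliseconds."""
        return distance_m / SPEED_OF_SOUND * 1000.0

    main_distance = 12.0  # m from listener to main (example value)
    sub_distance = 10.0   # m from listener to sub (example value)

    delay_ms = arrival_time_ms(main_distance) - arrival_time_ms(sub_distance)
    print(f"Delay the sub by {delay_ms:.2f} ms")  # ~5.83 ms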

Does high frequency sound travel faster than low frequency sound?

In short, no.

The speed of sound in an ideal gas depends only on its temperature and composition. The speed has a weak dependence on frequency and pressure in ordinary air, deviating slightly from ideal behavior.

Wikipedia

To compare the speed of sound at different frequencies using the Rasmussen Report PL11b at 20ºC, 50% RH, 101.325kPa:

20Hz: 343.987m/s

20kHz: 344.1206m/s

From 20-20,000Hz the speed of sound changes by only 0.1336m/s.
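
To see how little that matters in practice, here is a quick sketch using the figures quoted above and a hypothetical 100m throw; the arrival-time spread across the whole audible band is on the order of a tenth of a millisecond:

    # Arrival-time spread caused by the frequency dependence above.
    c_20hz = 343.987    # m/s at 20 Hz (Rasmussen, 20ºC, 50% RH)
    c_20khz = 344.1206  # m/s at 20 kHz

    distance = 100.0  # m, example throw

    dt_ms = (distance / c_20hz - distance / c_20khz) * 1000.0
    print(f"Spread over {distance:.0f} m: {dt_ms:.3f} ms")  # ~0.113 ms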

What causes subwoofer time misalignment?

Subwoofer time misalignment can be caused by acoustic or electrical latency. Acoustic latency occurs when two sources do not have matched distances. Electrical latency happens upstream in the signal chain, often in a digital signal processor (DSP).

Unless the receiver sits equidistant from both sources, some amount of acoustic latency will always occur. Ignoring any boundary effects, imagine a situation where the entire audience stands with ears at 1.6m height. With a subwoofer on the ground and a main speaker at 3.2m height, both sources sit 1.6m from ear height, so there is no difference in distance from each speaker to the audience. Everywhere else, there is.
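
A quick sketch of that geometry, using the heights from the example above and a few example horizontal distances, shows the path lengths staying matched:

    import math

    # Ears at 1.6 m, sub at 0 m, main at 3.2 m: both sources sit 1.6 m
    # from ear height, so path lengths match at any horizontal distance.
    EAR, SUB, MAIN = 1.6, 0.0, 3.2

    def path_m(horizontal_m, source_height_m):
        return math.hypot(horizontal_m, EAR - source_height_m)

    for d in (5.0, 20.0, 50.0):  # example horizontal distances in meters
        print(f"{d:5.1f} m: sub {path_m(d, SUB):.3f} m, main {path_m(d, MAIN):.3f} m")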

Electrical latency can occur anywhere in the signal chain, but often appears when one source is processed separately and differently from the other. If two matching copies of a signal are sent to the main and sub, then there is no difference in latency, but if the signal for the sub is processed independently through ten plugins, then there will be.
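
As a sketch of how electrical latency accumulates, here is that ten-plugin scenario with made-up per-plugin latencies, converted from samples to milliseconds at 48kHz:

    # Hypothetical plugin latencies on the sub path only (in samples).
    SAMPLE_RATE = 48000
    plugin_latency_samples = [64, 128, 32, 256, 64, 0, 512, 128, 64, 32]

    total_ms = sum(plugin_latency_samples) / SAMPLE_RATE * 1000.0
    print(f"Extra latency on the sub path: {total_ms:.2f} ms")  # ~26.67 ms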

How much latency or misalignment is too much?

When a main is stacked on top of a sub we don’t usually worry about the acoustic latency. When the sends for main and sub are split in a DSP we also don’t usually worry about the electrical latency.

Why is that?

Acoustical latency

The wavelengths of low frequencies are relatively large, so it takes a big offset before misalignment bothers us. For the purposes of this article I will define a significant misalignment as anything beyond 60º, or 17% of a cycle, because at that point two equal-level signals sum to about +4.8dB instead of the ideal +6dB.
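
For two equal-level coherent signals, the summed level follows 20·log10(2·cos(offset/2)), so you can check these numbers yourself with a short sketch:

    import math

    # Summation of two equal-level, coherent signals vs. phase offset.
    def summation_db(offset_deg):
        return 20.0 * math.log10(2.0 * math.cos(math.radians(offset_deg) / 2.0))

    for deg in (0, 60, 120, 179):
        print(f"{deg:3d}º: {summation_db(deg):+7.2f} dB")
    # 0º: +6.02, 60º: +4.77, 120º: 0.00, 179º: about -35 (cancellation)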

How far apart do our speakers need to be to create a 60º misalignment?

The operating frequency range of a Meyer Sound 750-LFC is 35-125Hz. The highest frequency has the shortest wavelength and therefore the greatest risk. The wavelength of 125Hz is 2.75m, about the height of a male ostrich. 17% of 2.75m is 0.46m, about the length of your forearm.

If we return to our example of an audience with ears at 1.6m and a sub on the ground, then the main would need to rise to 5.15m to be 0.46m farther away than the subwoofer from the mic position.

offset
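
Here is the same wavelength arithmetic as a sketch, assuming 343m/s, so you can try other frequencies from your own sub's operating range:

    # Path-length offset that produces a given phase offset at one frequency.
    SPEED_OF_SOUND = 343.0  # m/s, assumed

    def offset_m(freq_hz, degrees=60.0):
        wavelength = SPEED_OF_SOUND / freq_hz
        return wavelength * degrees / 360.0

    print(f"{offset_m(125):.2f} m at 125 Hz")  # ~0.46 m
    print(f"{offset_m(35):.2f} m at 35 Hz")    # ~1.63 m, much more forgiving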

Don’t try to generalize this example into a rule. You could just as easily put the sub in the air with the main, but 0.46m behind it to create the same misalignment or change the microphone position.

It is difficult to generalize, unfortunately, because the relationship between source and audience will always be different. However, I can see how it is helpful to translate alignment to distance. This is why the SubAligner app includes maximum distance offset in the Limits pop-up.

limits

The opportunity here is that after you have performed an alignment at a single location, you can move out from that location in any direction while observing the change in distance offset to find the edges of the area of positive summation (aka the coupling zone, aka <120º).

Electrical latency

Matched electrical latency is maintained by splitting the send to main and sub at the last possible moment. This doesn't mean you can't mix to subs on a group in your console if you prefer; just make sure that the sends to main and sub leave the console with the exact same latency. You can verify this with an audio analyzer.

Time alignment vs Phase alignment

Subwoofer time alignment is often confused with subwoofer phase alignment because the two are interconnected. A time offset causes a phase offset, but a phase offset doesn't necessarily come from a time offset.
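
A short sketch makes the relationship concrete: a single fixed time offset maps to a different phase offset at every frequency, which is why the two terms can't be used interchangeably:

    # Phase offset produced by a fixed time offset, per frequency.
    def phase_offset_deg(freq_hz, time_offset_ms):
        return (360.0 * freq_hz * time_offset_ms / 1000.0) % 360.0

    for f in (31.5, 63.0, 125.0):  # example frequencies
        print(f"{f:6.1f} Hz: {phase_offset_deg(f, 2.0):6.1f}º for a 2 ms offset")
    # 31.5 Hz: 22.7º, 63.0 Hz: 45.4º, 125.0 Hz: 90.0º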

In most cases the timing is set to “align” two (or more) signal sources so as to create the most transparent transition between them. The process of selecting that time value can be driven by time or phase, hence the relevant terms are “time alignment” and “phase alignment.” These are related but different concepts and have specific applications. It’s important to know which form to use to get your answers for a given application.

prosoundweb.com

Time alignment connotes a synchronicity of sources, e.g., they both arrive at 14 milliseconds (ms). Phase alignment connotes an agreement on the position in the phase cycle, e.g., they each arrive with a phase vector value of 90 degrees.

prosoundweb.com

We have already seen how acoustic and electronic latency can affect time alignment. Let’s look closer at what can affect phase alignment.

What is subwoofer phase alignment?

Phase alignment is the process of matching phase at a frequency and location.

If a sine wave is generated starting at the 0º position of its cycle and then fed into a subwoofer, will it come out at 0º?

That will only tell us the story at one frequency, though. How can we look at the story of the entire operating range?

What does sound look like before it goes into a subwoofer?

This video compares the input and output of a microphone cable passing sine waves at 41, 73, and 130Hz with an oscilloscope. The electrical signal travels at nearly the speed of light, so the mic cable appears to create no time offset.

I could insert a video comparing the input and output of a microphone cable with an impulse response, but without anything in line, they look the same. I added a 1ms delay to put the IR in the middle of the graph.

This image shows the transfer function of a microphone cable with a magnitude and phase graph. The magnitude and phase traces are effectively flat: exactly what we want from a cable.

What does sound look like when it comes out of a subwoofer?

This video compares the input and output of a subwoofer passing sine waves at 41, 73, and 130Hz with an oscilloscope. I have removed any latency so that we can focus on phase shift created by the sub.

This video compares the input and output of a subwoofer with an impulse response (IR). The IR seems to get stretched out as the amount of phase shift changes over frequency. This is the normal behavior of a transducer whose group delay, and therefore phase shift, varies with frequency, making it unable to reproduce every frequency at the same time through the operating range.

This video compares the input and output of a subwoofer with a magnitude and phase graph. Unlike most full-range speakers, the phase response of a sub never flattens out. It’s a moving target.

Do all subwoofers have the same phase response?

A subwoofer’s response will change with its mechanical and electrical design. Matching drivers in different boxes may have quite different responses. Even the same combination of driver and box might have a small contrast in response because a typical manufacturing tolerance is ±5dB.

For this reason it is important to avoid making assumptions based on a manufacturer’s spec sheet, but instead measure the final product and prove it to yourself.

Does the phase response of a subwoofer change with level?

A cold subwoofer operating within its nominal range should maintain a steady phase response regardless of level. But as a sub approaches maximum SPL or begins to heat up, its response may become non-linear. This behavior varies from subwoofer to subwoofer, so it's important to avoid driving two different subwoofers with the same channel.

Unfortunately, I don’t know a rule of thumb to guide you, but it would make sense to compare the response of a subwoofer when it’s cold to when it is hot. When I worked at Amex in Slovakia and we were setting up a new system, Igor would punish it outside playing loud music for a few hours and listen to it afterwards.

Of course you can measure this change with your audio analyzer, but another fun test is to push on the driver with your hand when it's cold to feel how rigid it is. Run it at maximum level for two hours, then push on it again and feel how it has become less rigid (increased compliance).

Here is a graph from Heat Dissipation and Power Compression in Loudspeakers by Douglas J. Button showing a sample loudspeaker before and after full-term power compression. The solid line is the one with more heat and a worse quality rating.

Does the phase response of a subwoofer change over distance through the air?

…allow me to remind you that the loudspeaker’s phase response, within its intended coverage, typically doesn’t change over distance, unless you actually did something to the loudspeaker that invokes actual phase shift, i.e., applying filters of some sort which you should be able to rule out!

merlijnvanveen.nl

Here is the magnitude and phase response of a subwoofer measured at 1m and 100m. The only thing that has changed is the level due to the inverse square law.

1mV100m

Room interaction however, will make it appear like the loudspeaker’s phase response is changing over distance because the room makes the traces go FUBAR.

merlijnvanveen.nl

Here’s what that above measurement looks like if I enable four boundaries. At 100m the reflections have transformed the phase trace (blue).

1mV100mWithBoundaries

Where is the acoustic center of a subwoofer?

Why does distance offset not correspond exactly with phase offset?

All other things being equal, the distance offset measured from your microphone to your subwoofer may not exactly correspond to the phase offset measured in your audio analyzer. This is due to an interesting acoustical phenomenon documented by John Vanderkooy.

As a useful general rule, for a loudspeaker in a cabinet, the acoustic centre will lie in front of the diaphragm by a distance approximately equal to the radius of the driver.

J. Vanderkooy, "The Low-Frequency Acoustic Center: Measurement, Theory, and Application," AES Paper 7992 (2010 May).

This fact becomes important when estimating delay times for subwoofer arrays where a small distance in the wrong direction could compromise the results. It may also be important if you are attempting to estimate subwoofer phase delay from far away without prior access to its native response.
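
To get a feel for the scale of this effect, here is a sketch applying Vanderkooy's rule of thumb to a hypothetical 18-inch driver (the driver size is my example, not a figure from the paper):

    # Acoustic centre offset ~= driver radius (Vanderkooy's rule of thumb).
    SPEED_OF_SOUND = 343.0  # m/s, assumed

    driver_diameter_in = 18.0  # hypothetical example driver
    radius_m = driver_diameter_in * 0.0254 / 2.0

    print(f"Offset: {radius_m:.2f} m, or {radius_m / SPEED_OF_SOUND * 1000:.2f} ms")
    # ~0.23 m, ~0.67 ms: enough to matter at sub-array scale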

What is the subwoofer crossover frequency?

A subwoofer's recommended crossover frequency may appear on its spec sheet, but when it comes to subwoofer alignment in the field we must look beyond a single frequency to the entire crossover region affected by the alignment. To make an exaggerated theoretical example, imagine turning the subwoofer up by 100dB: the crossover region where the two sources interact would also move way up in frequency.

crossover
crossover+50

The crossover region is commonly defined as the range where the magnitude relationships are within 10dB, because there you have the highest risk of cancellation and the highest reward of summation. To find this region in your audio analyzer, insert a 10dB offset and find the magnitude intersection. Some audio analyzers offer other tools, like cursors.
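
Here is a minimal sketch of that search, using made-up magnitude traces for a sub and a main; the band where the two are within 10dB of each other is the crossover region:

    # Find the band where sub and main magnitudes are within 10 dB.
    freqs   = [40, 63, 80, 100, 125, 160, 200]  # Hz
    sub_db  = [0, 0, -2, -6, -12, -20, -30]     # made-up trace
    main_db = [-30, -18, -10, -5, -3, 0, 0]     # made-up trace

    region = [f for f, s, m in zip(freqs, sub_db, main_db) if abs(s - m) <= 10.0]
    print(f"Crossover region: {region[0]}-{region[-1]} Hz")  # 80-125 Hz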

What causes subwoofer phase misalignment?

The most common cause of subwoofer phase misalignment is user error. This may seem like a bold or aggressive claim, but manufacturers have historically placed this responsibility on their customers.

There are many subwoofers in the world and only a small number of them have detailed instructions on phase alignment within a narrow set of limitations. The rest require the user to discover an optimal alignment for themselves. This is further complicated by the fact that reflections can make measurement and listening tests misleading or impossible when performed under typical field conditions.

We saw above that what comes out of a subwoofer is not what goes in, due to system latency and phase shift. Some products take this into account: they are specifically designed to work together and are phase aligned when equidistant, therefore only requiring compensation for any distance offset. Other products are designed to work together but are not phase aligned when equidistant. The third, and most common, scenario is that sound engineers like you and me end up combining products from different generations, families, and manufacturers that were never designed to work together.

I should pause here for a moment to say that I'm not passing judgment or pointing a finger. I'm not aware enough of all the conditions to say why things are this way, just that the complications exist. And honestly, I enjoy the puzzle. See any of the videos on my YouTube channel from the past couple of years for evidence. 🙂

What are the consequences of subwoofer phase misalignment?

Let’s ask Nexo.

Consequences of badly aligned systems
Mis-aligned systems have less efficiency: i.e. for the same SPL you will be obliged to drive the system harder, causing displacement & temperature protection at lower SPL than a properly aligned system. The sound quality will decrease. The reliability will decrease as the system is driven harder to achieve the same levels. In certain situations you may even need more speakers to do the same job.

NXAMP4x1 User Manual v3.1

Do subwoofers need time alignment?

Yes, subwoofers need time alignment any time there is a distance offset creating acoustic latency. They also need phase alignment whenever they are combined with another source that is not already phase aligned when equidistant.

Do not assume that your main and sub are phase aligned when equidistant just because they came from the same manufacturer. If the relative phase offset is left to chance, one third of the possible offsets (anything beyond 120º) will produce cancellation instead of summation.

How do you time and phase align a subwoofer?

Although there seem to be many methods, I have only ever found one that works reliably and has all three supposedly unobtainable characteristics: fast, cheap, and good. It may sound like I'm about to go into some wild conspiracy theory you've never heard of, but the method I use is also recommended by L-Acoustics, d&b audiotechnik, RCF, and Coda Audio (and probably more). It involves two steps: first in the phase domain and then in the time domain.

  1. Create a relative equidistant alignment preset using filters, delay, polarity, etc. (this is the fun part)
  2. Modify that preset in the field using the speaker's absolute distance offset, by adjusting the output delay time or physical placement (see the sketch below).
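
As a sketch of step 2, assuming a hypothetical preset delay from step 1 and example measured distances, the field adjustment is just the absolute distance offset converted to time and added to the preset:

    # Step 2 sketch: add the measured distance offset to the relative preset.
    SPEED_OF_SOUND = 343.0  # m/s, assumed

    preset_sub_delay_ms = 4.2  # from the equidistant preset (hypothetical)
    main_distance_m = 30.0     # measured listener-to-main (example)
    sub_distance_m = 26.5      # measured listener-to-sub (example)

    offset_ms = (main_distance_m - sub_distance_m) / SPEED_OF_SOUND * 1000.0
    print(f"Sub output delay: {preset_sub_delay_ms + offset_ms:.2f} ms")  # ~14.40 ms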

The method goes by various names, but I'll give Merlijn van Veen the credit for the Relative Absolute Method, since he introduced the idea to me. I then packaged the idea into an app called SubAligner. It not only includes alignments for many major brands, but covers a total of 39,183 possible combinations between different brands.

How do you verify subwoofer alignment?

How do you know if you’ve done it correctly?

A listening test should reveal higher SPL and a tighter response around the crossover region. SubAligner offers a black and red pulse to focus your ears in the right area.

An audio analyzer should show matching phase response between each speaker and expected summation in the magnitude response through the crossover frequency range. Appropriately filtered IR peaks should be aligned.

All of these methods should work, but they can be ruined by reflections. In those worst-case scenarios, I still rely on the Relative Absolute Method, because I'd rather use something I know to be true than speculate on what might be true. I have written more about this in Don't Align Your Subwoofer to a Room Reflection and Can you remove reflections from live measurements for more accurate alignments?

Have you tried this method? What were your results?

Acknowledgements

I want to thank Francisco Monteiro for the feedback and patience with my many questions and misunderstandings.

