Sound Design Live

Build Your Career As A Sound Engineer


What happens if you’re not familiar with a speaker?

By Nathan Lively

Subscribe on iTunes, SoundCloud, Google Play or Stitcher.

Support Sound Design Live on Patreon.

In this episode of Sound Design Live I’m joined by Manager of Technical Services at GerrAudio Distribution, Ian Robertson; independent consultant and designer of AV systems, Arthur Skudra; and former vice-president, now board member, of the AES for the Latin American Region, César Lamschtein. We discuss career advice coming out of the pandemic and how Tracebook can support your work in the field when you run into speakers you’re not familiar with.

Tracebook is an independent public non-profit community that promotes the open exchange of loudspeaker system reference data measured by audio professionals for audio professionals.

Notes

  1. A Downloadable Speech Track… The Royer Track
  2. Tracebook Measurement Procedure
  3. Tracebook Forum
  4. Quotes
    1. Keep that hunger to learn.
    2. It’s nice to know what the room is contributing versus what the loudspeaker naturally is doing by itself or what the other loudspeaker is doing and contributing in the context of that measurement. So having a baseline measurement of a loudspeaker is a highly valuable thing.
    3. It can help me select the correct speaker or select complementary speakers to go on a particular job site.
    4. One thing that Tracebook gives you and no manufacturer gives you is the ability to mix and match different brands of speakers.
    5. The first discussion is…verification. A lot of people say, “Oh, ours is fine.” Nine times out of ten, I find something wrong. But what happens if I’m not familiar with the speaker? And I don’t know what I’m walking into, how do I know exactly? And Tracebook is a great resource for exactly that.
    6. I think this is really a prime opportunity for any rental house to measure their entire inventory. And that way you have a baseline of what those speakers should be doing.

Transcript

This transcript was automatically generated. Please let me know if you discover any errors.

[00:00:24.490]
I’m Nathan Lively, and today I’m joined by the manager of technical services at GerrAudio Distribution, Ian Robertson; independent consultant and designer of AV systems, Arthur Skudra; and board member of the AES for the Latin American region, César Lamschtein. Ian, Arthur, César, welcome to Sound Design Live. Okay, so I just gave a tiny snippet of something that you guys are known for, but I’d love for you to talk a little bit more about your background and where you are in the world today so that people can relate to you and your experience. So maybe I’ll go to Ian first.

[00:01:01.240]
I’ll give you the short-form history. 1970 is when I started all this, fiddling around with audio equipment. When I was in high school, a couple of friends of mine and I had a moderately successful DJ outfit that we ran around the high school dances and church dances and things like that. Later I started a production company with a couple of friends, went freelance, and worked as a touring sound engineer. After I finished up touring, I had children, which brought me away from the road, and I took a gig at a production company on the East Coast. I hail from Halifax, Nova Scotia originally, so I did that until the mid-2000s. And then I made the move up here to Brockville, Ontario to work with GerrAudio Distribution. As a company, GerrAudio represents a variety of respected professional audio and communications equipment: test and measurement equipment, pro audio equipment, consoles, speakers, et cetera. The primary function I hold here is technical support. I do a lot of system design and commissioning and education, basically for our product lines.

[00:02:16.320]
Awesome. And so I know that you’re teaching digital consoles, Smaart... anything else?

[00:02:22.090]
Meyer Sound. I do training in MAPP 3D.

[00:02:26.750]
How excited are you about the Panther?

[00:02:30.290]
Quite.

[00:02:31.120]
Yeah.

[00:02:31.570]
Very. Yeah. I think it’s going to be a really fantastic Swiss Army knife type of enclosure that’s small enough, light enough, and portable enough to do mid-size gigs, yet have enough horsepower to do real full-size shows, stadium shows.

[00:02:49.390]
Cool.

[00:02:50.290]
I think it’s going to mean a lot of people can invest in a particular loudspeaker and use it across a wide variety of productions.

[00:02:59.030]
I immediately messaged Meyer Sound and I was like, is this in MAPP 3D already? And they’re like, no.

[00:03:05.210]
Okay.

[00:03:05.850]
So I don’t know when that’s planned to come out, but I’m excited to play with it in MAPP 3D. Okay, let’s go to César. So, César, tell us a little bit more about yourself and where you are in the world today.

[00:03:17.030]
You may have the Panther in Tracebook before I have anything. That’s possible.

[00:03:22.440]
If someone has it, they could measure it. That’d be great if they could upload it.

[00:03:25.530]
Okay. Now, I am for one more week in my hometown, Montevideo, Uruguay, in South America. I have to correct you: as of a month ago, I am no longer the vice president for the Latin American region of the Audio Engineering Society. I have been thrown out because I was not very good. So after four years of taking on the duty, I passed the torch to Jorge Sama from Lima, Peru, who is in charge of the office today. I am now in fact a member of the board of directors of the AES. So I guess I am a director, which seems nice. It feels important.

[00:04:12.230]
Okay, alright, thanks. And then, Arthur, what about you? Where are you? And tell us a little bit about your professional experience that might help people get to know you.

[00:04:22.660]
Oh gosh. I’ve been an independent consultant for 20 years now, here in Canada, based out of Hamilton, which is about an hour west of Toronto, just to give a perspective of where I’m located on this planet. I’m an independent audio visual consultant. My realm is more in the install industry, although I do occasionally do live shows and concerts, things like that. But really my realm is more in the install business and commercial audio. It’s kind of hard to pigeonhole what I do and where I am and what projects I’m working on, because one day I could be working on an airport paging system, the next day I could be doing a performing arts hall, and another day I might be working on a house of worship. My background actually started in the house of worship market, as a full-time tech director for two huge megachurches. I was in charge of all the technical aspects of 4,000-5,000 seat auditoriums. So I’m well aware of all the challenges of that world: making things sound right, appeasing some very demanding circumstances in terms of trying to make a sound system work, also these very picky ears, and taking care of a lot of people complaining and trying to make everyone happy.

[00:06:00.420]
It’s not easy, and anyone that thinks church is a walk in the park is sadly mistaken, because you’re melding the two things together. You’re not only trying to make things musical, but also very intelligible. The music is just as important as the spoken word, and designing a sound system that does both is particularly challenging, and tuning and optimizing the system to do both is also very challenging. It’s always finding the right compromises. Lately I’ve been doing all kinds of interesting projects on the install side of the business, and with COVID I’ve been doing a lot more remote work. That’s a first. Normally I insist on being there in person to commission a system, but COVID kind of forced my hand. Now I actually have a couple of portable kits that I fly around, essentially a full measurement rig in a Pelican case, and I can quite literally go in and remotely optimize and program a system. So I’ve been doing a lot of that, Nathan, much to my chagrin. I still want to be there in person to actually hear the system. And I think the opportunities are really ripe right now to get back into the AV industry if you’ve taken a break.

[00:07:33.970]
Thanks, Arthur. I didn’t know this about you with the House of Worship. I may have some personal questions for you later about some challenges I’m dealing with.

[00:07:40.940]
It’s a very unique market and it’s quite unlike what we’re used to in live sound. It’s very different and lots of unique demands.

[00:07:53.870]
Once you get a sound system set up for the first time, what’s one piece of music you want to play through it?

[00:07:59.550]
Yeah. There’s one in particular that I tend to rely on. It’s a tune by Chris Jones called Sanctuary, and it has a really nice low end and it’s a really well produced track. But in particular, the thing that I like about this track is that his vocal mix and his vocal tonality cover a very broad range. And when you get the low mids of a system nicely in line with the upper registers of the vocal range, there’s this lovely magic that happens with this track when it’s right. It just falls in, and it’s right, and it’s really good, because if it’s wrong, you also know it’s wrong.

[00:08:42.680]
Okay, César, what about you?

[00:08:44.610]
I would say that my tune that is always good on any system, when I need it, I pull out my joker card, which is Streetwalker by Michael Jackson. It has great loudness potential. The arrangement is so well done that when there’s bass, there’s nothing else. When there’s the drum, there’s nothing else. When there’s the voice, there’s nothing else harming it. So it’s like having a system per musical component, so clarity is perfect. Bruce did a great job with that.

[00:09:18.130]
Great.

[00:09:18.700]
Good.

[00:09:19.140]
Arthur, what about you? What’s something you like to play on a system once it’s set up?

[00:09:23.040]
If I’m setting up a speech system, I have a couple of speech tracks, of course, the ubiquitous SynAudCon speech test tracks that you can download for free.

[00:09:34.940]
Modern electroacoustics began in 1915.

[00:09:38.890]
Those are really helpful, but one in particular that I play back quite often is a Garrison Keillor track from A Prairie Home Companion called Giant Decoys.

[00:09:51.610]
Okay.

[00:09:52.060]
There were some men in Lake Wobegon who were having a high old time, I noticed, last week. And I’m talking about the Sons of Knute up at the Sons of Knute Temple. They were busy all week down in the basement building duck decoys for duck hunting season, which starts in just a little bit, which is such a big deal for all those old guys.

[00:10:10.240]
I love the voice of Garrison. It has that deep rumble to it as well as the presence, and it’s just fantastic for setting up sound systems. One thing: when you’re playing music through a system, your ears do tend to be a forgiving instrument in terms of any faults that are revealed in the loudspeaker deployment. If you throw a speech track through the system, your ears are a lot more attuned to whether something is right or wrong. And I find that to be absolutely invaluable to set up a system, especially if intelligibility is important. Run some speech through it and listen to it really carefully and critically. But other than that, for music, gosh, I have a couple of Toto tracks. Of course, Steely Dan is the tried and proven sound test track. Gaslighting Abbie is fantastic, especially if you’re trying to pick out reflections or delays. If you’re trying to time-align delay fills, that snare drum in Gaslighting Abbie is absolutely golden for being able to hear timing issues in the system.
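As an aside on timing those delay fills: here is a minimal sketch of the underlying arithmetic, assuming sound travels at roughly 343 m/s. The distances below are made-up example values, not figures from the interview.

```python
# Rough sketch of the arithmetic behind timing a delay fill to the main PA:
# the fill gets delayed by the extra time sound from the mains needs to reach
# the listeners it covers. 343 m/s and both distances are assumed example values.

SPEED_OF_SOUND_M_S = 343.0   # at roughly 20 degrees C

def fill_delay_ms(main_to_listener_m: float, fill_to_listener_m: float) -> float:
    """Delay to add to the fill so it arrives in time with the main PA."""
    path_difference_m = main_to_listener_m - fill_to_listener_m
    return 1000.0 * path_difference_m / SPEED_OF_SOUND_M_S

print(f"{fill_delay_ms(40.0, 6.0):.1f} ms")   # about 99.1 ms for this geometry
```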

[00:11:40.450]
Okay, we are going to talk about Tracebook here pretty soon. If you don’t know what Tracebook is, you’re going to be excited to hear about what it is. But I do want us to talk for a little bit about career advice. You guys have been in the industry for many years. Currently, there are a lot of people who are struggling to find enough work or to get back into work coming out of the pandemic. And anytime in the future, there are going to be people listening to this who are working on finding more work or just finding a better fit: the work that they really love, that they are really good at. So I want to know from each of you one piece of career advice that you would like to share with people listening who might be struggling right now to find enough work because of the pandemic. So I’ll go to Ian again first. I only asked you guys for one track, and then you each gave me ten, so it’s hard to stop.

[00:12:32.880]
I think there are a couple of things. Part of it is related to the pandemic, and the other part is: what about the regular world when we’re not in pandemic times? So from what I’ve seen, I guess the piece of advice would be: be flexible. Take this opportunity to learn some new skills, learn some different equipment, maybe not even audio. There are a lot of companies that we’ve seen that have made a shift to hosting streaming productions, particularly for large corporations. So what would normally be an AGM, and all the money that gets paid to fly people into major centers and put them in hotels and feed them, that money has become available to do these events virtually. And there are a few companies that I know quite well that have done an excellent job at making that shift to hosting virtual events. They have basically television production facilities set up in their warehouses, and they’re running these events. So that’s one thing.

[00:13:39.340]
Yeah. If you already know how to work on shows, it’s not a huge deal to switch from doing the audio part of it to doing another part of it that might be in higher demand right now because of the livestreams.

[00:13:49.530]
Exactly. You might not get to mix your favorite band. You might be on a corporate gig, which may not be a favorite spot, but you’re still paying the bills. So that’s one thing that a few of my friends have had some good success with. In the non-pandemic times, if you will, or the regular times, on gigs, or really all the time, one of the suggestions I have would be the importance of keeping it simple. Balance your keen interest in the latest fancy new thing against the complexity and the risk it might create as you implement it. I’ve seen a lot of folks get themselves in trouble. Basically, I do tech support, and I’m the guy on the other end of the phone that people are talking to when it’s not working and it’s an emergency. It’s because somebody has overreached, basically.

[00:14:46.090]
Can you give me a specific example? What’s something that someone has tried to what’s the latest new thing that someone tried to install that didn’t work out?

[00:14:54.430]
It’s hard to put a finger on something like that because you don’t want to speak ill of any particular technology. But I’ve seen people make a lot of mistakes in digital networking.

[00:15:07.570]
Okay, so maybe they switched from completely analog transmission to we’re going to switch over to Dante, and it’s a mess or something like that.

[00:15:14.940]
Basically, you’re either sending signals out of a console over a piece of Cat 5 to another piece of hardware and then back again, or you’re doing a bunch of different things. And there are some benefits to doing that. But just keep in mind that whenever you do something like that, you’re adding another failure point. And the kind of goofy example, for lack of a better one, that I like to throw at people sometimes: if you have to run a 100-foot mic cable, do you go to the bin and take out a 100-foot mic cable, or do you take out ten 10-foot mic cables and plug them together? You’ve got nine more points of failure if you use the 10-foot mic cables.
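A minimal sketch of Ian’s cable arithmetic, assuming every mated connector pair counts as one potential failure point. The function name and the 100 ft / 10 ft figures simply mirror the example above.

```python
# Hypothetical illustration of the cable arithmetic: every mated connector
# pair in the run is counted as one potential failure point.

def failure_points(run_length_ft: int, cable_length_ft: int) -> int:
    """Connector junctions in a run built from equal-length cables.

    A run made of n cables has n + 1 connection points: one at the source,
    one at the destination, and n - 1 cable-to-cable joints.
    """
    n_cables = run_length_ft // cable_length_ft
    return n_cables + 1

single = failure_points(100, 100)   # one 100 ft cable  -> 2 points
daisy = failure_points(100, 10)     # ten 10 ft cables  -> 11 points
print(daisy - single)               # 9 extra points of failure
```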

[00:15:54.860]
Small anecdote. I used to work for a company in the Bay Area where the guy didn’t own any 100-foot mic cables. He just always bought 10-foot mic cables, exactly what you’re saying, and we would just string them together. And it hurt me so much.

[00:16:07.210]
Whenever you’re in a heated troubleshooting situation and you’re under the gun, feet to the flames, and you have to figure out what’s wrong, the fewer variables that you have in your mind, the faster you’ll be able to solve the problem. It’s a game of minimizing the variables and being able to come up with the answer quicker, because you don’t have seven or eight different things to consider. There are only three things to consider, and you can get through those three things pretty quickly in your mind or in your methodology.

[00:16:41.190]
Awesome. I think that will connect well with what’s coming when we get back to talking about Tracebook. So César, what’s one piece of career advice?

[00:16:48.750]
As a pilot says, you have to fly the airplane from the nose, or else the airplane is flying you. What I mean regarding this: I have always worked a lot in preproduction, because I love it and because for me, everything is done beforehand. It’s the same for cooking, the same for everything. Everything is done beforehand; doing it is just assembling things already done. So when you have taken care of everything, usually there are no problems to be solved. Everything runs smooth. Music is the king. You don’t have any more technical issues. Shit happens, but just imagine what can happen if you didn’t do the preproduction work. It would be a nightmare. Work before the gig. That’s my advice.

[00:17:48.460]
Thanks, César. What about you, Arthur?

[00:17:51.270]
Gosh, there are just so many things that I can think of, but I think if anything, just keep that hunger to learn. I think that’s really key to being relevant and staying relevant in our industry. Things are changing so fast, and I think there are a lot of us that have been dragged kicking and screaming into this whole new audio networking craze that has pretty much taken over the industry. And let’s face it, yeah, analog is great, but I think that ship has sailed and it’s still sailing. Networking of audio and video, all the AV systems now, is all going through network switches. And whether you like it or not, that’s here to stay and it’s going to continue to advance more and more. I think if anything, the adaptability you can present, in terms of learning how to harness this technology and being able to deploy audio networks properly, is absolutely key. And I know in my work and in installs, we’re dealing with it every single day. In fact, yesterday we wasted so many hours because on one project that I’m working on, they never bothered setting up the network switches properly and all that.

[00:19:22.830]
And quite literally we were just baffled, because there were IEMs on the network that were dropping offline. And yeah, we reset the switch, put in the QoS and everything like that. But it was so troublesome: getting all the straggling pieces that still had the old network credentials corralled into the new network setup was just an absolutely horrendous mess. And the earlier you learn how to do all this and do it right, the better off you’re going to be. And there are so many seminars. Yeah, one thing about COVID is I got seminared out. I got absolutely fed up watching podcast after podcast after seminar after webinar. It just really got tiring after a while, and it was like webinar overload. But there are some really good seminars out there that are free for the taking. And you can learn so much about networking, and being able to harness those things and getting those skills is just absolutely invaluable. Whether you’re doing live sound for a tour or doing a big, huge installation in a hospital or an airport or whatever, those skills are so transferable, and they make you that much more valuable as a person to be hired onto a job.

[00:20:56.750]
And I think if you want job security, face the fact that you’re going to have to get some data networking chops behind you. Otherwise you’re not going to be able to stand out from the rest. So that’s my piece of advice.

[00:21:14.610]
Thanks, Arthur. So I’ve been teasing that we’re going to talk about Tracebook this whole time, and we’re finally going to get to it now. So what is Tracebook? Tracebook is a website. It is a community, an online community for the exchange of loudspeaker reference data. So the first question that I have for the three of you is, why is this important to you? Like, why did you get involved? Why are you here today? Why do you care? And what’s important to you? Why is Tracebook important to you?

[00:21:43.960]
One of the things that I try to impress on people whenever I’m doing a class is the importance of verifying that things actually work correctly before you start making decisions based on the measurement data that you’ve captured. So Tracebook is a way to help you do that. I think that it’s valuable and important for people to have access to good quality, actionable data. They can take a loudspeaker that they’ve never encountered before, or one that they have encountered before but they’re not sure is working correctly, and they can go grab the measurement off of Tracebook, repeat that measurement procedure, and they should get pretty much exactly the same result. It may not be perfect because of the environment that they happen to be working in versus the way the measurement was originally taken, but it should be pretty damn close. And that gives you some confidence that the piece of equipment that you have in your hands or sitting on the floor or whatever is actually a good piece of equipment: it’s wired correctly, it doesn’t have a damaged low driver or high driver or whatever, the preset is correct, all the different variables that might pop up relating to a particular given loudspeaker.

[00:23:02.540]
You can cover them off and you can go, okay, I’m now confident that this is a good quality loudspeaker. And then when it comes to looking at a loudspeaker in situ, when you stick a microphone up in front of a loudspeaker that’s sitting in the corner of a room somewhere, whenever you measure something like that, you’re not only measuring the loudspeaker, you’re measuring the interaction between that loudspeaker and all the boundaries in the environment that it’s in. Or you’re measuring that loudspeaker in combination with maybe some other sources, other loudspeakers that might be on as well. So if I need to address something, it’s nice to know what the room is contributing versus what the loudspeaker naturally is doing by itself, or what the other loudspeaker is doing and contributing in the context of that measurement. So having a baseline measurement of a loudspeaker is a highly valuable thing.
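A rough sketch of the verification idea Ian describes: compare a field trace against a baseline and flag the bands that deviate too far. The array layout and the 6 dB tolerance are assumptions made for illustration, not Tracebook’s actual data format or acceptance criteria.

```python
import numpy as np

# Minimal sketch of "verify against a baseline": compare a field magnitude
# trace against a reference trace and flag bands that deviate by more than a
# tolerance. Frequency grid, array format, and the 6 dB tolerance are assumed.

def flag_deviations(freqs_hz, field_db, reference_db, tol_db=6.0):
    """Return (frequency, deviation) pairs where |field - reference| > tol_db."""
    field_db = np.asarray(field_db, dtype=float)
    reference_db = np.asarray(reference_db, dtype=float)
    delta = field_db - reference_db
    bad = np.abs(delta) > tol_db
    return [(f, d) for f, d, b in zip(freqs_hz, delta, bad) if b]

# Example: a deep notch around 2 kHz (say, a miswired HF driver) stands out
freqs = [125, 250, 500, 1000, 2000, 4000, 8000]
reference = [0, 0, 0, 0, 0, 0, 0]        # flattened reference trace
field = [0.5, -1, 0, -2, -14, -3, 1]     # suspicious dip at 2 kHz
print(flag_deviations(freqs, field, reference))  # [(2000, -14.0)]
```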

[00:24:02.710]
Yeah, for me, that’s super relatable. We have all been in that situation, especially at the beginning, but even when we forget, or we’re in a hurry, or we’re doing a last-minute gig. You walk into a room and you’re like, okay, let’s get things set up. Get my audio analyzer set up, take a measurement. Oh, wait, I’m just looking at data in the abstract. I have nothing to compare it to. What am I looking at? Is this correct? Is this incorrect? Why does it sound this way? Is my audio analyzer even set up right? All these questions come up that could really be helped by having some sort of comparison. So I appreciate you bringing that up.

[00:24:39.570]
I guess the other part of that is if you’re in a shop, and you’re either in charge of a bunch of loudspeakers in a shop somewhere or you actually own them, you can verify that your stuff is not broken. But also, if you’ve been handed a pallet, okay, you’re going to go and do this show and you’ve got these four different loudspeakers that are going to get pulled off the shelf and put in the truck, and you’re going to go to a show. Are they going to work together? Do they have complementary phase characteristics? If I’m going to put an infill and an outfill with a big speaker on a stack, am I going to be able to make the marriage between those two speakers in the acoustic crossover area? Is that going to be nice or is it going to be nasty? It can help me select the correct speaker, or select complementary speakers, to go on a particular job site.

[00:25:32.330]
That’s really interesting. I ran a poll recently on Twitter and YouTube asking people: do you typically assume that two loudspeakers from the same manufacturer are phase compatible with each other, or do you typically assume that they’re not until proven otherwise? And I thought pretty much everyone would say that they assume they’re compatible, but 75% of people said that they assume they’re not compatible. I think people understand this problem that you’re describing, which is that this should be the first question that comes into your mind: oh, wait, you want me to set up two speakers together? How do I know that they’re going to play nicely together, how do I prepare for that, and how do I prove that? Being able to do that in preproduction using Tracebook sounds like a really valuable tool. César, what about you? Why is Tracebook important?

[00:26:26.240]
One thing that Tracebook gives you and no manufacturer gives you is the ability to mix and match different brands of speakers. You talk about a single one, but that’s not the reality for a lot of us. You are sometimes given something that was not meant to work together, and you can have data that extends what you normally use to predict performance, whether that’s MAPP for Meyer Sound, or Soundvision for this, or Rainbow for that, or EASE, etcetera. There you have actual, vetted data that you can use regarding different things where you can mix and match. I think that’s pretty valuable from one point of view of interest in Tracebook. The other one is exactly the contrary. It’s a place, on the learning side, where you feel empowered to give, to share. Am I measuring okay? Is this good enough? Etcetera. I think that you get a boost of confidence in how you measure when some clever guys like those that are there scrutinize your measurements, and you follow the guidelines and you understand the guidelines and you go through this process for getting accurate data. And at the same time it also helps you get this other thing, this obscure, arcane knowledge of what is accurate enough, so you lose some fear.

[00:28:09.590]
It’s okay, we all agree, and I don’t know if he or Nathan or Merlijn or Arthur says it, so it’d be okay. You can do a measurement in this situation, you just take care of this, take care of that, no more than ten degrees, etcetera, blah, blah, blah. So I think it helps you benchmark your measurements, your own measurements, your own way to measure, and to know how you get better at it. Basically, that’s it.

[00:28:41.510]
I think it’s pretty common that AV companies will reserve a sound system for the show they’re going to work on, but then for the little pieces, the front fills, side fills, delays, whatever, they might just use whatever is available at the time. They might be companies that are large enough or busy enough, or just have a bunch of disparate systems, that whatever shows up is what shows up, and then it’s our job to make it all work together. And so the better we can predict how that’s all going to go down, the better. Arthur, what about you? Why is Tracebook important?

[00:29:12.260]
I think it’s a great database to be able to do reality checks with. Oftentimes I get thrown into commissioning systems where I don’t know what the history is behind that system prior to me showing up on site and being told to make something work here. Classic example. I had a house of worship, and they had a center cluster of two identical speakers, passively crossed over and sharing the same signal on the power amp. So one amp channel powering two speakers in a cluster, each with a passive crossover, two loudspeakers from X manufacturer. I come in, and the first thing I do whenever I come into an unfamiliar system is I’ll put a speech track through the system and a music track, and I’ll just walk the room, listen to the system, and make some mental notes on what’s going on with that sound system. So I did that, and I started walking the coverage, and everything sounded fine on one side of the room. And then I went to the other side of the room that was covered by the other speaker, and things just didn’t sound right.

[00:30:34.770]
It was hollow. It was weird. And the transition between the two speakers was just really very odd. So out comes my Smaart rig, and I start measuring, and I’m going, okay, what’s going on here? And unfortunately, I didn’t have the ability to turn one speaker off and just listen to one speaker by itself. Here I am, the speaker is 30 feet up in the air, I don’t have any access to it. So I called the contractor and said, what happened to this system? "We had a high frequency driver go out on it," and the alarm bells started going off. So I took some measurements of the system, and then I called up the manufacturer, and I knew the engineer there personally, and I shared my Smaart traces with him. And I said, is this real or is this not real? Now, Tracebook obviously didn’t exist at that point in time, but if I had had these traces available then, it would have saved me a phone call to the manufacturer, going through X number of different levels of support to get my question answered. When I’m on site, the clock is ticking, and I’ve got to be done in X number of hours.

[00:31:59.830]
And here I am, faced with getting the system up and running right away, because they had a service coming in that night and I had to get everything figured out and fixed and everything like that. So I had to call them up. But if I had had Tracebook, I would have instantly recognized: okay, on this side, that trace looks plausible. On this side, the trace doesn’t look plausible. There is something really wrong. And it ended up that they had replaced a driver, and the silly contractor wired the high frequency replacement driver out of polarity, which is not that hard to do, because when you go inside a passive two-way box they’re going to use all kinds of different colors for the wires, and you don’t know whether yellow or green is plus or minus. It’s an honest mistake that contractors make all the time. Even manufacturers make that mistake coming out of the box. I’ve seen that happen, and it’s frustrating. That kind of resource is absolutely invaluable to be able to figure out what is going on. And I had to call the contractor, and they had to come out with a scaffold, set it up, and go up there and fix the speaker a second time.

[00:33:20.930]
And once they did that, then everything started falling into place. But I think Tracebook could prove its value over and over again when you find those problems, compare against what a good measurement should be, and establish whether something is correct or not.

[00:33:43.820]
I would love to hear some other specific examples from each of you. So, Ian, I’d love the story that you shared at Live Sound Summit.

[00:33:50.060]
Yeah, sure. That’s a good example, Nathan, because it’s not always a defective loudspeaker or a miswired cable. In this particular instance, it was a Meyer LYON rig. We went in to do the tuning on the rig, and the first thing I saw was six phase wraps in the LYON, and that’s a lot more than a LYON has when you take it out of the box brand new. So then the investigation started. It’s like, where? Why? What’s going on? There’s something wrong with my measurement rig. What is it? So ultimately I asked the question, is everything flat in the console? Because I was running through the console. And so I said, give me an output from the console and I’m going to do a transfer function of the console itself just to verify. Again, I hadn’t verified it on the front end. And sure enough, there were five phase wraps in the console, and it turned out to be a plugin that was inserted on the master bus of the console. There are a couple of ways to build those compressors, and one of them is to insert a bunch of crossover filters in the signal chain, which results in a bunch of phase wrap.

[00:35:13.900]
Or you can have a bunch of crossover filters and sidechain into the compressor from the crossover filters, which results in a signal path that doesn’t add the phase wrap. So that’s what it was: it was a compressor. So there’s a whole bunch of different things that can pop up. Thanks for bringing that up, Nathan. That’s a fun example.
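A small sketch of why the in-path topology Ian describes adds phase wrap while the sidechain topology does not, assuming a 4th-order Linkwitz-Riley split at an arbitrary 1 kHz corner. This illustrates the general principle, not the specific plugin from the story.

```python
import numpy as np
from scipy import signal

# Why a split-and-resum multiband path adds phase wrap: an LR4 crossover
# (two cascaded 2nd-order Butterworths per band) sums flat in magnitude but
# behaves like an all-pass in phase. The 1 kHz corner is an assumed example.

fc = 1000.0                                   # crossover frequency, Hz
w = 2 * np.pi * np.logspace(1, 4.3, 2000)     # 10 Hz .. 20 kHz, rad/s

b_lp, a_lp = signal.butter(2, 2 * np.pi * fc, 'low', analog=True)
b_hp, a_hp = signal.butter(2, 2 * np.pi * fc, 'high', analog=True)
_, h_lp = signal.freqs(b_lp, a_lp, worN=w)
_, h_hp = signal.freqs(b_hp, a_hp, worN=w)

# LR4 sections are squared 2nd-order Butterworths; resumming them is the
# in-path case. A sidechain detector would leave the main path untouched.
h_sum_inpath = h_lp**2 + h_hp**2

mag_db = 20 * np.log10(np.abs(h_sum_inpath))
phase_deg = np.degrees(np.unwrap(np.angle(h_sum_inpath)))

print(f"magnitude ripple: {mag_db.max() - mag_db.min():.2f} dB (flat)")
print(f"total phase rotation: {phase_deg[0] - phase_deg[-1]:.0f} degrees")
# roughly 360 degrees of rotation per crossover point in the signal path
```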

[00:35:34.430]
You know what a LYON is supposed to look like, but for someone that doesn’t know, how would they know? They would just be like, okay, yeah, six phase wraps. I guess that’s how it is.

[00:35:44.550]
Yeah.

[00:35:44.760]
This speaker sucks. And I had another one very similar to Arthur’s, where it was a center cluster of UPAs. It was a very long time ago, but it was a center cluster of three UPAs up top and then two UPAs aimed down into the orchestra. And it was exactly the same as Arthur’s example: I walked across the balcony and you lost all the low mid out of the cluster in the balcony. Okay. And then when I went to investigate, I found that somebody had inverted the polarity of the channel. It was actually physical: they stuck a phase invert on the input to the Meyer processor running to that speaker, because they had discovered it was out of polarity. But in fact, it wasn’t out of polarity. The horn was out of polarity, and they had measured it and decided that the entire speaker was out of polarity. So they flipped the entire polarity of the speaker, thus putting the twelve-inch out of polarity. So the horns all worked nicely together. The twelve did not. So the answer, of course, was pull the cluster down, fix the speaker, put it all back, and take the polarity invert out of the input to the M-1A processor.

[00:36:54.430]
But yeah, things like this happen quite often. I get asked to go in and do a tuning on a sound system, and one of the first discussions is: we’re going to need some time to do verification. And everybody, well, not everybody, but a lot of people say, oh, no, it’s fine, our system is fine, you can be confident. And nine times out of ten, I find something wrong. If I can find something wrong and I’m familiar with the speaker, that’s one thing. But what happens if I’m not familiar with the speaker and I don’t know what I’m walking into? How do I know exactly? And Tracebook is a great resource for exactly that.

[00:37:30.640]
César, what about you? Can you tell us about some way that Tracebook has shown up in your work?

[00:37:36.980]
I have been asked as a consultant to see if a set of speakers all have the same behavior, the same response. It was an RCF system that I didn’t know anything about. If I ask Meyer Sound for data in order to buy a used Meyer Sound system, they are going to try to sell me a new one. They are not going to help me buy a used one.

[00:38:02.950]
This is helping you do shopping because you’re thinking like, oh, if I buy a used system, I want some way to check to make sure that it’s performing.

[00:38:09.730]
You get confidence to get that.

[00:38:11.370]
Yeah, yeah, that’s great.

[00:38:13.060]
Exactly.

[00:38:13.600]
And I’ll just share personally, this has happened. The day before yesterday, someone was asking me about two d&b speakers and they said, hey, how are you supposed to use these two d&b speakers together? They don’t look like they match. And I was immediately suspicious, because I’m not familiar with these speakers: I don’t know, maybe you did the measurement incorrectly, maybe you’re doing something wrong, it’s probably you. But I was able to go to Tracebook, download those two speakers, look at them in my audio analyzer, move them around, adjust the delay, and add filters and things. And I discovered that yes, he was right. These two speakers out of the box are not compatible, and you need to add an all-pass filter for them to work together. Arthur, what about you? You already shared one story with us, but is there another thing that comes to mind for you for where Tracebook has shown up in your work?
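As an aside on the all-pass fix just mentioned: here is a minimal sketch of a second-order all-pass, which leaves magnitude untouched but rotates phase around a center frequency, which is what lets one speaker’s phase be brought back in line with another through the crossover region. The 500 Hz center and Q of 1 are made-up values, not the actual alignment from that job.

```python
import numpy as np
from scipy import signal

# Sketch of a second-order analog all-pass:
#   H(s) = (s^2 - (w0/Q)s + w0^2) / (s^2 + (w0/Q)s + w0^2)
# Magnitude stays at 0 dB everywhere; phase rotates by a half turn at f0.
# Center frequency and Q below are arbitrary example values.

f0, q = 500.0, 1.0
w0 = 2 * np.pi * f0
b = [1.0, -w0 / q, w0**2]
a = [1.0,  w0 / q, w0**2]

w = 2 * np.pi * np.logspace(1, 4, 500)    # 10 Hz .. 10 kHz, rad/s
_, h = signal.freqs(b, a, worN=w)

print(f"max magnitude change: {20 * np.log10(np.abs(h)).max():.2f} dB")  # ~0 dB
idx = np.argmin(np.abs(w - w0))
print(f"phase near {f0:.0f} Hz: {np.degrees(np.angle(h[idx])):.0f} degrees")  # about 180 (half a turn)
```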

[00:39:02.200]
I’ve been using the database to compare different speakers. Now that we have the ability to do comparisons within Tracebook, it’s been invaluable for me to evaluate different products to see whether, from a design point of view, they will play together nicely. So that’s been one other area that I’ve been taking advantage of, without having to commission systems in the field because we’re locked down. It’s been a very good design tool as well, not only for testing and verifying in the field, but also for evaluating product that I’m considering to be part of a system. So very valuable in that regard, too.

[00:39:48.620]
We’re getting close to the end here, and I guess I want to see if there are any last things that you guys want to say to people about potential ways they could be using Tracebook or how to get started. And so one piece of news that I’ll share with people. Well, I’ll share two pieces of news. Number one, we have a little bit of new functionality on Tracebook, which is that now you can make some adjustments to the measurements on the page: you can add and subtract delay, invert polarity, and we have coherence blanking. And soon we’ll be able to look at two measurements on top of each other, so if you look at two things in the database, you can just grab one of them and put it on the same graph. And so we’re curious what other features people might want to make this a really valuable tool moving forward. We want this to be the de facto place that people go to when they have questions about: what is this supposed to look like? Do these things work together? The other exciting piece of news I want to share with people is that César has spearheaded the work of translating the Tracebook measurement procedure into Spanish.
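A rough sketch of two of those trace adjustments, delay and coherence blanking, under assumed array shapes and an assumed 0.5 coherence threshold. This is purely illustrative and not Tracebook’s actual implementation.

```python
import numpy as np

# Illustrative versions of two trace adjustments: adding delay (a linear
# phase offset across frequency) and coherence blanking (hiding data points
# whose coherence is too low to trust). Threshold and layout are assumptions.

def apply_delay(freqs_hz, phase_deg, delay_ms):
    """Shift a phase trace by a pure time delay (positive delay adds lag)."""
    return np.asarray(phase_deg) - 360.0 * np.asarray(freqs_hz) * delay_ms / 1000.0

def invert_polarity(phase_deg):
    """Flip polarity: a 180-degree offset at every frequency."""
    return np.asarray(phase_deg) + 180.0

def coherence_blank(mag_db, coherence, threshold=0.5):
    """Mask magnitude data wherever coherence falls below the threshold."""
    mag = np.asarray(mag_db, dtype=float).copy()
    mag[np.asarray(coherence) < threshold] = np.nan
    return mag

freqs = np.array([100.0, 1000.0, 10000.0])
phase = np.array([0.0, 0.0, 0.0])
print(apply_delay(freqs, phase, 1.0))                  # [-36. -360. -3600.] degrees
print(coherence_blank([0, -2, -5], [0.9, 0.3, 0.8]))   # middle point blanked to nan
```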

[00:40:58.670]
And so there’s this whole other huge part of the world of sound engineers that we want to have access to Tracebook, to be able to use it, to be able to do measurements and upload them to the website. And very soon we will have that document available in Spanish as well. So, César, thank you for working on that. Turkish, not Spanish? Oh, Turkish. Sorry. Turkish.

[00:41:20.850]
You’re wrong. We are finishing it. The good news is that we already finished the translation. We are reviewing it. We have input for the reviews from the top of the top. You have Sauron Castanela, who already submitted his review. We will have people from Spain, from Peru, from Chile, from Argentina, in order to be sure that it is neutral enough, so no one from a different place will find the reading strange with wording that could be too local. So it’s well done.

[00:41:58.510]
Awesome. Thank you for all of that hard work. I wouldn’t even have considered that we need to have Spanish speakers from different regions looking at it to make sure that it is understandable globally.

[00:42:07.620]
I think I’d probably like to add a little bit more to César’s topic of community. I think Tracebook is a wonderful community for like-minded people to share and learn the skills of accurately measuring a loudspeaker and the procedures that we have put in place. And the document that has been drafted is a really great resource. It’s a really great guideline for how to measure a loudspeaker and get a good reference data trace out of it. It also clears up certain misconceptions and explains why the guidelines result in a quality measurement with actionable data. So it’s a learning document and it’s a growing document. It’s going to be something that evolves over time based on what we see as moderators with the submissions that are coming into the site. For example, I don’t know if other people agree, but the two mistakes I see people make are that they either measure the loudspeaker too close to a boundary, or they don’t get the tilt right, so they don’t have the maximum high frequency on axis for a particular line of loudspeakers. And those are both outlined very clearly in the document.

[00:43:30.890]
So I think that Tracebook has great value on the educational side to help people in a very practical way to get really great data and learn more about measurement.

[00:43:41.300]
I’m really glad you brought that up. I forgot that this is a great opportunity for us to give tips to people who are interested in Tracebook and want to use it, not only to just browse records and download things and load them into their audio analyzer, which is the big first easy step, but for people who are going to take some measurements and go through that process. The people that we have here in the group today are the people that are going to look at that. For people who are totally new to Tracebook, what happens is that you upload something, but then to actually have it be approved and accepted into the official Tracebook database, it has to be approved by two moderators. So that means that someone like Ian and then someone like César are going to look at it, compare it to any best known data that we have, look at all of the details, and decide if it meets the Tracebook guidelines, which are all outlined in the measurement procedure. And that’s really helpful to know, what Ian is saying, that you’ve got to get the vertical aim correct.

[00:44:43.870]
So that’s a big thing we’ve been working on in Tracebook. And we don’t tell people how they have to do the measurements; the ultimate result is that they have to meet the guidelines, and that’s it. But we are recommending that if people don’t have access to some sort of anechoic environment or other way to make a measurement that will meet the guidelines, they do ground plane measurements, and we go into a lot of detail in the measurement procedure about exactly how to do that. But Ian is just highlighting that one of the things that most commonly is done incorrectly, and that we have to ask people to remeasure, is when the vertical aim is not correct, meaning that you don’t have the speaker perfectly on axis with the microphone, which we talk about how to do in the measurement guide. Or what was the other one that you mentioned?

[00:45:37.630]
Too close to boundaries.

[00:45:38.570]
Too close to boundaries. So you see people measuring too close.

[00:45:41.080]
To the wall, too small a space.

[00:45:42.950]
So typically you either need to be in a large room, near the center of the room but not exactly in the center, or, nine times out of ten, I just have to go outside, especially if it’s anything but a very small speaker. So I’ll throw it to César and Arthur. Do either of you have some common mistakes to add?
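A small sketch of the vertical-aim arithmetic behind that advice: the downward tilt that puts a cabinet’s acoustic axis on a ground-plane microphone. The 0.4 m axis height and 4 m mic distance are made-up numbers, not values from the Tracebook measurement procedure.

```python
import math

# In a ground-plane measurement the mic sits on the floor, so the cabinet has
# to be tilted down for its acoustic axis to land on the mic capsule.
# Axis height and mic distance below are assumed example values.

def downward_tilt_deg(axis_height_m: float, mic_distance_m: float) -> float:
    """Tilt (degrees below horizontal) that puts the axis on a floor mic."""
    return math.degrees(math.atan2(axis_height_m, mic_distance_m))

print(f"{downward_tilt_deg(0.4, 4.0):.1f} degrees down")  # about 5.7 degrees
```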

[00:46:04.150]
The biggest mistake I see everyone making is getting into audio. They should leave audio and go to something else, where the money is and where happiness is: video and lighting. But video mostly. Happiness is in video.

[00:46:26.690]
Yeah, I say jump in and make some measurements. I think if anything, too often people rush in and throw some gear together and make the measurements. Maybe they put some effort into it, but not enough effort, let’s put it that way. I think if anything, we want to keep the data as pristine as possible to present to the world. We don’t want to put a speaker in a bad light because of a bad measurement. And that’s why sometimes we’ll reject a measurement, because we know that it doesn’t represent what that speaker can truly do. Don’t think of rejection as a personal thing against you, and don’t make it a point of frustration: oh, I’m giving up on this because it’s too hard. It really isn’t. And I’ll be honest with you, any of us that are involved with Tracebook would be more than happy to come out and help you make those measurements if we’re close enough.

[00:47:26.840]
Yeah, we can get on a Zoom call, whatever.

[00:47:29.060]
Absolutely. We’re all approachable, and we would love to help you make those measurements. I think this is really a prime opportunity for any rental house to measure their entire inventory. That way you have a baseline of what those speakers should be doing whenever they go out on a gig. Take advantage of this downtime that you have now, waiting for shows to come back, to start prepping your inventory, measuring it all, and making sure it is all working correctly, and take advantage of Tracebook as your database to keep all those measurements stored somewhere. And when your gear ends up on a show, my goodness, this is a valuable resource for anyone using Tracebook to be able to look at your equipment and figure out, okay, how do I make these different systems work together nicely. But jump in and really take some measurements, and don’t take rejection personally. We all want to help each other and produce some great data here that everyone can use.

[00:48:40.370]
Yeah, I think you’re right there. I hate to see people go to all the trouble of setting up their measurements and then there are some really obvious problems. So what I’ve been recommending to people is to do something easy. Do something at home that you have easy access to, so that you could just set it up in your driveway, or a tiny speaker you could even just set up in your living room or your warehouse. Take your first measurement and go through the entire procedure so you can see what it’s like. That way, if we have problems with it and we say, hey, you did this slightly wrong, or can you do this a little bit better, it’s easy for you to set up again. If you have to get out a forklift and get down some giant speaker and clear out the room and go to a bunch of trouble, and then we’re like, oh, but you had this thing upside down or whatever, then it’ll be disappointing. So do something easy for your first time, and then once you have the hang of it and know the entire process, then get out the forklift. All right.

[00:49:31.930]
Ian, César, Arthur, thank you so much for being with me here today. Final question: where can people follow your work online?

[00:49:39.910]
I work at GerrAudio Distribution in Canada. We have our own internal Facebook and some stuff on YouTube and Instagram and whatnot. César, what about you?

[00:49:50.840]
I am on Facebook sometimes, and the music I do in the studio is on Spotify.

[00:49:56.840]
What about you, Arthur?

[00:49:57.910]
You can find me on Facebook or Instagram. By all means, find me there.

[00:50:02.740]
Or on LinkedIn. And obviously I’ll put all the links, including Tracebook, in the show notes for this interview. Ian, César, and Arthur, thank you so much for joining me today on Sound Design Live.

[00:50:17.450]
It’s great to be here.

[00:50:18.780]
Sound design. Bye.

Nothing in an audio analyzer tells you how good it sounds

By Nathan Lively

Subscribe on iTunes, SoundCloud, Google Play or Stitcher.

Support Sound Design Live on Patreon.

In this episode of Sound Design Live I’m joined by Advanced Systems Specialist at d&b audiotechnik, Nick Malgieri. We discuss the self-aware PA system and the future of live sound, cardioid subs, and why there’s no polarity switch in d&b amps.

I would like our d&b users to be thinking more about the artistic goal and making adjustments based on what they’re hearing and not getting lost in the science and the measurement and the verification. We’re trying to build a platform that doesn’t require that, so we can just focus on mixing our show.

Nick Malgieri

I ask:

  • What are some of the biggest mistakes you see people making who are new to sound system design?
  • The self-aware PA system and the future of live sound
    • If the most destructive part of the signal chain is between the loudspeaker and the listener, then what is the most powerful tool we have to deal with this destruction?
    • What are some specific ways that d&b helps us with directivity?
    • Array processing reduces or eliminates the need to measure the PA on site.
    • Chris Medders: I’d be curious to hear how accurate he feels the phase prediction feature is when measurement values are precise in the field, and how effective that is for eliminating the need for TF measurements in varying-sized rooms.
  • From FB
    • Chris Tsanjoures: What is the best theme for a bar, and why is it Tiki? You seem to do a lot of traveling and consulting. If you think going a different direction than a client’s current plan would be best for their situation, what are some of the things that you are able to identify a client needs, without them realizing they need it?
    • Christopher Patrick Pou: What does a “typical” mix bus section on any given mixing desk look like in an object-based mixing environment?
    • Gabriel Figueroa: I’d also like to know why some deployments are not using the desk as the control of the objects and what the pros and cons are of this approach.
    • Peter Jørgensen: What’s the behavior in an end-fire sub array with internal cardioid subs, like the SL-SUB? Edit: What happens when you build an end-fire array with a cardioid subwoofer like the SL-SUB?
    • Johannes Hofmann: What’s the minimum distance from a cardioid sub to reflecting surfaces behind the sub to avoid cancellation in the low end?
    • Istvan Kroki Krokavecz: When will games be available on D40 amps?
    • Tomasz Mularczyk: Highest scores in D80 games.
    • Benjamin Tan: How does engaging ArrayProcessing change your tuning approach?
    • Michel Harruch: Is there any plan to incorporate polarity inversion for the design of complex subwoofer arrays like gradient or end-fire in ArrayCalc?
    • alexdanielewicz: Why can’t you flip polarity in the d&b ecosystem?
    • Robert Kozyra: How to identify the problem speaker(s) in a large array hang.
    • Daniel Brchrt: How do I combine speakers from different series with unmatched phase response, like the T10 and Y7P?
    • Sunny Side Up: Why have external amplification rather than built-in amps?
    • Steve Knots: What do you think about renting cranes to hang PAs rather than rigging them from truss? Well, I’ve seen photos of big festivals where it’s being done already, so I’m curious about the whole thing: safety, rigging for crane lift, stabilizing/aiming the array, and of course security around the crane base to make an un-climbable fence wall type deal, I guess. Seems innovative.
    • Wessley Stern: What is their philosophy with sub/main crossover? It seems to me that they let their subs’ LPF be much higher than other companies’, well above where the main cabs’ HPF is in most cases, resulting in a lot of low-mid summation. I really enjoy their systems and the perception this results in.
    • Vladimir Jovanovic: Subwoofer driver sizes and uses. Is there a trend of releasing 21″ subs, not just from d&b but other brands too? Did the needs of events change to drive this trend? (Pun intended, I know where the doors are.) If there is a trend at all.

Notes

  1. Quotes
    1. Nothing in an audio analyzer tells you how good it sounds.
    2. I have never found a discrepancy in what the alignment says in ArrayCalc versus what I found on-site.
    3. Our whole design ethos is little, light, and loud.
    4. If you’ve done all the alignment in ArrayCalc, we don’t need Smaart.
    5. We’re trying to do as much of the science as possible for you ahead of time, so that when you get onsite, you can focus on your show and for the vast majority of applications, there’s absolutely no need for a polarity button because we already have cardioid subs.

Transcript

This transcript was automatically generated. Please let me know if you discover any errors.

I’m Nathan Lively, and today I’m joined by Advanced Systems Specialist at d&b audiotechnik, Nick Malgieri. Welcome to Sound Design Live.

How’s it going, Nathan?

Going good. Just for you. I see that there are some special tools here that I have that would really welcome you. That’s the last time I’m going to do that.

Do we need a director to call sound effects on the show or what?

Stage manager. All right, Nick, I definitely want to talk to you about object-based mixing and end-fire arrays and combining speakers from different families. But before I do that: after you get a sound system set up, what’s one of your favorite pieces of music to play through it to get familiar with it?

Actually, the first thing I usually play isn’t music at all. It’s a very simple recording, a very dry recording of a simple snare drum. For me, that’s a great way to check system timing. And when we play with Soundscape systems, with emulated room acoustics, it’s a good way to hear the nuances of reverb tails and stuff like that.

Cool. I would actually like to add that to my list of things. Will you send me a link to your snare drum sounds?

There’s a couple of things. I’ll send you a couple of things. Great.

Okay, so we had a lot of questions come in, so we’ve got a lot of technical topics that people want us to hit. But before we do that, we should talk about career and business stuff for a minute. I wondered if you could take a look at your career so far, Nick, and pull out some lessons that you’ve learned and that have helped you find more of the work that you really love. So what are some of the ideas that you can share with people that might help them look beyond maybe some of the typical front of house mix positions that people think of, and just maybe some career advice that you have found over the years?

I think probably the first thing I like to tell people is never in my career did I get hired off of a resume submission.

You’re saying that my plan to just make a beautiful looking resume and send it out to everyone and then do no follow-up is not a great plan, correct?

Yeah. Not recommended. Every single job offer I got was like a verbal offer off of someone that I knew or met, or we knew someone in common and I came as a reference or something. So I’d say as general career advice, just be around people, make friends with people, make connections, find an excuse to visit a company, find an excuse to visit a show site. Maybe you have a friend with an in somewhere. Shake hands, make your smiling face known, and just be the person who is at the forefront of their mind when they’re in a last-minute scramble and need somebody.

Yeah, that’s a great point. So much of audio work is based on personal referral. That’s such a great point about staying top of mind. How can you do that in a sort of non-manipulative and fun way? Showing up, being places. Yeah, that’s great. It’s not a recipe, but it is probably the opposite of just me sitting here at home waiting for the phone to ring.

Yeah, absolutely. And don’t forget, there’s a lot of markets in audio other than like, touring front of house engineers.

Tell me about it. What are some things I may have forgotten?

Let’s not forget about the in-house gigs, right? In your hometown there are a lot of performing arts centers, clubs, all that kind of stuff. And there’s a whole other world of audio called installation, which, by the way, was largely unaffected by COVID. A lot of people from the touring world just segued right over to installation, and only some of them are going back out on the road now that they’ve gotten used to spending the evenings at home with their families.

Now, is installation a place where I could continue working as a freelancer, or is it mostly employees? And so should I be going and looking at job boards or looking at their websites for openings? How do you recommend I get started with that?

The installation companies are probably more likely to accept, like, a cold call resume if they have an opening, but knowing someone there is still going to be the inside track. And in my perception, there are two kinds of audio installation companies. You have ones that maybe also have a touring division and really specialize in performance audio, and they have staff on hand that are audio ninjas who can really do high-end systems. Then there are a lot of installation companies that are really just responding to bid requests, and they’ve got the labor for the physical installation, the rigging and the wire termination and all of that stuff, but they might not do performance audio systems frequently enough to have an audio ninja on staff. And a lot of those companies are either leaning on manufacturer people like me to come help commission it, or a freelancer to come in and be their ninja for that one-off gig, because the other five gigs are going to be like low voltage alarm systems and camera systems and stuff like that.

Maybe doing a little bit of research could help, or at least knowing going in: oh, this is a place that focuses on performance audio, or this is not. And then coming into that conversation intelligently. Hey, I know that you guys don't focus on this, and so I could really bring that to the table and be helpful in that way.

Yeah, that’s right. You can’t just ask for a job. You have to propose your value to somebody. So figure out what they’re missing and what you can provide for that.

Now, when you proposed to your wife, was it similar? Here is the value I bring: I have a cow, I have a car.

No, something like that. Right.

Okay, let’s talk about technical stuff. What are some of the biggest mistakes you see people making who are new to sound system design?

I think the most common mistake always happens on show site, and it’s just poor prioritization on how to manage your day.

Like what?

Like spending too much time thinking about what's happening on a Smaart trace and not enough time thinking about just having a good physical layout of speakers. Or maybe this isn't a great time to make noise because I'm pissing off other people who are working in the room. Hey, I've got a rigger in the air, I probably shouldn't be blasting a speaker next to them. Or just spending too much time tuning a PA and not actually getting to sound check, which ultimately is just as important as tuning the PA. Let's just get it most of the way there, and if we find some time later, we go back and do some touch up.

What is the bad thing that’s going to happen if I don’t prioritize my time correctly?

Yeah. First of all, if you can’t prioritize your time and manage it, you’re going to end up missing meals or something. Then that makes a long day extra hard and unhealthy. So let’s take care of ourselves at some point during the day. Also, you need to be thinking about what’s the content for the show. How am I going to mix it? How am I going to route it all with this kind of stuff? What is the artistic priority as opposed to trying to make a PA perform quote perfectly on a screen?

Now I’m remembering back. I remember you have a pretty good story about prioritization and its relationship with smart. Do you know what story I’m referring to?

Yeah.

Can you tell us about that?

Yeah. I love this story. We're good friends with the folks over at Rational Acoustics that make the Smaart software. And I had a really fun experience when they got a d&b PA and I was going to come out and help them tune it. And first of all, there's a little bit of pressure, because I use Smaart sometimes for tuning, and the last thing I want to do is get caught using it wrong in front of the people that make the software. And I'm like, you guys are going to provide the Smaart rig, right? And it showed up. And we had two days allocated for commissioning and training and all of this kind of stuff. And because of scheduling and travel conflicts, on day one I was working with Jamie Anderson from Rational, and on day two it was Chris Andrews from Rational. And so two days we got to tune a PA with two separate people that work for the company that makes the tuning software. And I showed up to a very well installed PA. First thing I did is make noise, come out of all the right places, and verify that it's wired correctly and stuff. And then we had the standoff moment with each other where it was like, so how do you want to do this?

And they looked at me: how do you normally approach this PA? So we started to get into it. We walk the room making some changes by ear, showing them how ArrayProcessing works and this kind of stuff. And at the end of day one, they were like, this is great, let's go to dinner. And I realized never once had we looked at Smaart. It was sitting there running, the mic was there. We just never placed the mic. We never did anything. We never heard something that we needed visual feedback to correct. And so then Jamie leaves, day two Chris shows up, and Chris is like, I don't care what you did with Jamie yesterday. Let's reset it and do it again. And so we tuned the whole PA differently, but in a similar style, using our ears, walking the room, and never once looked at Smaart. And it was a really good reaffirming moment for me that even the people that design it like to say there's no such thing as Smaart-ing a PA. It's a tool. Use your judgment, be influenced by the artistic goals of the show and the logistic constraints of your venue.

And Smart is there if you need it.

Yeah, that makes me think about the audio analyzer. Maybe I should be thinking about it more as a verification tool, as a problem solving tool, and less as a qualitative tool that says, I'm going to tell you what's good, Nathan. And I'm like, okay, all right, audio analyzer, you tell me what's up and I'll do it.

Yeah. I’ve heard really well measuring Pas that I don’t like the way they sound. And I’ve heard very poorly measuring Pas where the second I push up some channels on my mixer, like, this is great. I love this. So, yeah, nothing in an Analyzer tells you how good it sounds.

Okay. During the last Live Sound Summit, you gave a great presentation called The Self-Aware PA System and the Future of Live Sound, and if people want to listen to that, they can find it in the Live Sound Summit 2021 archive. But a couple of follow up questions about that presentation. If you say that the most destructive part of the signal chain is between the loudspeaker and the listener, then what is the most powerful tool we have to deal with this destruction?

Yeah. And just to clarify, these days, year 2022, we have these pristine signal chains: all digital, high bit rate, low noise floor, virtually zero crosstalk. And then the sound leaves the speaker, and we're still subject to the same pesky physics that we've always been subject to, and we can only control so much of that. And that is what turns into feedback. That's what turns into loss of intelligibility, lack of impact, all of that stuff. The best tool for us to avoid this primary source of degradation is directivity. The more that our loudspeaker can focus on where we want it and avoid all other directions, reflective surfaces, and open microphones, the better the PA is going to sound before we've touched an adjustment at all.

Wow.

Okay.

And so what are some of the specific ways that d&b helps us with directivity?

On the subwoofer side, it’s all about cardioid subs, not just to cancel sound on the rear at some frequencies, but equally across all frequencies, so that even if you are on the back side of the sub, you’re still getting a proper representation of the frequency response, just quieter. Then we have the SL series line array cabinets, which have side firing low frequency drivers that not only add more energy to the front, but cancel on the back, which is great as you walk off axis one of those rays, all frequencies get attenuated evenly. And then even on point source cabinets, we rely a lot on what we refer to as a dipole, which is two smaller low frequency drivers instead of one larger low frequency driver. And those two smaller low frequency drivers are spaced out in the cabinet so that they create summation directly on access, but cancellation in other directions. Not only do we get good directivity out of the frequencies coming out of the Horn, but we get added directivity of lower frequencies as cool.

Okay, so another thing that you said during that presentation is that ArrayProcessing reduces or eliminates the need to measure the PA on site. And that connects with one of the questions that came in from Chris Meters, who says: I'd be curious to hear how accurate he feels the phase prediction feature is when measurement values are precise in the field, and how effective it is for eliminating the need for TF measurements in varying size rooms. Funny way to say that. And the subject he didn't mention there, I think he's referring to ArrayCalc. Would you agree?

Yes. So it sounds like there’s two questions there. One is about a rate processing, and let’s put a pin in that for a moment. The other one is about the ability to tune your PA quickly and accurately within the software before you’re on site. And to answer the question simply, I have never found a discrepancy in what the alignment says in a Ray calc versus what I found on site. Even when I put up a mic to verify it’s within ten degrees of phase wrap between the subs and the tops and anything there couldn’t be much more perfect. And why would I want it to be more perfect at a specific location anyways? The idea of alignment is to make it work for a larger portion of the audience as possible. And one of the main benefits of using the software to do this is you can very quickly with a couple of mouse clicks, pick multiple points for your measurement microphone, and verify if the timing decision you’ve made translates not only to the 100 level, but also to the 200 level and the 300 level. Whereas if you’re on site with a microphone that just turned into a 45 minutes process just to get the mic from the 100 level to the 200 level to the 300 level, and who’s got time to do that?

When tuning, you load in at eight and sound checks at noon or something. So it allows you to be more informed from the comfort of your home. And as long as your file is accurate to the way the PA is deployed on site, you just push those settings to the amps and then bust out smart if you find yourself with some extra time and energy that day. Now, rate processing is very similar for anybody who doesn’t know. Rate processing is our technology, where each cabinet within the array requires its own DSP, path and amp channel. But this allows every cabinet within the line array to have a different signal sent to it, so that the behavior of the array as a whole matches the geometry of our venue better than we could with just mechanical splay. So this means we need to have an accurately represented array, proper height, proper spray angles, all that kind of stuff. And within the array calculation, we need to make sure we have accurate venue geometry. Then the software can say, okay, now I know the relationship between the PA and your audience areas. Let me optimize myself for perfect spectral response from the front row to the back row.

This does a couple of things. One, it corrects for weird HF peaks and dips and all this stuff. It fixes far-field HF reduction because of air absorption, and it makes the PA hit a target curve at the listener positions, so it will hit the same target curve in the front row as it does at front of house and at the back row. And if you have ArrayProcessing enabled on other parts of your PA, like delays and sides and 270s, all of those parts of the PA are now hitting the same target curve at their respective audience positions. So this way you don't have to worry about level matching and spectral matching different parts of your PA, which is the biggest part of measuring the PA. Now you can just say, oh, the whole thing has too much low-mid, I'm going to pull out some 250 Hz and apply it to all parts of the PA. And they're all going to respond much more similarly than they would without ArrayProcessing.

That’s so cool. And I’ll just add that it is really fun and so powerful to be able to check all those different alignment positions really quickly. If you’re like me and you want to try to calculate the best alignment position ahead of time, and then you do that, and however you do that, then you just have to accept, okay, this is going to work. It’s really nice in a Ray calculator. You can then verify, oh, yeah, this is the right one. Okay, great. And so I like that tool a lot.

Yeah. There’s only one gig I’ve ever had where I really cared about making the whole system align in one specific mic position. And in a previous lifetime, I worked for a rental company out in California that Myers Sound used to hire for their internal events, like their parties and stuff like that. And then my question was always like, Where does John Myers sit? He’s the one whose name is on the check. He’s the one that can hear the difference. Let me make it a line there, and everybody else can just deal with it.

Tell us about the biggest or maybe most painful mistake you’ve made on the job and what happened afterwards.

What’s the old joke? In our business, I’ve screwed up bigger gigs in this one. Expertise. What is it saying? Wisdom comes from expertise, and expertise comes from failure. We can do these one liners all day long, but it’s true. Making the mistake is the way to learn and be a better person. And we’ve all done them. I was working a show. We’re Loading it in. I was working in the doghouse of an analog console, if anybody remembers what those are. And I was pretty pissy. It was a rough day. It was a gig at a winery where we loaded it on grass, and I was trying to figure out how to make this console sit level on a grass embankment next to the stage. It was hot. There’s mosquitoes. I’m just pissy. And the voice behind me, someone on the stage, it’s empty. They’re like, no audience. There’s no artist yet or anything, but his voice is like, how are you? And I was like, this is fucked up. And I just totally went off and just, like, verbal diarrhea on how I was feeling and turned around. And it was the main headliner artist singer of the show, and was like, oh, God, what have I done?

And he ended up being really cool. I hear your brother. This is a hard work environment. Just keep going. I really appreciate it. That was when I turned around who would say something so nice.

He could have been like, who is this guy? Get him out of here.

Totally. And that’s all it takes. Just rub someone the wrong way and he’s thinking about his pressures of performing, and he doesn’t want my negativity involved. And I’m the monitor engineer. So if I’m going to be like that during rehearsal, it’s gain to ruin his vibe. So he could have just said, yeah, get them out of here. And then that’s it. I’m fired. And once that happens, you never get that gig back.

I don’t know if this is great career advice, but a friend and student of mine got a new job once, and it was a really important one for a big, well known company. And I said, hey, I think one of the best things you can do from my experience is to figure out as quickly as possible what things are going to push your buttons and then figure out how to deal with that. Because the worst thing is that it becomes a surprise that’s when it’s really painful is. Yes. All these circumstances. Yes. All this pressure and stress, and then also a surprise, like something falls on your foot or something is late or whatever, things go wrong. And so if you can sort of get ahead of that somehow, man, it can really help because that’s the difference between saying something you really regret to a manager or something, and then you have a whole thing to deal with.

Totally. I feel like one of the best professional advances I’ve made came as a byproduct of moving to the Southern US, where I just had to learn how to keep my mouth shut more than I’m used to. I think people in the south tend to be a little bit more cordial, a little more polite, and they complain in a different way. And that’s been a good career and life skill for me.

Christian Giroud says, what is the best theme for a bar and why is it Tiki?

Oh, yeah.

What are you talking about?

Yeah, one of the things I missed most during the COVID era is hanging out in some town where there's some trade show like InfoComm or NAMM or something, and ending up at Tiki bars with the Rational Acoustics guys. They love a Tiki bar. I love a Tiki bar. And we need to get back to these trade shows just for the Tiki bar. I couldn't care less about the trade show. As soon as 5:00 hits and we're all looking at each other: am I going to get a blue drink or a green drink? That's what that week is all about.

So Chris says that you seem to do a lot of traveling and consulting, and this question that I’m going to paraphrase, which is basically, how do you handle these situations? Or have you been in a situation where it seems like the client wants something and they’re saying, this is the result that I want, and so here’s how I want to do it. But you know, that’s not going to get them the result.

Yeah, that’s the hardest thing about audio, right? Human beings are visual thinkers and audio is invisible. So everybody has an idea of how to do it, and there’s no real way to prove it. And even your average person might not know how to listen to the PA to know if it was achieved or not. So it’s all about being a bartender and playing psychology and just having good verbal interactions. And there’s a way to advocate for what you think is the right decision without knocking down a client’s request. I think there’s a way to verbalize that there’s a certain approach. Just don’t be the annoying it guy who’s just no, that’s not how it works. You don’t know what you’re talking about. No one wants that kind of audio person. Just speak normally with them and say, So what I’m hearing is repeat what they’re saying. It makes them feel heard and say, how about this? What if we tried an approach to do this and explain in simple terms why you want that approach? And I find it’s really hard for a client to argue with that. It almost makes it feel like it was their idea to approach it the way you want to approach it.

And you’ve told me in the past that a rake out can be a tool to facilitate these discussions. Sometimes it really helps to have a visual element. This is what you want. Here’s how we can do it. What about this? What about that?

Small churches and clubs and venues that want a line array, but it's too small of a room for a line array? Let's look at it in ArrayCalc. Let's show you how a line array performs versus a point source, and it will be immediately apparent that there's a really good discussion there. And if in the end you want a line array, whatever, it's your PA, you can buy whatever you want. But at least I advocate for what I think is best.

There’s a bad movie podcast called How Did This Get Made? And I don’t listen to it that often, but it comes to mind in this moment because we’ve all been in music venues all over the world, but even here in Minneapolis, I’ve been into several music venues where the PA does not fit the room. And you’re like, how did this get made? These two big arrays, half of it’s just playing into a balcony and a wall, and it doesn’t seem to fit it’s.

Funny, this is the number one theme of being a support person for d&b Audio Technic, because our whole goal, our whole design ethos is little light and loud. How do we get very high directivity, high bandwidth and high output out of the smallest cabinet possible? And our clever Germans do a pretty good job. Meanwhile, we have people coming and saying, I don’t want that speaker because I don’t think a pair of tens are big enough. Woofers, which used to be the simplest method of evaluating loudspeaker. You have to explain to people, no, you don’t understand. This pair of tens has more low frequency extension than our old speaker that had a 15.

So you’re finding some preconceptions about just things people think about the size of related to power quality. Okay. Christopher Post says what does a typical mix of section on any given mixing does look like when in an object based mixing environment.

So let’s be clear. When you’re using soundscape and object based mixing, there is no master bus in your console. We need different performers, different types of signals to hit that process. Or the processor works is like a summing matrix with the spatialization data and renders that to the PA using delay and level distribution. So then this is a great question. How do you feed the processor from the console? The short answer is there’s no one way, there’s no one way to sound scape. But I could give you a very kind of simple anecdote that represents a lot of projects that I work on. Let’s say it’s a typical band. So maybe Kick, Snare, hat come out of a mono bus and send it to a processor where we can place Kick, Snare, and hat within our mix using a sound object. Then maybe a stereo mix that has all of the Toms and other drums. Those come in as two sound objects, and we can make those Toms big and wide or accurate and sound like they’re coming from the drum set. And then maybe another stereo bus for overheads and chimes and percussion stuff that maybe wants to go wider than the Toms.

Then maybe you have a bass player who has an electric and an acoustic and a Di and a mic, and I don’t know, a foot pedal organ thing or something. All those inputs can come down to a mono bus called Bob the bass player, and then Bob the bass player’s bus comes into a sound object that we control in our one called Bob the bass player, and we can place that where Bob is located might do the same thing for guitars, keyboards, bust them down, but then send them to the processor in a way that represents an individual performer. And then as you get to your money channels, your lead vocals, your pastor Mike, your CEO for the corporate event, those might be post Fader direct out of the console. All your channel strip processing works. Your Fader affects the level of that, but it immediately leaves the console and gets summed in the processor where each singer can have their own sound object. That way, when people sing together, they’re not stepping on each other in the mix. You want to listen to the Alto or the tenor you can DeMask it binaurally just like we do in an acoustic world and retain clarity headroom and require less processing on the channel strip to get it.

Related to this, Gabriel says, I’d also like to know why some deployments are not using the desk as the control of the objects and what the pros and cons are of this approach.

Yeah. So if you have a Soundscape system and you're using an Avid S6L series console or a DiGiCo SD series console, you can control Soundscape natively from within the console. And I know it sounds awesome, and it can be nice that your object parameters are being saved within the scenes of the console. But for a large venue, we might have 100 feet of travel where the sound object could be, through the mains and maybe sides if you have them. And that 100 feet is now represented by a three inch by three inch quad panner on your screen. It's not as meaningful as you would think.

The scale is off, right?

And we can scale that separately, from what the console sends to what the processor receives. But yeah, three inches to represent 100 feet is pretty coarse no matter what. So I always tell people, let's think of it like a Waves control computer. Let's just have R1 running on a touchscreen hovering right over your console, like your Waves screen does, and you can just touch the object to move it. You get a full size screen, you can visualize the room better, you can put in a seating chart. So when you're placing a sound object, you really know exactly where it is, instead of just placing it in this vague square on the console.

Peter Jorgensen says, what happens when you build an end-fire array with a cardioid subwoofer like the SL sub?

Yeah, I’ve done it with the SL sub and other subs and from other manufacturers subs, because I’m not just a DB guy, I’m also just a sound guy. It works well. It’s cool. You don’t have to make an end fire out of omnidirectional subs and you can mock this up in a Ray calc. There’s this myth out there that you can’t do in fire Subaru in a Ray calc. You most surely can. It will automatically calculate your delay times for you as well. And if you want to learn more, send an email to [email protected] and we’ll show you how to do it. But to answer the question, we have some cabinets that are cardioid by themselves and then we put them into an end fire. And of course it depends on your spacing and the number of cabinets within the array and the delay times, etcetera. Etcetera. But essentially it turns it into hypercardiot. And I did it. I do a gig every year at the Monterey Jazz Festival and I run the main stage there and I do an end fire of cardioid subs. And the reason is twofold one, it’s a wooden stage that resonates.

I think it’s right at 78 Hz. Oh, wow. Yeah. And it rolls pretty slow. It used to be years ago, the stage would hear the feedback long before front of house did, and they would just hit the call button on calm. And if I was at front of house and saw that call button lighting up, I would just immediately pull the subs back because I knew it was coming. And so when we put ourselves in an NFL array, it allows me to change the delay times so that I can take that 78 Hz null and point it directly at center state, so that it’s really trying to cancel that one frequency in that one direction to stop the stage from resonating. And that feedback. And the second reason I do it on that gig is because I don’t have anywhere else to put the subwoofers. So it’s a win win in that I can’t stack them high because it blocks sight lines. I can’t do them horizontally across the front because their VIP section would be like their knees would be touching the subs and they wouldn’t be very happy about that. So they have to be up on the deck, but only one high.

And so then putting one in front of the other is the only way to make them fit.

All right. Johannes Hoffman says, what's the minimum distance from a cardioid sub to reflecting surfaces behind the sub to avoid cancellation in the low end?

Yeah, this is a really common question, and I totally get where it comes from, because when you have a speaker firing out the back of the subwoofer, it seems like it needs some breathing space. And it does, but not as much as you'd think. Actually, all the d&b cardioid subs have the casters on the backside, so you flip it up to roll it. So then when it's lying down, the casters point backwards, and I just tell people: push it all the way back until the casters touch the wall. It only needs that four to six inches that the caster represents.

Okay.

However, most people don’t realize when you have a cardioid sub, you really need to maintain 2ft of open space to either side. It actually needs more space on the sides than it does in the back. And that’s because we need the sound to wrap around the sides to interact properly between the rear driver and front drivers. So, for example, we see people all the time that might have, like, an SL sub, but they’ve decided to place it up on end so it’s higher. Maybe that’s because they want to put a front fill on top of it or something, and it works and you can do it, but it eliminates one path length around one side of that cabinet because the side is now obscured by the ground. And undoes a whole bunch of the cardioid effect and it ends up turning into kind of like a loose cardioid.

We don’t want loose, we want tight.

That’s right. In a breakout, you can select between an SL sub and an SL sub upright, and you can look at how that affects the rear rejection.

Okay, Eston says, when will games be available on the D40 amp? So this is news to me. Apparently there are games on some amps, but not on other amps. Tell me about that.

Yeah, all d&b amplifiers have games built in, and you should know that if you perform a firmware update on a d&b amplifier, it will reset all the settings, as you would expect with a firmware update, except for its IP settings. So it doesn't reset the network card, which is very convenient. And it also does not reset your high scores in the games. Critical. Even the really old amps had simple games. Then we came out with the fancy four channel amps with the color touchscreen, and the games got way better. And now we have this brand new amp platform that I suspect will eventually get the games. But to be honest, our software team has been working really hard making all of the audio features work correctly in the brand new amps, and I would rather they prioritize that than the games at the moment.

So Thomas wants to know the highest scores in the D80 games, and I'm guessing these amps don't report back to you and you don't have a list, but I think we were talking about how it would be fun to have a leaderboard so we could see, self reported, who has the highest scores.

Yeah, or log it within R1, since you're already on the network with your computer, so you can have your own list and you don't have to go back to the amp to find your high score. Or have it reported back to dbaudio.com so we can keep track of who's winning the games. We also get a feature request quite often that people want to be able to play multiplayer games across the network on the amp front panels, so the stage right fly guy can play against the stage left fly guy during the show.

Benjamin Tan says, how does engaging ArrayProcessing change your tuning approach?

It’s all part of the PA performing nicely and more like each other. So even if we have a main hang of 24 GSL and a side hang of twelve V, those are voiced to the similar target curve. So I don’t really have to worry about matching curves, even though they’re different box counts, display angles and box type and all that. And it’s doing things like mostly or completely fixing the kind of HF peaks you get right down in the front row underneath the line Ray, that kind of Fresnel effect. It gets rid of that. Which, by the way, really resolves feedback issues. If you have an artist that ever goes out on a thrust in front of the PA. It fixes the HF absorption issue in the back rows, so I don’t really have to worry about tuning for that. At the end of the day, I just need to voice the PA overall for whatever my overall mix is going for. We already have controls, like a coupling filter is what we call it in our one where we can change kind of the overall voicing of lows to highs. Do you want a flat response or do you want the case stacked low end for a lot of power?

And we can just make those broad adjustments and then maybe put in an EQ filter or two, depending on what I’m feeling, what I’m hearing, and you’re done. And if you’ve done all the alignment and Raycock, we don’t need smart soundscape systems are similar. This is why I talked about the self aware PA on the soundscape. The processor knows where every loudspeaker is located and how it’s pointed, and so it times itself. You never enter a delay time into a soundscape system. It realigns itself based on where you want the sound to come from. So I would like our d&b users to be thinking more about the artistic goal and making adjustments based on what they’re hearing and not getting lost in the science and the measurement and the verification. We’re trying to build a platform that doesn’t require that, and we can just focus on mixing our show.

Yeah, that’s cool. It sounds like there’s this idea of letting the computer do what computers are good at, and let’s have the humans do the creative decisions that the humans are good at.

I love it.

Michelle or Michael says, is there any plan to incorporate polarity inversion for the design of complex subwoofer arrays, like gradient or end-fire, into ArrayCalc? And they are expressing the sort of surprise that I remember having as well the first few times working with d&b systems and realizing, oh wait, there's no way to insert a polarity inversion. But referencing back to the clever Germans, there must be a reason for excluding this.

Yeah, we don’t have a polarity button. The amplifiers and the filters available to you within our one do play with polarity as needed to get the behavior we want out of the cabinet. And this is a contentious issue. We’re used to having a Polarity button. And why would a high end manufacturer like d&b just take that feature away? And in general, this kind of comes back to this ethos that I just described, where we’re trying to do all of as much of the science as possible for you ahead of time so that when you get on site, you can focus on your show. And for the vast majority of applications, there’s absolutely no need for a polarity button because we already have cardioid subs, we already have full broadband connectivity. We already have all these benefits built into the PA, as is and we all know a lot of sound engineers that can dig themselves a whole pretty quick by hitting polarity buttons and not entirely knowing what they’re doing. With that being said, I do recognize there are kind of niche setups where this would be handy. And if you want this as a feature, please don’t be shy.

Send us an email [email protected] And what would be really helpful is if we could understand what you’re trying to achieve that requires you to need the polarity button because we’re really good at trying to figure out what you’re really asking for and if there’s a set up that you want that’s common. Maybe we would think about just building an amp preset or something to achieve it so that you don’t have to know how to use the priority buttons and it just works. But either way, we’d love to hear from you. The feature requests are always welcome [email protected]

Robert Kazeera says, how do you identify problem speakers in a large array hang? He's referencing a feature in the d&b amps where there's some self verification built in. And he also told me later about sometimes having trouble where he felt like maybe some of the speakers were not making true reports, maybe because there's a reflection because they were too close to the ground. But anyway, maybe you could start by just talking about this self verification feature that is built in.

Yeah. Another excuse why you might not need a measurement mic. So when we go online with our d&b system, with R1 talking to the amplifiers, or even without R1, you can do this through the front panel of the amplifier: there's a function called system check, and this will send almost inaudible low tones and completely inaudible high frequency sounds to the speakers. The amplifier then measures the return impedance and will graph out the impedance measurement of the low frequency and high frequency drivers, and of a rear firing driver or a midrange of that cabinet, to verify that all of the drivers are operating correctly as a circuit. So this tells us that something is plugged in. It tells us if there's a broken wire, it tells us if there's a blown voice coil, all this kind of stuff, and it makes it very quick and easy, without making any noise, to verify that every speaker is performing electronically up to speed. Now, this doesn't test for things like a torn cone or a cabinet rattle or that kind of stuff, but we're going to get there once we start making noise. So we run system check, and that verifies the electronic circuits.

Then with vertical line arrays, and sometimes other types of arrays, we run a test called array verification, which is just about the most clever thing I've ever heard of, because we designed the system in ArrayCalc and opened that same project in R1. R1 now knows which amp channel is supposed to be driving which cabinet within our line array, and it initiates a test process where the amp channels, one at a time, will make a low level kind of noise. And while this is happening, it uses all of the adjacent loudspeakers within the array as microphones.

That’s cool, right?

And so by the time it runs this whole test, which takes 10 to 20 seconds for a large array, it will tell you if your line array is wired the way it expected it to be wired based on how you built your file. And with technologies like ArrayProcessing, if we had a pair of cables swapped within our fan out, this could have horrendous and unpredictable results. So making sure that every box in the array is actually fed by the right DSP channel is crucially important. So not only will it tell you if it's patched wrong, it will tell you how it's patched wrong, which cables are plugged into the wrong cabinet. But what this user is referring to is we have seen times where people run this test before the PA is at trim height, when it's floating right off the ground. And some of those bottom cabinets are basically firing right into the floor, and this can create reflections, which throws off the test. And in my experience, it's only happened with J series. There's something about the LF sensitivities of that box that make it have this issue. And as soon as you take it, like, more than six feet off the ground, then you can run the test without that reflective floor being an issue.

Daniel says, how do I combine speakers from different series with unmatched phase response, like the T10 and the Y7P? And he sent me a couple of measurements, and I was like, I wonder if those are correct. And I looked them up on the d&b site, and they were. So yeah, talk about combining speakers from different families and different series.

Yeah. There are manufacturers that, when they come out with a new generation of loudspeaker, adopt a new phase profile, and this makes it hard to incorporate newer systems and legacy systems into the same PA. Our approach is to try to keep that phase plot as consistent as possible over the years. Even when we came out with newer amps that are more highly capable processing-wise, we didn't take that opportunity to just change the phase response of existing speakers. We wanted a J series on a D80, the new fancy amp, to be exactly the same as a J series on the old two channel amps. We lock in that performance and make it consistent across the world and across the decades. And mixing most d&b loudspeakers works really well right out of the box, with complementary phase profiles. Now, there are exceptions, and the T series is a great one. The T series has a very unique acoustic mechanism that affects its phase profile. And here's how this works. The T series, for everybody who doesn't know, is a small speaker, and it's convertible between a point source and a line array box. And it has a rotatable horn that doesn't just turn the dispersion on its side.

It actually changes the way the horn interacts with a secondary acoustic lens, which you can see on the front grille. You see these kind of stripes, this different perforation hole pattern on the front grille. And behind that front grille is a multilayered grille, and this perforated metal, being multilayered, actually affects the path length of high frequencies. So when we turn the horn, and it changes the way the HF dispersion interacts with that secondary perforated metal mechanism, it changes the path length of the high frequencies and changes the curvature of the wavefront. So a point source speaker radiates an outward rounded wavefront, and when a T series is in point source mode, it's 90 by 50, I think. And then when we turn the horn and we turn the cabinet, it's now 105 degrees wide by a proportional vertical directivity, with a flattened wavefront appropriate for a line source. And the way this works is because of this perforated metal slowing down the HF by extending its path length, which is why the HF phase profile of a T series changes depending on the mode it's in, as a byproduct of this mechanical system. And yes, we do have the ability to change it with the fancy technology that's in all these amplifiers: apply some FIR filters, all pass filters, all this stuff.

But it would incur latency. So now we have part of our PA at a different latency than the rest of the PA, and it would make T series on new amps be different than T series on old amps, which is not something that we want to introduce to our users. So people ask me all the time, though, this is such a cool thing, how come you don't do this T series rotating horn, perforated metal thing on all the speakers? And now you know why: there is a downside. And it works well for a small speaker like a T series.

But that’s not something we want in our Stadium PA. And I remember you saying that in the rare occasion that you would need to combine these two speakers, you just need to make a choice, right?

Yeah. So what part of the frequency bandwidth do you want to have aligned? Do you want good LF steering, where the low mids and lows want to be perfectly aligned? Or is the T there for intelligibility? People commonly use a single T series in line array mode as a high powered front fill, and in that case we really care about the HF, so let's make the HF part of the frequency response align better with our main system. So yeah, you make a choice. There's no such thing as a free lunch in audio. And if you want a cool feature like point source to line array convertibility, which is highly valuable for small and mid sized rental companies, then you've got to give something else up on the other end. In this case, it's a non-complementary phase profile.

Yeah, and I’m sure there were conversations on the production side before anything ever happened where they’re like, okay, if we do this, then we’ll have this consequence. And they said it’ll be worth it.

And that’s just another reason why d&b makes 100 different models about speakers, so that you can pick and choose these trade offs as needed for your application, Sunny says.

Sunny says, why have external amplification rather than built-in amps?

Sure. The timeless debate. I see strengths both ways. I used to work for a rental company, a couple, actually, that only had self powered speakers. And from an inventory management point of view, it's perfect, because you never have to think about, I'm sending this many speakers, so how many amps do I need? Every speaker is an amp, so problem solved, send them out, don't have to think about it. On the other hand, if you're a rental company, it's a lot more expensive to have an amplifier for every speaker, whereas a lot of rental companies have enough amps to run the A system or the B system, but they never have to run them at the same time, so they can buy half as many amps. So there's that stuff from the commercial side. Then from the technical side, of course, having an amp in a speaker makes it weigh more. And the question is, do you want that weight in the air or do you want it on the ground? And amps do fail from time to time. When that failure happens, do you want it in the air or do you want it on the ground?

Being able to hot swap an amp without having to bring in a rigger or a lift is pretty valuable. So there are positives and negatives both ways. I like having one type of cable go up to the array instead of signal and power. I like having the electronics down on the ground where I can monitor them more easily and troubleshoot them more easily. I like having a lighter array so I can get away with using less rigging and all of that stuff. The roof can only support so much, or whatever. So having a light array allows me to use the array I want, not just the array I can hang. That's easy for me to say, I work for d&b.

And one interesting point I hadn’t thought of before that I remember you telling me about is that if the amp weighs more than the rigging also is going to weigh more because it has to be higher rated to be able to carry heavier weight. And so it’s not just this increase in the weight, but also then the whole thing goes up.

Let’s say we have a really big line array, a maximum hang of 24 boxes. And Germany decided actually for this crossover, we have to use this coil of wire instead of this coil of wire. And the coil wire they want to use is £2 heavier. Not only is the box £2 heavier, but the array is now £48 heavier. And because the array is £48 heavier, the rigging has to be upsized to hold 48 more pounds. But not just the rigging at the top box where the extra £48 happens. But every box has the same rigging, so every box has to have Upsized rigging to hold 48 more pounds. That Upsize rigging now also added 48 more pounds, which means the rigging has to be upsized again to hold an additional 40. Everything is interconnected. So literally every ounce we can shave off of a speaker means £100 in the end or something. Maybe that’s exaggerated, but it’s not just an individual box. It’s quite a lot in the amp. Then at an additional £20 per box is a pretty massive hurdle.

So my friend Steve Knott says, what do you think about renting cranes to hang PA rather than rigging it from truss? And I said, what specifically do you want to know about? And he said, I've seen photos of big festivals where it's being done already, so I'm curious about the whole thing: safety rigging for the crane lift, stabilizing, aiming the array, and of course security around the crane base to make an unclimbable fence or wall type deal. Seems innovative.

I love it. It’s not new either. Doing this for years before line of rays. Even like all rigging, as long as it’s done safely by a qualified and experienced professional, I think it’s wonderful. Personally, I think cranes are a little ugly, so the aesthetic of a giant yellow tractor isn’t my favorite show business aesthetic, but it certainly has logistical benefits. It’s a lot cheaper than paying a crew to come build a tower. I’ve done a lot of outdoor shows where the PA really needed to be in a place that was not conducive to rigging, like on a slope. And with a crane, you can rig it and then drive the crane into position or turn the crane into position. So that’s a huge benefit and it can be totally safe. I strongly suggest at night, between days on site, you bring it in and touch the PA to the ground just in case there was a hydraulic failure. At some point when you’re not there. A lot of times these hydraulic systems, they can have a very slow week and a regular operator wouldn’t notice because a regular operator doesn’t use the crane that just holds something in the air for four days straight, but it can slowly droop.

So let’s be aware of some things like that. But yeah, have a great time. Also, driving cranes and forklifts and lifts is just super fun.

Speaking of driving forklifts, I know you have used an NSL Five, I believe. Can you talk about that for a second.

The MSL Ten.

MSL Ten. These giant Meyer Sound speakers.

Yeah, I don’t know. Myers an old company, so I don’t even know if I’d call it an early Myer Speaker, but they’re long gone at this point. But they were so large, a single MSL Ten barely fits into a 53 foot truck like it clears with a couple inches on either side. That’s how large this giant array speaker is. And it was brilliant in that they built slots for Forks from a forklift into the speaker. So you drive the Forks into the speaker. It’s now rigid on the Forks. You pull it out of the truck, you drive it in a position, you take it up in the air, and you turn off the forklift. Congratulations. You’re raised Hong. From logistics point of view, it was amazing. The sound quality could probably be debated. It’s still innovative for the time.

Believe it or not, the first place that I worked for when I moved to the Bay Area had some. They got them second hand somewhere from someone else.

Right. Good times. The last time I was using them was like the amplification for NASCAR, where it’s really about vocal band blunt force SPL. It’s not exactly a nuanced show, and they want it cheap, so being able to rig it without a single hand or crew person helps that be a cheaper installation. It was a great fit for that.

Okay. Wesley Stern: what is their philosophy with the main-sub crossover? It seems to me that they let their subs' low pass filter be much higher than other companies, well above where the main cabs' high pass filter is, in most cases resulting in a lot of low-mid summation. I really enjoy their systems and the perception this results in.

So he likes that bump in the crossover range. It's a bit of a misnomer out there that d&b doesn't allow you to mess with the crossover. We do, but in limited ways. We don't allow you to actually visualize or adjust the slopes, but we give you buttons that allow you to tailor the crossover point. And this user is right in that the subs generally go higher in frequency than most users prefer. We leave it available to you if that's your approach. But depending on the subwoofer model, it will either have a button called 100 Hz or it will have a button called Infra. Both of these lower that low pass filter to cut out some of the upper bass. 100 Hz is approximately 100 Hz; Infra is closer to 70 Hz, but changes based on the capabilities of that subwoofer, so that you can throttle down the frequency response of that sub and let it focus on the real low stuff, which is more common these days. And conversely, all of the high-mid boxes have a button called Cut, which is a low cut, and it moves up the high pass to cut out some of the low end response at the bottom of that box's range.

And between these two buttons, we have four options on how to run this crossover. We can have summation in the crossover point for additional power, or we can carve it out to have a little bit less magnitude in the crossover point, because maybe we just feel like it's muddy in that room or with that mix, or any combination thereof, and we just toggle the buttons until we like how it sounds. And we have confidence that we haven't skewed the phase response or made some kind of other compromise, because these are predetermined, friendly buttons that are still compatible, and you don't have to think about it.

Vladimir says, subwoofer driver sizes and uses: is there a trend of releasing 21 inch subs, not just from d&b, but other brands too? Did the needs of events change to drive this trend?

I don’t think the needs of the events have changed, but DB has gone to generally larger drivers’than we did in the past, and this is because I think it’s less about the needs of the act and more about the capabilities of the speakers. That’s the thing that’s changed when we had the J series, the kind of gold standard d&b large format PA the tops could go down to. I think it was like 90 Hz or something. Then we had a J sub that was 318s and a J infra that was 321s. A lot of people ran the systems without 21s because the three by 18s with enough low end. Personally, I think once you hear one of these big PA with even just a single infra, it’s hard to use it without because that extra low stuff really feels good. But the reason why there were two models of subs was because the 18 inch drivers could go fast and be high impact, but they couldn’t go very low, whereas the 21s could go really low, but they couldn’t go fast and be high impact. And what’s changed is voice coil technology, particularly with the SL series.

That whole voice coil magnet structure is really reengineered and requires a higher voltage to the voice coil, which the d&b amps are capable of providing. And all of this, in turn, allows the main speaker that goes down to 45 Hz. So we got rid of the upper base requirements out of the Subaru and allowed a 21 inch driver that now has full power even at full excursion, which means as that speaker pushes out, it still has full power to get pulled back to its neutral position as quickly as possible. So now the 21 inch driver can go faster, like an 18 with higher impact, which allows us to be like, oh, the 21 can now do the upper base and the lower base with more impact than the J series could do total. This is a huge win. Let’s go with the 21s. So now that SL sub with 321s not only has the same frequency response as a J sub and a J infra put together, but has almost identical SPL output as a JCB and a junk put together, but weighs less than a J infra by itself.

Okay, so there were some rumblings on Facebook. It seemed like there were a couple of people saying something about how they don't like d&b phase response, something about it that makes them upset. And our assessment of that is maybe it's this trend in the market towards flattening the phase response along with the magnitude response. And so I just wanted to give you the floor on that for a minute to maybe address what you think are some of these preconceptions.

Yeah, I think we’ve seen a big marketing push from some manufacturers who are making their face response quote more linear. That is to be like more of a flat line without wraps in the phase response. And DB is not doing this. We’re not into it. We don’t like it. The reason there is we don’t really believe that you’re hearing much of a difference. In the end, we think it’s more of a visual improvement than a Sonic improvement. And there’s no such thing as a free lunch and audio. So just because we can preemptively mess up the signal in exactly the opposite way that the speaker is going to mess it up doesn’t mean we get that for free and doesn’t mean that we don’t incur other side effects in the process. And the main obvious one when it comes to fixing phase response is latency. I think Meyer has a really cool product called the Blue Horn that has a very flat phase response on like 50 Hz or something. And it’s very cool. But as a necessary compromise there, that speaker takes 50 milliseconds for sound to come out 50, right? Chris from Rationale says, yeah, if you want the base to come out the same as the high frequency, you need to think of it like a restaurant.

If the high frequency is your entree, the midrange is your appetizer and the base is your cocktail, you can have them all at once. You just need the kitchen to keep your cocktail and keep your appetizer until the entree is ready. And so same thing with F IR filters and fixing phase. Right? We need to make the high frequency wait, and then we need to make that mid frequency wait until the low frequency is ready to come out of that frame, and then we can align it. And then you end up with 50 milliseconds of latency, which for Bluehorn is totally fine because that is a post production studio environment product where latency isn’t an issue because it’s all playback. A concert, on the other hand, is a different story. That Snare drum already stopped by the time 50 milliseconds goes by. Maybe there’s situations where you could argue that’s, okay, and that latency is still good, but it does come back to my earlier point to the d&b amps have all the ability to make flat phase response right now as is and we could fix it. It takes one of our DSP people like 5 minutes.

It’s not hard. But then that speaker on a D 80 will sound different than the same speaker on an old D twelve and the world of change. And in the end, we don’t really think. We think if we did two versions of the same speaker and we AB them. One had flat phase response, the other one didn’t that you wouldn’t pick the right one if asked to in a blind test.

Nick, where is the best place for people to keep up with you and follow your work?

You can find me on social: Nick Makes It Louder on Instagram, where you can see some pictures of some d&b rigs and a whole bunch of Soundscape systems. Otherwise, feel free to send me an email. You can send an email to d&b support and just say, hey Nick, I had a question about that thing you were talking about, or tell me more about this. Anybody anywhere in the world can send an email to [email protected]; tell them where you live, and that email will get sent to your local support team in your time zone, in your native language. Also, we have a ton of tutorial videos, everything from software use to rigging. And hey, come say hi, see me at a trade show if those ever start up again post-COVID. Otherwise, you'll find me on the internet or at the bar. Yeah, if you bring up the Tiki bar thing to me at a trade show, there's a good chance you'll end up drinking Tiki drinks on the d&b credit card.

Well, Nick, thank you so much for joining me on Sound Design Live.

Thanks, Nathan. So much fun.

How to fight power alley using end-fire arrays

By Nathan Lively

If you don’t like the power alley that results from uncoupled subwoofer arrays and you do have six or more subs and enough real estate, you can try a compromised approach by aiming the left and right energy away from the center, improving isolation and lowering variance across the audience.

I should make it clear that the result appears to reduce the power alley only in contrast to the rest of the audience. There is still interaction in the middle, it’s just lower in level compared to on-axis with the sub arrays.

A reduction of 4dB at 63Hz is found at the center of the audience.

Download the MAPP3D file and run your own tests.

Why six or more?

The 2-element end-fire array is a one note wonder. It cancels at a single frequency in the rear. A more efficient option would be the gradient array, although there are exceptions.

from Subwoofer Array Designer

How do you design an end-fire array?

Space the elements in a line so that their operating frequency range fits nicely in between the preferred filters recommended in Subwoofer Array Designer.
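
For orientation, here is a minimal Python sketch of the usual quarter-wavelength end-fire layout: elements in a line, with each cabinet closer to the audience delayed by the propagation time across one spacing. The 80 Hz design frequency and four cabinets are example values only; on a real job, defer to the spacing and filters your design tool recommends.

```python
# Quarter-wavelength end-fire sketch: spacing and per-element delays.
# Example values only; use your design tool's recommendations on a real job.
c = 343.0        # speed of sound, m/s
f_design = 80.0  # example design frequency, Hz
n = 4            # number of cabinets

spacing = (c / f_design) / 4          # ~1.07 m between cabinets
delay_step_ms = spacing / c * 1000.0  # ~3.1 ms per cabinet

# Cabinet 0 is furthest from the audience and fires first; every cabinet
# in front of it gets one more step of delay so the forward arrivals stack.
for i in range(n):
    print(f"cab {i}: position {i * spacing:.2f} m, delay {i * delay_step_ms:.2f} ms")
```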

What's the smallest number of cabinets that can be used effectively?

As the number of cabinets goes up, so does the range of cancellation and consistency of coverage.

  • 2 cabinets: Cancellation at 1 frequency. Could be useful for fighting a single resonant frequency on stage. Otherwise prefer the gradient array.
  • 3 cabinets: Cancellation at 2 frequencies. Better than 2. Option to convert to a 3-element inverted gradient stack.
  • 4 cabinets: Cancellation at 3 frequencies. Now we’re talkin’.
  • 5 cabinets: Cancellation at 4 frequencies. Even better.
  • 6 cabinets: Cancellation at 5 frequencies. Begin to approach the point of diminishing returns.

Four elements is the most common end-fire quantity because it is effective and reasonably practical. Economizing to three units sharply reduces the randomization in the rear, leading to frequency-dependent reduction. Never end-fire with just two elements. It’s a one-note-wonder on the back side. Use the gradient in-line instead (same physical, different settings). We don’t have to stop at four, bearing in mind that the horizontal pattern narrows with quantity. Get crazy! RF antennas will end-fire 10+ deep.

McCarthy, Bob. Sound Systems: Design and Optimization: Modern Techniques and Tools for Sound System Design and Alignment (p. 321). Taylor and Francis. Kindle Edition.
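The cancellation counts in the list above fall out of a simple free-field model: behind the array, the geometric path and the electronic delay add, so adjacent elements end up offset by twice the spacing, and an N-element line nulls at N-1 frequencies below the first repeat. Here is a small Python sketch of that idealized model (omnidirectional point sources and exact spacing-over-c delays assumed; the naming is mine, not from the book or Subwoofer Array Designer):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s


def rear_null_frequencies(spacing_m: float, num_elements: int) -> np.ndarray:
    """Rear-axis cancellation frequencies for an ideal end-fire line.

    With delays set for forward summation, adjacent elements are offset by
    2 * spacing behind the array, so the N-phasor sum nulls at
    f = m * c / (2 * N * d) for m = 1 .. N - 1.
    """
    m = np.arange(1, num_elements)
    return m * SPEED_OF_SOUND / (2.0 * num_elements * spacing_m)


if __name__ == "__main__":
    for n in (2, 3, 4, 5, 6):
        nulls = rear_null_frequencies(1.0, n)  # hypothetical 1 m spacing
        print(f"{n} cabinets: {np.round(nulls, 1)} Hz")
```

Real cabinets, directivity, reflections and tolerances smear these nulls, so treat the numbers as a starting point rather than a prediction.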

A more consistent polar pattern?

I had never considered using all-pass filters in place of pure delay before Tamas asked about it, and I was excited about the possibility. Unfortunately, my experiments do not reveal a significant improvement using 2nd-order all-pass filters over pure delay.

Frequency | Opening Angle w/ Delay | Opening Angle w/ APF
40 Hz | 172º | 160º
50 Hz | 152º | 152º
63 Hz | 134º | 144º
80 Hz | 120º | 116º
100 Hz | 106º | 100º
100 – 40 Hz | 66º | 60º

One interesting side effect was the development of a MATLAB script to calculate the ideal frequency and Q parameters for each APF. Let me know if you’re interested in hearing more about that and I can update the article or send you the script.
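I haven't included that script here, but as a minimal sketch of the underlying building block, here is a Python version (my own naming; SciPy assumed) that evaluates the phase response of a single analog 2nd-order all-pass section for a candidate center frequency and Q, which is the quantity you would compare against the pure-delay target:

```python
import numpy as np
from scipy.signal import freqs


def second_order_allpass_phase(f0_hz: float, q: float, f_hz: np.ndarray) -> np.ndarray:
    """Unwrapped phase (degrees) of an analog 2nd-order all-pass,
    H(s) = (s^2 - (w0/Q) s + w0^2) / (s^2 + (w0/Q) s + w0^2)."""
    w0 = 2.0 * np.pi * f0_hz
    b = [1.0, -w0 / q, w0 ** 2]   # numerator coefficients
    a = [1.0, w0 / q, w0 ** 2]    # denominator coefficients
    _, h = freqs(b, a, worN=2.0 * np.pi * f_hz)
    return np.degrees(np.unwrap(np.angle(h)))


if __name__ == "__main__":
    f = np.linspace(20.0, 200.0, 10)
    # 63 Hz and Q = 0.7 are placeholder values, not the optimized parameters.
    print(np.round(second_order_allpass_phase(63.0, 0.7, f), 1))
```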

How do I use SubAligner with end-fire arrays?

Measure the distance to the main array as you normally would, but for the subs measure the distance to the furthest subwoofer in the end-fire array. All of the other subs in the array are aligned to that one as well.

Distance measurements
SubAligner recommendation
SubAligner plot

Here’s a direct link to this alignment if you’d like to use it in SubAligner.

Resulting phase alignment in MAPP 3D
Prediction at 63Hz

Don’t we need to add 4th order filters, as suggested in Subwoofer Array Designer?

Normally, yes, but in this case there is already a native low-pass slope of 24dB/oct.

Have you tried end-fire arrays on your shows? What were your results?

How do you calculate cardioid subwoofer spacing? (gradient array)

By Nathan Lively

First, the wrong way: forward maximum summation using the geometric mean, spacing = ¼ · c / √(F1 · F2).

It works, but it’s not the most efficient method.

After a discussion with Merlijn van Veen I learned to space the gradient subwoofer array (commonly known as a cardioid array) using arithmetic mean to match the region of greatest summation to the operating range.

Now, the right way: forward maximum summation across the entire operating range.

  1. Find the center of the operating range using the arithmetic mean, e.g. Fc = (F1 + F2) / 2
  2. Find the wavelength, e.g. λ = c / Fc
  3. Take one quarter of it, e.g. spacing = ¼ λ
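Here is that calculation as a minimal Python sketch (my own function name; it assumes c ≈ 345 m/s, the value used in the worked example below):

```python
SPEED_OF_SOUND = 345.0  # m/s, matching the worked example below


def gradient_spacing_m(f_low_hz: float, f_high_hz: float, c: float = SPEED_OF_SOUND) -> float:
    """Front-to-back spacing for a gradient (cardioid) sub array:
    a quarter wavelength at the arithmetic mean of the operating range."""
    fc = (f_low_hz + f_high_hz) / 2.0   # arithmetic mean, not geometric
    return (c / fc) / 4.0               # quarter of the wavelength at fc


if __name__ == "__main__":
    # Operating range of 35 to 100 Hz, as in the Nexo RS15 example below.
    print(round(gradient_spacing_m(35.0, 100.0), 2), "m")  # about 1.28 m
```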

For more on different kinds of averaging, please see Know Your Audio Analyzer Averages.

geometric mean: the nth root of the product of n numbers.

Wikipedia

arithmetic mean: the average of a set of numerical values, calculated by adding them together and dividing by the number of terms in the set.

Oxford Languages

Example

Nexo RS15

“Hey Siri, what’s the average of 35 and 100?”

It’s 67.5.

“Hey Siri, what’s the wavelength of 67.5 Hz?”

It’s 4441 km.

Hmmm, that’s not very helpful.

“Hey Siri, what’s 345 divided by 67.5?”

It’s about 5.11.

“What’s a quarter of 5.11?”

It’s about 1.277.

Ok, let’s try using a spacing of 1.27m in Merlijn’s Subwoofer Array Designer to see if we can fit the operating range in between the preferred filters (yellow triangles), which designate 3dB of summation.

Bingo.

Here’s an excerpt from my recent workshop: Follow the Sound System Tuning Roadmap

More Questions

What about the aim and spacing between gradient arrays?

You could space two gradient arrays so that they meet in the middle of the audience at their off-axis (-6dB) points, but you risk introducing further asymmetry into the design by moving the arrays out from under their respective main arrays. Instead, I would prefer to leave the left and right subwoofer arrays underneath the left and right main arrays, but aim each sub array out until off-axis left (OFFAXL) matches crossover left-right (XLR) in the center.

Here’s a prediction with effectively the same result, but achieved through spacing and then aiming. Only one sub array is on to make the result more clear.

gradient spaced vs aimed

Here’s a top view showing both arrays on together, demonstrating reduced interaction and power alley in the middle.

gradient array aiming

Alternatively, if real estate for rotating the array is not available, you could rotate the alignment position in the rear in towards the stage until the desired aim in the front is achieved.

How do you space an inverted gradient stack in landscape mode?

Don’t space them evenly across the front of the stage. You may be unnecessarily lengthening the line and narrowing the coverage.

Do leave 6 inches between enclosures to improve efficiency. For more, see this article.

9-element gradient array comparison

Why do you polarity invert the rear sub in a cardioid array?

Do you have other questions about gradient arrays? Let me know!

When will this ‘immersive’ fad be over?

By Nathan Lively

Subscribe on iTunes, SoundCloud, Google Play or Stitcher.

Support Sound Design Live on Patreon.

In this episode of Sound Design Live my guests are the Director of System Optimization and Senior Technical Support Specialist at Meyer Sound, Bob McCarthy and Josh Dorn-Fehrmann. We discuss the perception of immersive sound systems, from marketing nonsense to powerful system design tool.

I ask:

  • When is this fad going to go away?
  • How is it possible for each audience member to receive uncorrelated signals? If every array source is covering the entire audience, won’t every audience member experience a 5x comb filter?
  • From FB:
    • Robert Scovill: Is Galaxy, when it is used in immersive systems, considered a “spatializer” by a given definition? I know Meyer are incorporating delay matrixing within the unit to achieve the spatial aspects of their SpaceMapGo application, but I’m curious if units like Astro Spatial and L-isa, Timax etc., are functionally – or mathematically – different than what Galaxy has to offer. How does Meyer define an “object” – is it a speaker output? Or an input source to the spatializing device?
    • Aleš Štefančič: I was wondering how far into the audience the immersive experience can be achieved before all those separated signals become combined, and does that then cause cancellations in the back of the room?
    • Lou Kohley: When will this fad pass? 😉 Seriously though, Does Meyer see Immersive being commonplace or as a special thing for specific spaces.
    • Gabriel Figueroa: What do you see as the pros and cons of immersive in theaters that cater to both music and spoken word? Especially rooms with difficult dimensions, where traditionally you would add a speaker zone/delay, but now you could theoretically add not just coverage but imaging as well!
    • Robert McGarrity: Total novice for immersive programming, but where do you delay to? Is there a 0 point?
    • Angelo Williams: Where Do we place audience mic’s in the room for capture as objects?
    • Lloyd Gibson: I thought 6o6 was against stereo imaging in live sound because of the psychoacoustics and delay/magnitude discrepancies seat to seat. Does this not apply here or is there a size threshold where it can be successful?
    • Sumeet Bhagat: How can we create a good immersive audio experience in venues with low Ceiling heights ?

It’s a totally different experience mixing in a wire versus mixing in the air. That’s the beauty of immersion, but you have to be able to pull it off.

Bob McCarthy

Notes

  1. All music in this episode by LXIV 64.
  2. Spacemap Go
  3. Quotes
    1. Noise is the number one complaint at restaurants.
    2. There’s no upside to unintelligibility, but…intelligibility isn’t the only thing. We’re willing to give up some of that approach [of mono center clusters] in order to get some horizontal spread. People are willing to give up perfection and intelligibility in order to get that horizontal experience.
    3. Spacemap is a custom panner, basically.
    4. Can I use smaller arrays if I use more of them? The answer is yes. Consider the Fender Twin Reverb. It does only one thing: reproduce the guitar, and it can ruin the experience for everybody because it’s so freakin’ loud. So how do those two twelve-inch speakers out-do our whole $100,000 PA? It’s an object device that only streams a single channel, while [the sound system] is reproducing 32 channels or something like that.
    5. Time doesn’t scale.
    6. It’s a totally different experience mixing in a wire vs mixing in the air. That’s the beauty of immersion, but you have to be able to pull it off.
    7. One place I throw up a big red flag is people wanting to play matrix games with their under balconies and front-fills. It’s like, stop it stop it stop it.

Transcript

This transcript was automatically generated. Please let me know if you discover any errors.

Welcome to Sound Design Live, the home of the world's best online training in sound system tuning that you can do at your own pace from anywhere in the world. I'm Nathan Lively. And today I'm joined by the director of system optimization and senior technical support specialist at Meyer Sound, Bob McCarthy and Josh Dorn-Fehrmann. Bob and Josh, welcome to Sound Design Live.

Hi, Nathan. Welcome. Thanks for welcoming us, I guess.

Yeah, good to be here.

Okay. So I definitely want to talk to both of you about Immersive system design. That’s what we’re here to talk about. A lot of people sent in questions. It is an exciting or polarizing topic, depending on how you look at it right now. But I hope by the end of today’s conversation, you may have some more information about it, and you may feel differently about it. We’ll see I may feel differently about it, but before we do that, I would like to know from each of you what was the very first concert you ever attended.

Can you remember whoever can remember first for me?

That's easy, if you consider a concert at my elementary school gymnasium. That was Charlie Fer and his band, and they played "I don't give a blank about a greenback dollar." And they literally did it that way because it was in the Catholic school auditorium, so they couldn't say damn. And I thought, wow, this is really cool. We're all at this concert together and everybody's cheering and they're playing Peter, Paul and Mary songs just like my records. And I didn't even know such a thing was possible. This is really cool.

Oh, man, that’s way better than mine. Mine was a Christian artist of some sort. I was really involved in the Church when I was a kid, and I think it was Rebecca St. James. Maybe it was like a first Christian concert was like, what I did first.

And then you were both steeped in religion from a young age.

Yes. I grew up in Louisiana, and I moved to Texas.

The thing about my first concert was that it was people that I knew; I went to school with their younger brother. So it was, like, real people. So that planted the seed in me that real people can play music for an audience. And that was like, okay, this is awesome. I want to be part of this. And there you go. Right there.

Then, of course, my first thing go ahead.

My first big rock show was Grand Funk Railroad.

Oh, yeah.

I'm your captain.

I’m glad you pointed it out because I think it seems like a magic trick for a long time. Right there’s, like, these magic things that are happening on stage that are making us feel feelings. And it kind of seems distant. We put the artist up higher, like we’re down lower. We’re disconnected from them in a way. So when you start to meet those people and see that they were once like you and also maybe knew nothing about music or how to play music or audio or physics or anything.

And then they learn that stuff, then your brains, you start to see, like, oh, maybe I can get involved.

Yeah.

Totally. I also got into theater really young. I remember watching shows and just at high school productions and being in elementary school and going to go see Anne Frank or whatever. And it was funny. We saw Anne Frank in Lafayette, Louisiana, at the Big Performing Arts Center. And at the end of the show, we got on the school buses. And the person playing Anne Frank was smoking a cigarette outside, and it totally ruined the magic and the spectacle. And that was probably the first memory I have of, oh, this is something that people actually do that are human beings.

It’s very interesting.

There’s a person inside that mouse costume.

Yeah.

Well, another seminal event like that for me was when John Huntington wrote his book on control networks and control systems. Exactly. And it's like, I had known John for ten years, and it's like, well, gee whiz, if John Huntington can write a book, I can write a book. Seriously. It was that much of a knock on the head. And that was a big piece of pushing me forward to write a book. That's fun. Yeah.

It’s really helpful when we see our colleagues doing something like, oh, this person can do it. I can do it.

You don't have to be a college professor or have mixed the Beatles albums to write a book about audio, or to be Harry Olson. You can write if you've got something to say.

Yeah.

And I remember when he then went on to self publish a future edition. So he’s been a good role model for a lot of us who want to, like, publish and stuff.

Exactly.

So, Josh, when are you coming out with your book?

Oh, man. I wrote a thesis for graduate school while I was on tour, and that was hard enough. And that was about 50, 75 pages. And it’s on restaurant sound design. So it was a great excuse to tour around the country and eat at great restaurants and talk about noise and how to elevate the dining experience.

Would you mind sharing a couple of pieces from that? Like, what was one of your biggest takeaways from looking into a lot of restaurant sound design?

Well, yeah. So noise is the number one complaint in restaurants. Right.

And they tend to just make that worse by putting sound into the space.

Oh, yeah. And it comes down to this: we deal with this all of the time in installs, churches, theaters, wherever. But the same thing happens with restaurants. And one of the interesting things about noise is that at a certain SPL it starts activating your fight-or-flight sort of mentality. And they see that as things get louder, the rate of consumption of food and drink actually goes up. And I think it's somewhere near, like, 20% in some of the studies I was looking at.

That’s actually good for the bottom line.

So imagine something like a Chipotle.

People are stressed out.

Yeah. That’s why you go into a Chipotle, and it’s just concrete walls and glass everywhere.

Really. They did that on purpose.

Partially. I don't know. You can walk into these fast casual restaurants and that's the architecture. And then that architecture trend has carried over. And so there's all sorts of synesthesia-type research going on about how frequencies affect taste and all sorts of different things. It was a very interesting thesis. I went to grad school at UC Irvine in California for sound design for theater. But, yeah, I was very interested in that. And then it sort of all came together right before the pandemic at a restaurant called Verse in Los Angeles.

Manny Marroquin, who owns Larrabee Studios, a very famous mix engineer, took over a restaurant space right next to his recording studio, and we put a Constellation system in there for full acoustics. You can use Spacemap Go, and you can also have PA. There's an X40 system of PA on sticks. It's basically my thesis in a restaurant, and it actually exists now, and they actually have a fiber line connecting to the recording studio. And the RT of the room is like 0.5 seconds. So it's like a studio inside the restaurant.

And we adjust the acoustics for whatever bands are playing. And then we also use Constellation technology for what we call voice masking, so we can sort of isolate the tables. That way you're having a nice conversation with someone and you don't have to yell across the room or hear other people's conversations.

I feel like we should do a whole other podcast on this, because now I’m wondering I was thinking that I can sort of pitch customers and clients on my work and on sound systems in general by saying, hey, the better the sound, the more money you’ll make. But it sounds like that’s not always true. So really, we should make the sound worse to help them make more money. But then their customers are also going to be stressed out. Like, Where’s the connection there?

Yeah. I think it depends on the goal of the restaurant. It's like a good system design: what are your goals and what are you trying to accomplish? And then physics gets involved as well. Everyone will love a better acoustic up to a certain point in a room. If your room sounds too dead and you don't energize it with reverb, it sounds almost anechoic, like you walk into a cinema and you're eating. That's not a good dining experience. But if it's got a little bit of an uplift, where it elevates and it has a little bit longer reverb time and more early reflections, then you have an energetic room.

The problem with restaurants is you have too much reflections, you have so much hard surfaces, so many things. And then you have people that start trying to talk over each other.

Got it.

And it just creates white noise. So, yeah, there is a balance and you have to find it. That's the job of the architect and acoustician. The cool thing about Constellation is you can build a dead room, and then we can make the room whatever you want it to be and change it at the push of a button. So you can do the tables when people are dining, and at the push of a button when the band gets on stage, it can now be a concert hall or a theater or whatever you want it to be.

And Bob, would you agree that there is this balance where it’s like the sound needs to be good enough in a commercial or restaurant space so that you feel safe and you want to stay there. But then not so dead. I guess that you’re not interested in being there and you don’t want to like, I guess, drink and eat. Have you seen that in the wild?

Well, I think that if a restaurant has overly absorptive acoustics, which is so rare; where do you find that? Maybe in the old school, the old-school steakhouse with the furry booths kind of thing. So if it's really dead, you've created an environment where you'd better have people far apart, because if it's that dead and people are close, then you're literally hearing everything, exactly and clearly, that everybody else is saying. So the dead restaurant and the booths, those sort of go together, because you've got separation then.

But what I find is that you have this situation where the background music tries to fill up that void, to make people feel like they are not alone, that the place is alive. But some of these places will have these sensor mechanisms that raise the background music to make sure that it's over the talking, and that's, of course, in my mind, reversed. It should go down. If the place is already so full of people talking and having a good time, don't send the music up, because they're already having a good time. Just bring that thing down and de-escalate, so that people don't have to have the shouting experience and the "what?" and the "what?"

And that feeling where you’re with a party of eight and you really are only able to talk to the person on your left or on your right. And that’s really all why.

We're talking about immersive experiences. And a restaurant experience is an immersive experience. You're surrounded by people. You're dealing with the various acoustics in the room. Verse restaurant in LA and a couple of other restaurants have a ton of speakers, and they do a lot of other crazy things with it. But the experience of being surrounded and experiencing what's happening in the restaurant is key. And so we do things like we'll raise the acoustic and make it a little more vibrant and energetic in the bar area. So it'll be more vibrant by the bar.

And then the rest of the restaurant will be a little more quiet, less reverb, so that people can have a better conversation. And you can sculpt all of this with a technology like Constellation. But that's one of the many tools of immersive audio. And I think reverb and reverberation and room acoustics are a side of immersive audio that people are starting to get into more and more. But then you have the other side, which is more about speakers across the stage, speakers all around you, moving sounds around and doing things like that.

And so what is immersive audio? That's a big question to me. It's a marketing term, and whatever term you use, whether it's hyperreal, immersive or whatever, it all goes into the same bucket: it's an experience for people, live and in the real world.

So for me, the ultimate restaurant-plus-immersive-audio experience has got to be Chuck E. Cheese, man.

Exactly.

To those animatronic cheese balls on the stage.

There you are, so in it, there’s no windows. It’s just a warehouse, everything’s blacked out. So it’s just this experience they’ve created.

That’s probably my first concert experience, actually robots at Chuck E. Cheese.

Okay, Josh, thank you very much. You're a great co-host. We needed a transition from restaurants into immersive. And the first thing we need to talk about is sort of the tough stuff, because there are a lot of people listening right now, like me, who are thinking, when is this fad going away? Why do I need to care about this? And those people who are like me typically try to ignore things until it's something they have to know tomorrow. So I'm not going to look up the directions to the airport until I have a ticket to leave tomorrow.

And so I've been ignoring all this stuff about immersive for years. So a couple of years ago, I was in Orlando for a conference whose name I can't remember, and everyone was showing off their immersive systems, and I thought, this is really fun, but I don't need to worry about it, because this will never be a part of my life. It's so escalated in terms of complexity and expense that I'm never going to work on something like that. Fast forward to this year's Live Sound Summit.

And we've got Robert Scovill presenting about why he thinks immersive systems are so cool, why he's trying to pitch them to producers, event producers, for tours that he has coming up. And it turned into kind of this polarizing thing where it felt like we had people who had drunk the Kool-Aid and were on one side of the fence, like, this is so cool. And then people like me, who are still kind of on the other side of the fence, or on the fence, who are like, but wait, is this just marketers trying to sell me more speakers?

So we're all friends here, so I know you guys don't take any offense to me saying things like that, but I feel like we kind of need to go through this conversation before we get into some more of, like, the fun system design stuff. So I guess I want to give each of you a chance to say what excites you about this idea of immersive sound, what we can do to sort of allay people's fears, what we can do to take away the fear that this is something that is going to be forced on people.

Is that a weird thing to say?

No, I'm there with you, man. And I was there with you up until about two years ago. And, yeah, what's interesting is, from our company's perspective, we've been doing immersive audio for 30 years. One of the first products John Meyer made was a subwoofer for the touring production of Apocalypse Now in quadraphonic. That was one of the first. The cinema world has been doing it for a long time. Theatrical sound designers have been doing it for years and years and years. And so in this live audio world, where we have a stereo environment or even a mixed-mono environment, we're now moving into something different.

It’s scary. And I think people have every right to be scared. But before I talk, let’s bring it to Bob because he’s been working in stereo and mono systems for all of his career. So Bob, go for it.

I am not an immersive evangelist. It's not my role. What I do is try to give you guidelines so that if you're going to make an immersive system, you make one that's going to work and achieve your goals and not ruin your other goals. So for me, the laws of physics still apply. The laws of human perception still apply, the acoustic realities, the interactions between speakers. All those things still apply. So now you've decided you're going to be immersive. So here are the rules that you have to go by, or the guidelines.

I'm more likely to look at them as guidelines than rules, because nobody wants rules in a whole trade of cowboys. And so what you want to do now is, if you're going to do this, these are some guidelines to help you succeed. So to me, I go back to the easiest thing to think of. Okay, if we're going to make a successful system, the first thing it has to be is intelligible enough for people to understand the material. In the world of theater, they have to understand the words. In the world of house of worship,

they have to understand the words. In the world of rock and roll, it's pretty helpful to understand the words, although a lot of times it's not sung with that kind of clarity, and you can bend on that. But it is really helpful to have. There's no upside to unintelligibility. But if you look at why we don't have mono center clusters all around the world doing all of our shows, it's because intelligibility isn't the only thing. We're willing to give up some of that perfection.

And the absolute bestness of that approach in order to get some horizontal spread. And I think a big piece of it is that most people are born with two ears, two functioning ears, and you want to hear a horizontal panoramic spread because that tickles your brain in a really positive and engaging way to have things spread over a horizon. So left and right is here to stay. It’s not going away. And people are willing to give up perfection and intelligibility to get that horizontal experience. And then that brings you to the next big chunk going to three channels.

L, C and R: the world of cinema crossed that road a long time ago, and they were very troubled by the fact that if you just go left and right, as soon as you're one seat off the center, you image to that side anything that's panned to the center. And it's an unsolvable equation, no matter how much somebody tells you they've just invented a new magic filter that time-smears and blah, blah, blah. I don't want to hear about it. You sit one seat off the center in an arena and everything mixed mono is on the left side.

Okay, everybody knows this. We don’t want to admit it, but everybody knows. So the deal is if you want the vocal or some center image to stay in the center, you need a center channel. That’s why you have a dialogue channel. But you have to now not go and put everything in all three channels where you can sort of put a lot of things in left and right. But when you start putting in left center and right now you’ve got a problem because they are going to have all sorts of fights.

The correlated comb filter fights that we all know about; my life's work is screaming and putting up flags about this subject. Once you go to this, you've crossed the line, and now you need to take a decorrelation approach. That is, I've got to put different things in the center than I put in left and right. And if I'm going to take that approach, that center channel has to reach all the seats, if it's going to carry the big voice, the big star, the lead of the show; if it's theatrical, it's going to carry the vocal content of the show.

It can't just be a 90-degree speaker that covers one half of the room, which you can get away with on your left and on your right. So now you have a pretty hard and fast rule that if you're going to make a channel as a standalone to cover the whole room, it has to cover the whole room. And that is the key thing once you've crossed to three and you've got left, center and right. Well, now, crossing over to adding surrounds on your sides and on your rears and on your overheads...

Those are just more versions that follow a similar set of guidelines.

Yeah. Bob and I joke a lot about, okay, you’ve spent all of your career separating coverage and making sure that everything is separate but equal. And now we’re doing the exact opposite and just overlapping everything. And people are like, well, what about the comb filter? And then that’s where the processors are doing all of the magic. And yeah. So on your question, is this a fad? I think it’s a tool. It’s not the right tool for everything. It’s not the right tool for every situation. And for the exact reasons that you laid out cost sometimes is prohibitive.

There are arguments from different manufacturers of why one is better than the other and how you can save money. Some people say you can have a smaller line array. Some people say, since your headroom is spread out amongst your five across the front or whatever, you can use smaller speakers because you’re distributing that through multiple loud speakers. And there is snake oil in the industry. As a mentor once told me, Audio is a series of compromises and snake oil salesmen, and you have to figure out what is true and what isn’t.

And there's a lot of snake oil in our industry, from the gold and platinum power cables to all sorts of other things. And marketing is a thing. People are trying to sell speakers with this, and I don't think they're being honest if they say they aren't trying to do that. But with immersive audio systems, what we did with Spacemap Go, which is a technology that's been around for almost 20, 25 years, was say, okay, let's just make it free. So it's a free update to your Galaxy. And where we get into system design, what's really happened from a marketing perspective is these new up-and-coming immersive systems require you to have a lot of fixed loudspeaker locations, and they say you must have five across the front.

You can have seven across the front. You can have this many on the sides. You can have this many above you. Dolby has a spec on how to design sound systems for cinema. And so people are used to these rules, and they're, like, static: I have to do this, and I have to have this many speakers in order for this to work.

All right, let me pause you, Josh, because we're about to bust a myth. So let me introduce the myth, which is something that I believed until a couple of months ago: that immersive meant five times the expense and five times the complexity, because you take your normal mono system, and then we're going to upgrade to immersive, and everything gets multiplied by five. And that makes it really easy for me to ignore and say, oh, this is a fad, because no one can actually support this kind of expense and complexity.

We can barely get mono and stereo systems, right? How can we do this? And so you have been a big proponent of pointing out to people how flexible this is. And it’s a container for new system locations and system designs and not rules. Okay, so continue.

Yeah, not rules. The only rules that we like are physics. And those physics rules still apply. Pick the right speaker, put it in the right place and point it in the right direction. Now, that's different for mixed-mono and stereo systems than it is for immersive systems. Those are the only three rules: pick the right speaker, put it in the right place, point it in the right direction. Now, we at Meyer Sound and Spacemap Go have a lot more flexibility in terms of what you can make an immersive system out of, because of our algorithm, and we can get into the weeds about this.

But the space map algorithm and what a space map is is a custom panner, basically. So you can make a space map system out of one speaker, and that’s a Panor that you make. And the difference between space map and what everyone else is doing is that we allow you to make the panner. So let’s say you have a theater and you spend a ton of money on a five across the front system on the sides and around you like, you have a full 360 degree shell of loudspeakers where most of these immersive systems are failing is they only let you drive that system one way.

So if I use their GUI in my object panner and I move that object of my guitar to the top left corner of that panner, it’s only going to come out of the top left side of the sound system. The difference between that and space map is that we can say, draw the space Mapp to control the loud speakers however you want. So it’s like having a Ferrari and driving it like a Prius because you’ve spent all this money on loudspeakers, but then you’re only allowed to move sound around in very certain ways.

Whereas if I can draw a space Mapp for that room and have a sound zigzag and zip around every other loud speaker, send to all loudspeakers, send to just the vertical and then cross fade down to the sides. You can do some incredible things with the space Mapp technology, because space maps are abstractions of loudspeaker layouts that you draw. So instead of having one fixed location, you can draw the space map to be whatever you want it to be, which is very different than what this is.

But ultimately, the technology that all of these companies are using, including us, is a big cross-point matrix, and they're using either level and delay, or just level, or just delay. And then there's all sorts of other algorithms that people do and do not tell you about. Most companies don't show you what's going on under the hood, whereas you can see the matrix values in Galaxy while this is happening, to see what math is actually going on. So, yeah, this is something we can get into, but we can make a Spacemap system, an immersive system, out of three speakers: put them in a triangle.

And if you're in the middle of that, and those speakers can be on sticks, you can pan sound around those three speakers. It's like a sandbox of system design compared to the others. And the reason for that is very particular, because when Spacemap first got started, it was designed in a geodesic dome. Back in 1979, Steve Ellison was in Australia, and he had to work on an Apple II computer. There were speakers all along this geodesic dome, and he had to figure out a way to mathematically move a sound around to each one of these speakers.

And it was inspired by the geodesic dome. And then a couple of years later, he and Jonathan Dean started a company called Level Control Systems. And the first show that the technology got deployed on was an arena touring show called the George Lucas Summer Spectacular Adventure. Yeah. And so there were over 150 people in the audience for the first show that they deployed this technology on. That was in the 80s, like, early 80s. And so since then, what we've done is worked with sound designers, really in theater and big spectacles, and started adding to the tool set that's needed. Again, audio is a series of compromises, and live sound...

What we do as live sound practitioners is incredibly difficult. And so we need to have a system that is flexible enough to overcome the challenges that we face on a day-to-day basis. Oh, I can't put my speaker there because there's a wall. Okay, well, just draw a virtual node in Spacemap and make a virtual speaker there. So all of these tools that have been added to Spacemap over the years have really evolved with the mindset that it's a live sound tool.

It needs to be flexible and scalable and easy to deploy. What we didn’t do for years and years and years was make it easy and accessible to use. It was very expensive. And some of the new Immersive processors out there from other competition and companies are incredibly expensive, and they require you to have two, and they almost handcuff you. So you buy this Ferrari’s worth of loudspeakers for your room, and you buy this processor, and then you can only drive it like a Prius because they make you only be able to move sound in the way that the room will look.

You’re getting all worked up.

I know, it's just frustrating, because the rules are a marketing thing that these companies are saying. What's cool about the Galaxy is we can... it's just marketing.

You want to make something that people can reliably make work. So you put some guardrails on it, and their approach is to make a thing that shoots straight down the middle of the road, and it works in middle-of-the-road applications, and it's repeatable, and it stays in this safe kind of repeatable thing. What we have done, because it goes back to the start of this as a creative place, is to make a non-guardrail version, but it comes in kit form that you have to assemble yourself.

So you have to say, okay, here it is. There’s a pile of stuff on the floor. It’s like a bunch of Legos. You can build it into anything, but you have to build it. You have to conceive of the sound design. So it’s not something. It just pops up into your brain. And as far as that one size fits all sort of mentality. Now that runs into realities, such as the shape of the physical room of where you can put speakers. So if you make it so that it’s always just that it’s just for a standard arena shape.

Okay, there you go. But we have taken a thing that is ready to go in whatever shape that you do. Whatever you’re in. My first one was in the literal Planetarium. It was under the sea, the little Mermaid. And we had speakers around the circle, 360 degrees of laterals. And we had speakers in the center and speakers up in the Dome. And the Mermaid flew up and down, swam up and down. And the sound image came up with it as you turn on the lower speakers or the upper speakers.

And the characters all ran and swam around the Dome. You could image to these things. And this was in 2001 at Tokyo Disney Sea, and we literally built that thing for that place. And those trajectories are only for that application. So it’s not universal. It’s a custom fit. I don’t want to take the approach of really of talking about or disparaging other platforms. My thing is that we have a platform that can make a five channel with laterals and things that can also make six channels or four channels or two mains and 19 surrounds or whatever it is.

We’re ready to go give me an application. I’ll bet we can do what you’re looking for. That’s what I have to say. I bet we can do it. It just might take a little time, but we can build something to that shape.

Yeah, and we can shape the Play-Doh however we need to. If we need to make it look like the room, and the panner to behave the way all of these other panners behave, then we can do that. But that's just a fraction of what a Spacemap can do. And it's really about creative imagination. The other day we had someone come up to us and talk about a need for an escape room and a maze, to sort of guide people along. It's a very intricate, zippering-around type of room with loudspeakers everywhere.

But the way you would do that with most panners is very difficult. Well, with Spacemap, since they're abstractions, we drew the layout as it would look with loudspeaker nodes. But then we used what are called virtual nodes, and we just made a linear fader. So as you drag your finger across the bottom of the Spacemap, it activates the speakers in the linear order that you want the user to experience as they're walking around that room. So this abstraction is really cool, because you can move beyond just the plan-view 2D representation of the loudspeakers that these other products have.

You could do that. It’s still fine, and it’s totally useful, especially when you’re first grasping how to deal with immersive systems. But then you can do things like I want this sound to play out of the speaker in front and then the speaker completely behind me and then above me and then Zigzag. And you can make these really fun space Mapp. And I have one that’s called a randomizer that I show in some of our work. And the randomizer was designed to emulate crowd noise in the Stadium during the pandemic.

And what it does is it just randomly sends level to about six loudspeaker locations, and it adds random level changes to an existing room. In this case, we used it to represent stadium audience sound with a mono signal.

We made it sound like it was all surrounding you, coming from everywhere, 100,000 people.

That’s cool.

I want to address one thing for a moment: can I use smaller arrays if I use more of them? And the answer is yes. Think about that. If you want an object lesson in that, consider the Fender Twin Reverb. The Twin Reverb does only one thing: it reproduces the guitar, and it can ruin the experience for everybody because it's so freaking loud. Okay, so how do those two 12-inch speakers outdo our whole giant $100,000 PA? Because it's an object device that's only streaming one single channel, and we are reproducing 32 channels or something.

Okay, so if you go to five mains and you partition your band into fifths, well, okay, now each of those has headroom available because of the decreased complexity and density of the waveforms it's reproducing. And I can tell you from experience, going back to 1974, listening to the Grateful Dead Wall of Sound, which was truly an object-based sound system: each instrument had separate columns of speakers, and if you put them all together, it would have been a big giant blur.

But as separate events blended and mixed now in the air instead of mixed in the wire, there you go. Now you have the ability to spatialize and you can still fill the same amount of acoustic energy into the space. But of course, when you scale the thing and get too big and get too far apart. Now you’ve started to offset time and you have a band when you put the guitar that’s 100 milliseconds away from the piano. Now you’re starting to get the experience of listening to the marching band at half time at the football game, which let’s face it, it’s not tight.

And marching bands are not tight. So the thing about scale is that time doesn't scale. So you get this thing overly large, you get it into stadiums and things, and time doesn't go proportional. It goes in milliseconds, and hello... hello... It's a real issue.

Yeah.

I was watching something with Robert Scovill, actually, talking about when he first did Rush in quadraphonic, and he tried moving Neil Peart's drum kit around the room in the arena. And he said Neil stopped. Neil Peart stopped. And Robert will have to tell you the story. But he said he stopped and was like, what is that? And it was the propagation time of the cymbals or whatever going back through the arena. And of course, he said that Neil Peart was good enough that he had figured out the time offset and adjusted his drumming to match what was coming back from the other side of the arena, which is amazing.

Yeah.

Thanks for bringing that up, Bob. I think that speaks to my question of isn’t this just a five X in expense, or is it more just like redistributing complexity and expense? Maybe there are some examples that each of you could share because I think the application for sound design when it comes to theater and circus events is really clear. The sound designer says, I want this, this and this to happen or it’s in the script. It says this happens and the sound moves around. But have you seen successful applications?

Are there interesting applications for concert corporate, some of these other places that a lot of us work in and might be wondering, is there an application that I should know about as an option for me, a sound designer, system designer. And have you guys seen that? Could it be a tool in those environments?

Oh, yeah, absolutely. We just did the AES Nashville event, and there was a spring training event. And one of the experiments that I personally wanted to perform was take someone who’s worked in stereo most of their life and just give them as minimal training as possible and put them in front of a fully immersive system and see how easy it is for them to work. And so we hired Pooch, and we invited Pooch to come in and work on it. And we did five across the front.

There’s also existing line arrays. So we tied into those as well. We did a full surround system only running on two galaxies, so really processing wise, it was two galaxies worth of outputs, 32 outputs, I think, and speakers. And that experiment seemed to work pretty well. What Pooch found was he had to reduce his dynamics, the amount of dynamics and compression he was putting on things he had to use less EQ, and he could space things out the way he wanted. One thing that also came from that was instead of using five across the front, we found ourselves wanting a little more width on the outside of the stage, and so we could easily have done a left center right and then had two sort of mid hangs to really bring out the width of the image.

To understand how all of these systems work. Let’s talk about what we’ve done in stereo, which is we’ve had inputs. Those inputs have gone into something like a console. And then out of the console. We’ve always had either one or two channels, stereo or mono or mixed mono. And those have been then distributed to loudspeakers amplifiers, whatever across the stage. Now, with Immersive systems, what’s happening is you have your inputs, they go into a console still, but then out of the console, instead of having one channel or two channels, you now have 32 channels, sometimes 96 channels worth of outputs, whether it’s buses, you decide, Auxes buses, whatever.

And so all of those new channels can be sent different things. So you now have 32 pipes that are going into the loudspeakers. And so there’s 32 separate pathways in the instance of Space Map go. So my drum kit could be on three channels. Maybe my kick snare is one, maybe my overheads are a stereo channel. And now I can move my drum kit around in a group of things while that sound is moving around those pipes. What’s happening is these Immersive processors are adjusting a level of a matrix row, and sometimes they’re adjusting delay of a matrix row as well.

That's what's called cross-fading delay. So that's sort of the basics of how immersive audio works. And then everyone's got their marketing term and secret sauce for what math they're using to do it. You'll hear terms like wave field synthesis. We're using Spacemap, which is manifold-based amplitude panning and barycentric panning. Yeah. A manifold is a map, and you can actually look this up. There's an AES white paper on it about manifold-based amplitude panning.

So if you think about what a Spacemap is, Bob, it's a map of the room. A manifold is technically a map, and the math that goes behind that is all there. So all of that is to say that I think the expense of this is really in the processor. And the expense then carries over to other things. You're now dealing with an element per output. So in a system that has amplifiers that aren't in the speakers, you then have to have a lot more speaker cable, which is way more expensive than XLR, up to each line array.

Well, you have to have separate channels if you're doing side surrounds. There are six surrounds going along the wall. If it's cinema style, old school, that can be run off of three speakers per output; you can run it on two channels, one two-channel amplifier. But if you're going to do full immersive, you've got six channels. It's going to take you three times as many amplifiers, and there's no jumpering of speakers and cables to the next thing. So it's all home runs. It's all individual channels.

If it's a two-way, now there's a crossover involved. All of those things, it all adds up. So if you want to go and make things move all around, it's going to cost you channels to do it. You have to have a discrete audio location.

Yeah. And there’s this other concept in Immersive audio that Bob and I talk about a lot is granular movement versus sort of more wider movement. And the way to think about this, let’s say you have four speakers and you put those four speakers in each corner of the room you’re at. If I want to sound, to move around, it will move around. But depending on how far my speakers are spread apart, how close they are together, my ear brain mechanism and my internal FFT transfer functions that are happening will determine where that sound is.

And we have some fudge factor. They call it the audiologists. Call it the cone of confusion, which is like right after your, like, 180 degrees, your peripheral vision, it’s about 15 degrees. You can locate one degree in front of you, but then it starts going like, 15 degrees, and then behind you, it’s a little bit. We, as mammals basically visually, can really locate on the horizontal. But anyway, that’s the Sidebar. But you move it around. You have four speakers, you move it around. Now, let’s add three speakers on each side.

That is more granular. And if I move the sound around, I can locate a lot easier to where that sound is coming from.

It seems like you're going from coarse to fine.

Yeah, coarse to fine grain.

I kind of look at it. As do you have hours on the clock? Do you have minutes on the clock or do you just have the Cardinal directions? Is it just East, north, southwest? That sort of thing? You look at your basic old school cinema surrounds your 5.1. That’s just the Cardinal directions. There are left surround, right, surround, rear surround. So it’s North, South, West. And then there’s the front, which is three channels. So the front is more granular, but the sides are not. Whereas as you break into more discrete channels, you increase the granularity and your ability to move and locate things individually and to have a separation.

I think it’s a really important thing to consider just from a creative point of view. What are we trying to do? Because that was what originally was Nathan’s question here is in order for this not to just be a fad. What’s the creative drive behind this? And so one thing is the ability to place audio content in locations. And those can be static, so you can separate out and you can go five mains across the front and you can separate out the band. You can hear a bluegrass band, and you can hear them all separated and then mixed and blended in the room very much like a magnified version of what you would experience if you were standing there with those musicians in your living room, like enhanced realism.

But I don’t need that mandolin player to be running around into the ceiling. That’s not really part of the creative event. Okay, so there’s moving things, and then there’s static separation in the left and right is not enough because we end up with that perpetual problem of as soon as you’re off the center, everybody pans things in their brain differently. So the panning things are just for somebody that’s exactly on the center, and everybody else is governed by the physics of your binaural listening system, and it’s never going to be solved no matter how much somebody tells you, they’ve solved it.

So then when it comes to motion, there’s a whole lot of stuff, but you’re getting into creative content and special effects. Now there can be things that are like in theme parks, like stunt shows or animation like Pirates of the Caribbean and animation where it’s basically this gigantic projection screen in front of you that’s 360 degrees. There’s a full Planetarium Dome that you’re in in your little boat that you’re in. Well, you can place the sound image all along that Dome and there’s video that is flying across.

You can make that movement of the cannonball coming. All that is fantastic usage of this medium, to make motion link up to video. Now we're asking, like, is this all just a fad? I've concluded, in terms of the five-times expense, that it's video that's the fad. And once video is done, people are tired of it, they're going to give all that money that the video people normally had to us. And then we can do our five times. So, all right, that's the dream. But seriously, you have this capability to move things.

Now, what are you going to do with that? You have to have something that makes sense. If you're doing a classical music concert, moving things is stupid. But spreading them out is fantastic, because when you listen to a real symphony, it doesn't have to be where all of the violins come together with the oboe. It's not that way. They are coming from separate locations. So it's a beautiful thing to hear that. I can tell you one of my most really, truly exciting immersive experiences.

I'm talking a full goosebump experience. It was at Natasha, Pierre and the Great Comet of 1812, which was a theater production running on Broadway that had ten sound systems distributed through the room, each of them capable of covering the whole room. And the actors would come out not only from the stage; they would actually have parts where they were on the balcony and singing to you from the balcony.

And then they sing this song together. And it's this very gospel kind of chant thing, and it's coming out of all ten sound systems, but it's a choral blend that's not all collapsed down into left and right or down into one tube. It's literally blended in the room, which is what you get when you stand in a church with a choir. And it was just, like, head blown. It's like, that's through a sound system. That's the thing: there's using the ability to mix in the space, because it's a totally different experience mixing in a wire than mixing in the air.

That's the beauty of immersion. But you have to be able to pull it off and have the things scale, right? Yeah. As a choral blend with long sustains, it's a beautiful thing. The same thing they tried to do was a super tight, intelligible hip-hop Hamilton rap coming from ten sound systems spread all over the room. What do you say...

About the corporate and then the church? You mentioned those two examples for why this tool could be important. I think corporate is very useful. We have a couple; the Audi Experience Center, I think, is one that just opened up, and sound art museums. But let's think about these corporate car shows. That's a great place for this. When you have your CEO that's about to walk out and they need a spectacle of sound and movement and stuff, that's great. But when they start speaking, we need them all to focus on our presenter.

Let's say you're doing a big presentation of a product and your CEO is on a microphone and is walking around. Well, you could put them on a tracker and have them walk around. How distracting that is depends on how you feel about it. I find it extremely distracting sometimes when the sound is moving as the person is walking around, but it's totally possible. But if there's a band on stage, we can spread the band out and make them sound like where they're coming from, and be very realistic, and add depth to the feeling. For corporate, that's one way. And the same sort of rules apply for churches.

It helps out with houses of worship to really bring in the focus to the pastor wherever the pastor is. And then during worship during service, spreading out that music and spreading out where the choir is and where the drummer is and where the bass player is, it really just helps immerse. And then on the other side of that with things like space map. If you have a couple lateral speakers that are out into the room, you can then goose in some Reverb from your console there.

And now you’re enveloping and using the Reverb on the outer and the dry on the inner. And you can really start mixing the room as a room. And that’s the thing we’ve been putting things down. This one or two very large pipes for so long, and those pipes are great. Stereo systems are great. Mono systems are even better in most live sound applications. But those pipes can only be so big. And what we’ve done is what our whole careers as mix engineers has been is carving out space and the only minimal frequency spectrum that we have for every single instrument.

And so what’s cool about this is you don’t have to do that as much, because now you have 32 pipes instead of two pipes. And now you can sculpt just by separating the pathway into the loud speakers. And that’s the most important thing is you’re no longer frequency masking. What you’re doing is overlapping your speakers and separating your signals.

Mixing in the air. But one thing I just want to mention about houses of worship is we need to talk to architects, because they love that fan shape, that super wide fan-shaped room, and then they close the volume down with a fairly low ceiling. With those two things, then you want an immersive experience? Well, how are you going to do that? It's a shape that really defies immersion, because your audience is spread across this super wide thing.

You’ve got 160 degrees of audience and to get from the far left all the way and reach the far right, you’ve got to go across the whole middle. And it’s a really difficult thing. So you have to be realistic and calibrate your expectations, balconies those create a real thing. And then there’s the other really important thing is let’s say, okay. They say you’ve got the budget for five mains, except that here’s this one little proviso. They have to have clear sight lines. You used to be able to have your left and right down nice and sweet in the right place in the room.

Now you’re going to have five mains, all just at the same height as you would a center cluster, which has to be so that the people on the third balcony don’t have their block sight lines. Right. And so now everybody is 100ft tall. And to me, that’s a trade off. That is really you have a hard time telling me that that is a good trade off because you’re so disconnected from the show, you can’t beat the physics that you’re late. You are to the floor where all your prime seats are.

The sound system is arriving tomorrow with today’s newspaper.

Well, that’s a great transition. And maybe we can look at Gabriel Fiero question, and his question is a little bit long there. But basically he’s working in a Church and he’s wondering, is this an opportunity for Immersive? He says right now there’s only two arrays and a couple of side fills, and some balcony fills. No delays or proper center coverage. So I’m looking at the differences between a new, correctly deployed system versus Immersive for our next PA. Now, I should point out that Immersive also has to be correctly deployed, but I actually have some pictures of his space, and I can send them to you if you want, if that would be helpful for you to talk about this.

But the important thing that you just mentioned is that the ceilings get lower and lower as you get toward the back. And so to me, that seems like it's probably not going to work for them, or at least not for the people in the back, if they don't have a good space to consider immersive, right?

When you have a low ceiling in the back, you have to take an inverted delay approach. So you basically have a little speaker in the back that does a non-granular approach — the more traditional surround approach — in the back, and then maybe six rows forward you mount a larger speaker that's high enough to make a granular surround to cover the main part of the room. I'd have to look at the exact physics of the room, but essentially those get linked together by Space Map as derived nodes, as linked signals, so that you could pan the signal around and it would light up, in a granular approach, the big surrounds and then the ones that are on the outside perimeter.

Those light up as groups, so they perform just sort of an overall rear, whereas people in the center get the granular surround. I did a church design recently. It's a fan-shaped church — a very popular shape — with a fairly flat floor, but it has these ramps on the side that go up, and then there's a balcony over the 160 degrees. Okay, so there you go. What you're left with — there's the dog — so you're left with the ability to do a full granular surround on the floor center and then non-granular cardinal directions on the upper balcony and on the ramps on the side.

You’re forced by the physics. You’d have to kill people in the rear to get that to fire all the way to the front. And it’s not complaints always are that’s gain to stop your surround fantasies.

So, Gabriel, I think you should definitely take a look — there are three recent videos on the Meyer Sound YouTube channel about system design with Josh and Bob going through some of this stuff, and that should answer some of your questions. Because, as Bob's talking about here, you'll see that all of the sources need to cover all of the audience, and if they can't and they have blocked sight lines — as is the case with your people going deep under that balcony with the ceiling getting lower — then there's going to need to be some reinforcement somehow.

And so as you’ll see in these videos with Josh and Bob, they explain how you solve all these problems. But it does start to generate some complexity as you have blocked sight lines and portions of the audience that are not visible to all sources.

Yeah, one thing in the church market: under-balconies are another big thing, but there are tools to deal with these built into most immersive sound systems. I agree — I feel like the five across the front in a fan-shaped room is almost a marketing dream, especially on those extreme sides. But there's a way to do it in Space Map: have a left-center-right across each section of seating and then control each section as a left-center-right together from our front-of-house perspective, or even just stereo systems, but not stereo in the traditional way.

Stereo where the left and the right that are covering one section of that fan are overlapped. The one cool thing about this is, let's say you do have a smaller budget — something like a Galaxy, where you have 16 available outputs. So if you did a traditional PA up front like you normally would, and then for your Christmas spectacular production you brought in a couple of extra loudspeakers — well, if you have extra outputs on your Galaxy, then just plug those XLRs into those speakers, and now you can use those to send some sound around for your special Christmas spectacular sound effects, as well as still maintain the mix that you're using.

So yeah, there’s tons of options really depends on back to this course versus granular what do you want to do? What is the goal and the intent of the sound system?

Okay, so let’s get into some of these and let’s just see how far we can get. And then maybe we’ll even come back to some of my questions. But people are so nice to sending questions that I want to make sure we get to those. So Robert Scoville, I asked him, what do you want me to ask them about their system? And he said in Galaxy when it is used in Immersive systems, considered a spatializer by a given definition, and he doesn’t give the definition. So I’m hoping someone can say something about what a spatializer is.

He says, I know Meyer incorporates delay matrixing within the unit to achieve the spatial aspects of their Space Map application, but I'm curious if units like Astro Spatial and L-ISA, TiMax, et cetera, are functionally or mathematically different from what Galaxy has to offer.

Hey, Robert — I hope you're doing well. First question: spatializer. Space Map Go and the Galaxy itself — the Galaxy is a loudspeaker processor. So the cool thing is, the Galaxy will still tune your PA and do all of the things that Galaxy has done for years. Now, with the free update to Space Map Go, you can use level changes. I don't know what the definition of spatializer would be, but it is an immersive audio platform like all of these others, in addition to being a loudspeaker processor. And we're not using delay — Galaxy does have a delay matrix, and you can set static delay times on a cue basis or snapshot basis.

But we’re using level based planning very similar to what all these other companies are doing. And the difference between the three companies that you mentioned is, yes, their math is different. They’re not talking about what math they’re using, and timeax is delay based. Lisa, I believe, is only level based with a little bit of delay. And then Astral spatial, I don’t know enough about to really, I think it does both. I think it does delay and level based, but there’s ways around it. I’m working on a project right now that is going to be using a sort of static cross fade delay matrix to move someone from an A stage to a B stage, so as they move, the delay time changes and steps for the outputs.

But Space Map Go is level-based. It's not controlling the delay matrix of a Galaxy — you can still control that with Compass.
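To make "level-based panning" a little more concrete, here is a minimal Python sketch of a generic constant-power, distance-weighted panner. This is not Meyer's Space Map math (as Josh says, vendors don't publish their algorithms); the speaker layout, rolloff exponent, and normalization are illustrative assumptions only.

```python
import numpy as np

def level_pan(obj_xy, speaker_xy, rolloff=2.0):
    """Toy level-based panner: one gain per speaker output for an object at obj_xy,
    using inverse-distance weights normalized to constant power.
    Illustrative sketch only, not the Space Map Go algorithm."""
    d = np.linalg.norm(speaker_xy - obj_xy, axis=1) + 1e-6   # distance to each speaker
    w = 1.0 / d**rolloff                                      # closer speakers get more level
    return w / np.sqrt(np.sum(w**2))                          # constant-power normalization

# Four speakers at the corners of a 10 m room, object near the front left corner.
speakers = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], dtype=float)
print(level_pan(np.array([2.0, 1.0]), speakers))
```

Moving the object is then just recomputing these gains as the object's position changes over time, with no delay values being touched.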

I hope that answered that. I can't comment on the Astro Spatial because I got Pfizer.

Okay. Robert says, secondly, ask him how Meyer defines an object. Is it a speaker output or an input source to the spatializing device?

Yes. So an object, in general terms, represents a channel. If I have an output or a bus from my console — an XLR from my console, and that XLR cable plugs into an input of a Galaxy — then an object is that input. That object moves around the Space Map, and the Space Map is the custom panner that you design. So if I have 30 loudspeakers, I can have a Space Map that has all 30 in it.

Or I can have a Space Map that just has four of those 30 loudspeakers in it, and you draw the Space Map. Then, on top of that, we have what are called trajectories. Trajectories are pathways that you draw, and they control the objects automatically. So you can have tap tempo. If I want to move a trajectory around at a certain BPM — let's say I want the sound of my drum kit to go side to side, my cymbals need to move in time with the music —

I just tap in that tempo, and then there we go. It's moving left and right, and you can draw them to be as complex or as fun as you want. For example, I show an example all the time called Sound Source T-Rex. My wife basically just drew a T-Rex that is a trajectory, and I can load that on a channel and it controls the object and moves that sound around — whatever it is — in the shape of the T-Rex. And since it's on an iPad, you can expand it, contract it.

And this all happens in real time. That's one thing that no one else can do in the industry right now, and it makes Space Map really fun. So that's an object: an object is an input.
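For readers who want to picture what a tap-tempo trajectory is doing, here's a rough, hypothetical sketch: the object simply advances around a drawn closed path in sync with the tapped BPM. None of these names or parameters come from Space Map Go; it's only an illustration of the idea.

```python
def trajectory_position(path_points, bpm, t_seconds, beats_per_lap=4):
    """Move an object around a closed path in time with a tapped tempo.
    path_points: list of (x, y) vertices describing the drawn trajectory.
    Returns the interpolated (x, y) position at time t_seconds."""
    beats = (bpm / 60.0) * t_seconds           # beats elapsed so far
    frac = (beats / beats_per_lap) % 1.0       # fraction of one lap around the path
    n = len(path_points)
    f = frac * n
    i = int(f) % n
    j = (i + 1) % n
    a = f - int(f)                             # linear interpolation between vertices
    x = path_points[i][0] * (1 - a) + path_points[j][0] * a
    y = path_points[i][1] * (1 - a) + path_points[j][1] * a
    return x, y

# Square path, 120 BPM, one lap every 4 beats (2 seconds).
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(trajectory_position(square, bpm=120, t_seconds=0.5))
```

The resulting position would then feed a panner like the earlier sketch, so level moves around the room in time with the music.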

Okay.

So Alice Defancies has this question that I think will need a little bit of unraveling, because I think it expresses some assumptions — but I think it's good to get into, because they're probably assumptions that a lot of people have. Most people are familiar with the phenomenon that as you move farther away from a speaker, its coverage seems to get wider, unless you have an asymmetrical horn. So I think that's the sort of thinking behind his question. He says: I'm wondering how far into the audience the immersive experience can be achieved before all those separated signals become combined.

And does that then cause cancellations in the back of the room? Now, right away — we've already talked a little bit in this conversation about the system design, and how we actually want all of our sources to be covering the entire audience. So I think he's assuming that we actually want them to all be separate signals. So, Josh or Bob, do you want to try to speak to this question a little bit?

Well, of course, a speaker's coverage from an angular point of view stays constant over distance, but as a width in terms of meters or feet or whatever, it's getting wider. That's the simple physics of it. So when you're too close in, you're going to find that you are simply prohibitively close to something — the inverse square law is going to get you; you're just too close. If you get up on a ladder and stand next to the side surround, yeah, you're not going to have an immersive experience.

What we do is define the room from a design point of view. We have this thing called the go zone, and that gives you a fairly good guideline of where you're going to have a 100% immersive experience inside of that go zone. And then from there, it's a gradual progression out of full immersion. There isn't a place where it suddenly just locks in and you have it. As you get closer to the perimeters, you're necessarily getting closer to those laterals and farther away from the others.

And simply, the physics are going to catch up with you eventually. As for the signals themselves: the more that a signal is individuated, the more everybody — if they were all blindfolded — would point to the sound source. Where is the frog coming from? And everybody would point in the same direction, to the frog direction, where you placed it in a Space Map. And that's the key thing: are people consistently experiencing the same localization content? And if you then map things out so that you have immersed them into a swamp full of frogs and cicadas and all these things around them, then everybody could point to this and that location. That's really the goal, and the more that you are toward the center, the more sure that experience is going to be.
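The go-zone idea comes down to simple inverse-square-law arithmetic: once you're much closer to one loudspeaker than to the others, that speaker dominates no matter what the panner does. A back-of-the-envelope check, with distances invented for illustration:

```python
import math

def level_difference_db(d_near, d_far):
    """Free-field inverse-square-law level difference between two speakers
    playing the same level, heard from distances d_near and d_far (meters)."""
    return 20 * math.log10(d_far / d_near)

# Seat in the middle of a 16 m wide room: roughly 8 m to each lateral.
print(round(level_difference_db(8, 8), 1), "dB")    # 0.0 dB -> balanced, immersion holds
# Seat 1 m from one lateral and 15 m from the opposite one.
print(round(level_difference_db(1, 15), 1), "dB")   # ~23.5 dB -> the near speaker dominates
```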

Yeah.

And I would also say, one thing that people get wrapped up on is, okay, well, what do I do about fills? What do I do about all my subsystems? Five across the front in a lot of rooms won't cover the entire room, regardless of how pretty it looks in the prediction software, and the subsystems are still real. So if I am sitting underneath a line array and, for whatever reason, my five across the front are very high up, I could maybe have two front fills in front of me and, using what are called derived nodes,

do a stereo mixture of what's happening up above me. One thing that I use derived nodes for a lot is, let's say, for whatever reason, you can't have your console in the room. So what you do is set up a 5.1 surround system in the booth where your mix console is, and that uses derived nodes. So whatever happens out in the room translates and mixes down to a 5.1 mix for your booth. We do that with under-balconies as well.

And so you’ll have this sort of main system that is covering as much of the room as possible. But then you’ll have these subsystems that are doing immersive mix downs, whether it’s down to Monos, stereo or whatever. And a lot of time. That’s very helpful for especially when these speakers have to get hung so high across the proscenium. Front fills become really important for imaging. Just imaging that voice down.

I’m just gain to mention, though, is this that you can’t get stupid about these things? Okay, front fills are only going to cover two rows and about five people wide. Right. So they are not part of your spatialization system. You’re not going to be zinging things around the front fields and have everybody go, wow, look at it across the front. That’s not going to happen. That’s not going to happen in your under balcony speakers, because if the under balcony speakers are designed correctly at all, they are designed for correlated signal for minimal overlap, because their job is to bring up Intelligibility.

They have a very clear mission. Do not go and start screwing with them. One of the places I really throw up a big red flag is people wanting to play matrix games with the matrix delays and silliness under balconies and in front fills. It's like: stop it, stop it, stop it. Those things are combat audio. You must make them simple and intelligible. Let them do their job and don't screw them up. Yeah.

And now, with the 32 pathways, what's cool is that speaker becomes a multi-use tool. It could be that delay doing correlated signal for the mains, but it can also be used in a separate pathway for some version of a mixdown.

It can become an overhead, and suddenly people are looking up, because now it's not merging with what's coming from the front. It's suddenly all by itself — a Peter Pan over your head saying, look at me. Yeah.

And so under balconies, of course, and up on upper balconies, you're going to have less of a granular immersive experience, but you can still design a system to have an immersive experience.

Okay, cool. Let’s get to Robert McGarry’s question. He says total novice for immersive programming. Where do you delay to? Is there a zero point? And just for some context, IEM going to make an assumption here about what Robert’s talking about. I think he’s thinking of a practice in theater where we might have a center point on stage, in theater, where we want our voices to Sonic image source back to or we may have a concert sound stage where we want where we kind of time back to the drums.

I think that’s kind of what he’s thinking of. As I’m learning more from you both about immersive systems. I’m thinking that this question is actually not applicable to this, but yeah, what do you have to say about where is the zero point?

The same as it would be if that was a left-right system or if it was five systems across. If you want to make it timed to events on stage — and you don't already have enough delay because you've got a digital console and a digital this and a digital that that have already stacked up your latency, so you're going to actually add a little bit more — then sure, the drum kit is a usable place. Or if it's theatrical, you can go to a point on stage. But those become, essentially in our world, a static event, or it can be set up through that delay matrix as a set of presets.

If you wanted to make it so that you had a moment where an actor was downstage left for some dramatic moment, you could have a separate delay matrix timing for that. But that's a static part of the tuning process, and then the immersive movement would come on top of that.

Yeah. In immersive systems, I think of two different delay types. There's system delay, which is what we need for time alignment of systems — whether that's the mains-to-subs relationship or the front-fill relationship — and that all gets handled; you can use the delay matrix on the Galaxy or the outputs for each speaker on the Galaxy. And then there's artistic delay. So if I have an actor moving from proscenium downstage to upstage, I can fire a snapshot that changes that input's delay time, or I could just do it on the console: have a snapshot on my console and adjust their input when they're not singing, which instantly swaps their delay to a new zone.

This is very typical of what we would do in musical theater, having three zones or four zones across the stage. There are fancy devices that are very expensive that do that automatically for you. But with a Galaxy, it’s a free update, and you can just do a snapshot change.
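The artistic delay zones Josh describes boil down to distance over the speed of sound, with a snapshot recalling the value for whichever zone the performer is in. A quick illustrative calculation — the zone names and distances are invented for the example:

```python
SPEED_OF_SOUND = 343.0  # m/s at roughly 20 °C

def zone_delay_ms(distance_to_source_m):
    """Delay needed to time a loudspeaker back to an acoustic source this many
    meters away (ignores processing latency already stacked up in the chain)."""
    return distance_to_source_m / SPEED_OF_SOUND * 1000.0

# Hypothetical stage zones: distance from the mains to each acting area.
zones = {"downstage": 3.0, "midstage": 6.0, "upstage": 9.0}
for name, d in zones.items():
    print(f"{name}: {zone_delay_ms(d):.1f} ms")   # the snapshot would recall one of these
```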

Cool. Let’s try to squeeze in two more questions here and then we’ll start to wrap up. I don’t know if you have anything to say about this, but Angela Williams says, Where do you place audience mics in the room for capture as objects?

I kind of don’t understand the question, but let’s talk about how do we capture surround information that’s happening in a room? There are two different scenarios. One scenario is I have an artist on making a recording or I just want to have some microphones laid out to capture the audience noise and send it back in. That could be wherever you want. And if you wanted to, you could put them on face map and then send them to all loud speakers or just some loud speakers. The laterals you can make them an object and move that audience sound around.

I think the question is about the analysis of the object placement. That's the impression I got from that question.

Yeah, it could be. I also don't totally understand it, and I'm realizing now I should have asked them to clarify a little bit. But it did make me think about mixing those in — though I don't know how you would mix those in. So yeah, do you want to say something about that, Bob?

If it’s mixing things in, my answer is no, I don’t do that. That’s Constellation’s job. And that’s what you’re getting into. If you want to start recirculating audience mics ambient mics in, that’s a whole nother thing. But if it was to in order to analyze, you place a virtual mic into a real mic to analyze the localization, my answer would be anywhere you want, anywhere you want to know the answer.

Yeah. And there are tons of different mic styles to do that. You could do that with a binaural microphone headset. You can do that with an Ambisonics microphone. You can do that with whatever, if that's just for capturing a recording of what's happening in the room. And in MAPP 3D —

we do it through virtual SIM mics, and I do that as part of the analysis. I'll go and place a mic when I'm designing in MAPP 3D, and then I'll run the different speakers and I'll be able to see, as I lay one trace over the others — okay, I'm consistently seeing within 3 dB; all of my laterals are reaching within 3 dB at this location. Okay, that's cool. I know that this has really consistent spatialization there. Okay.

I wanted to get to Lloyd Gibson's question because, even though we've already talked about this some in the first part of the interview, I wanted to do it again. I just want to make sure this is clear, because there are probably other people out there who have this question, and I want to give Bob a chance to maybe correct some misunderstandings about his own teaching. So Lloyd Gibson says: I thought Bob was against stereo imaging in live sound because of the psychoacoustics and delay magnitude discrepancies seat to seat.

Does this not apply here, or is there a size threshold where it can be successful?

Okay, so stereo in live applications — let's get into the semantics. There's a left main and a right main. You can call that stereo; I call that left-right, because stereo is something that happens when you put on headphones, that happens when you sit there in your living room, because you're inside of the five milliseconds that you have to play with in the world of physics of your brain and its ability to make a panoramic stereo image. There's very little of the room that is inside that five-millisecond window.

In our world of PA, it doesn’t have to be a big PA doesn’t have to be an arena or Stadium. It’s like even in a small theater, there’s very little that fits inside that window so everybody else can call it stereo all they want. But I design systems left and right systems, and I design them to have no more overlap between left and right than they have overlapping into the walls. So that’s my thing. Basically, I don’t want to invite the wall into the thing any more than the virtual wall, which is the correlation point of where the two speakers meet in the middle and all physical acoustics modeling as a wall.

So that’s where I aim systems. I don’t aim your left and right deeply inward. Unless you can promise me that you’re going to put left completely separate material than in the right. Like if you’ve got left, center and right and they are now discrete and separate channels. Now I’m going to turn that thing inward. Now I’m going to cover the whole room with left and the whole room with right and the whole room with center and the whole room with left, center and right center and 17 whatever they are.

I’m back to the I’m the whole show. So if I’m the whole show fine. But if we are left and right and 99% of the material outside of the littlest bits are going to be pushed this way when all the really stuff that matters, the Fader with the big star on it is going to be mixed center. I’m going to make your left and right system so that it does the best performance that it can as correlated signal. Okay, so that’s my simple answer to that.

I haven’t changed on that. But if you go to a full multi channel as soon as you introduce two multi channel and that’s what happens when you add that third one, that center register functioning center channel. Now when you’ve gone to full multi channel, if that’s the way you want to address it. Now we can go and play decorating, but a lot of times what you really see in an LCR is you’re going to do LR are still going to be a very LR system. Very little gets panned out, but the center is its own thing.

So now you have a decorrelated center but a semi-correlated left and right. So I hope that was an answer that wasn't too unclear.

I thought that that was clear. Yeah, that’s great.

I don’t tell people how to mix. Right.

And immersive is a new way to mix. Instead of sending things down two pipes, you now have 32 or however many channels you have. You no longer view it as LCR; you view it as a canvas that you can put objects on. And that's really the way you have to start viewing it: I'm looking at a stage, and now I'm painting where I want to put my artists or where I want to put my objects.

Okay, Josh and Bob, thank you so much for all of your time today. And I should end by asking: where is the best place for people to go who want to learn more about Space Map Go and immersive systems online?

Yeah. So meyersound.com is a great location for all information concerning Meyer Sound. We also have the Thinking Sound YouTube channel — that's our YouTube page. We've done about six hours' worth of Space Map Go content as well as MAPP 3D content. There's tons of information there. Like every other company, we participated in Webinar Wars.

I've never heard it called that. That sounds so violent.

Nick from DMV called it that the other day while we were hanging out, and I thought it was hilarious. So shout out to Nick. But anyway, Webinar Wars was what happened, and there's tons of content there, not just about immersive audio but about everything. And then the last resource for Space Map Go is the Space Map Go Help website. That's basically the operating instructions for Space Map Go. And the cool thing is, this is all free, so you can download Compass, download Space Map Go onto your iPad, and mess around with it.

Play with it. You don’t need hardware to start looking at what this can do.

I want to just throw in one more thing — hope I don't get in trouble — but there are also some physical places where you can go to experience Space Map. There are some locations with operating systems, at least at the moment we're making this recording. There's one here in New York. I believe there's still one in Nashville.

Nashville.

Yeah.

At our office, at Soundcheck in Nashville, and then Center Staging in Burbank.

Yeah. So we have an LCR system — left, center, and right — for the United States, and I think there might be one in Europe. I think there's one in Europe.

Yeah, all across the world, really. We have a touring Space Map Go road show, which is happening across the US.

When is that coming to mine?

I don’t know, man.

I think it should be called Space Map a Go Go.

Yeah, it should be called Space Map a Go Go. But if you look on our website, there's an article about it, and you can reach out to sales@meyersound.com to find out when it's coming to a city near you. They're thinking about doing one in Europe very soon, and Australia and New Zealand have been touring around Space Map Go systems for a while now.

So you can’t go to Australia.

They won’t let you leave that exactly.

Yeah.

And then our dealer and distributor network across the world — some have set up Space Map Go systems, so reach out to sales@meyersound.com if you want to hear this and hang out, and we're open to giving you a demo. And the New York room is really cool. And Bob might meet you there.

Oh, wow. Just throwing Bob’s hat in there. Great. Thank you.

The other thing is we will be at InfoComm this year, and so there will be a Space Map system there. We can't talk about it too much yet, but it's going to be cool. I'm excited about it.

Well, Josh and Bob, thank you so much for joining me on Sound Design Live.



Copyright © 2022 Nathan Lively

 
