Sound Design Live

Build Your Career As A Sound Engineer

14 online courses on live sound mixing, RF, and system tuning

By Nathan Lively

How do the 14 courses from Scott Adamson, Aleš Štefančič, Stephen Pavlik, and myself all fit together? We made a table to answer that for you.

The courses are grouped by teacher, and each teacher tends to focus on one topic: mixing, RF, or system calibration. Here’s a full-page view.

Please comment on this post with any questions.

The BEST subwoofer array for large concerts?

By Nathan Lively

During my interview with Adam Hill we discussed an AES paper he coauthored called Subwoofer positioning, orientation and calibration for large-scale sound reinforcement. In the interview we focused mainly on the interference of stages and other boundaries with directional subwoofer arrays, but another large part of the paper describes a ground-based optimized subwoofer array that I thought would be fun to try to recreate.

Wondering how a gradient subwoofer array works? Check out this video explanation.

1 – Minimize spacing to avoid nodes

As a starting point, four single cardioid subwoofers were placed across the front of the stage on the ground with four-meter spacing.

plan view

This initial setup gives very limited coverage across the audience area, although there are no noticeable nodes anywhere in the coverage area.

prediction @63Hz
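
If you want to poke at the spacing-versus-nodes tradeoff outside of MAPP, here’s a rough free-field sketch in Python. This is my own toy model, not something from the paper: ideal omni point sources, no boundaries, no cardioid pattern, and all positions and distances are assumptions.

```python
import numpy as np

# Toy free-field model: sum ideal omni point sources at one frequency
# and look at the ripple across a listening line. No boundaries, no
# cardioid pattern; just enough to see how spacing creates nodes.
c = 343.0                 # speed of sound, m/s
f = 63.0                  # analysis frequency, Hz
k = 2 * np.pi * f / c     # wavenumber

spacing = 4.0             # meters between subs (try 4.0, then 8.0)
n_subs = 4
x_subs = (np.arange(n_subs) - (n_subs - 1) / 2) * spacing  # centered on the stage

x_line = np.linspace(-20, 20, 400)   # listening line, 20 m downstage
y_line = 20.0

spl = []
for x in x_line:
    r = np.hypot(x - x_subs, y_line)        # distance to each sub
    p = np.sum(np.exp(-1j * k * r) / r)     # complex pressure sum, 1/r spreading
    spl.append(20 * np.log10(abs(p) + 1e-12))
spl = np.array(spl)
spl -= spl.max()                            # normalize to 0 dB at the hottest seat

print(f"ripple across the listening line: {spl.max() - spl.min():.1f} dB")
```

At 63 Hz the wavelength is about 5.4 m, so the four-meter spacing keeps the path differences small enough that no deep nodes land in the coverage area; widen the spacing in the sketch and watch the ripple number grow.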

2 – Expand horizontally and vertically

Since the additional subwoofers of the system will be off to the sides of the stage, they can each be stacks of three subwoofers.

3-element inverted gradient stack
3-element gradient stack
prediction @63Hz

3 – Angle outside subs 45º

The optimization routine shows that simply rotating the outside subwoofer stacks away from the stage by 45° gives very even results across the audience area while keeping SPL on the stage under control.

prediction @63Hz

Note that I have not taken into account the potential gain difference between my 2-element gradient arrays across the front and the authors’ “four single cardioid subwoofers”.

Results

15 dB of F2B (front-to-back) rejection at 43 Hz, comparing the first row to DSC (downstage center). 6.75 dB of rejection at 90 Hz.

Can this be improved?

The authors leave the design wide open for adjustment and make this recommendation about placement:

an even coverage in the audience area is best achieved when subwoofer spacing is minimized

If we add one more sub, it will decrease the spacing, and DSC should benefit from improved F2B rejection. Let’s also add a 2nd-order Linkwitz-Riley LPF at 203 Hz.

7 subs

1.75 dB of improved rejection at 90 Hz. Otherwise, similar results.
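
A side note on that Linkwitz-Riley low-pass, in case you want to build one to experiment with: a 2nd-order Linkwitz-Riley is just two cascaded 1st-order Butterworth filters, which puts the response at -6 dB at the corner. Here’s a minimal scipy sketch; the 48 kHz sample rate is my assumption.

```python
import numpy as np
from scipy import signal

fs = 48000   # assumed sample rate
fc = 203.0   # corner frequency from the post, Hz

# LR2 low-pass = two cascaded 1st-order Butterworth low-passes.
# Cascading squares the magnitude: -3 dB + -3 dB = -6 dB at fc, 12 dB/oct.
butter1 = signal.butter(1, fc, btype='low', fs=fs, output='sos')
lr2 = np.vstack([butter1, butter1])   # cascade the two sections

w, h = signal.sosfreqz(lr2, worN=4096, fs=fs)
idx = np.argmin(np.abs(w - fc))
print(f"gain at {fc:.0f} Hz: {20 * np.log10(abs(h[idx])):.1f} dB")  # about -6.0
```

The -6 dB corner is what lets the two halves of an LR crossover sum flat, which is why it’s the usual choice over a plain Butterworth.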

How would you improve this design?

What are your ideas for improvement? Comment below.

You can download my MAPP file here.

Can you mix better on speakers with flat magnitude and phase?

By Nathan Lively

Subscribe on iTunes, SoundCloud, Google Play or Stitcher.

Support Sound Design Live on Patreon.

In this episode of Sound Design Live, I talk with the Co-Founder and VP of Products at Sonarworks, Mārtiņš Popelis. We discuss the magic under the hood of Reference 4, why you can only please 17% of people, and the true test of mixing on a linear system.

I ask:

  • What’s under the hood of Reference 4 Studio Edition? Can you walk us through how it works? I’m curious what the secret sauce is beyond measurement and corrective EQ.
  • What are some of the biggest mistakes you see people making who are new to studio monitor setup and calibration?
  • From Facebook:
    • Alex Benn – I’d love to know how it generates the phase plot of the measurements. My Genelec 8030a’s have one full wrap when measured in SMAART, but it doesn’t appear in Reference 4.
    • Kyle Marriott – Can you ask if they’re FIR based, and tackle phase or just magnitude? Also what’s the latency? Is it fixed or variable?
    • Jon Burton – What is the normal latency on the system? I found it quite long on my MacBook pro so had to keep turning it on and off to do Zoom calls etc, is this normal? Can it be reduced?
    • Ben Davey – Do they have any plans to add profiles for In-ear monitors to the headphone side of things? Reference 4 is amazing on my closed and open back headphones, but would love to be able to balance out my Ultimate Ears as well.
    • Brandon Crowley – I love Reference 4 and use it for both mixing and listening purposes. I’d like to know, what’s next for Reference 4? It seems like it’s already the complete package; could studio monitor emulation be in its future?

Once you do the calibration and mix your song on this new reference sound, then the real test for bringing value to you is whether it is translating better.

Mārtiņš Popelis

Notes

  1. All music in this episode by Spare Parts.
  2. Organizational Culture and Leadership
  3. Mārtiņš on Instagram and Facebook.
  4. Quotes
    1. The objective of the reference product is to remove all of the sound coloration.
    2. The measurement microphone lives in its own reality.
    3. There is a huge DSP living in your head doing magic tricks with interpreting the world for you.
    4. The solution to sound quality, we feel, is up to the individual. It’s not one-size-fits-all.
    5. Invest your first $1k into getting a pair of speakers. Then invest your next $1-2k into room tuning. But from that point on, Sonarworks will be the thing that gets you to a better place than an additional $2-4k investment into room tuning.
    6. There is a genuine difference in what sound people like and there is a difference in our hearing.
    7. No matter how well a headphone company does with their R&D, they are going to hit the sweet spot for no more than 17% of the population.

Transcript

This transcript was automatically generated. Please let me know if you discover any errors.

I’m Nathan Lively, and today I’m joined by the co-founder and VP of Products at Sonarworks, Mārtiņš Popelis. Welcome to Sound Design Live, Mārtiņš.

Hey, I’m excited to be here.

All right. So, Mārtiņš, I definitely want to talk to you about Reference 4 and kind of the amazing stuff you can do with it.

But before we do that, what is one of the first tracks you like to listen to after you get a system set up?

That’s actually a good question, because there is a specific track. My favorite track, and actually the favorite track of many of our team, is Rage Against the Machine, “Killing in the Name.”

The good thing, and I mean the important thing, about a test track is its ability to very quickly uncover the reality of that particular sound system. So it’s actually very important for the track to be full of musical information as much as possible, at all times, across the whole frequency range. At the other extreme, if you take some track where there is a female voice singing, maybe without a musical instrument or maybe with some kind of small sound in the background, that kind of track is a very hard thing to use to understand what a sound system sounds like. This Rage Against the Machine track is kind of all over the place from the first second.

So you can very quickly assess what’s going on, very quickly understand the A/B of the calibration impact, and very quickly get a feel for things, besides it being a great song, of course. But its technical ability to show you the frequency response of the system is what’s more important when you talk about a test track.

Sure. So, Mārtiņš, how did you get your first job in audio?

Like, what was your first paying gig?

Short answer: Sonarworks.

OK, I got into audio, into a paying job in audio, through co-founding Sonarworks. Actually, I don’t come from a very heavy musical background, and I personally see that as an asset. It’s been extremely exciting to learn what the world of audio looks like from the inside, from the creator space. I was excited about music and audio technologies before, but I think the fact that we came from outside the music industry was one of the reasons we could take a fresh look, get a fresh perspective on the problems, and maybe offer some new solutions.

But as I was thinking about this question, what came to my memory was that before I got into paying jobs in audio, in high school I was involved in this thing called the Radio Center. It was basically a small place in the school that hosted the school’s audio gear, the intercom of the school, etc. It was kind of neglected and forgotten.

But me and a few friends were really excited about it for some reason. We were also sometimes setting up the audio for school disco parties using the school’s equipment. We uncovered that place, and for a while we had built a computer rig so that when the break hit, it started playing music over the school intercom. So we had a few months of happy time when the break from the lesson was announced by Pink Floyd being played out in the school’s corridors.

But then some teachers just didn’t like that their lesson was interrupted, so we got shut down. But that was my first involvement, kind of P.A. systems, I guess.

So, Mārtiņš, looking back on your career so far, a lot of things have happened between your first time playing around with electronics and music in high school and co-founding Sonarworks, and you had to make some choices along the way. What do you think is one of the best decisions you made to get more of the work that you really love?

Quitting my previous job, I guess.

Sure.

How did you know that it was the right time to do that? Can you take us to that moment?

I mean, I think I was twice in my life in that situation, and it was always sort of a scary choice at the point in time. But both times I think it was the right thing to do. So I was right out of the university. I had joined one company, an environmental consulting firm, and I was working with them for like three years or something, and at some point I just realized that, hey, I really don’t feel that in that particular setup it’s right for me to work for the company. I would really like to try building a company rather than working for one. And yeah, I took the risky bet of quitting that job and joining with my friends to start another, non-music-related company. That was, I think, also a very good choice that opened up the entrepreneurship drive in me. So I haven’t really worked a proper job since. I was with that non-music-related company for something like seven years, I think.

And then at some point I just thought to myself that, hey, I think I know this business by now, and if I’ll be doing the same thing in five to ten years, then I’ll probably hate myself. So I understood that there really is no choice. Sure.

Mārtiņš, let’s get into talking about the software. There are a lot of videos out there already, and you’ve done several other interviews about what Reference 4 is and how it works. I saw that there are several videos on YouTube of people walking through it step by step, showing what it does and what the results are, so I think there’s plenty of material out there about that. But I haven’t seen a whole lot of people talk about what’s under the hood.

And since the Sound Design Live audience includes a lot of live sound engineers and a lot of people who do measurement and output processing as part of their work, I think they would be interested to know how it works.

So I was wondering if you could walk us through how it works, and maybe what the secret sauce is beyond measurement and corrective EQ.

Sure. To begin: the objective of the Reference product is obviously to remove all the coloration. Right. If we talk about the speakers-in-the-room situation, then, having measured many professional studios and bedroom studios alike, I haven’t really seen a completely neutral studio from the first measurement. Bedroom studios obviously have a ton of problems, because they’re not the right rooms, the set of options is limited, and the budget is limited.

But if you talk about the big studios, they also have their own problems: the gear keeps changing, somebody is bringing in a new sofa or a new shelf, the console has a lot of reflections. There are different issues in the big studios as well. So every studio I have seen so far could have some benefit, bigger or smaller, from removing that unnecessary coloration, to give you a more accurate ability to hear what you have really mixed or produced or created musically in your track.

And that is the objective. But then when you start to ask what really is the excess coloration, what really is that thing you’re trying to remove, you very quickly come to the realization that it depends on the method of measurement. And one very important aspect to realize is that a measurement microphone and a human being do not hear alike. The measurement microphone lives in its own reality, and in that reality, if you take a measurement in one spot, then move the mic a couple of inches and measure again, and then move it a couple more inches, you will get three completely different measurements, frequency-response-wise, because the frequency response of the room is uneven as seen by the measurement microphone. But for a human being, your reality is interpreted by your brain, in the audio domain as well, obviously. There is a huge DSP actually living in your head, doing magic tricks with interpreting the world for you.

The way your brain interprets audio, it’s actually not showing you the world of the measurement microphone. It’s more like taking the average of all the sounds around you and all the reflections coming from your body and from different areas of your room. So your brain is constantly listening in to an area around you and averaging it out for you. That’s why you don’t hear radical changes in frequency response as you move your head inch by inch.

And Sonarworks’ measurement is really built around this insight as one of its cornerstones. We do not try to do a single-point measurement. We really work with the average of an area that’s intelligently calculated to mimic the way a human being hears. So we’re working with the reality of a human being, not the reality of a measurement microphone. The other important aspect of the measurement software is that we do like thirty-seven measurement points around the listening spot of the engineer, and our software is unique in that we can actually locate the microphone with each measurement.

So it actually draws itself a map of the relative spacing of these points in the room. And that gives two important things. One is that it allows the software to understand these measurements in context and do the right calculation of the average profile. The other is that it allows for really easy and consistent user guidance of the method, so that if I do the measurements in this room, or you travel over here and do the measurements in this room, or you do the measurements in your own room, the method we apply using Sonarworks Reference will be very consistent, because the software is actually asking you to get to very specific points around very specific places in your studio and making sure that you actually do that, without asking you to read a very thick and boring manual.

So it really enables this consistency of how the method is used and how this average is calculated, and that delivers the same consistent reference across different users and different locations. It enables not just some sort of random improvement, saying, hey, we thought these things might sound better in your room, but actually lets us talk about driving everything towards the same reference sound standard across different places. So if you work on a track in your room and you send it over to your friend in another room in the city, or maybe even on another continent, you are actually hearing very much the same thing when you’re listening on a set of calibrated speakers.
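
For the measurement geeks following along, the spatial-averaging idea itself is easy to prototype. Here’s a minimal sketch, mine and not Sonarworks’ actual algorithm: it takes several measured magnitude responses, averages them, and inverts the average into a corrective EQ. The weighting, the boost cap, and the toy numbers are all assumptions.

```python
import numpy as np

def average_correction(mags_db, weights=None, max_boost_db=6.0):
    """Turn several measured magnitude responses into a corrective EQ.

    mags_db: rows = mic positions, columns = frequency bins, values in dB.
    This is a simplified illustration of spatial averaging, not Sonarworks'
    method: a real product would weight positions, smooth per octave,
    and be much more careful about boosting into nulls."""
    mags_db = np.asarray(mags_db, dtype=float)
    if weights is None:
        weights = np.ones(mags_db.shape[0])
    weights = np.asarray(weights, dtype=float) / np.sum(weights)

    avg_db = weights @ mags_db        # weighted average per frequency bin
    correction_db = -avg_db           # invert to flatten the average response
    return np.minimum(correction_db, max_boost_db)  # cap boosts; cuts are free

# Toy usage: three mic positions, four frequency bins
measurements = [[3.0, -2.0, 0.5, -6.0],
                [2.0, -3.0, 1.0, -8.0],
                [4.0, -1.0, 0.0, -7.0]]
print(average_correction(measurements))   # e.g. [-3.   2.  -0.5  6. ]
```

The part you can’t easily do at home is what Mārtiņš describes above: knowing where each measurement was taken, so the average can be weighted intelligently and the user can be guided to consistent positions.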

So that’s the other important thing. And the third important thing is that we apply the same calibration target for speakers as well as for headphones. Headphones we measure in our lab, so the user doesn’t have to measure them, but for the headphones too, we strive to achieve for you as the listener, frequency-response-wise, the same response as that of the calibrated speakers. When we were still allowed to travel, we were often doing studio demos where you could measure a set of speakers, then set up a pair of calibrated headphones and let the user compare and hear that, frequency-response-wise, it really sounds very similar. That enables a lot of portability and the ability to work while traveling, or late at night when you can’t use your speakers, or in different places where you don’t have access to speakers.

So it’s really this portability between headphone use cases and speaker use cases that’s also one of the unique things behind Sonarworks.

That’s interesting, this idea of linearity and portability.

I did an interview with Alex Oana from Audio Test Kitchen, and he pointed out to me that one of the reasons lots of studios at one point started installing SSL consoles was so that you could record at one studio, go to another, and have it pretty much sound the same. Now, I’m not sure how much an SSL console imparts its own sound. It seems like the room and the position of the speakers would also have a big effect.

But still, I understand sort of people’s pursuit of this idea of like let’s figure out how we can create some consistency.

Mm hmm. Yeah, and I think it’s especially relevant nowadays, when everything is moving so fast towards a reality where music is more and more produced over distance and in different types of home studios. This ability to work on the same platform of reference sound, I think, is more important than ever. Because, I don’t know, 20 years ago most of the hit songs were still mixed and mastered in some of the high-end studios under the big labels or whatever.

That was the reality. I mean, it’s still real for a few musicians, but for the majority it’s not the reality anymore. And this mobile, portable, global world really asks for consistency in sound, I think.

So from your description of the way Reference 4 works, one of the things I’m understanding is that if I were to try to replicate the same thing at home in my own studio without Reference 4, I could take thirty-seven measurements around my room, average them together, and then create some sort of complementary EQ in a manual way. Or I could use some kind of FIR filter creator with some automation. I could do a lot of that manually, but

I don’t know how to localize that microphone in the space. How does Reference 4 use that location information in its final result?

So there are these two things it uses the localization information for. One is this ability to see the measurement points in context, in this relative map between each other. We’re talking about an average, but it’s not that simple: how you combine the spatial information to actually calculate the profile is one of the secret sauces. And the other thing is this consistency of the user interface.

I mean, sure, there are tools like REW or Smaart that allow you, if you want to geek out about how you measure your room, to measure all sorts of parameters and then do your own thing. And there are people who do that, and I don’t hold anything against them, obviously. That’s a perfectly cool thing to do. But then you have to realize that you are in full control and taking full responsibility for the things you are measuring and the things you’re tuning.

And one of the analogies I like for our software comes from the visual design world. There is Photoshop and there is Instagram. Photoshop has gazillions of features. You have all the creative freedom you want. There is this thick Photoshop bible, and you can probably spend a good five to ten years actually mastering the tool. And once you do that, you can create wonders.

And there is no limit to how you can express your imagination through visual design. But there’s also Instagram, which is more focused on people who don’t want to invest five years in mastering that art. They just want to push a button, get a pretty picture, be done with it, and move on with their lives. As I see the music creator world, there are some people who like to geek out about room acoustics, but there are fewer and fewer of those people.

Most of the people I know among our users are people who are very passionate about music, and they want to spend as much time as possible actually thinking about music and creating music. The fact that the music from their studio doesn’t translate that well to the outside world is just a problem they would like to get rid of as quickly and as seamlessly as possible. So our philosophy behind building the product is to give them that ability.

We’re always thinking, hey, how can we ask fewer questions of the user? How can we make it even more seamless? How can we smooth out all the workflows? The ideal user experience from our perspective would be: you install Sonarworks, you press one button, the system says you’re calibrated, your sound is good, you can go back to your creativity. And you say thank you and do that.

So that’s the ideal user interface. Unfortunately, we have to ask the user to do more things, but the fewer questions we can ask and the quicker we can let the user return to creating music, the more successful we think we are. That’s also, I think, a very important aspect to take into account. Totally.

I really appreciate how you guys have simplified the process and also attempted to remove the opportunity for me to make errors as the end user. Because I can see a path where, if I really wanted to geek out about the measurements myself, I get out a Smaart rig and take 37 measurements, but then I would also need to figure out a way to average them together in a weighted manner, where I would say, OK, measurements one through ten, I want those to be worth five percent, and measurements 11 through 20,

I want those to be worth seven percent.

And then, you know, you build up this complex average.

So you guys have thought all that stuff through and probably saved me a couple of days of trying to figure that all out. And I’m able to do it in, you know, ten minutes.

I would say more.

I mean, ultimately, the real test of what we do is: once you really do the calibration and then you mix your song on this new reference sound, the real test of whether we’re bringing value to you is, hey, is it now translating better? Are you getting a better result sooner? Are you able to deliver a better-sounding song than you could before? That’s the real test. And the trick was actually finding the right method of measurement and calculating the profile. There are more psychoacoustic and acoustical things that go into the equation, but it’s really about arriving at the curve where you can say, OK, this actually works.

This actually helps things translate. And with Sonarworks, we really get users every day. Somebody writes to our support saying, hey, thank you. Before, I was doing ten cycles back and forth to my car to check my mixes, and now, since I installed Sonarworks, I just did one cycle to the car and I liked everything about it. There are these real user stories coming in, so we’re quite confident that the sound we deliver actually helps people get their translation faster.

There was one engineer we met at the NAMM show last year, and he was like, hey guys, guess what? I installed Reference, and the first song I mixed on it, I got a Grammy. So that’s funny. So it works.

So, Mārtiņš, I know you mentioned that you have measured a lot of systems in studios, and you have also seen and done a lot of support for the people who are using Reference 4.

So I wondered if you could aggregate some of this learning and share with us some of the biggest mistakes you see people making who are maybe new to studio monitor setup and calibration. What are some of the things people are doing wrong in terms of placement, aim, and maybe other key things?

I would say, if we talk specifically about studio systems, I mean the installation of the physical things, the biggest thing is not the way people set their systems up. It’s actually how much this historical attachment to the hardware is keeping people in the frame of mind that says, hey, it’s all about the speaker, or it’s all about the headphone, or it’s all about the room fine-tuning.

The way I find most people think about it is basically: first I should get the best speakers I can, and if I have more money, I should invest more into speakers. Then at some point they say, OK, I should probably invest in room tuning. And for the speakers, there are very affordable speakers, and there are the thousand-dollar-a-pair to two-thousand-dollar-a-pair speakers.

And then you quickly get into the five-to-ten-thousand-dollar range. Right. And usually people think that, hey, if I can’t get into the ten-thousand-dollar-a-pair range, then that’s my limitation. And if you talk about room tuning, you can do some small jobs yourself, but then you also quickly get into, I mean, an unlimited amount of dollars that you can invest into room tuning.

And I think people really believe that that’s the goal and that’s the limitation. Whereas really, now, in this day and age, with where we are with software tuning technology like ours. I mean, by all means, we are not saying that we are a replacement for room tuning. You have to get a decent pair of speakers and you have to get a decent amount of room tuning. You can’t go into a glass cubicle and expect everything to be solved by calibrating your speakers with Sonarworks.

It wouldn’t work. You have to invest in speakers and you have to invest in room tuning. But the point where the return on investment from Sonarworks becomes your best way to improve your studio sound comes sooner than people think. I would say, I don’t know, invest your first thousand dollars into getting a pair of speakers, then invest your next thousand or two thousand dollars into room tuning. But from that point on, Sonarworks will really be the thing that gets you to a much, much better place than an additional two-to-four-thousand-dollar investment into room tuning.

We had a real story. We visited a friend in L.A. who is working from his home studio, producing for a band. Somebody introduced us, and we went there to show what Sonarworks can do. And after we set it up and calibrated the studio, he was like, well, guys, you know, I had these four-thousand-dollar speakers and I was thinking that that’s my limit, and that’s why I can’t really get my mixes to translate as well as I could.

So I was thinking of selling some more gear, selling those speakers, and getting myself ten-thousand-dollar speakers. But now, apparently, I don’t have to do that, because it’s way better than I thought it would be with those more expensive speakers. So that, I think, is the biggest thing I see. I mean, you very rarely see massive errors in placement, like people placing their speakers asymmetrically. Most people already know that, hey, the equilateral triangle is the best placement and you have to place the speakers at ear level.

So I haven’t seen those problems too often. But there is this kind of thinking, when really, hey, with a few hundred dollars’ worth of headphones and the calibration, you’re probably better off than with a two-thousand-dollar pair of headphones. So I like what you’re saying about not putting the cart before the horse. I can’t just go get some speakers with blown drivers that I found in the trash in the alley, install Sonarworks, and expect it to sound amazing.

Sure, sure, sure.

I mean, you have to get a decent pair of speakers, but many more people are there already than they think they are.

So, Mārtiņš, let’s look at some questions that people sent me from Facebook. Alex Benn says: I’d love to know how it generates the phase plot of the measurements. My Genelec 8030As have one full wrap when measured in Smaart, but it doesn’t appear in Reference 4.

All right. So this is a deep technical question. What he’s asking about is: when you look at the plug-in or the systemwide version of Sonarworks, you can select the different curves it’s showing, and we’re trying to be really transparent about what the plug-in is doing to your audio, so you can turn on the phase response of the system, which I think is what he’s referring to. But the phase response that the plug-in is showing you is only the phase response of the Sonarworks effect.

When we measure the speakers, we actually do not measure the phase response of the system. We only measure the frequency response of the system, so we don’t attempt to measure the phase response of his pair of Genelecs. What we’re saying is that, depending on the filter mode you select inside the plug-in, whether it’s zero-latency or linear-phase, if you’re in the zero-latency mode, then the plug-in introduces some change to the phase response of the audio signal coming through.

So what this graph shows is the phase distortion, if you will, that the plug-in is introducing to your audio. And that’s then layered on top of whatever the phase response of your speakers is. So that’s the answer to the question, I guess. Cool.

So, Kyle Marriott wants to know: can you ask if they are FIR-based, and do they tackle phase or just magnitude? We just covered part of that. So it’s just magnitude, but the plug-in does show the resulting phase change from that magnitude change.

So, yeah, the way you’re applying the filters is FIR-based? Yes.

So, FIR-based: when it’s working in linear-phase mode, then it’s full FIR. When it’s working in the zero-latency mode, then technically it’s an IIR that’s implemented through the FIR. But yeah, the zero-latency one is the minimum-phase filter.

And then he asks about the latency: is it fixed or variable? It sounds like you have two settings; it’s either going to be linear phase or minimum phase.

Yeah, we actually have three, depending on the use case and the preference of the user. The tradeoff is: if you go for the linear-phase mode on the filter, then it introduces latency to the system. That’s inevitable, by the way the math of a linear-phase filter works. And in some cases that’s OK, and people say, hey, I want this phase linearity of the plug-in.

But if you’re tracking, or you’re working on some other latency-sensitive task, then at the other extreme we have the zero-latency mode, which is zero latency in the plug-in, but that costs you some change in the phase response. As far as we’ve tested, it’s not really audible, but as I said, we’re being transparent about what it does, so you can check what the phase response is in the curve in the plug-in and be your own judge about whether it’s OK for your use case.

So in the zero-latency mode, the plug-in works at zero latency, which costs you some phase change. And then there is the optimal mode, which introduces a little bit of latency and a little bit of phase distortion; it tries to find the middle ground between the other two choices. So there are those three modes.
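
To make the tradeoff Mārtiņš is describing concrete: a linear-phase FIR has a symmetric impulse response, so it must delay the signal by half its length, while a minimum-phase filter with the same magnitude adds almost no latency but bends the phase. Here’s a small scipy sketch, with an arbitrary correction curve of my own invention, not Sonarworks’ implementation:

```python
import numpy as np
from scipy import signal

fs = 48000
ntaps = 513   # odd tap count gives a type-I linear-phase FIR

# An arbitrary correction curve: +4 dB low shelf, -3 dB dip at 2 kHz
freqs = [0, 100, 500, 2000, 8000, fs / 2]
gains = 10 ** (np.array([4, 4, 0, -3, 0, 0]) / 20)
fir_linear = signal.firwin2(ntaps, freqs, gains, fs=fs)

# Linear phase: symmetric taps, so latency = (ntaps - 1) / 2 samples
print(f"linear-phase latency: {(ntaps - 1) / 2 / fs * 1000:.2f} ms")  # ~5.33 ms

# Minimum-phase version: approximately the same magnitude, near-zero
# latency, but the phase response is no longer flat.
fir_minimum = signal.minimum_phase(fir_linear, method='homomorphic')
print(f"tap counts: {len(fir_linear)} linear vs {len(fir_minimum)} minimum")
```

The plug-in’s three modes map onto this spectrum: full linear phase with latency, minimum phase with zero latency and some phase shift, and an optimal mode that splits the difference.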

I guess part of the question might also be that we have two ways you can apply Sonarworks in your system. One is the plug-in. That’s fine if you have a serious production setup on the rig; it’s the most robust way to introduce Sonarworks into your system for your actual mixing job. But we also have the systemwide one, which installs a virtual sound card in your computer, processes all the audio, and then releases it to the real output. That enables you to calibrate all the audio coming from your machine, like YouTube, Spotify, or whatever else you might want to use.

And some users find it more convenient for their setup to route everything through the systemwide version. But because of this virtual driver that sits in your machine, that costs you additional latency. So the plug-in can run at true zero latency, but the systemwide version, even in the zero-latency mode, still costs you the latency of the buffer of the virtual sound card.

Does that have to do with the sample rate? Can that be changed?

It has to do with how the operating system handles sound card devices and what the operating system’s requirements are for buffering audio going to the audio device. We’re looking for ways to optimize it by thinking about new ways to write these virtual drivers, but at the moment it’s limited by the operating system.

OK, so John Burton had a question about that, the systemwide implementation.

So, John, they’re working on that. Next, Ben Davey: do they have any plans to add profiles for in-ear monitors to the headphone side of things? Reference 4 is amazing on my closed- and open-back headphones, but I would love to be able to balance out my Ultimate Ears as well.

Generally, we’re constantly working on adding new headphones to the database. Somewhere on the support page we have a place where people can vote and say, hey, I would like you to add this or that headphone, and we average all of that to decide which model we proceed with. Generally, we can calibrate both over-ears and in-ears, so it’s just a matter of where Ultimate Ears currently is in our pipeline.

So I’ll check it and add another vote for Ultimate Ears in the pipeline. And Ben, it sounds like you can go find that place on the support page and vote yourself. Brandon Crowley says: I love Reference 4 and use it for both mixing and listening purposes. I’d like to know, what’s next for Reference 4? It seems like it’s already the complete package; could studio monitor emulation be in its future? The short answer is yes. As we speak, actually today I had a couple of meetings; we are currently really intensively planning Reference 5.

I hope to be able to come out with news in the coming months about exactly what it’s going to be, but this emulation is one of the features we’re now intensively thinking about. So the answer is: yes, it could, but I don’t want to make any hard promises yet. It’s one of the things on the table. Mārtiņš, you’ve made this great product, a lot of people love it, and it seems like everything is just going great for you, but I’m sure you’ve had some challenges and some painful moments in your career.

So I wondered if you would tell us about maybe one of the biggest and most painful mistakes you’ve made on the job and how you recovered. All right.

There is actually a very interesting story behind this one. Reference is where we started out as a company. We built this tool for calibrating speakers and headphones for music creators. But since day one, when we started the company, we’ve always dreamed bigger. We’ve always asked ourselves: what is the ultimate answer to the question of perfect sound? What is the ultimate sound that everybody is striving for?

And early on, we realized that there is the creator world, but obviously there’s also the listener world. Once you create the song in the studio, Reference 4 helps you get the song to translate better. But what does that translation really mean? It means it still sounds maybe OK on everything, but it sounds different, and maybe not perfect on anything, because all the consumer devices out there still have various discrepancies in their frequency responses and the way they sound.

So once you create your piece of art and let it out into the world for people to listen to, I’ve met engineers who say, hey, I can’t really listen to my music outside my studio because my ears bleed at how wrong it really sounds. So even with Reference 4, you still face the problem of, hey, how do people out in the world really hear my music? And early on we dreamed: if we have this software technology that can change and control the way speakers and headphones sound, why don’t we go all the way and solve the translation problem, not only across music creators, but also across the end listeners?

Then everybody would be listening to the same reference, everybody would be hearing the same thing, everybody would be perfectly happy, and we would have killed the translation problem, which is very much possible technologically, I think, in this day and age. And based on that dream, we created a product called True-Fi. Maybe you know it; this was like three to four years ago. This was the thing we brought to the consumer world, saying, hey, now you can actually listen to the sound the artists heard in the studio.

This is how you should hear it. And we thought that it was going to take off and it would all go great. Long story short, and coming to the biggest mistake, perhaps: it ended up with the realization that not enough people in the consumer world really liked that sound. Some people were like, yeah, I always wanted to listen to the way people hear things in the studio, but that was eventually a minority of the people who listened to it.

And that was kind of a low moment for us, because we had invested quite a bit of effort and heart into building that product, and it didn’t take off as well as we had hoped. From the mistakes perspective, I think what happened is we thought that perhaps we could dictate to the consumers, teach them what good sound is, what good music is, what good taste is. We came from the top down and said, hey, listeners, this is how your sound should be.

And it didn’t really turn out that way. That was probably the lesson learned: you can’t really dictate too many things to the market. You can only work from the place where people are; you can only solve the problems they think they have and go from there. But then, long story short, how did we recover from it? We’ve always been a data-driven, rational-thinking company.

So we said, hey, this is interesting. Let’s double down on that problem. Let’s figure out what’s really going on. So we launched probably the biggest-ever research into consumer sound preference, to discover: OK, so you don’t like this, so tell us, what do you really like? We had close to fifty thousand users participating in the preference discovery test, and the long story short is that everybody likes different sounds.

I mean, there is a genuine difference in what people like. Some people like a lot of bass, some people genuinely hate it; some people like a lot of treble, some people genuinely hate it. And there is also a difference in how we hear physically, and our hearing changes with age. So all of these things combined, I think, are the underlying reality of these religion wars about, hey, is Beats a good-sounding headphone, or is it something we should kill everybody for ever listening to, or is Sennheiser the right sound, or,

I don’t know, Grado the right sound. I think that, below all of that, if you unpack it and look at it from a data perspective, there really are these genuine differences in what sound people like, and there is this difference in our hearing. And from that insight we have now built what you can also see on our web page, the latest product we’ve released, called SoundID, which is this idea that you can find your personally perfect sound through discovering your preference and adjusting the sound to your hearing.

So we’re still using the studio reference sound as the starting point. We think that’s the right place to start: the way the artist wanted the music to sound. But on top of that, we can discover the user’s preference for sound and adjust it for his or her hearing, so that when they listen to the music of the artist, they’re actually hearing the interpretation of it, if you will, that most suits their taste and has the best chance of actually emotionally engaging them with the artist.

For comparison, from these research numbers we now see that if you talk about a single fixed headphone sound as it comes out of the box, it’s going to be the best possible sound for no more than 17 percent of the population. So no matter how good a job a headphone company does in their R&D, they’re going to hit the sweet spot for no more than 17 percent of the population. That’s if they do the best job possible; if they err on it, they’re going to hit a smaller percentage of the population.

Whereas if you personalize the sound with SoundID, we currently see that over 80 percent of people actually say, hey, I like this sound better than whatever was the original sound of my headphones. So it’s a huge improvement, and as we evolve, the technology is actually getting even better. So it feels like we’re really onto something. It’s still fresh; we just announced SoundID this year, and we’re really just getting started with it.

So fingers crossed. But in a way, I think we feel that we have found the ultimate answer to the question of sound quality, and that is that the ultimate answer is individual. It’s not one-size-fits-all; it’s actually individual for every one of us, and the definition of perfect is really a personal matter. We’re working to get that into the consumer reality. And also, I think through that we can solve the translation problem for music creators, because then you can create on the reference and be sure that whoever is listening to it will not be listening through whatever random EQ happens to be their headphones.

They’re going to be listening to it through an EQ that’s actually intelligently matched to that person’s preferences. Wow.

Well, I find this idea of personalized listening really interesting, and as you were talking, I was reminded of two things. The first one is, we just wrapped up Live Sound Summit 2020, and one of the presenters was Lorien Bohanon, who mixes in-ear monitors for Michael Bolton and Lizzo, among others. She gave a presentation about how people’s hearing changes not only with age but also with gender. So it’s interesting that she is a younger woman mixing in-ear monitors for this older man.

And she was talking about how she can barely stand to listen to the mix for Michael Bolton, because for her it’s way too loud, way too bright. But that totally makes sense for him as an aging man who has lost a lot of his high end, and his hearing in general. The other thing it makes me think of is hearing aids. I have a friend who’s an audiologist, and I interviewed her on this podcast a few years ago.

I also worked on a conference for a hearing aid manufacturer about a year ago, and what was interesting for me from that conference is that the people doing sales have commissioned research showing that even people who don’t necessarily have a lot of hearing loss still enjoy an improved experience of their lives with a hearing aid with some filters, because of the way they tune those things: they measure your hearing first and then, similar to everything we’ve been talking about today, they apply some corrective EQ kind of processes.

That’s one of the things the hearing aid does. So even people who just have a tiny bit of hearing loss, just from age, not from anything intense, find that they can understand people better, they enjoy music more, and things like that. So, yeah, it seems like there’s a big opportunity for this kind of personalized listening experience.

Yes.

We killed the topic of personalized listening.

OK, so I’ve got a few short questions here to wrap up. Mārtiņš, what’s one book that’s been immensely helpful to you?

Yeah, the one book that really comes to my mind is actually a book called Organizational Culture and Leadership by Edgar Schein. I read it, I think, at some point when our team was transitioning from being this six-person little family to growing bigger, and we were more and more getting into different types of situations that didn’t fit into my model of reality at the time.

On one end, it’s a really simple message from the book: hey, there are different cultures, they’re ultimately based on different beliefs, and that’s why there are different types of miscommunications between those cultures. But I really like how the book goes quite deep and wide in unpacking that subject and putting it together in a very practical way, for what it means for us in everyday life.

I think at that point it was an eye-opener for me. It helped me better realize how differently people grow up, across occupations, nationalities, and backgrounds, where that all comes from, and how to make life better for different people.

So, Mārtiņš, where’s the best place for people to follow your work?

I have to say that I’m not too active on social media, but I have an Instagram account and a Facebook account, and I post different things there from time to time. So I would say Instagram and Facebook are probably the places; you can find me by my name. And certainly there are not too many Mārtiņš Popelises out there.

Well, Mārtiņš Popelis, thank you so much for joining me on Sound Design Live.

Well, thank you so much for having me.

It’s been fun.

Does flipping a sub around also flip its polarity?

By Nathan Lively

This question made my brain hurt, so I had to make a video to explain it to myself. The answer is NO for the common subwoofers we encounter in the field, which are closed in the back and have ports in the front. The answer would be YES if the sub were open in the back, but then the coverage pattern would be a figure-8, not omni.

  • Why do you polarity invert the rear sub in a cardioid array?
  • Phase Wheel
  • FIGURE 4.12 Summation zones from Sound Systems: Design and Optimization by Bob McCarthy

Transcription

This transcription was generated automatically. Please let me know if you find any errors.

I have a question for you: If I grab this speaker and flip it around so it faces the rear, does that polarity invert the signal?

Think about it for a second. A student asked me this recently and it made my head hurt a little bit. I realized that I had some competing ideas in my head about how speakers work versus how instruments work. So I wanted to talk about that for a few minutes; if this question makes your brain hurt, then hopefully this video will help a little bit.

Here’s why I think this question makes your brain hurt. I asked this question in my student community, and you can see most people are saying no, but twenty-five percent said yes. I posted the same question on YouTube an hour ago, and I only have five votes so far, but you can see there’s still some division as to what’s going on. Maybe I didn’t ask the question correctly, so maybe the question is confusing.

But let’s look at this. I have this video over here from Alex where he’s talking about miking a snare drum, and I think this is maybe where the idea comes from. One of the first things we learn about miking drums is that if you would like to get the nice sound of the snares on the bottom of the snare drum, and you’re going to put a microphone down there, equidistant like this, then you need to do a polarity inversion on the input channel where that microphone comes in.

Otherwise you’re going to have problems. OK, so we all learned that a long time ago and we’re familiar with this idea. I found this video of a slow-motion snare drum. We’re familiar with the idea that when the top head is struck up here, the bottom head also goes down at the same time. So we have these equidistant microphones receiving opposite pressure, and that’s why we need to polarity-invert one of them, so that they’ll go in the same direction when they get summed together in our console or digital audio workstation.
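
A tiny numeric sketch of that summation, my own example rather than anything from the video: two equal signals in opposite polarity cancel when summed, and inverting one of them brings them back into agreement.

```python
import numpy as np

t = np.linspace(0, 0.01, 480)         # 10 ms at a 48 kHz sample rate
top = np.sin(2 * np.pi * 200 * t)     # top snare mic
bottom = -top                         # bottom mic sees opposite pressure

print(np.max(np.abs(top + bottom)))   # ~0.0: summed as-is, they cancel
print(np.max(np.abs(top - bottom)))   # 2.0: polarity-invert the bottom mic, they add
```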

I have another image here from Sound on Sound. Here is a recording of those two microphones, and if I draw on this, it should be pretty easy to see that where we have a peak here, at about the same time down here in the bottom snare mic we have a peak in the opposite direction. Now we start thinking about how this relates to loudspeakers, and we think, oh, it’s the same thing, right?

We’ve got positive pressure going in this direction and negative pressure going in the other direction. I should probably use different colors. Anyway, this is, I think, what we’re imagining, and here’s the same picture with the parts exploded out: positive pressure, negative pressure. So if I go back into my simulation here and look at the speaker at this microphone, what do we expect to see?

Well, I’m thinking there’s going to be a positive peak going up. OK, let’s measure that. Let’s zoom in, reset delay, and store that. I feel like my expectation was met: I expected a positive peak. So now, if I have this idea that flipping the speaker around means the peak at that same microphone should go down, then the same thing would be true if I left the speaker the same way but just measured behind it.

Right. So I have a microphone back here, and we can just switch microphones. I’m at my 180-degree microphone, I’ll hit predict, and I’m kind of expecting to see it go down, right? Because that’s my idea: I flipped the speaker around. But we can see that the response is totally weird now, because we’re not getting as much high-frequency information back there behind the speaker. We still have a peak to look at, though, and it’s going up.

So that’s weird. You may be wondering why I don’t have this perfectly in the center, and that’s because it takes a little bit longer for the sound to travel around the speaker and get to this microphone. So I had to offset it a little bit. I put the microphones at the exact same positions, three meters and negative three meters, but I just moved the speaker. OK, but now you’re thinking, OK, well, that doesn’t work with high-frequency drivers.

But surely subwoofers, which we know are omnidirectional: sound is going forward, sound is going everywhere. So let’s test that. Let’s get rid of this X40. I’ve got a 750 here. Let’s measure the 750 at our front microphone, autoset delay, zoom out, and store this. Then let’s do the same thing, just switch to the rear microphone. We’re not even going to go to the trouble of actually flipping the speaker around.

Oh, same problem. Yes, it arrived a little bit later, and that’s why I had to change the position a little bit, so that I could line the peaks up perfectly on top of each other. But same polarity. And if you’d like to look at phase, we can do that.

So here we go at our zero-degree microphone, store, and here’s our rear microphone, and it’s exactly the same. So what’s going on here? I think the confusion is that sound is not coming out of the rear of the subwoofer the way you are imagining it, like this or like this. That might be true if the sub were not closed in the back. We might be tempted to think that positive pressure is coming out here and negative pressure is coming out of here.

But that’s not really happening, because the sound gets here and says, I can’t get out this way. So it goes over here, and out here, and actually comes out of a port somewhere, and then maybe goes around the speaker, and that’s why it takes a little bit longer to get to that rear microphone. But you have experienced an open-back driver, and where you have experienced it is with an open-back guitar cabinet. It’s pretty common for a guitar amplifier to be open on the back, and many times in studio sound we will mic the front and the back.

And you have the same situation, right? You need to polarity-invert that rear microphone. I can’t insert a guitar amp into MAPP XT, but I have tried to make a simulation to play with this. It looks like these two speakers are really far apart, but just imagine that this is a big guitar amp: this speaker up here is going to simulate the forward pressure of that single driver, and this one back here is going to simulate the rear pressure.

So if I play these both at the same time, this is the kind of pattern that we would get with an open-back guitar amp, right, similar to a microphone that receives on two sides, that’s open on the back. You’re going to get this figure-eight pattern: guitar amplifier, figure-eight pattern of coverage; subwoofer, not a figure-eight pattern, because it’s not open in the back. And the way I built this, in case you’re curious, is I just turned this guy on.

Let’s look at him at the zero-degree microphone and store, and then I just inverted the polarity of this guy and pushed him back far enough so that his peaks would still line up with the peaks of the other guy, so that we’d have a lot of cancellation. I wanted to make a really dramatic example. Right. OK, in case this all still confuses you, there’s one way that always works for me any time I’m ever getting confused.

And I’m like, oh, this still doesn’t make sense to me: you can always insert a gradient array in MAPP XT. The next question that I find people often get to, once they realize that flipping a subwoofer around does not invert the polarity, is they say, wait, but isn’t that how you build a cardioid subwoofer array? You flip the sub around so that it inverts the polarity? That’s not right. You flip the sub around to create delay.

So I have another video called Why Do You Polarity Invert the Rear Sub in a Cardioid Subwoofer Array, and I suggest you watch it if you haven’t. It’s just a nice step-by-step illustration of how an inline gradient cardioid subwoofer array works. Right now I’m not going to go over that, but I am going to show you one. So here I’ve got the same subwoofers, but now they’re creating a cardioid subwoofer array, so I can do a prediction at 80Hz, and we’re too zoomed in to really appreciate it.

You can see that we’ve got some summation going to the front and a cancellation going to the rear. So just keep that in mind, because the inverted gradient stack is the exact same principle, just smushed together so that it uses less real estate. So here we’ve got two processing channels. Right, let me expose that for you. So here’s my gradient inline forward, gradient inline rearward, and I’ve got normal polarity and reverse polarity. So you always need two processing channels, one with the polarity inversion and the delay.

So let’s compare that then to an inverted gradient stack. And this is really fast, in case you ever get confused and you’re like, wait, how does this work? You can open up MAPP XT, just right-click and choose insert gradient flown subwoofer. And when you do that, take note of two important things. Number one, we have two processing channels, and we are achieving a polarity inversion not by flipping the sub around but with an electronic polarity inversion.

Number two, we can automatically apply the processing that we need, namely the delay, directly in our processing. So make sure you check this and make sure you choose the right channels. OK, so I’ve already inserted this, so I’m not going to do it again. But that’s what’s going on here: we have two speakers facing forward, one facing to the rear, and it’s the one that’s facing to the rear that has the delay and polarity inversion.

So over here you can see I’ve got gradient forward, gradient rear, and this is my stack. And if I do a prediction, we’ll see that we get summation to the front and cancellation to the rear. So just to summarize: flipping a sub around does not invert its polarity, because the back is closed. And the way we achieve the polarity inversion in our gradient subwoofer arrays is through an electronic polarity inversion, not by a change in orientation. All right.
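
To put numbers on those two processing channels, here is a minimal sketch of the rear channel, assuming a 0.76 m front-to-back spacing between the acoustic centers (my number; MAPP XT fills in the correct values for the speaker you choose). Both the delay and the inversion are applied electronically.

```python
import numpy as np

# Gradient (cardioid) stack, rear channel: electronic polarity inversion
# plus a delay equal to the travel time across the front-to-rear spacing.
# The 0.76 m spacing is assumed for illustration.
c = 343.0                       # speed of sound, m/s
spacing = 0.76                  # front-to-rear spacing, m (assumed)
delay_ms = spacing / c * 1000   # ~2.2 ms

def gradient_rear_channel(signal: np.ndarray, fs: int) -> np.ndarray:
    """Delay by spacing/c and invert polarity: the rear box's processing."""
    n = int(round(fs * delay_ms / 1000))
    delayed = np.concatenate([np.zeros(n), signal])[: signal.size]
    return -delayed             # the inversion happens here, electronically

# Behind the array, the rear box's inverted, delayed wave lines up with
# the forward boxes' rearward radiation and cancels it; out front the
# arrivals still land close enough in time to sum.
```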

Let me know what questions come up for you about this and let me know if you have any suggestions for me. I’m always trying to improve my own understanding of these principles. Thanks.

Do cardioid subwoofer arrays work underneath a stage?

By Nathan Lively

Subscribe on iTunes, SoundCloud, Google Play or Stitcher.

Support Sound Design Live on Patreon.

In this episode of Sound Design Live, I talk with the Associate Professor of Electroacoustics at the University of Derby, Adam Hill. We discuss managing sound exposure and noise pollution, diffuse signal processing to cure power alley, and the effect of stages and other boundaries on directional subwoofer arrays.

I ask:

  • Your presentation at this year’s Live Sound Summit was called Managing On-Site Sound Exposure and Off-Site Noise Pollution. My takeaway was that you came away with more questions than answers, but there were a couple of interesting moments I wanted to follow up on:
    • If I am required to measure audience exposure during my event and the regulation states that it must be measured at the loudest point in the audience, but there’s no way for me to practically keep a mic there through the show, walk me through how to set up a measurement at FOH to accomplish the same thing.
    • From the research you got from NASA you wrote: Note of caution for 50-60Hz, due to chest resonance, causing whole-body vibration (annoyance and/or discomfort)? Is this something I should be watching out for with my system calibration? If there’s a resonance at 55Hz, could that unconsciously make someone feel bad?
    • Can you talk about what Vanguardia and SPLtrack are doing around off-site noise pollution?
  • Why is it illegal to fly subwoofers in Amsterdam?
  • I took the WHO hearing test and got a 79, which is supposed to mean I have good hearing. What did you score?
  • Diffuse Signal Processing (DiSP):
    • While a ground-based centrally distributed subwoofer array is a common and straightforward solution, it can be impractical and unsafe in certain situations (which is a topic for another day). Often a left/right subwoofer system (ground-based or flown) is a better choice. The problem with a left/right configuration is that there will be severe comb-filtering, causing inconsistent horizontal coverage. To avoid this issue, the left/right signals must be decorrelated. Existing approaches involve unique EQ applied to each side of the PA (which isn’t great from an efficiency viewpoint) or the use of allpass filters (which generally result in a reduction in audio quality).
    • I know you intended this for small rooms, but is it possible this could fix power alley for larger sound systems with uncoupled subs?
  • Subwoofer positioning, orientation and calibration for large-scale sound reinforcement.
    • Based on these results, it is evident that subwoofer placement directly underneath the stage can almost eliminate any advantages gained with cardioid polar patterns; the low-frequency SPL on the stage is virtually identical to that in the audience (Figure 22). Moving the subwoofers two meters forward so that they are not underneath the stage results in much lower SPL on stage while preserving the audience area response (Figure 21). 
  • FB
    • Elliott Clarke: Is he aware of male/female hearing differences (and hearing LOSS differences), as per Loreen’s LSS talk?
      • Where is the medical hearing research heading (or currently at) for dBC/music exposure (rather than constant dBA industrial noise)?
      • Does he think we will reach a sensible compromise for impactful/powerful live shows, within safe exposure limits *and* environmental/off-site concerns?
    • Matty Luka Sokanovic: Ask him if there is any 3D acoustic modelling softwares that you can drop your own models into! That are priced reasonably that is..

Any directional low-frequency system, whether in a single box or a gradient or end-fire approach, isn’t going to play ball when underneath the stage.

Adam Hill

Notes

  1. All music in this episode by Liam The Villain.
  2. Decorrelation of Signals Demo
  3. Software: EASE, CATT Acoustic, Odeon
  4. Fear and Loathing in Las Vegas
  5. Quotes
    1. An audience that is in front of a ground-based subwoofer system actually causes resonances within the audience almost like standing waves and room modes.
    2. It’s been shown that someone having the perceived control of an annoying noise lowers the annoyance. So what they do, for Vanguardia, they create a hotline so all you have to do is pick up the phone.
    3. There was never a subwoofer in the air during that test.
    4. I’m convinced that us audio engineers are able to cheat these (listening) tests.
    5. We’ve come up with a way to decorrelate two or more signals that are initially identical in a way that you can’t perceive it. You shouldn’t hear a difference in the tonality, but statistically, they are decorrelated. Run those through your LR subwoofers and you’re not going to get the coherent interference that causes the difference in tonality from left to right in the audience and the power alley.

Transcript

This transcript was automatically generated. Please let me know if you discover any errors.

Welcome to Sound Design Live, the home of the world’s best online training in sound system tuning that you can do at your own pace from anywhere in the world. I’m Nathan Lively, and today I’m joined by the Associate Professor of Electroacoustics at the University of Derby, Adam Hill. Adam, welcome to Sound Design Live.

Thanks for having me, Nathan.

So, Adam, I definitely want to talk to you about a bunch of stuff. We’ve got a lot of questions: managing sound exposure, noise pollution, diffuse signal processing to maybe cure power alley, I don’t know, and the effect of stages near gradient arrays. But first, once you have set up a sound system, what’s maybe one of the first listening tracks that you want to put on to hear something through it?

For years, my track was Broken Hearted Road by Sonny Landreth. About a year ago, I switched over to three songs that I always listen to. One was stolen from my now colleague John Burton, which is Teardrop by Newton Faulkner.

So, Adam, how did you get your first job in audio? Like, what was one of your first paying gigs?

My first paying gig? That was my mom’s fault.

Thanks, Mom.

Yeah, thanks, Mom. Really, it was not on purpose that I ended up in this industry. You know, even at the age of about 17, 18, I was pretty convinced I was still going to be a musician. I had, luckily, enough musicians in the family to tell me, you know what, if you’re good at something else, you should probably do that. But it really came down to: I finished high school, and my mom was like, well, you know, you need a summer job.

You need to do something. She’d already kind of pressured me into signing up for university, so I was going to do that. I was going to do electrical engineering that fall. But she said, look, you’re not bumming around this summer, you’re doing something. And she said, you do realize that you like music, you like live events, and there are engineers who have to make this all happen. It had never even crossed my mind that, oh yeah, there are a lot of people who work in this industry, but yeah, that was it.

So I thought about it and said, OK. I sent out a few emails, which, especially back then, I don’t think anyone checked their emails, and just got nothing. But I thought about it, like, oh, you know, I had been playing in my dad’s band since I was about 10 years old, and obviously we’d do maybe a dozen gigs or something like that. And we kept coming across this guy named Gary Gand, and Gary runs Gand Concert Sound, which is located just outside Chicago.

It’s a company that had been running since the mid 70s, and still is running. And I said, oh yeah, I know Gary. So I asked my dad, like, you’ve got Gary’s contact information, you know, would you mind calling him up and seeing if we could have a chat? So we got hold of Gary. Gary invited me over to his house one night and basically sized me up, you know, showed me a few of his guitars, asked me a few questions to see if I knew anything about sound, and then basically said, look, I’ll give you a shot.

I’m not going to pay you much, I’m throwing you in the deep end, and if you can’t swim, that’s your fault. And away I went. So that summer, yeah, it was brutal. I had all the worst jobs, you know, testing about a thousand XLR cables, repainting the lift gates on the trucks, cleaning off your big analog multicore snake after it came back from a circus gig, you know. Yeah, all the good stuff.

But that’s where I started. And when I started, it was just a summer job. That was it: I was going to, you know, make a few bucks, maybe go to a few gigs if I was lucky and get paid for it, and then go off to university and do something else. Sure. And that’s probably almost 20 years ago now, and, you know, here I am. I’m still doing work for Gand Concert Sound and still involved.

That’s amazing. Your first job? One of my colleagues at the time, Rob Lizzo, about a month in, he had said very little to me the whole time. I think he was kind of sizing me up from a distance. And he came up to me after about a month, just looked at me and said, you’re still here? You’re doomed now. Yeah, it’s like telling me you’re never going to escape.

Looking back on your career so far, what do you think is one of the best decisions you made to stay happy and get more of the work that you really love?

I think the best decision that I made was to stay in academia, actually, which is probably not the usual answer that you’ll get from interviewing people on this podcast. But basically, when I ended my undergrad degree, I knew a lot about electronics and electrical engineering, and the plan was, right, I’m going to go back and work full time in live sound. And I thought about it, and I’m like, OK, I don’t really know much of anything about acoustics.

So I started snooping around and I found that if you go to the UK, you can do an entire master’s degree in a single year.

So I thought that was pretty cool. I’ll take a year off.

I’ll go live in the UK and learn a bit about acoustics and then come back and do live sound. So I did that. I learned a lot about acoustics at the University of Edinburgh, you know, the proper physics and mathematics behind it. And then I happened to get in touch with this professor at the University of Essex, Malcolm Hawksford. You probably don’t know him in live sound, people probably don’t know him, but the name is absolutely legendary on the more academic side of audio engineering.

Yeah, he’s spent fifty years working on power amp designs and all sorts of digital signal processing routines. He’s really kind of behind the scenes, he’s touched quite a bit of the technology we use, and he convinced me to do a PhD. So at that point I kind of had to make the decision, right? You know, do I go the academic route? Because if you do a PhD, I think you’re pretty much overqualified for, you know, sticking it out as a live sound engineer.

And I’d be… I’d be standing so far away. You just love it so much. OK, sorry, go ahead. But I think you’d also be stepping away from the core of the industry for such a long period of time that, you know, I think it would take some time to get back into the full-time groove of it. But I thought about it and said, well, you know, I’d spent summers with Gand, and I was still doing summers, probably about four or even five months a year full time with them, working in the warehouse, going to do gigs.

And, you know, I said, OK, the gigs are great, I love those, I live for that, but the warehouse work is just really monotonous, and I was getting pretty bored with it. And I just kind of said, look, you know, I’m not sure if I’m going to be happy doing this for the rest of my career; I need maybe a bit more of a challenge. So the academic world seemed to fit.

That’s where I can still kind of keep a hand in live sound. You know, to this day, I still go back, you know, at least a month every year and go work for Gand. But it kind of allowed me to work more on the research side of things and teach the next generations of engineers. So I did the PhD, and then there was the University of Derby, which is smack in the middle of England.

If you’ve heard of the car company Rolls-Royce, that’s where Rolls-Royce is from. They were looking for a lecturer who had a background in live sound, understood music, understood acoustics, understood electronics, and was research active. And there aren’t many people in the world who tick all those boxes. So it was just the right place at the right time, and I’ve been with Derby for almost nine years now.

All right, Adam, so we just wrapped up Live Sound Summit. Your presentation was called Managing On-Site Sound Exposure and Off-Site Noise Pollution, and my takeaway from your presentation is that you ended up raising a lot more questions than answers during this entire process. And that’s because it’s this big project that’s never been done before, and you’re trying to bring together all these people in the world. And so the first thing you guys did is just, like, hey, what are all the things we don’t know?

What are all the questions that we have? But there were a couple of interesting moments that I wanted to follow up on. So you mentioned that most of the regulations about on-site noise exposure say it has to be measured at the loudest point where someone would experience that noise exposure. So I’m wondering, how do you do that? Let’s say I’m required to measure audience exposure during my event and the regulation states that it must be measured at the loudest point in the audience.

And I’m not going to somehow, like, figure out how to get a microphone into that mosh pit. So walk me through how to set up a measurement at front of house that could accomplish the same thing.

Well, in practical terms, I’ll talk about what’s possible at the moment in most software, most bits of software, and I’m thinking 10EaZy, or 10EaZy within Smaart, or, you know, the stuff Smaart is working on kind of separate from 10EaZy. The terminology will differ between the software, but it’s a correction factor you add in to your measurement feed coming from the microphone. So what you can do is, before anyone gets there to the gig, you can walk around the venue with a sound level meter, whatever you have handy.

Hopefully it’s calibrated, ideally, and you identify that loudest part of the audience, wherever that may be, and take a measurement. Then keep the signal going through your system the same, go to front of house where your measurement location is actually going to be during the show, and take a measurement. The difference between those two sound pressure levels, that’s your correction factor. You plug that into the software and then it gives you the best possible estimate of what the level is during the show at the loudest location, based on your front of house measurements.
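
As a sketch of that workflow, with made-up example readings (your own walk-around measurements go here):

```python
# A minimal sketch of the correction-factor workflow just described.
lp_loudest = 102.3   # dB at the loudest audience position, empty venue
lp_foh = 98.1        # dB at the front-of-house mic, same test signal
correction = lp_loudest - lp_foh          # the correction factor

def estimate_loudest(foh_reading_db: float) -> float:
    """Estimate the show level at the loudest spot from the FOH mic."""
    return foh_reading_db + correction

print(estimate_loudest(99.0))   # -> 103.2 (broadband only; see the caveats below)
```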

So I’ll say it’s imperfect, because what that doesn’t take into account is any effects of the audience. At high frequencies the audience will provide absorption. At low frequencies, there’s been some really interesting work done by Elena Shabalina, who’s now at d&b, where she found that an audience that is in front of a ground-based subwoofer system actually causes resonances within the audience, almost like standing waves and room modes. OK, so you may get some, you know, a couple of dB difference throughout the audience at low frequencies.

So you’re not going to pick up on that with your correction factor. But it’s a quick and easy way of saying, OK, give me an estimate of what’s going on at the loudest point, and if you look at the regulations, that’s what they recommend. But it’s a frequency-independent correction. It’s not looking at octave bands or third-octave bands. It’s just one overall dBA or dBC measurement, and that’s it.

OK, so let’s talk about NASA. This is kind of fun. From your research that you got from NASA, you wrote that…

Well, I’ll just read what you wrote: note of caution for 50 to 60Hz, due to chest resonance causing whole-body vibration, annoyance and/or discomfort. So is this something that I should be watching out for?

With my system calibration, let’s say if there’s a resonance at 55Hz, could that potentially cause some, you know, uneasy feeling unconsciously in my audience, and should I, like, try to avoid those resonances?

I think at the moment, as a designer, it’s not something that you need to do anything about, because there are still too many questions that are unanswered about this. I can tell you from experience, when I’m tuning a system and I walk right up to a ground-based subwoofer system and I’m right there standing in front of tens of meters worth of subwoofers, yeah, my eyeballs are rattling, my teeth are shaking, I can’t see straight. And I’m convinced that everyone will have a slightly different reaction to this.

You know, it just might be that I have a certain reaction because the proportions of my body are a certain way. Someone else who’s sized differently to me might be different. So I don’t think it’s a one-size-fits-all sort of thing. And I quote the NASA research because that’s all we have to go on right now. There’s just not anything out there on the level of low frequency we’re being exposed to. There’s plenty on lower-level low frequency, where they say, oh, you know, it’s not a problem, because you’re talking 60, 70, maybe 80 dB.

We’re not talking 60, 70, 80 dB. We’re talking front row of the audience getting on average between 120 and 130 dBC, and at peak we’re exceeding 140 dBC. And the only people who have even started to look into this is NASA.

Can you talk about what Vanguardia and SPLtrack are doing with off-site noise pollution? I just thought this was really interesting, and I thought, you know, people should know about it, since they’ve been doing it for years and it sounds like they’ve had some success with it.

Well, it’s something that once you hear about it, you’re like, oh yeah, that makes a lot of sense. Vanguardia and dBcontrol and the guys at SPLtrack. I should say Vanguardia and SPLtrack are UK-based companies and dBcontrol is based in the Netherlands. So if you’ve done any sort of festival work in those areas, you’ll know these guys. But their main method of attack to minimise annoyance off site is actually communication.

It has nothing to do with engineering the sound system or, you know, telling the engineers to turn it down constantly. I guess the first thing they implement is a communication protocol, and that’s communication with all stakeholders involved in an event. So they bring on board the people who are managing the event, the system designers, the sound engineers, even in some cases the musicians, depending on what the event is, but also bring in the local community.

And that’s really the important bit, because in the run up to the event, they can distribute flyers or send out some method, find some way to alert the local community that, look, there’s an event that’s happening. This is when it’s happening. This is what we’re doing to limit the noise coming to your place. And if you have a problem, here’s how you can get in touch with us in the run up to the event and during the event.

And it’s that during the event method of communication that I think is really helpful because you’re sitting in your house and let’s say your neighbors are having a party next door. You know, the thing that probably drives you more crazy than anything else is that you have no control over that, aside from angrily pounding on the door, you know, and, you know, making a war with your neighbors. You don’t have control over that sound. And this is backed up by loads of research, really, where it’s been shown that someone having the perceived control of an annoying noise actually lowers the annoyance.

So what they do, for Vanguardia, is they create a hotline. So all you have to do is pick up the phone, call this number and say, look, here’s where I live and it’s too loud, it’s annoying. And they’ll say, OK, great, we’re going to contact the people in control of the sound and we’ll get this sorted. Now, whether they do that or not is almost irrelevant. Sure. Because you’ve told someone who has control over this.

And so in your mind, it’s going to be sorted, even if it’s not. It’s all psychological. And then, kind of taking it a step further, the guys at SPLtrack, which is run by Chris Beale, they’ve created an app that any local resident can have on their phone, free to download. And if they have a noise complaint, they send a text message. It pings the crew at front of house and they see on their display where on the map it’s coming from, and the message.

And if the front of house engineer wants to reply back, they can text right back, and there’s that direct line of communication. But it’s the same idea. It’s giving people the perceived control over the sound. And in almost all the cases, it has significantly lowered complaints.

Why is it illegal to fly subwoofers in Amsterdam? This is a very frustrating story. What happened was, if you’ve ever visited Amsterdam or if you live anywhere near Amsterdam, you’ll know that over the past maybe 10 years there’s been an explosion of outdoor music events there in the summers. It’s just one of the places to be for a good music festival, and they do loads of good events outdoors there, and indoors for that matter. But the city of Amsterdam, again, if you’re familiar, is not the cheapest of places to live.

So if you’re living in central Amsterdam, you’re paying a premium to be there and you’ve probably been there for quite some time. All of a sudden, with all these events popping up, you’re having some very annoyed rich people. And so they’re going to the city saying, what are you doing? Well, I can’t live in peace here anymore. You know, sort it out, fix it. So what Amsterdam did was they said, OK, we’re going to learn more about this.

So they didn’t just slap a regulation on it. They said, we’re going to learn some more about this. We’re going to figure out what we can actually do to have both sides of the situation live in harmony, so we can have the festivals while also keeping all our rich residents happy. So they got in touch with a local audio firm who does system design as well as research and product development, and they know their stuff. And they said, do a study and tell us what the best available techniques are for system design to minimize noise pollution.

So they trialled a bunch of different systems. They found this big field just outside the city and took loads of measurements. It was a fairly well-constructed experiment, about as well as any outdoor shootout could effectively be designed, and they tracked the weather and all that sort of thing. And then, from the data, they tried to draw some clear conclusions. And what happened was that one of their conclusions was that if you fly subwoofers, the sound propagates further and causes more noise pollution.

And they seemed to point out that, yeah, that’s what our data is showing. And so I looked at it, and there are a number of other people, colleagues who are working on this project, who looked at it as well, critically looked at the data and said, well, we can’t see that. More importantly, we looked at what systems they tested, and they didn’t test a flown subwoofer, for example. There was never a subwoofer in the air during that test.

And the argument was that, oh, well, we used a system which goes down to 35Hz. And so I pulled up the documentation: yes, something comes out at 35Hz, but it’s, you know, whatever, 30 dB below everything else. So they didn’t have good enough signal-to-noise ratio in their data to give anything conclusive. Although I’m not knocking the whole report, I think there were some really good observations made in that work, you know, I’m not slating the whole thing.

But in that one specific area, they drew a conclusion where a conclusion should not have been drawn. Unfortunately, that was the conclusion Amsterdam took, and they said, oh, OK, you can’t fly subwoofers because those send the low frequencies really far. And they said, fine, that’s the new rule, a no-fly zone for subwoofers, the end. Wow.

I’m also just impressed at how quickly and almost efficiently they were able to make that happen. Normally it seems like anything else we try to get done for the audio industry takes forever and is a big hassle.

Adam, you like the World Health Organization hearing test app. I tested it out as well, and it’s interesting. Instead of your normal test that has some sine tones at different levels, where you see if you can hear them, this one is just a recording of a person reading numbers at different levels relative to background noise, and then you get a score. So I took the test and I got a seventy-nine, which is supposed to mean I have good hearing.

So what did you get?

I got an eighty-six. Oh, but there’s a big but there. I know my hearing, especially hearing in noise, is not an eighty-six out of 100. It’s not. I’m convinced, and I’ve had chats with a lot of people who are very closely involved with hearing health, I’m convinced that us audio engineers who are trained to listen critically are able to cheat these tests. We don’t fit the kind of norm of the general public, which is why you got a higher score than me.

Well, yeah, but I couldn’t avoid it. You know, we critically listen. Our job is to pick out these little, nearly unrecognizable details in audio that no one else will hear. Because I know, you know, that’s my big bit of hearing loss. It’s not revealed in terms of frequency loss. It’s hearing in noise. And if I’m in a busy environment trying to hear a conversation, I really struggle. And it’s called hidden hearing loss.

And it’s something that’s less easily tested for. So I think the test is useful and interesting to take, but if you’re an audio professional and a good critical listener, I would lower your score probably by about 20. Oh, wow.

OK, shit. So maybe I don’t have hearing as good as I thought.

So, Adam, you love talking about diffuse signal processing. If you look up Adam Hill in the AES E-Library, there are a bunch of papers on this subject. So to dive into it, I’m just going to read this paragraph from your website so you don’t have to explain it all again, and I kind of want to talk about the practical implications of this. OK, so here’s what you wrote. While a ground-based, centrally distributed subwoofer array is a common and straightforward solution, it can be impractical and unsafe in certain situations.

Often a left/right subwoofer system, ground-based or flown, is a better choice. The problem with a left/right configuration is that there will be severe comb-filtering, causing inconsistent horizontal coverage. To avoid this issue, the left/right signals must be decorrelated. Existing approaches involve unique EQ

applied to each side of the PA, which isn’t great from an efficiency viewpoint, or the use of allpass filters, which generally result in a reduction in audio quality. And so I’m just wondering, could this fix power alley for large sound systems? I know you intended this for smaller rooms, but what about the uncoupled subs that we use on shows?

It actually is designed for big systems. That was the initial intent. It was actually kind of an extra application that I dreamed up with my PhD student, John Moore, who did a lot of the work on this, to test it in small rooms. But yeah, to go to your question: yes, it can deal with power alley and deal with all the kind of notches in the frequency response you get along with it. Basically, without explaining the whole thing…

You can follow the link to my blog to get the full, in-detail explanation of how it works. But we’ve come up with a way to decorrelate two or more signals that are initially identical, in a way that you can’t perceive it. So if I’m turning the decorrelation on and off and you’re just listening to the signal over headphones, you shouldn’t hear a difference in the tonality at all. But statistically, those signals will be decorrelated. So the idea is that you run those through your left/right subwoofers, and you’re not going to be getting that coherent interference that causes the difference in tonality from left to right in the audience, and it will then not cause the power alley, which is that big bass buildup you get right down the middle of the audience.

So that really was the initial focus of this research. That’s what we wanted to solve. And I’m fairly confident we’ve come up with an algorithm that can do it.
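
To see the effect he’s describing, here is a toy Python sketch. The time-varying random phase below is a crude stand-in for decorrelation, not Adam’s DiSP algorithm, and the geometry (subs 15 m apart, listener 3 m off the center line at 20 m depth) is assumed.

```python
import numpy as np

# A toy look at power alley comb filtering versus decorrelated subs.
c = 343.0
f = np.linspace(30, 120, 500)                  # frequency axis, Hz
k = 2 * np.pi * f / c
rL = np.hypot(20.0, 3.0 + 7.5)                 # path to the left sub, m
rR = np.hypot(20.0, 3.0 - 7.5)                 # path to the right sub, m
pL = np.exp(-1j * k * rL) / rL
pR = np.exp(-1j * k * rR) / rR

coherent = 20 * np.log10(np.abs(pL + pR))      # identical L/R signals

rng = np.random.default_rng(0)
power = np.zeros(f.size)
for _ in range(200):                           # average over random phases
    phase = np.exp(1j * rng.uniform(0, 2 * np.pi, f.size))
    power += np.abs(pL + phase * pR) ** 2
decorr = 10 * np.log10(power / 200)

print(f"coherent ripple:     {np.ptp(coherent):5.1f} dB")  # deep comb notches
print(f"decorrelated ripple: {np.ptp(decorr):5.1f} dB")    # nearly flat
```

The coherent sum shows a deep, listener-position-dependent notch (near 82 Hz for this geometry), while the averaged decorrelated response tends toward the smooth sum of the two sources’ powers.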

Oh, cool. So how long before we can get a demo from you that is low latency?

It’s on my list of things; it’s just a question of when I get around to it. I know what needs to be done and I’m pretty confident that it can be done. I’m shooting for a few milliseconds of latency. At the moment we’re looking at thirty, forty milliseconds, which isn’t going to work for live sound. So that’s the last thing on the list, really. I’ve been having informal conversations with people in industry, manufacturers, about implementing it.

Nothing concrete at the moment, although there’s a completely different application that I can’t talk about at the moment, on the other side of the world. Yeah, secrets. But it has nothing to do with live sound. It’s a completely off-the-wall application of it. So it’s getting out there. And to be honest, I mean, looking at my email inbox, I’m getting at least one or two messages every day from someone in the industry saying, hey,

I’ve heard you’re working on this. Can I have it? So, yeah, I think the demand is there, and I think enough engineers are open-minded about taking this approach. Because, let’s be honest, what we’re doing is messing up our signals to make the listening experience better, which I think for some people doesn’t quite sit too well.

Sure. And that actually makes me a little bit more happy about it, because let’s imagine that it’s available already. There are going to be plenty of people who reject it, and there are going to be plenty of people who see it as, ah, now this is the perfect solution, which I think is cool. And so then those early adopters are going to go out and test it, and eventually we’ll find out, you know, if it’s going to be accepted in the long run.

Yeah, that’s it.

So I’m hopeful with it. I think it’s something that at least needs to be tried in the industry.

I want to mention one more application that you and I talked about previously. So I showed you my main-sub alignment calculator that I’ve been working on, and then we got into talking about this decorrelation of signals, and it came up that you could use this in just the tiny crossover region between these two boxes, between main and sub. And so if you just decorrelated that tiny area, then you could potentially stop worrying about main-sub alignment. You would still want them to, you know, arrive at the same time and not be, like, cycles and cycles late.

But it could save us from some of the more drastic effects created by bad main-sub alignments.

Yeah, if you focus this decorrelation on the crossover region between your mains and your subs, which is entirely possible with diffuse signal processing, you can be frequency selective or you can apply it to all the frequencies. It doesn’t matter, you can choose. But if you just focus on the crossover region, that’s the area where you have a huge amount of problems, especially if you have ground-based subwoofers, of getting the mains and the subs to play ball. And you’re looking at some of the line arrays out there these days.

They’re going down to 50Hz or below. You know, they’re definitely moving well into the subwoofer range. So you need to sort that out. And if you can decorrelate those two elements in that specific frequency region where they overlap, then time alignment is less critical. And we talked about it earlier, but I’m not saying that you shouldn’t time-align. I think you always are going to have to time-align. That’s really the name of the game of system design, but it makes it a little less sensitive.

So if you’re a little out in certain areas, you’re not going to be getting massive peaks and dips in the frequency response. Sure.
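
A quick worked example of how sensitive the crossover region is: assuming main and sub contribute equally at an 80Hz crossover (my assumption for illustration), the coherent sum for a time offset dt is 20·log10|1 + e^(−j2πf·dt)|.

```python
import numpy as np

# Summation of two equal-level sources at the crossover frequency as a
# function of time misalignment. The 80 Hz crossover is assumed.
f = 80.0
for dt_ms in [0.0, 1.0, 2.0, 4.0, 6.0]:
    phi = 2 * np.pi * f * dt_ms / 1000       # phase offset, radians
    level = 20 * np.log10(abs(1 + np.exp(-1j * phi)))
    print(f"{dt_ms:4.1f} ms offset -> {level:+7.2f} dB vs one source")
# 0 ms gives +6 dB of summation; by 6 ms you are approaching the 6.25 ms
# half cycle at 80 Hz, where the two sources cancel completely.
```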

This is the sort of criticism of crossover alignment: hey, you can only be aligned at one point, so why do it at all? Does it really make a whole lot of sense, because then you never know really where you’re aligned and where you’re misaligned. But it sounds like it would then allow you to make that alignment work for a much larger portion of the audience.

That’s the idea.

Let’s talk about subwoofer positioning. I wish we had time to dive into a lot more of the writing and research that you’ve done, but I picked out this one that was interesting for me, called Subwoofer Positioning, Orientation and Calibration for Large-Scale Sound Reinforcement. So part of the study was looking at how stages affect the predictions that we do in our modeling software and how our subwoofer arrays then perform in the real world.

And one of the things you wrote is: based on these results, it is evident that subwoofer placement directly underneath the stage can almost eliminate any advantages gained with cardioid polar patterns; the low-frequency SPL on the stage is virtually identical to that in the audience. Moving the subwoofers two meters forward so that they are not underneath the stage results in much lower SPL on stage while preserving the audience area response. So this is interesting. What I’m taking away from this is that all directional arrays, not just gradient arrays, don’t work under the stage, don’t have the same result that we expect, and to get them to work as we expect, they need to be at least two meters away from the stage.

Well, the paper that you’re talking about, that was from about 10 years ago, and the findings presented in it were purely based on simulations. Now, the good news is that since then this has been tested. I had an undergraduate student a few years ago do these tests. So we put a directional subwoofer on its own, took some measurements around it in the virtual stage area, the pretend stage area, and the audience, and then plopped a stage on top of it to see what happened.

And the results from those experiments almost perfectly lined up with the simulations. That was a single unit that was cardioid that basically turned into an omnidirectional unit when a stage was placed on top of it. When you kicked that unit out by about a metre in front of the stage, and then we tested it with even less and the effect was about the same, it regained its cardioid response. So from that research and other smaller experiments done by myself and others: any sort of directional low-frequency system, whether it’s in a single box or whether it’s a gradient or an end-fire approach, isn’t going to play ball when underneath the stage.

So you have to get it out as much as you can in front of the stage. Now, whether that’s a metre, two metres, half a metre, as long as it has some sort of breathing room, then it has the ability to achieve the cardioid response. If you chuck it under the stage, there’s no point in doing it. It’s going to be omni.

It’s interesting you say that, because I recently heard a story from Mauricio Ramirez about how a student had a question about this at a seminar he was teaching in Iceland. And so they decided to put a gradient array on a dolly or wheels somehow, and then they had a rope and they were able to, like, measure it and pull it closer towards, I think it was a boundary, a wall or a stage, until they saw the results change in front of the array.

And it was when they got close to a meter that they started seeing the results change. So it’s cool to hear you’ve had the same result.

And I’m glad you mentioned the boundary effect as well. I mean, this is something that’s been known since the 1950s with Richard Waterhouse’s research. You know, we’ll know the Waterhouse effect, at least some of us: where you get towards a perfectly reflective boundary, you get plus 6 dB in some cases. But if you read his research a bit closer, he also looks at dipole units, where his version of a dipole unit is basically an end-fire cardioid array without the delay. I should say gradient, not end-fire:

a gradient array without the delay, just the reversal. And what he found was, if you back this up towards a wall, it just kills your output in front. And I’ve had this happen to me before, years ago, with my colleague Adam Rosenthal. We were doing a gig at the Auditorium Theatre in Chicago and we were using Nexo 18s at the time, so cardioid boxes, and the only place we could place these was right in front of the proscenium.

And so it was right up against the wall. We flipped the switch on the system and there’s nothing. I’m just looking at the amps, and the amps are slammed, like there’s definitely electricity coming out of these things, but we were getting nothing acoustically. And we thought about it, and this was still early on in my career, I didn’t have a proper knowledge of it, just experience. And we turned off the rear drive unit, so it became omni, and all of a sudden, boom, you had level once again.

And a number of years later, when I was learning about Waterhouse’s research and reading through his paper, I was like, yeah, that’s what happened.

We had the rear sources too close to the wall and it was canceling itself out, effectively. What happens is the direct sound is canceled by the reflection off the wall. So it’s the same thing, which is why, even when I have my subs out for a ground-based system in front of a stage, I have to battle the staging people to not put a stage skirt up. And this is, welcome to my world in the summer on the Chicago festival scene.
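
Here is a one-dimensional image-source toy of that boundary effect, with assumed geometry (not the Auditorium Theatre measurements). A rigid wall behind a source behaves like a same-polarity mirror image of it; an omni sub picks up the Waterhouse gain, while a polarity-reversed pair, Waterhouse’s delay-less dipole, loses output out front instead.

```python
import numpy as np

# 1-D image-source toy: sources on a line, rigid wall at x = 0, listener
# out front at x = L. Geometry and the 45 Hz test tone are assumed.
c, f, L = 343.0, 45.0, 15.0
k = 2 * np.pi * f / c
d = 0.5                          # front/rear element spacing, m (assumed)

def level(sources):
    """On-axis level of monopoles given as (position, polarity) pairs."""
    p = sum(s * np.exp(-1j * k * (L - x)) / (L - x) for x, s in sources)
    return 20 * np.log10(abs(p))

for gap in [0.05, 0.25, 0.5, 1.0]:               # rear element to wall, m
    omni = [(gap, +1)]
    pair = [(gap + d, +1), (gap, -1)]            # reversal, no delay
    for name, src in [("omni", omni), ("pair", pair)]:
        images = [(-x, s) for x, s in src]       # same-polarity wall images
        change = level(src + images) - level(src)
        print(f"{name:4s} {gap:4.2f} m off the wall: {change:+5.1f} dB out front")
# Tight against the wall the omni gains toward the Waterhouse +6 dB while
# the reversed pair loses front output; with around a meter of breathing
# room the penalty disappears, in line with the stories above.
```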

You know, one day I’ll take down the stage skirt and have a talk with the staging people and say, look, you know, my speakers aren’t going to work right with that stage skirt right behind them, so I’ll take it down. Yeah, if it’s not acoustically transparent, it’s the same thing: it’ll destroy the cardioid subs. And the next day I rock up to the site for day two of the festival and the skirt will be back up, because it looks nice.

I know. I’ll fix that. Yeah. So it’s part of my routine: every morning I have to go take off the stage skirt again so my subs work. But yeah, stage skirts are also really important to keep an eye on, because if they’re right up against your cardioid subs… I don’t know which skirts could be described as acoustically transparent, though. I’d have to test them all.

I talked to John Burton about it, because John has wanted to do a test not only on this, but he wants to know what happens to this scrim material when it gets wet. And I’m sure a lot of people have experienced that when it starts to rain. I’m not talking about subs now, I’m talking about the banners that get flown in front of our arrays. All is well and good until it rains and all those tiny little holes fill up with water.

So John wants to do a big study on this, not only for low frequencies, but for the high-frequency effect of just dew in the morning or, you know, water in the air.

Yeah, yeah. So my answer is, I don’t know. But I know the stage skirts used at the fests I work in Chicago are not at all transparent. They’re sheets of plastic. OK, so they need to go.

We have a couple of questions from Facebook. Elliott Clarke asks: Where is the medical hearing research heading, or currently at, for dBC music exposure rather than constant dBA industrial noise? So I guess what he’s referring to is that so many of our regulations refer to the dBA standard, right?

That’s right. All of them refer to the dBA standard, with most of the industrial noise regulations having dBC limits as peaks. So in terms of the research for our sector, for music-based noise, as far as I know, there is little to no research.

There are some interesting projects that are more in the early stages in the Netherlands looking into this, looking into the health implications of low-frequency sound as quantified by dBC. The guy to talk to about that is Marcel Kok from dBcontrol. He’s currently doing a part-time PhD at a university in the Netherlands looking into this.

So he’s really, as far as I’m concerned, the authority on this at the moment. But he’ll be the first to tell you that we don’t know much. There just isn’t the research. And when Marcel and I and others from the live event community travel to Geneva and are at these meetings at the World Health Organization, it’s about four of us in a room of about 50 people, and we’re the only ones who really seem to have any clue as to what we’re talking about when we’re talking about 140 dB at low frequencies. I think most people don’t believe us.

We get some eye rolls. We get some distinguished professors from Europe basically pooh-poohing the whole thing, saying it’s not a problem. And really we’re saying, look, we go to these events, we work there, we’ve stood there in the audience, and, you know, we refuse to accept that this is perfectly safe. But at the same time, I can’t sit here and say anything definitively. I can’t say that it’s definitely dangerous, but I can’t say that it’s definitely safe.

I’m saying here are all the questions and we need to do more research on it.

So Elliott would like you to continue to extrapolate into the future concerning the safety. He asks: do you think we will reach a sensible compromise for impactful, powerful live shows within safe exposure limits and environmental, off-site concerns?

At the moment, these are two separate things. What’s going on with the WHO is purely looking at sound exposure on site. They don’t consider, and they make it a point not to consider, anything off site, because that would get messy. And that’s pretty much my answer: it’s going to get messy. What I hope, having gone through this work with the AES and releasing this big report with sort of what we know and what we don’t know, is that I’m fairly confident we can achieve the levels we want to achieve and the impact we want to achieve at live events.

I don’t think we’re going to be coming out and saying you have to turn everything down. I hope we don’t have to say that because I like loud sound more than most other people, probably.

But I think it will cause maybe some slight changes to practice. For instance, when you can, try to fly your subwoofers instead of putting them on the ground, or at the very least try to maximize the distance between any loudspeaker and the closest audience member. So you don’t want to have someone, you know, within arm’s reach of any loudspeaker, especially looking at the subwoofers. And with front fills, they’re close, but they tend to be at a lower level anyway.

So you can make that work, but effectively it’s protecting the people closest to the PA. That’s the main thing. And if you can maximize the distance they are away from the closest speaker, that means you have less to worry about, and then you can get the level you want at front of house without killing the people down front. So I think there’ll be some suggested changes of practice where practical. For off site, I think there’s more research to be done along the same lines of what they did in Amsterdam, but doing slightly more controlled and in-depth experiments to make sure that your conclusions are robust.

But again, this is what we try to kind of hammer home with the WHO: we think that it has to start with system design. And what we’re trying to avoid is regulations coming out and then causing us to scramble to meet those regulations. What we want is for the discussion to start with finding the best practice for system design and then craft some regulations around that. So that’s what we’re looking for.

But I would really hope that none of this actually impacts the exciting live event experience that we’re delivering to people on a daily basis.

Matty Luka Sokanovic is looking for some software suggestions. So they say: ask him if there are any 3D acoustic modeling softwares that you can drop your own models into, that are priced reasonably, that is. Define reasonably, yes. I wrote back to him on Facebook and said, do you mean, like, different than something like EASE? And he didn’t write back, so I’m not sure what exactly he’s looking for or thinking of. But maybe you could just list a couple of your favorite software packages that allow you to bring in models from the outside.

Well, for what we do, which is electroacoustics, designing sound systems and putting them in an acoustic environment, as far as I’m concerned there’s only one accurate piece of software out there right now, and that’s EASE. EASE brings in the proper loudspeaker data from the manufacturers and gives you, from my experience and from talking to others, the most accurate estimation of what that sound system will do in that room. You also have CATT-Acoustic, which I’ve used a little bit, but I lost interest because it doesn’t go down to the low frequencies, nor does EASE for that matter.

And you have Odeon, and Odeon’s really great for pure acoustics and designing great concert halls. But really, I find myself, I keep going back to EASE, and that’s what I teach my students with, because it is the most accurate with loudspeakers. But it’s going to cost you. It is expensive, and you can’t really avoid that. As far as I know, there’s no free software out there at the moment that does kind of the full package.

So a couple of short questions for you to wrap up. What’s the one book that has been immensely helpful to you, Adam?

There’s only one book that I’ve ever read more than once. “I’ve only ever read one book”: if it was my teenage self talking to you, that would probably be a true answer. No, I love reading. I’ve got bookshelf after bookshelf of books. I devour them. But the only book I’ve ever read twice is Fear and Loathing in Las Vegas, for sure. And, you know, I’m not saying that this has taught me how to live my life or anything, because I think I’d be dead by now if I was following those examples.

But it taught me to laugh at life, not take anything too seriously. So important, you know. And I think that’s, you know, that’s something that, when you read Thompson’s work, yes, he’s outraged by all these things he’s seeing and commenting on, but at the same time he’s just being absolutely ridiculous, you know, and having fun in life. And I think that’s important. You know, we talk about these things and we’re serious about audio and all these important things, but ultimately we’re providing entertainment and we’re here to help other people have a good time.

And we have to have a good laugh all the time, really, you know, and not be too serious.

Adam, where is the best place for people to follow your work?

The best place is probably checking in maybe on a monthly basis to my website. My website is where, once I’m allowed to, I’ll post my publications so you can read my papers. It’s usually about a six-month to a year delay before I’m allowed to put them up. Or email me and ask me what I’m working on.

I don’t know. Yeah, I’ll pull you in and you’ll never escape. Well, Adam, thank you so much for joining me on Sound Design Live.

It’s my pleasure. Thanks for having me.


Copyright © 2021 Nathan Lively