Sound Design Live

Build Your Career As A Sound Engineer

  • Podcast
  • Training
    • My Courses
  • Archive

Can you mix better on speakers with flat magnitude and phase?

By Nathan Lively

Subscribe on iTunes, SoundCloud, Google Play or Stitcher.

Support Sound Design Live on Patreon.

In this episode of Sound Design Live, I talk with the Co-Founder and VP of Products at Sonarworks, Mārtiņš Popelis. We discuss the magic under the hood of Reference 4, why you can only please 17% of people, and the true test of mixing on a linear system.

I ask:

  • What’s under the hood of Reference 4 Studio Edition? Can you walk us through how it works? I’m curious what the secret sauce is beyond measurement and corrective EQ.
  • What are some of the biggest mistakes you see people making who are new to studio monitor setup and calibration?
  • FB
    • Alex Benn – I’d love to know how it generates the phase plot of the measurements. My Genelec 8030a’s have one full wrap when measured in SMAART, but it doesn’t appear in Reference 4.
    • Kyle Marriott – Can you ask if they’re FIR based, and tackle phase or just magnitude? Also what’s the latency? Is it fixed or variable?
    • Jon Burton – What is the normal latency on the system? I found it quite long on my MacBook pro so had to keep turning it on and off to do Zoom calls etc, is this normal? Can it be reduced?
    • Ben Davey – Do they have any plans to add profiles for In-ear monitors to the headphone side of things? Reference 4 is amazing on my closed and open back headphones, but would love to be able to balance out my Ultimate Ears as well.
    • Brandon Crowley – I love Reference 4 and use it for both mixing and listening purposes. I’d like to know, what’s next for Reference 4? It seems like it’s already the complete package, could studio monitor emulation be in its future?

Once you do the calibration and mix your song on this new reference sound, the real test of whether it’s bringing value to you is whether it is translating better.

Mārtiņš Popelis

Notes

  1. All music in this episode by Spare Parts.
  2. Organizational Culture and Leadership
  3. Mārtiņš on Instagram and FB.
  4. Quotes
    1. The objective of the reference product is to remove all of the sound coloration.
    2. The measurement microphone lives in its own reality.
    3. There is a huge DSP living in your head doing magic tricks with interpreting the world for you.
    4. The solution to sound quality, we feel, is up to the individual. It’s not one sound fits all.
    5. Invest your first $1k into getting a pair of speakers. Then invest your next $1-2k into room tuning. But from that point on, Sonarworks will be the thing that gets you to a better place than an additional $2-4k investment into room tuning.
    6. There is a genuine difference in what sound people like and there is a difference in our hearing.
    7. No matter how well a headphone company does with their R&D, they are going to hit the sweet spot for no more than 17% of the population.

Transcript

This transcript was automatically generated. Please let me know if you discover any errors.

I’m Nathan Lively, and today I’m joined by the co-founder and VP of Products at Sonarworks, Mārtiņš Popelis. Welcome to Sound Design Live.

Hey, I’m excited to be here.

All right. So, Mārtiņš, I definitely want to talk to you about Reference 4 and kind of the amazing stuff you can do with it.

But before we do that, what is one of the first tracks you like to listen to after you get a system set up?

That’s actually a good question, because there is a specific track. My favorite track, and actually the favorite track of many on our team, is Rage Against the Machine, “Killing in the Name.”

So the good thing about the track, and I mean the important thing about a test track, is its ability to very quickly uncover for you the reality of that particular sound system. So it’s actually very important for that track to be as full of musical information as possible at all times, across the whole frequency range. Right? I mean, at the other extreme, if you take some track where there is a female voice singing, maybe without a musical instrument or maybe with some kind of small sound in the background, that kind of track is a very hard thing to use to understand what the sound system sounds like. This Rage Against the Machine track is kind of all over the place from the first second.

So you can very quickly assess what’s going on, very quickly understand the A/B of the calibration impact, and very, very quickly get a feel for what’s going on, besides it being a great song, of course. But its technical ability to show you the frequency response of the system is what’s more important when you talk about a test track.

Sure. So, Mārtiņš, how did you get your first job in audio?

Like what was your first paying gig? Short answer is Sonarworks.

OK, I got into audio through co-founding Sonarworks, into a paying job in audio. Actually, I don’t come from a very heavy musical background, and I personally see that as an asset. It’s been extremely exciting to learn what the world of audio looks like from inside, from the creator space. I was excited about music and audio technologies before, but I think the fact that we actually came from outside the music industry was one of the reasons we could take a fresh look and get a fresh perspective on the problems, and maybe offer some new solutions to them.

But as I was thinking about this question, what came to my memory was that before I got into paying jobs in audio, I think my first gig in audio was in high school. We had this thing called the Radio Center, which was small. It was basically the small place in the school that hosted the school’s audio gear and the intercom of the school, etc. It was kind of left idle and forgotten.

But me and a few friends, we were really excited about that for some reason. We were also sometimes setting up the school disco, the party audio, using the school’s equipment. We uncovered that place, and for a while we had built a computer that, when the break hit, started playing some music over the school intercom. So we had a few months of happy times when the break from the lesson was announced by Pink Floyd being played out in the school’s corridors.

But then some teachers just didn’t like that their lessons were interrupted, so we got shut down. But that was my first involvement with a P.A. system, I guess.

So, Mārtiņš, looking back on your career so far, a lot of things have happened between your first time playing around with electronics and music in high school and co-founding Sonarworks, and you had to make some choices along the way. What do you think is one of the best decisions you made to get more of the work that you really love?

Quitting my previous job, I guess. Sure.

How did you know that? I mean, how did you know that it was the right time to do that? Can you sort of take us to that moment?

I mean, I think I was twice in my life in that situation, and it was always sort of a scary choice at the point in time. But both times I think it was the right thing to do. So I was right out of the university. I had joined one company, an environmental consulting firm, and I was working with them for like three years or something. At some point I just realized that, hey, in that particular setup, I really didn’t feel it was right for me to work for a company. I would really like to try building a company rather than working for one. And yeah, I took the kind of risky bet of quitting that job and joining with my friends to actually start another non-music-related company. That was, I think, also a very good choice that kind of opened up the entrepreneurship drive in me. So I haven’t really worked a proper job since. And then this non-music-related company, I was with that for something like seven years, I think.

And then at some point I just thought to myself that, hey, I think I kind of know everything about this business, and if I’m doing the same thing in five to ten years, then I’ll probably hate myself. So I understood that there really was no choice. Sure.

So, Mārtiņš, let’s get into talking about the software. There are a lot of other videos out there already, and you’ve done several other interviews about what Reference 4 is and how it works. I saw that there are several videos on YouTube already of people walking through step by step what it does and what the results are. So I think there’s plenty of material out there already about that. But I haven’t seen a whole lot of people talk about what’s under the hood.

And since the Sound Design Live audience includes a lot of live sound engineers and a lot of people that also do measurement as part of their work and output processing, I think they would be interested to know kind of how does it work.

So I was wondering if you could walk us through how it works, and maybe what some of the secret sauce is beyond measurement and corrective EQ.

Sure. To begin, the objective of the Reference product is obviously to remove all the coloration. Right? If we talk about the speakers-in-the-room situation, then, having measured many professional studios and bedroom studios alike, I haven’t really seen a completely neutral studio from the first measurement. Bedroom studios obviously have a ton of problems, because they’re not the right rooms, the set of options is limited, and the budget is limited.

But if you talk about the big studios, they also have their own problems: the gear keeps changing, somebody is bringing in a new sofa or a new shelf, the console has a lot of reflections, so there are different issues in the big studios as well. Every studio I have seen so far could have some benefit, bigger or smaller, from removing that unnecessary coloration from the sound, to give you a more accurate ability to hear what you have really mixed or produced or created musically in your track.
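The baseline idea here, measure the response and then EQ toward a target, can be sketched in a few lines. This is an editor’s illustration with made-up band values and limits, not Sonarworks’ actual algorithm:

```python
# Minimal sketch of corrective EQ: hypothetical values, not Sonarworks'
# actual processing. For each frequency band, the correction is the gap
# between a flat target and the measured response, with boost limited
# to protect headroom.

MAX_BOOST_DB = 6.0    # cap boost so deep room nulls aren't over-driven
MAX_CUT_DB = -12.0    # allow deeper cuts; cutting is safer than boosting

def correction_db(measured_db, target_db=0.0):
    """Gain needed to move a measured band onto the target curve."""
    gain = target_db - measured_db
    return max(MAX_CUT_DB, min(MAX_BOOST_DB, gain))

# Example: a 10 dB room null gets only a 6 dB boost; a 5 dB peak is cut fully.
print(correction_db(-10.0))  # 6.0
print(correction_db(5.0))    # -5.0
```

The boost cap reflects a common practical rule: deep nulls are usually acoustic cancellations that EQ cannot fill, so boosting into them only wastes amplifier headroom.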

And that is the objective. But then when you start to ask the question of what really is the excess coloration, what really is that thing you’re trying to remove, you very quickly come to a realization that it really depends on the method of measurement. One very important aspect to realize is that a measurement microphone and a human being do not hear alike. The measurement microphone lives in its own reality. In the measurement microphone’s reality, if you take a measurement in one spot, then move it a couple of inches and take another measurement, and then move it a couple more inches and measure again, you will get three completely different measurements, frequency-response-wise, because the frequency response of the room is uneven as seen by the measurement microphone. But for a human being, your reality is interpreted by your brain, in the audio domain as well, obviously. There is a huge DSP actually living in your head, doing magic tricks with interpreting the world for you.

It’s actually not showing you the world of the measurement microphone. It’s more like taking the average of all the sounds around you and all the reflections coming from your body and from different areas of your room. Your brain is constantly listening in to an area around you and averaging it out for you, so you don’t hear very radical changes in frequency response as you move your head inch by inch.

Sonarworks’ measurement is really built around this insight as one of its cornerstones. We do not try to do a single-point measurement. We really work with the average of an area that’s intelligently calculated to mimic the way a human being hears. So we’re working with the reality of a human being and not the reality of a measurement microphone. The other important aspect of the measurement software is that we do thirty-seven measurement points around the listening spot of the engineer, and our software is unique in that we can actually locate the microphone with each measurement.

So it actually draws itself a map of the relative spacing of these points in the room. That gives two important things. One is that it allows the software to understand these measurements in context and then do the right calculation of the average profile. The other thing is that it allows for really easy and consistent user guidance of the method, so that if I do the measurements in this room, or you travel over here and do the measurements in this room, or you do the measurements in your own room, the method that we apply using Sonarworks Reference will be very consistent, because the software is actually asking you to measure at very specific points around very specific places in your studio and making sure that you actually do that, without asking you to read a very thick and boring manual.

So it really enables this consistency of how the method is used and how this average is then calculated, which actually delivers the same consistent reference across different users and different locations. And it enables not just some sort of random improvement, saying, hey, we thought these things might sound better in your room, but actually allows us to talk about driving everything towards the same reference sound standard across different places. So if you work on a track in your room and you send it over to your friend in another room in the city, or maybe even on another continent, then you are actually hearing very much the same thing when you’re listening on a set of calibrated speakers.

So that’s the other important thing. And the third important thing is that we apply the same calibration target for speakers as well as for headphones. Headphones we measure in our lab, so the user doesn’t have to measure them, but also for the headphones we strive to achieve, frequency-response-wise, for you as the listener, the same frequency response as that of the calibrated speakers. And when we were still allowed to travel, we were often doing studio demos where you can actually measure a set of speakers and then also set up calibrated headphones and allow the user to compare and hear that, frequency-response-wise, it really sounds very similar. That enables a lot of portability and the ability to work while traveling, or late at night when you can’t use your speakers, or in different places where you don’t really have access to speakers.

So it’s really this portability between headphone use cases and speaker use cases. That’s also one of the unique things behind Sonarworks.

That’s interesting, this idea of linearity and portability.

I did an interview with Alex Oana from Audio Test Kitchen, and he pointed out to me that one of the reasons that lots of studios at one point started installing SSL consoles was so that you could record at one studio, go to another one, and have it pretty much sound the same. Now, I’m not sure how much that SSL console imparts its own sound. It seems like, you know, the room and the position of the speakers would also have a big effect.

But still, I understand sort of people’s pursuit of this idea of like let’s figure out how we can create some consistency.

Mm hmm. Yeah, and I think it’s especially kind of nowadays when everything is moving so fast towards the reality where music is actually more and more produced over distance and then different types of home studios, this kind of ability to work on the same platform of reference sound I think is more important than ever, because kind of, I don’t know, 20 years ago when most of the hit songs were kind of still mixed and mastered in some of the kind of high end studios under the big labels or whatever.

That was the reality. I mean, it’s still real for a few musicians, but for the majority it’s not the reality anymore. And this kind of mobile, portable, global world really asks for consistency in sound, I think.

So from your description of the way Reference 4 works, one of the things I’m understanding is that if I were to try to replicate the same thing at home in my own studio without Reference 4, I could take thirty-seven measurements around my room, average them together, and then create some sort of complementary EQ in a manual way. Or I could use some kind of FIR filter creator with some auto features. Anyway, I could do a lot of that manually, but I don’t know how to localize that microphone in the space. How does Reference 4 use that location information in its final result?

So, I mean, there are these two things that it uses the localization information for. One is this ability to see the measurement points in context, in this relative map between each other. And then we talk about an average, but it’s not that simple, in the sense that how you can combine the spatial information to actually calculate the profile is one of the secret sauces. And then the other thing is this consistency of the user interface.

I mean, sure, there are tools like REW or Smaart that allow you, if you want to geek out about how you measure your room, to measure all sorts of parameters and then do your own thing. And there are people who do that, and I don’t hold anything against them, obviously. I mean, that’s a perfectly cool thing to do. But then you have to realize that you are in full control and taking full responsibility for the things that you are measuring and the things that you’re tuning.

And I think, I mean, one of the analogies that I like about our software: if you talk about the visual design world, there is Photoshop and there is Instagram. Photoshop has gazillions of features. You have all the creative freedom you want. There is this thick Photoshop bible, and you can probably spend a good five to ten years actually mastering the tool. And once you do that, you can create wonders.

And there is no limit to how you can express your imagination through visual design. But there’s also Instagram, which is more focused towards people who don’t want to invest five years in mastering that art. They just want to push a button, get a pretty picture, be done with it, and move on with their lives. As I see the music creator world, there are some people, obviously, who like to geek out about room acoustics, but there are fewer and fewer of those people.

Most of the people I know among our users are people who are actually very passionate about music, and they want to spend as much time as possible actually thinking about music and creating music. The fact that the music from their studio doesn’t translate that well to the outside world is just a problem that they would like to get rid of as quickly and as seamlessly as possible. So our philosophy behind building the product is to give them that ability.

We’re always thinking, hey, how can we ask the user fewer questions? How can we make it even more seamless? How can we smooth out all the workflows? The ideal user experience, from our perspective, would be: you install Sonarworks, you press one button, the system says you’re calibrated, your sound is good, you can go back to your creativity. And you say thank you and do that.

So that’s the ideal user interface. Unfortunately, we have to ask the user to do more things, but the fewer questions we can ask and the quicker we can let the user return to creating music, the more successful we think we are. So that’s also, I think, a very important aspect to take into account. Totally.

I really appreciate how you guys have simplified the process and also attempted to remove the opportunity for me to make errors, you know, as the end user. Because I can see a scenario where, if I really wanted to geek out about the measurements myself, I could pursue a path where I go and get out a Smaart rig and take 37 measurements. But then I would also need to figure out a way to average them together in a weighted manner, where I would say, OK, measurements one through ten, I want those to be worth five percent, and measurements 11 through 20,

I want those to be worth seven percent.

And then, you know, you build up this complex average.

So you guys have thought all that stuff through and probably saved me a couple of days of trying to figure that all out, and I’m able to do it in, you know, ten minutes.
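The manual weighted averaging Nathan describes can be sketched like this. The weights and band values are invented for illustration, not Sonarworks’ actual spatial weighting, and for simplicity it averages dB values directly rather than power:

```python
# Sketch of weighted spatial averaging: hypothetical weights and values,
# not Sonarworks' method. Each measurement point contributes to the
# average profile in proportion to its weight.

def weighted_average_db(measurements_db, weights):
    """Average per-band dB magnitudes across measurement points.

    measurements_db: one list of band levels per measurement point.
    weights: one relative weight per measurement point.
    """
    total = sum(weights)
    n_bands = len(measurements_db[0])
    return [
        sum(w * point[band] for point, w in zip(measurements_db, weights)) / total
        for band in range(n_bands)
    ]

# Three points, two bands; the center point (weight 2) counts double.
points = [[0.0, -6.0], [4.0, -2.0], [-2.0, -10.0]]
print(weighted_average_db(points, weights=[2, 1, 1]))  # [0.5, -6.0]
```

Weighting points near the mix position more heavily mimics the idea Mārtiņš describes: the brain averages over a listening area, but the spot where your head actually sits matters most.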

I mean, ultimately, the real test of what we do is: once you really do the calibration and then you mix your song on this new reference sound, the real test of whether we’re bringing value to you is, hey, is it now translating better? Are you getting a better result sooner? Are you able to deliver a better-sounding song than you could before? That’s the real test. And the trick was actually finding the right method of measurement and calculating the profile; there are more psychoacoustic and acoustical things that go into the equation, but it’s really about arriving at the curve where you can say, OK, this actually works.

This actually helps things translate. And with Sonarworks, we really get users every day. Somebody writes to our support saying, hey, thank you: before, I was doing ten cycles back and forth to my car to check my mixes, and now, since installing Sonarworks, I just did one cycle to the car and I liked everything about it. So there are these real user stories coming in, and we’re quite confident that the sound that we deliver actually helps people get their translation faster.

There was one engineer, when we met actually at the NAMM show last year, and he was like, hey, guys, guess what? I installed Reference, and the first song I mixed on it got a Grammy. So that’s funny. So it works.

So, Mārtiņš, I know that you mentioned that you have measured a lot of systems in studios, and you also have seen and done a lot of support for the people who are using Reference 4.

So I wondered if you could aggregate some of this learning and share with us some of the biggest mistakes you see people making who are maybe new to studio monitor setup and calibration. What are some of the things people are doing wrong in terms of placement, aim, and then, I don’t know, maybe other key things?

I would say, if you talk specifically about studio systems, the installation of the physical things, the biggest thing is not the way people set their systems up. It’s actually how much of this historical attachment to the hardware is keeping people in the frame of mind that says, hey, it’s all about the speaker, or it’s all about the headphone, or it’s all about the room fine-tuning.

I mean, the way I find most people think about it is basically: first I should get the best speakers I can, and then if I have more money, I should invest more into speakers. Then at some point they say, OK, I probably should invest into room tuning. And for the speakers, there are very affordable speakers, and there are the thousand-dollars-a-pair to two-thousand-dollars-a-pair speakers.

And then you quickly get into the five-to-ten-thousand-dollar range. Right? And then usually people think that, hey, if I can’t get into the ten-thousand-dollars-a-pair-of-speakers range, then that’s my limitation. And if you talk about room tuning, you can do some small jobs yourself, but then you also quickly get into, I mean, an unlimited amount of dollars that you can invest into room tuning.

And I think people really think that that’s the goal and that’s the limitation. Whereas really, I think that now, in this day and age, with where we are with software tuning technology like ours, that has changed. I mean, by all means, we are not saying that we are the replacement for room tuning. You have to get a decent pair of speakers and you have to get a decent amount of room tuning. You can’t really go into a glass cubicle and then expect everything to be solved by calibrating your speakers with Sonarworks.

It wouldn’t work. You have to invest in speakers and you have to invest into room tuning. But really, the place where the return on investment from Sonarworks becomes your best way to improve your studio sound is closer than people think. I would say, I don’t know, invest your first thousand dollars into getting a pair of speakers, then invest your next thousand or two thousand dollars into room tuning. But from that point on, Sonarworks will really be the thing that gets you to a much, much better place than an additional two-to-four-thousand-dollar investment into room tuning.

We’ve had a real story like that. We visited a friend in L.A. who is working from his home studio and producing for a band. Somebody introduced us, and we went there to show what Sonarworks can do. And after we set it up and calibrated the studio, he was like, well, guys, you know, I had these four-thousand-dollar speakers and I was thinking that that’s my limit, and that’s why I can’t really get my mixes to translate as well as I could.

So I was thinking of selling some more gear, selling those speakers, and getting myself ten-thousand-dollar speakers. But now, apparently, I don’t have to do that, because it’s way better than I thought it would be even with those more expensive speakers. So that, I think, is the biggest thing I see. I mean, obviously, you very rarely see some very massive errors in placement, like people placing their speakers asymmetrically. Most people already know that, hey, the equilateral triangle is the best placement and you have to place the speakers at ear level.

So I haven’t seen those problems too often. But there is this kind of thinking, when really, hey, with a few hundred dollars’ worth of headphones and the calibration, you’re probably better off than with a two-thousand-dollar pair of headphones. So I like what you’re saying about don’t put the cart before the horse. So I can’t just go get some speakers out of the trash in the alley that I found with blown drivers and then install Sonarworks and expect it to sound amazing.

Sure, sure, sure.

I mean, you have to get a decent pair of speakers, but many more people are there already than they think they are.

So, Mārtiņš, let’s look at some questions that people sent me from Facebook. Alex Benn says: I’d love to know how it generates the phase plot of the measurements. My Genelec 8030As have one full wrap when measured in SMAART, but it doesn’t appear in Reference 4.

All right. So this is a deep technical question. What he’s asking about is: when you look at the plug-in or the systemwide version of Sonarworks, you can select the different curves that it’s showing, and we’re trying to be really transparent about what the plug-in is doing to your audio, so you can turn on the phase response display, which I think is what he’s referring to. But the phase response that the plug-in is showing you is only the phase response of the Sonarworks effect.

When we measure the speakers, we actually do not measure the phase response of the system. We only measure the magnitude response of the system. So we don’t really attempt to measure the phase response of his pair of Genelecs. What we’re saying is that, depending on the filter mode you select inside the plug-in, whether it’s zero latency or phase linear, if you’re in the zero latency mode, the plug-in introduces some change to the phase response of the audio signal coming through.

So what this graph shows is the phase distortion, if you will, that the plug-in is introducing to your audio, and that’s then layered on top of whatever the phase response of your speakers is. So that’s the answer to the question, I guess. Cool.
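To make the distinction concrete, here is a small sketch of how any plug-in could report its own phase contribution: evaluate its correction filter’s impulse response at a given frequency. The filter here is just a one-sample delay, chosen because its phase is easy to verify by hand; nothing here reflects Sonarworks’ internals:

```python
# Sketch: reporting a filter's own phase contribution by evaluating
# its impulse response h at frequency f:
#   H(f) = sum over n of h[n] * e^(-j*2*pi*f*n/fs)
# Example values are illustrative, not Sonarworks internals.
import cmath
import math

def phase_at(h, f, fs):
    """Phase (radians) of FIR impulse response h at frequency f (Hz)."""
    H = sum(tap * cmath.exp(-2j * math.pi * f * n / fs)
            for n, tap in enumerate(h))
    return cmath.phase(H)

# A pure one-sample delay has linear phase: -2*pi*f/fs radians at f.
fs = 48000
delay = [0.0, 1.0]                # impulse delayed by one sample
print(phase_at(delay, 1000, fs))  # about -0.1309 rad = -2*pi*1000/48000
```

Since this only uses the filter’s own coefficients, the curve it produces says nothing about the speakers, which is exactly why a phase wrap measured acoustically in Smaart would not show up here.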

So, Kyle Marriott wants to know: can you ask if they are FIR based, and do they tackle phase or just magnitude? We just covered part of that. So it’s just magnitude, but the plug-in does show the resulting phase change from that magnitude change.

So, yeah, the way you’re applying the filters, is it FIR based? Yes.

So FIR based: when it’s working in phase linear mode, then it’s fully FIR. When it’s working in the zero latency mode, then technically it’s an IIR that’s implemented through the FIR. But yeah, the zero latency mode is the minimum phase filter.

And then he asks about the latency: is it fixed or variable? And it sounds like you have two settings: it’s either going to be linear phase or minimum phase.

Yeah, we actually have three. Depending on the use case and really the preference of the user, the tradeoff is: if you go for the phase linear mode on the filter, then it introduces latency to the system. That’s inevitable, by the way the math of a phase linear filter works. In some cases that’s OK, and then people say, hey, I want this phase linearity of the system, of the plug-in.

But if you’re tracking, or you’re working on some other latency-sensitive task, then at the other extreme we have the zero latency mode, which is zero latency in the plug-in, but that costs you some change in the phase response. As far as we’ve tested, it’s not really audible, but as I said, we’re being transparent about what it does, so you can check what the phase response is in the curves in the plug-in and then be your own judge about whether it’s OK in your use case.

So in the zero latency mode, the plug-in works at zero latency, but that costs you some phase change. And then there is the optimum mode, which introduces a little bit of latency but also only a little bit of phase distortion, so it’s trying to find the middle ground between the other two choices. So there are these three modes. I guess one of the things in the question might also be that we have two ways you can apply Sonarworks in your system.
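The latency tradeoff described here follows directly from filter math: a symmetric linear-phase FIR delays everything by half its length, while a minimum-phase filter has no bulk delay. The tap count below is a made-up example, not Sonarworks’ actual filter length:

```python
# Back-of-the-envelope latency for the filter modes discussed above.
# The tap count is a hypothetical example, not Sonarworks' filter length.

def linear_phase_latency_ms(num_taps, sample_rate):
    """A symmetric (linear-phase) FIR delays audio by (N-1)/2 samples."""
    return (num_taps - 1) / 2 / sample_rate * 1000

# A 2049-tap linear-phase filter at 48 kHz:
print(round(linear_phase_latency_ms(2049, 48000), 2))  # 21.33 ms
# A minimum-phase version of the same magnitude correction adds no
# bulk delay, which is why the zero latency mode is possible at all.
```

This is also why the latency of the linear-phase mode is fixed for a given filter length and sample rate: it is structural, not a processing-speed issue.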

One is the plug-in. That’s fine if you have, like, a serious production setup on the rig; then that’s the most robust way to introduce Sonarworks into your system for your actual mixing job. But we also have Systemwide, which installs a virtual sound card in your computer, processes all the audio, and then releases it to the real output. That enables you to calibrate all the audio coming from your machine, like YouTube, Spotify, or whatever else you might want to use.

And some users also find it more convenient for their setup to route everything through the systemwide version. But because of this virtual driver that sits in your machine, that costs you additional latency. So the plug-in can run at true zero latency, but the systemwide version, even in zero-latency mode, still costs you the latency of the virtual sound card's buffer.

Does that have to do with the sample rate? Can that be changed?

So that has to do with how the operating system handles sound card devices, and with the operating system's requirements for buffering audio that goes to the audio device. We're looking for ways to optimize it by thinking about new ways to write these virtual drivers, but at the moment it's limited by the operating system.
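As a back-of-the-envelope illustration of why the systemwide driver adds latency (the buffer sizes below are hypothetical; the real driver internals aren't public): every buffer the OS queues on the virtual device adds buffer_frames / sample_rate of delay on top of the hardware device's own buffer.

```python
# Delay contributed by one audio buffer stage (sketch with assumed
# numbers; actual buffer sizes depend on the OS and driver).
def buffer_latency_ms(buffer_frames: int, sample_rate: int) -> float:
    return 1000.0 * buffer_frames / sample_rate

# A 512-frame buffer adds roughly 11.6 ms at 44.1 kHz, and less at
# higher sample rates, which is one reason latency varies per setup.
print(round(buffer_latency_ms(512, 44_100), 1))  # -> 11.6 (ms)
print(round(buffer_latency_ms(512, 96_000), 1))  # -> 5.3 (ms)
```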

OK, so Jon Burton had a question about that in this systemwide implementation.

So, John, they’re working on that then, Dave. Do they have any plans to add profiles for in your monitors to the headphone side of things? Reference for is amazing on my closed and open back headphones, but would love to be able to balance out my ultimate ears as well.

Generally, we’re constantly working on having new headphones to the database. So I think somewhere in the support page we have a place where people can vote in and say, Hey, I would like you to add this or that headphone, and then we’re kind of averaging all of that kind of decide which model do we proceed with. So generally we can calibrate both over the years and in years. So it’s just a matter of where our ultimate ears is currently in our pipeline.

So I’ll I’ll check it, then add another votes to ultimately get in the pipeline. And Ben, it sounds like you can go find that place on the support page and vote yourself. Brandon Crowley says. Yes, I love reference for and use it for girls mixing and listening purposes. I’d like to know what’s next for reference for it seems like it’s already the complete package. Could Studio Monitor Emulation B in its future? The short answer is yes, I mean, as we speak, actually today I had a couple of meetings which are currently really intensively working on planning the reference five.

I hope to be able to come with news in the coming months about exactly what it is going to be. But this emulation is one of the features we're now intensively thinking about. So the answer is yes, it could, but I don't want to make any hard promises yet. That's one of the things on the table. Mārtiņš, you've made this great product, a lot of people love it, and it seems like everything is just going great for you, but I'm sure that you've had some challenges and some painful moments in your career.

So I wondered if you would tell us about maybe one of the biggest and most painful mistakes you’ve made on the job and how you recovered. All right.

There is actually a very interesting story behind this one. Reference is where we started out as a company; we built this tool for calibrating speakers and headphones for music creators. But actually, since day one, when we started the company, we've always dreamed bigger, and we've always asked ourselves: what is the ultimate answer to the question of perfect sound? What is the ultimate sound that everybody is striving for?

And early on, we realized that there is a creator world, but obviously there's also the listener world. Once you create the song in the studio, Reference 4 helps the song translate better. But what does that translation really mean? It means the song still sounds maybe OK on everything, but it still sounds different, and maybe not perfect on anything, because all the consumer devices out there have various discrepancies in their frequency responses and the way they sound.

So once you create your piece of art and let it out into the world for people to listen to, well, I've met engineers who say, hey, I can't really listen to my music outside my studio because my ears bleed at how wrong it really sounds. So even with Reference 4, you still face the problem of: how do people out in the world really listen to my music? And early on, we dreamed: if we have this software technology that can change and control the way speakers and headphones sound, why don't we go all the way and solve the translation problem, not only across music creators, but across listeners too?

Then everybody would be listening to the same reference, everybody would be hearing the same thing, everybody would be perfectly happy, and we would have killed the translation problem, which is very much possible technologically in this day and age. And based on that dream, we created a product called True-Fi; maybe you know it. This is like three to four years ago. Yeah. So this was the thing we brought to the consumer world, saying, hey, now you can actually listen to the sound the artist heard in the studio.

This is how you should hear it. And we thought that it was going to take off and it would all go great. Long story short, and coming to the biggest mistake perhaps: it ended up with the realization that not enough people in the consumer world really liked that sound. Some people were like, yeah, I always wanted to listen the way people hear things in the studio, but that was eventually a minority of the people who listened to it.

And that was a low moment for us, because we invested quite a bit of effort and heart into building that product, and it didn't take off as well as we'd hoped. From the mistakes perspective, I think what happened is we thought that perhaps we could dictate to consumers, teach them what good sound is, what good music is, what good taste is. We came top down and said, hey, listeners, this is how your sound should be.

And that didn’t really turn out to be that way. And that was probably the lesson learned that you can’t really kind of dictate too many things to the market. You can only kind of listen to kind of what they I mean, you can only walk from the place where people are and kind of you can only solve the problems that they think they have and then kind of you can work from there. So but then long story short, how we recover from it, we decided to we’ve been always like a data and kind of rational thinking, approach driven company.

So we said, hey, this is interesting, let's double down on that problem, let's figure out what's really going on. So we launched probably the biggest ever research into consumer sound preference to discover: OK, so you don't like this, so tell us, what do you really like? We had close to fifty thousand users participating in the preference discovery test, and the long story short is that everybody likes different sounds.

I mean, there is a genuine difference in what sound people like. Some people like a lot of bass, some people genuinely hate it; some people like a lot of treble, some people genuinely hate it. And there is also a difference in how we hear physically, and our hearing changes with age. All of these things combined, I think, is the underlying reality of these religious wars about, hey, are Beats good-sounding headphones, or something nobody should ever listen to, or is Sennheiser the right sound, or, I don't know, Grado the right sound?

So I think that below all of that, if you unpack it and look at it from a data perspective, there really are these genuine differences in what sound people like, and there is this difference in our hearing. And from that insight we have now built what you can see on our web page, the latest product we've released, called SoundID: the idea that you can find your personally perfect sound through discovering your preference and adjusting the sound to your hearing.

So we’re still using the studio reference sound as the starting point. We think that that’s the right place to start, like the way the artist wanted the music to sound. But then on top of that, we can discover the user’s preference for sound and we can adjust it for your hearing, for his hearing or her, so that when they listen to the music of the artist, they’re actually hearing the interpretation of it, if you will, that most suits their taste and has the best chance of actually emotionally engaging with the artist.

For comparison, from these research numbers we now see that a single fixed headphone sound, as it comes out of the box, is going to be the best possible sound for no more than 17 percent of the population. So no matter how good a job a headphone company does in their R&D, they're going to hit the sweet spot for no more than 17 percent of the population. That's if they do the best job possible; if they err on it, they're going to satisfy a smaller percentage of the population.

Whereas if you personalize the sound with, say, SoundID, we currently see that over 80 percent of people actually say, hey, I like this sound better than whatever was the original sound of my headphones. So it's a huge improvement, and as we evolve, the technology is actually getting even better. So it feels that we're really onto something. It's still fresh; we just announced SoundID this year and we're really just getting started with it.

So fingers crossed. But in a way, I think we feel that we have found the ultimate answer to the question of sound quality, and that answer is individual. It's not one size fits all; it's actually individual for every one of us, and the definition of perfect is really a personal matter. We're working to get that into the consumer reality. And also, I think through that, we can solve the translation problem for music creators, because then you can create on the reference and be sure that whoever is listening to it will not be listening through a random EQ that happens to be their headphones.

But there’s going to be listening to it through an issue that’s actually kind of intelligently matched to that person’s preferences. Wow.

Well, I find this idea of personalized listening really interesting, and as you were talking, I was reminded of two things. The first one was, we just wrapped up Live Sound Summit 2020, and one of the presenters was Lorien Bohanon, who mixes in-ear monitors for Michael Bolton and Lizzo, among others. She gave a presentation about how people's hearing changes not only with age, but also with gender. And so it's interesting that she is a younger woman mixing these in-ear monitors for this older man.

And she was talking about how, for her, she can barely stand to listen to the mixes for Michael Bolton, because for her they're way too loud, way too bright, but that totally makes sense for him as an aging man who's lost a lot of his high end and his hearing in general. The other thing it makes me think of is hearing aids. I have a friend who's an audiologist, and I interviewed her on this podcast a few years ago.

And I also worked on a conference for a hearing aid manufacturer about a year ago, and what was interesting for me, what I learned from that conference, is that the people doing sales have commissioned research showing that even people who don't necessarily have a lot of hearing loss still enjoy an improved experience of their lives when they have a hearing aid with some filters. Because the way they tune those things is they measure your hearing first and then, similar to everything we've been talking about today, they apply some corrective EQ processes.

That’s one of the things that the hearing aid does. And so even people who just have like a tiny bit of hearing loss just from age, not from maybe anything intense, find that they can like they can understand people better, they enjoy music better and things like that. So so, yeah, that seems like this. There’s a there’s a big opportunity for this kind of personalized listening experience.

Yes.

We killed the topic of personalized listening.

OK, so I’ve got a few short questions here to wrap up Martine’s What’s One book that’s been immensely helpful to you.

Yeah, the one book that really comes to my mind is actually a book called Organizational Culture and Leadership by Edgar Schein. I read it, I think, at some point when our team was transitioning from being this six-person little family to growing bigger, and we were more and more getting into different types of situations that didn't fit into my model of reality at the time.

On one hand, it's a really simple message from the book: there are different cultures, they're eventually based on different beliefs, and that's why there are different types of miscommunication between these cultures. But I really like how that book goes quite deep and wide in unpacking that subject and putting it together in a very practical way for what it means for us in everyday life.

I think at that point it was an eye-opener for me that helped me better realize how differently people grow up, across occupations, nationalities, and backgrounds, where that all comes from, and how to make life better for different people.

So Mārtiņš where’s the best place for people to follow your work? I’m not.

I have to say that I'm not too active on social media, but I have an Instagram account and a Facebook account, and I post different things there from time to time. So I would say Instagram and Facebook are probably the places; you can find me by my name, and there certainly aren't too many Mārtiņš Popelises out there.

Well, Mārtiņš Popelis, thank you so much for joining me on Sound Design Live.

Well, thank you so much for having me.

It’s been fun.

Do cardioid subwoofer arrays work underneath a stage?

By Nathan Lively

Subscribe on iTunes, SoundCloud, Google Play or Stitcher.

Support Sound Design Live on Patreon.

In this episode of Sound Design Live, I talk with the Associate Professor of Electroacoustics at the University of Derby, Adam Hill. We discuss managing sound exposure and noise pollution, diffuse signal processing to cure power alley, and the effect of stages and other boundaries on directional subwoofer arrays.

I ask:

  • Your presentation at this year’s Live Sound Summit was called Managing On-Site Sound Exposure and Off-Site Noise Pollution. My takeaway was that you came away with more questions than answers, but there were a couple of interesting moments I wanted to follow up on:
    • If I am required to measure audience exposure during my event and the regulation states that it must be measured at the loudest point in the audience, but there’s no way for me to practically keep a mic there through the show, walk me through how to set up a measurement at FOH to accomplish the same thing.
    • From the research you got from NASA you wrote: Note of caution for 50-60Hz, due to chest resonance, causing whole-body vibration (annoyance and/or discomfort)? Is this something I should be watching out for with my system calibration? If there’s a resonance at 55Hz, could that unconsciously make someone feel bad?
    • Can you talk about what Vanguardia and SPLtrack are doing around off-site noise pollution?
  • Why is it illegal to fly subwoofers in Amsterdam?
  • I took the WHO hearing test and got a 79, which is supposed to mean I have good hearing. What did you score?
  • Diffuse Signal Processing (DiSP):
    • While a ground-based centrally distributed subwoofer array is a common and straightforward solution, it can be impractical and unsafe in certain situations (which is a topic for another day). Often a left/right subwoofer system (ground-based or flown) is a better choice. The problem with a left/right configuration is that there will be severe comb-filtering, causing inconsistent horizontal coverage. To avoid this issue, the left/right signals must be decorrelated. Existing approaches involve unique EQ applied to each side of the PA (which isn’t great from an efficiency viewpoint) or the use of allpass filters (which generally result in a reduction in audio quality).
    • I know you intended this for small rooms, but is it possible this could fix power alley for larger sound systems with uncoupled subs?
  • Subwoofer positioning, orientation and calibration for large-scale sound reinforcement.
    • Based on these results, it is evident that subwoofer placement directly underneath the stage can almost eliminate any advantages gained with cardioid polar patterns; the low-frequency SPL on the stage is virtually identical to that in the audience (Figure 22). Moving the subwoofers two meters forward so that they are not underneath the stage results in much lower SPL on stage while preserving the audience area response (Figure 21). 
  • FB
    • Elliott Clarke: Is he aware of male/female hearing differences (and hearing LOSS differences), as per Loreen’s LSS talk?
      • Where is the medical hearing research heading (or currently at) for dBC/music exposure (rather than constant dBA industrial noise)?
      • Does he think we will reach a sensible compromise for impactful/powerful live shows, within safe exposure limits *and* environmental/off-site concerns?
    • Matty Luka Sokanovic: Ask him if there is any 3D acoustic modelling softwares that you can drop your own models into! That are priced reasonably that is..

Any directional low-frequency system, whether a single box or a gradient or end-fire approach, isn't going to play ball when underneath the stage.

Adam Hill

Notes

  1. All music in this episode by Liam The Villain.
  2. Decorrelation of Signals Demo
  3. Software: EASE, CATT Acoustic, Odeon
  4. Fear and Loathing in Las Vegas
  5. Quotes
    1. An audience that is in front of a ground-based subwoofer system actually causes resonances within the audience almost like standing waves and room modes.
    2. It’s been shown that someone having the perceived control of an annoying noise lowers the annoyance. So what they do, for Vanguardia, they create a hotline so all you have to do is pick up the phone.
    3. There was never a subwoofer in the air during that test.
    4. I’m convinced that us audio engineers are able to cheat these (listening) tests.
    5. We’ve come up with a way to decorrelate two or more signals that are initially identical in a way that you can’t perceive it. You shouldn’t hear a difference in the tonality, but statistically, they are decorrelated. Run those through your LR subwoofers and you’re not going to get the coherent interference that causes the difference in tonality from left to right in the audience and the power alley.

Transcript

This transcript was automatically generated. Please let me know if you discover any errors.

Welcome to Sound Design Live, the home of the world's best online training in sound system tuning that you can do at your own pace from anywhere in the world. I'm Nathan Lively, and today I'm joined by the Associate Professor of Electroacoustics at the University of Derby, Adam Hill. Adam, welcome to Sound Design Live.

Thanks for having me, Nathan.

So, Adam, I definitely want to talk to you about a bunch of stuff. We've got a lot of questions: managing sound exposure, noise pollution, diffuse signal processing to maybe cure power alley, I don't know, and the effect of stages on gradient arrays. But first: once you have set up a sound system, what's one of the first listening tracks you want to put on to hear something through it?

For years, my track was Broken Hearted Road by Sonny Landreth. About a year ago, I switched over to three songs that I always listen to. One was stolen from my now colleague Jon Burton, which is Teardrop by Newton Faulkner.

So, Adam, how did you get your first job in audio? Like, what was one of your first paying gigs? My first paying gig? That was my mom's fault.

Thanks, Mom.

Yeah, thanks, Mom. Really, it was not on purpose that I ended up in this industry. Even at the age of about 17, 18, I was pretty convinced I was still going to be a musician. Luckily, I had enough musicians in the family to tell me, you know what, if you're good at something else, you should probably do that. But it really came down to: I finished high school, and my mom was like, well, you know, you need a summer job.

You need to do something. She'd already kind of pressured me into signing up for university, so I was going to do that; I was going to do electrical engineering that fall. But she said, look, you're not bumming around this summer, you're doing something. And she said, you do realize that you like music, you like live events; there are engineers who have to make this all happen. It had never even crossed my mind that, oh yeah, there's a lot of people who work in this industry. But yeah, that was it.

So I thought about it and said, OK. I sent out a few emails, which, especially back then, I don't think anyone checked, and got nothing. But then I thought, you know, I had been playing in my dad's band since I was about 10 years old, and we'd do maybe a dozen gigs a year or something like that. And we kept coming across this guy named Gary Gand. Gary runs Gand Concert Sound, which is located just outside Chicago.

It's a company that had been running since the mid 70s and still is. And I said, oh yeah, I know Gary. So I asked my dad, you've got Gary's contact information; would you mind calling him up and seeing if we could have a chat? So we got hold of Gary. Gary invited me over to his house one night and basically sized me up, you know, showed me a few of his guitars, asked me a few questions to see if I knew anything about sound, and then basically said, look, I'll give you a shot.

I'm not going to pay you much; I'm throwing you in the deep end, and if you can't swim, that's your fault. And away I went. So that summer, yeah, it was brutal. I had all the worst jobs, you know, testing about a thousand XLR cables, repainting the lift gates on the trucks, cleaning off your big analog multicore snake after it came back from a circus gig. Yeah, all the good stuff.

But that's where I started. And when I started, it was just a summer job. I was going to make a few bucks, maybe go to a few gigs if I was lucky and get paid for it, and then go off to university and do something else. Sure. But that was probably almost 20 years ago now, and here I am, still doing work for Gand Concert Sound and still involved.

That's amazing. Your first job? One of my colleagues at the time, Rob Lizzo, had said very little to me the whole time; I think he was kind of sizing me up from a distance. And he came up to me after about a month, just looked at me and said: you're still here. You're doomed now. Yeah, it's like telling me you're never going to escape.

Looking back on your career so far, what do you think is one of the best decisions you made to stay happy and get more of the work that you really love?

I think the best decision that I made was to stay in academia, actually, which is probably not the usual answer you'll get from interviewing people on this podcast. Basically, when I ended my undergrad degree, I knew a lot about electronics and electrical engineering, and the plan was: right, I'm going to go back and work full time in live sound. And then I thought about it, like, OK, I don't really know much of anything about acoustics.

So I started snooping around and I found that if you go to the UK, you can do an entire master’s degree in a single year.

So I thought that was pretty cool. I’ll I’ll take a year off.

I'll go live in the UK and learn a bit about acoustics and then come back and do live sound. So I did that; I learned a lot about acoustics at the University of Edinburgh, you know, the proper physics and mathematics behind it. And then I happened to get in touch with this professor at the University of Essex, Malcolm Hawksford. You probably don't know him from live sound; people probably don't know him, but the name is absolutely legendary on the more academic side of audio engineering.

Yeah, he's spent fifty years working on power amp designs and all sorts of digital signal processing routines. He's really kind of behind the scenes; he's touched quite a bit of the technology we use. And he convinced me to do a PhD. So at that point I kind of had to make the decision, right? Do I go the academic route? Because if you do a PhD, I think you're pretty much overqualified for, you know, sticking it out as a live sound engineer.

And you'd be... you'd just love it so much. OK, sorry, go ahead. But I think you'd also be stepping away from the core of the industry for such a long period of time that, you know, I think it would take some time to get back into the full-time groove of it. But I thought about it and said, well, you know, in the summers I've spent with Gand, and I'm still doing summers, I'm probably about four or even five months a year full time with them, working in the warehouse, going to do gigs.

And, you know, I said, OK, the gigs are great, I love those, I live for that, but the warehouse work is just really monotonous, and I was getting pretty bored with it. I just kind of said, look, I'm not sure I'm going to be happy doing this for the rest of my career; I need maybe a bit more of a challenge. So the academic world seemed to fit.

That's where I can still keep a hand in live sound. To this day, I still go back at least a month every year and work for Gand. But it allowed me to work more on the research side of things and teach the next generations of engineers. So I did the PhD, and then there's the University of Derby, which is smack in the middle of England.

If you've heard of the car company Rolls-Royce, that's where Rolls-Royce is from. They were looking for a lecturer who had a background in live sound, understood music, understood acoustics, understood electronics, and was research active, and there aren't many people in the world who tick all those boxes. So it was the right place at the right time, and I've been with Derby for almost nine years now.

All right, Adam, so we just wrapped up Live Sound Summit. Your presentation was called Managing On-Site Sound Exposure and Off-Site Noise Pollution, and my takeaway from your presentation is that you ended up raising a lot more questions than answers during this entire process. And that's because it's this big project that's never been done before and you're trying to bring together all these people in the world. And so the first thing you did was just like, hey, what are all the things we don't know?

What are all the questions that we have? But there were a couple of interesting moments that I wanted to follow up on. You mentioned that most of the regulations about on-site noise exposure say it has to be measured at the loudest point where someone would experience that noise exposure. So I'm wondering, how do you do that? Say I'm required to measure audience exposure during my event, and the regulation states that it must be measured at the loudest point in the audience,

and I'm not going to somehow figure out how to get a microphone into that mosh pit. Then walk me through how to set up a measurement at front of house that could accomplish the same thing.

Well, in practical terms, I'll talk about what's possible at the moment in most bits of software, and I'm thinking of 10EaZy, or the SPL tools within Smaart, or the stuff Smaart's working on kind of separate from that. The terminology will differ between the software, but it's a correction factor you add to your measurement feed coming from the microphone. So what you can do is, before anyone gets to the gig, you can walk around the venue with a sound level meter, whatever you have handy.

Hopefully it's calibrated, ideally. You identify the loudest part of the audience, wherever that may be, and take a measurement. Then, keeping the signal going through your system the same, go to front of house, where your measurement location is actually going to be during the show, and take a measurement. The difference between those two sound pressure levels is your correction factor. You plug that into the software, and then it gives you the best possible estimate of the level during the show at the loudest location, based on your front-of-house measurements.

So I'll say it's imperfect, because what that doesn't take into account is any effects of the audience. At high frequencies, the audience will provide absorption; at low frequencies, there's been some really interesting work done by Elena Shabalina, who's now at d&b, where she found that an audience in front of a ground-based subwoofer system actually causes resonances within the audience, almost like standing waves and room modes. OK, so you may get a couple of dB difference throughout the audience at low frequencies.

So you're not going to pick up on that with your correction factor, but it's a quick and easy way of saying, OK, give me an estimate of what's going on at the loudest point. If you look at the regulations, that's what they recommend. But it's a frequency-independent correction; it's not looking at octave bands or third-octave bands, it's just one overall dBA or dBC measurement, and that's it.
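The correction-factor procedure described above can be sketched in a few lines (the SPL values are invented for illustration; measurement software like the tools mentioned applies this offset internally):

```python
# Before doors: measure the same test signal at the loudest audience
# position and at the FOH measurement position (example values only).
spl_loudest_preshow = 103.2   # dB, loudest audience spot
spl_foh_preshow = 98.7        # dB, front of house

# The level difference becomes a fixed, frequency-independent offset.
correction_db = spl_loudest_preshow - spl_foh_preshow   # 4.5 dB

def estimate_loudest_spot(spl_at_foh: float) -> float:
    """Estimate the show level at the loudest audience position
    from a front-of-house reading plus the pre-show correction."""
    return spl_at_foh + correction_db

print(round(estimate_loudest_spot(99.0), 1))  # -> 103.5
```

As noted in the interview, this single broadband offset can't capture audience absorption at high frequencies or audience-induced resonances at low frequencies.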

OK, so let's talk about NASA. This is kind of fun. From the research that you got from NASA, you wrote something.

Well, I'll just read what you wrote: note of caution for 50 to 60 hertz, due to chest resonance causing whole-body vibration, annoyance and/or discomfort. So is this something that I should be watching out for

with my system calibration? Let's say there's a resonance at fifty-five hertz; could that potentially cause some uneasy feeling, unconsciously, in my audience, and should I try to avoid those resonances?

I think at the moment, as a designer, it's not something you need to do anything about, because there are still too many unanswered questions about this. I can tell you from experience, when I'm tuning a system and I walk right up to a ground-based subwoofer system and I'm standing right there in front of tens of meters' worth of subwoofers, yeah, my eyeballs are rattling, my teeth are shaking, I can't see straight. And I'm convinced that everyone will have a slightly different reaction to this.

You know, it might just be that I have a certain reaction because the proportions of my body are a certain way. Someone else who is a different size to me might react differently. So I don’t think it’s a one-size-fits-all sort of thing. And I quote the NASA research because that’s all we have to go on right now. There’s just not anything out there on the level of low frequency we’re being exposed to. There’s plenty on lower-level low frequency, where they say, oh, you know, it’s not a problem, because you’re talking 60, 70, maybe 80 dB.

We’re not talking 60, 70, 80 dB. We’re talking the front row of the audience getting on average between 120 and 130 dB(C). At peak, we’re exceeding 140 dB(C). And the only people who have even started to look into this is NASA.

Can you talk about what Vanguardia and SBL Track are doing with off-site noise pollution? I just thought this was really interesting, and I thought people should know about it, since they’ve been doing it for years and it sounds like they’ve had some success with it.

Well, it’s something that once you hear about it, you’re like, oh, yeah, that makes a lot of sense. Vanguardia, dB Control, and the guys at SBL Track. I should say Vanguardia and SBL Track are UK-based companies and dB Control is based in the Netherlands. So if you’ve done any sort of festival work in those areas, you’ll know these guys. But their main method of attack to minimise annoyance off site is actually communication.

It has nothing to do with engineering the sound system or, you know, telling the engineers to turn it down constantly. The first thing they implement is a communication protocol, and that’s communication with all stakeholders involved in an event. So they bring on board the people who are managing the event, the system designers, the sound engineers, even in some cases the musicians, depending on what the event is, but they also bring in the local community.

And that’s really the important bit, because in the run-up to the event, they can distribute flyers or find some other way to alert the local community that, look, there’s an event that’s happening. This is when it’s happening. This is what we’re doing to limit the noise coming to your place. And if you have a problem, here’s how you can get in touch with us in the run-up to the event and during the event.

And it’s that during-the-event method of communication that I think is really helpful, because you’re sitting in your house and let’s say your neighbors are having a party next door. The thing that probably drives you more crazy than anything else is that you have no control over that, aside from angrily pounding on the door and, you know, starting a war with your neighbors. You don’t have control over that sound. And this is backed up by loads of research, where it’s been shown that someone having perceived control over an annoying noise actually lowers the annoyance.

So what they do, for Vanguardia, is they create a hotline. All you have to do is pick up the phone, call this number and say, look, here’s where I live and it’s too loud, it’s annoying. And they’ll say, OK, great, we’re going to contact the people in control of the sound and we’ll get this sorted. Now, whether they do that or not is almost irrelevant. Sure. Because you’ve told someone who has control over this.

And so in your mind, it’s going to be sorted, even if it’s not. It’s all psychological. Taking it a step further, the guys at SBL Track, which is run by Chris Beale, have created an app that any local resident can download to their phone for free. And if they have a noise complaint, they send a text message. It pings the crew at front of house, and they see on their display where on the map it’s coming from, along with the message.

And if the front of house engineer wants to reply, they can text right back, and there’s that direct line of communication. But it’s the same idea. It’s giving people perceived control over the sound. And in almost all cases, it has significantly lowered complaints.

Why is it illegal to fly subwoofers in Amsterdam? This is a very frustrating story. What happened was, if you’ve ever visited Amsterdam or if you live anywhere near Amsterdam, you’ll know that over the past maybe ten years, there’s been an explosion of outdoor music events there in the summers. It’s just one of the places to be for a good music festival, and they do loads of good events outdoors there, and indoors for that matter. But the city of Amsterdam, again, if you’re familiar, is not the cheapest of places to live.

So if you’re living in central Amsterdam, you’re paying a premium to be there, and you’ve probably been there for quite some time. All of a sudden, with all these events popping up, you have some very annoyed rich people. And so they’re going to the city saying, what are you doing? I can’t live in peace here anymore. Sort it out, fix it. So what Amsterdam did was they said, OK, we’re going to learn more about this.

So they didn’t just slap a regulation on it. They said, we’re going to learn some more about this. We’re going to figure out what we can actually do to have both sides of the situation live in harmony, so we can have the festivals while also keeping all our rich residents happy. So they got in touch with a local audio firm who does system design as well as research and product development, and they know their stuff. And they said, do a study and tell us what the best available techniques are for system design to minimize noise pollution.

So they trialled a bunch of different systems. They found a big field just outside the city and took loads of measurements. It was a fairly well-constructed experiment, about as well designed as any outdoor shoot-out can effectively be. They tracked the weather and all that sort of thing, and then from the data tried to draw some clear conclusions. And one of their conclusions was that if you fly subwoofers, the sound propagates further and causes more noise pollution.

And they seemed to point out that, yeah, that’s what our data is showing. And so I looked at it, and a number of other people, colleagues who were working on this project, looked at it as well, critically looked at the data, and said, well, we can’t see that. More importantly, we looked at what systems they tested, and they didn’t test a flown subwoofer, for example. There was never a subwoofer in the air during that test.

And the argument was, oh, well, we used a system that goes down to 35 hertz. And so I pulled up the manufacturer’s documentation: yes, something comes out at 35 hertz, but it’s, you know, whatever, 30 dB below everything else. So they didn’t have a good enough signal-to-noise ratio in their data to give anything conclusive. Although I’m not knocking the whole report. I think there were some really good observations made in that work, and I’m not slating the whole thing.

But in that one specific area, they drew a conclusion where a conclusion should not have been drawn. Unfortunately, that was the conclusion Amsterdam took, and they said, oh, OK, we can’t fly subwoofers because those send the low frequencies really far. And they said, fine, that’s the new no-fly zone for subwoofers. The end. Wow.

I’m also just impressed at how quickly and almost efficiently they were able to make that happen. Normally it seems like anything else we try to get done for the audio industry takes forever and is a big hassle.

Adam, you like the World Health Organization hearing test app. I tested it out as well, and it’s interesting. Instead of your normal test that has some sine tones at different levels, where you see if you can hear them, this one is just a recording of a person reading numbers at different levels relative to background noise, and then you get a score. So I took the test. I got a seventy-nine, which is supposed to mean I have good hearing.

So what did you get?

I got an eighty-six. Oh, but there’s a big but there. I know my hearing, especially hearing in noise, is not an eighty-six out of a hundred. It’s not. I’m convinced, and I’ve had chats with a lot of people who are very closely involved with hearing health, I’m convinced that us audio engineers, who are trained to listen critically, are able to cheat these tests. We don’t fit the norm of the general public.

Well, yeah, but I couldn’t avoid it. We critically listen. Our job is to pick out these little, nearly unrecognizable details in audio that no one else will hear. And I know that’s my big bit of hearing loss. It’s not revealed in terms of frequency loss; it’s hearing in noise. If I’m in a busy environment trying to hear a conversation, I really struggle. And it’s called hidden hearing loss.

And it’s something that’s less easily tested for. So I think the test is useful and interesting to take, but if you’re an audio professional and a good critical listener, I would lower your score, probably by about twenty. Oh, wow.

OK, shit. So maybe I don’t have hearing as good as I thought.

So, Adam, you love talking about diffuse signal processing. If you look up Adam Hill in the AES e-library, there are a bunch of papers on this subject. So to dive into it, I’m just going to read this paragraph from your website so you don’t have to explain it all again, and then I kind of want to talk about the practical implications of this. OK, so here’s what you wrote: while a ground-based, centrally distributed subwoofer array is a common and straightforward solution, it can be impractical and unsafe in certain situations.

Often a left-right subwoofer system, ground-based or flown, is a better choice. The problem with a left-right configuration is that there will be severe comb filtering, causing inconsistent horizontal coverage. To avoid this issue, the left-right signals must be decorrelated. Existing approaches involve unique EQ applied to each side of the PA, which isn’t great from an efficiency viewpoint, or the use of all-pass filters, which generally result in a reduction in audio quality.

And so I’m just wondering, could this fix Power Alley for large sound systems? I know you have a design that I think is intended for smaller rooms, but what about the uncoupled subs that we use on shows?

It actually is designed for big systems. That was the initial intent. Testing it in small rooms was actually an extra application that I dreamed up with my PhD student, John Moore, who did a lot of the work on this. But to go to your question: yes, it can deal with Power Alley and deal with all the notches in the frequency response you get along with it. Basically, without explaining the whole thing.

You can follow the link to my blog to get the full, in-detail explanation of how it works. But we’ve come up with a way to decorrelate two or more signals that are initially identical, in a way that you can’t perceive. So if I’m turning the decorrelation on and off and you’re just listening to the signal over headphones, you shouldn’t hear a difference in the tonality at all. But statistically, those signals will be decorrelated. So the idea is that you run those through your left and right subwoofers, and you’re not going to get the coherent interference that causes the difference in tonality from left to right across the audience, and it will not cause the Power Alley, which is that big bass build-up you get right down the middle of the audience.

So that really was the initial focus of this research. That’s what we wanted to solve. And I’m fairly confident we’ve come up with an algorithm that can do it.
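
The coherent-versus-decorrelated contrast Adam describes can be sketched numerically. This is a hedged illustration, not the diffuse signal processing algorithm itself: two idealized point-source subwoofers are summed coherently (complex pressure) and then as an ideally decorrelated pair (power summation, the statistical average), and we compare the level variation along a line through the audience. The geometry and the 60 Hz analysis frequency are my assumptions.

```python
import numpy as np

# Hedged sketch (not the actual DSP algorithm from the interview):
# compare coherent vs decorrelated left/right subwoofers by the 60 Hz
# level along a line through the audience. Geometry is an assumption.
c = 343.0                      # speed of sound, m/s
f = 60.0                       # analysis frequency, Hz
k = 2 * np.pi * f / c          # wavenumber

subs = np.array([[-6.0, 0.0], [6.0, 0.0]])     # L/R sub positions, m
x = np.linspace(-12.0, 12.0, 241)              # audience line, 20 m out
listeners = np.stack([x, np.full_like(x, 20.0)], axis=1)

# Distance from every listener position to each sub
r = np.linalg.norm(listeners[:, None, :] - subs[None, :, :], axis=2)

# Coherent sources: complex pressure sum -> comb filtering / Power Alley
p_coherent = np.abs(np.sum(np.exp(-1j * k * r) / r, axis=1))

# Ideally decorrelated sources: on average they sum by power instead
p_decorr = np.sqrt(np.sum(1.0 / r**2, axis=1))

spread_coherent = np.ptp(20 * np.log10(p_coherent))  # dB, peak-to-peak
spread_decorr = np.ptp(20 * np.log10(p_decorr))
```

With these assumed positions the coherent pair swings well over 10 dB across the line (deep interference notches), while the decorrelated pair varies by only about 1 dB, which is the smooth left-to-right tonality being described.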

Oh, cool. So how long before we can get a demo from you that is low latency?

It’s on my list of things; it’s just a question of when I get around to it. I know what needs to be done, and I’m pretty confident that it can be done. I’m shooting for a few milliseconds of latency. At the moment, we’re looking at thirty, forty milliseconds, which isn’t going to work for live sound. So that’s the last thing on the list, really. I’ve been having informal conversations with people in industry, manufacturers, about implementing it.

Nothing concrete at the moment, although there’s a completely different application that I can’t talk about at the moment, on the other side of the world. Yeah, secrets. But it has nothing to do with live sound. It’s a completely off-the-wall application of it. So it’s getting out there. And to be honest, looking at my email inbox, I’m getting at least one or two messages every day from someone in the industry saying, hey.

I’ve heard you’re working on this. Can I have it? So, yeah, I think the demand is there, and I think enough engineers are open-minded about taking this approach. Because let’s be honest, what we’re doing is messing up our signals to make the listening experience better, which I think for some people doesn’t quite sit well.

Sure. And that actually makes me a little bit more happy about it, because let’s imagine that it’s available already. There are going to be plenty of people who reject it, and then there are going to be plenty of people who see it as, like, the perfect solution, which I think is cool. And so those early adopters are going to go out and test it, and eventually we’ll find out if it’s going to be accepted in the long run.

Yeah, that’s it.

So I’m hopeful with it. I think it’s something that at least needs to be tried in the industry.

I want to mention one more application that you and I talked about previously. So I showed you my main-sub alignment calculator that I’ve been working on, and then we got into talking about this decorrelation of signals, and it came up that you could use this in just the tiny crossover region between these two boxes, between main and sub. And so if you just decorrelated that tiny area, then you could potentially stop worrying about main-sub alignment. You would still want them to, you know, arrive at the same time and not be cycles and cycles late.

But it could save us from some of the more drastic effects created by bad main-sub alignments.

Yeah, you can focus this decorrelation on the crossover region between your mains and your subs, which is entirely possible with diffuse signal processing. You can be frequency-selective or you can apply it to all frequencies; it doesn’t matter, you can choose. But if you just focus on the crossover region, that’s the area where you have a huge amount of problems, especially if you have ground-based subwoofers, of getting the mains and the subs to play ball. And you look at some of the line arrays out there these days.

They’re going down to 50 hertz or below. They’re definitely moving well into the subwoofer range, so you need to sort that out. And if you can decorrelate those two elements in the specific frequency region where they overlap, then time alignment is less critical. And we talked about it earlier, but I’m not saying that you shouldn’t time-align. I think you’re always going to have to time-align; that’s really the name of the game in system design. But it makes it a little less sensitive.

So if you’re a little out in certain areas, you’re not going to be getting massive peaks and dips in the frequency response. Sure.
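
A quick numeric sketch of that sensitivity, with an assumed 85 Hz crossover, equal main and sub levels, and an idealized single-frequency model (all my assumptions, not values from the conversation): with coherent summation, a half-cycle timing error produces a deep null, while an ideally decorrelated pair always sums to about +3 dB regardless of the error.

```python
import numpy as np

# Hedged single-frequency sketch: sweep a main-to-sub timing error at an
# assumed 85 Hz crossover, equal levels, and compare coherent summation
# with ideal decorrelated (power) summation.
f_xo = 85.0                                  # assumed crossover, Hz
t_err = np.linspace(0.0, 1.0 / f_xo, 200)    # 0..1 cycle of error, s
phase = 2 * np.pi * f_xo * t_err

# Coherent: main + delayed sub as complex phasors (1e-12 avoids log(0))
coherent_db = 20 * np.log10(np.abs(1 + np.exp(1j * phase)) + 1e-12)

# Decorrelated: two equal uncorrelated levels always sum to +3 dB
decorr_db = np.full_like(t_err, 10 * np.log10(2.0))

worst_coherent_db = coherent_db.min()        # null at half a cycle late
```

Coherent summation ranges from +6 dB when aligned down to a null at half a cycle of error; the decorrelated sum stays at +3 dB, so a misalignment costs some summation rather than carving a notch.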

The criticism of this sort of crossover alignment is, hey, you can only be aligned at one point, so why do it at all? Does it make a whole lot of sense, when you never really know where you’re aligned and where you’re misaligned? But it sounds like this would allow you to make that alignment work for a much larger portion of the audience.

That’s the idea. Let’s talk about subwoofer positioning. So I wish we had time to dive into a lot more of the writing and research that you’ve done, but I picked out this one that was interesting for me, called Subwoofer Positioning, Orientation and Calibration for Large-Scale Sound Reinforcement. Part of the study was looking at how stages affect the predictions that we do in our modeling software, and then how our subwoofers should perform in the real world.

And one of the things you wrote is: based on these results, it is evident that a subwoofer placement directly underneath the stage can almost eliminate any advantages gained with cardioid polar patterns. The low-frequency SPL on the stage is virtually identical to that in the audience. Moving the subwoofers two meters forward, so that they are not underneath the stage, results in much lower SPL on stage while preserving the audience-area response. So this is interesting. What I’m taking away from this is that directional arrays, whether gradient or end-fire, don’t work under the stage, they don’t have the result that we expect, and to get them to work as we expect, they need to be at least two meters away from the stage.

Well, the paper that you’re talking about was from about ten years ago, and the findings presented in it were purely based on simulations. Now, the good news is that since then this has been tested. I had an undergraduate student a few years ago do these tests. We put a directional subwoofer on its own, took some measurements around it in the virtual stage area, the pretend stage area, and the audience area, then plopped a stage on top of it and saw what happened.

And the results from those experiments almost perfectly lined up with the simulations. That was a single cardioid unit that basically turned into an omnidirectional unit when a stage was placed on top of it. When you kicked that unit out by about a metre in front of the stage, and we tested it with even less, it regained its cardioid response. So from that research, and other smaller experiments done by myself and others, any sort of directional low-frequency system, whether it’s in a single box or whether it’s a gradient or an end-fire approach, isn’t going to play ball when underneath the stage.

So you have to get it out as much as you can in front of the stage. Now, whether that’s a metre, two metres, half a metre, as long as it has some sort of breathing room, it has the ability to achieve the cardioid response. If you chuck it under the stage, there’s no point in doing it. It’s going to be omni.
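
The gradient cardioid behaviour mentioned here can be sketched with two idealized point sources (the spacing, frequency, and free-field assumption are mine, not from the paper): the rear source is polarity-inverted and delayed by the spacing divided by the speed of sound, which produces a null directly behind the array.

```python
import numpy as np

# Hedged free-field sketch of a gradient cardioid: a front point source
# plus a rear source that is polarity-inverted and delayed by d / c.
# Spacing and frequency are assumptions, not values from the paper.
c = 343.0
f = 60.0
k = 2 * np.pi * f / c
d = 0.5                              # assumed driver spacing, m

theta = np.linspace(0.0, 2.0 * np.pi, 361)   # 0 = straight out front

# Far-field magnitude: the rear source's extra path d*cos(theta) plus
# its electronic delay d/c gives 1 - exp(-j*k*d*(1 + cos(theta)))
p = np.abs(1 - np.exp(-1j * k * d * (1 + np.cos(theta))))

front_level = p[0]                   # on axis, in front
rear_level = p[180]                  # directly behind: ideally zero
```

In free field the rear null is essentially perfect; putting a reflective stage on top of the pair breaks that delicate path-length arithmetic, which is consistent with the measured unit going omni under the stage.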

It’s interesting you say that, because I recently heard a story from Mauricio Ramirez about a student who had a question about this at a seminar he was teaching in Iceland. So they decided to put a gradient array on a dolly or wheels somehow, and then they had a rope, and they were able to measure in front of the array while pulling it closer towards a boundary, I think it was a wall or a stage, until they saw the results change.

And it was when they got within about a meter that they started seeing the results change. So it’s cool to hear you’ve had similar results.

And I’m glad you mentioned the boundary effect as well. I mean, this is something that’s been known since the 1950s with Richard Waterhouse’s research. We all know the Waterhouse effect, or at least some of us do, where as you get towards a perfectly reflective boundary you get plus six dB in some cases. But if you read his research a bit closer, he also looks at dipole units, where his version of a dipole unit is basically a gradient cardioid without the delay, just the polarity reversal. Not end-fire, a gradient.

And what he found was, if you back this up towards a wall, it just kills your output in front. And I’ve had this happen to me before, years ago, with my colleague Adam Rosenthal. We were doing a gig at the Auditorium Theatre in Chicago, and we were using Nexo 18 subs at the time, so cardioid boxes, and the only place we could place these was right in front of the proscenium.

And so they were right up against the wall. We flipped the switch on the system and there was nothing. I’m looking at the amps and the amps are slammed, like there’s definitely electricity coming out of these things, but we were getting nothing acoustically. And we thought about it. This was still early on in my career; I didn’t have a proper knowledge of it, just experience. And we turned off the rear driver, the unit became omni, and all of a sudden, boom, you had level once again.

And a number of years later, when I was learning about Waterhouse’s research and reading through his paper, I thought, yeah, that’s what happened.

We had our subs too close to the wall, and they were canceling themselves out, effectively. What happens is the direct sound is canceled by the reflection off the wall. So it’s the same thing, which is why, even when I have my subs out for a ground-based system in front of a stage, I have to battle the staging people to not put a stage skirt up. This is, welcome to my world in the summer on the Chicago festival scene.
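
Waterhouse’s dipole-near-a-wall result can be sketched with image sources. This is an idealized point-source model with assumed spacing, wall distance, and frequency, not a simulation of the actual boxes in the story: a rigid wall mirrors each source with the same polarity, the delay-less gradient (dipole) pair loses forward output near the wall, and switching off the rear driver (going omni) recovers a large amount of level.

```python
import numpy as np

# Hedged image-source sketch of Waterhouse's dipole-near-a-wall effect.
# Idealized point sources; spacing, wall distance, and frequency are
# assumptions, not a model of the actual boxes in the story.
c = 343.0
f = 40.0
k = 2 * np.pi * f / c
d = 0.5    # spacing: rear (-) driver to front (+) driver, m
w = 0.1    # rear driver to the rigid wall behind the array, m

# Far-field pressure straight out front, phase-referenced to the front
# driver. The rigid wall adds same-polarity images of both drivers.
dipole_free = abs(1 - np.exp(-1j * k * d))              # no wall
dipole_wall = abs(1 - np.exp(-1j * k * d)
                  + np.exp(-1j * k * (2 * w + 2 * d))   # front image
                  - np.exp(-1j * k * (2 * w + d)))      # rear image
mono_wall = abs(1 + np.exp(-2j * k * (w + d)))          # rear driver off

recovered_db = 20 * np.log10(mono_wall / dipole_wall)
```

With these assumed numbers, killing the rear driver recovers roughly 20 dB out front: the wall cancels the dipole’s forward output but reinforces a single driver, matching the turned-off-the-rear-driver-and-boom experience.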

You know, one day I’ll take down the stage skirt and have a talk with the staging people and say, look, my speakers aren’t going to work right with that stage skirt right behind them. If it’s not acoustically transparent, it’s the same thing: it’ll destroy the cardioid subs. And the next day I rock up to the site for day two of the festival, and the skirt will be back up, because it looks better.

I know. I’ll fix that. Yeah. So it’s part of my routine: every morning I have to go take down the stage skirt again so my subs work. But yeah, stage skirts are really important to keep an eye on if they’re right up against your cardioids. I don’t know of any that I’d describe as acoustically transparent, though I’d have to test them all.

I talked to Jon Burton about it, because Jon has wanted to do a test, not only on this, but he wants to know what happens to this scrim material when it gets wet. And I’m sure a lot of people have experienced that when it starts to rain. I’m not talking about subs now; I’m talking about the banners that get flown in front of our arrays. All is well and good until it rains and all those tiny little holes fill up with water.

So Jon wants to do a big study on this, not only for low frequencies, but for the high-frequency effect of dew in the morning or, you know, water in the air.

Yeah, yeah. So my answer is, I don’t know. But I know the stage skirts used at the fests I work in Chicago are not at all transparent. They’re sheets of plastic. OK, so they need to go.

We have a couple of questions from Facebook. Elliott Clark says: where is the medical hearing research heading, or currently at, for dB(C) music exposure rather than constant dB(A) industrial noise? So I guess what he’s referring to is that so many of our regulations refer to the dB(A) standard, right?

That’s right. All of them refer to the dB(A) standard, with most of the industrial noise regulations having dB(C) limits as peaks. So in terms of the research for our sector, for music-based noise, as far as I know there is little to none.

There are some interesting projects in the early stages in the Netherlands looking into this, looking into the health implications of low-frequency sound as quantified by dB(C). The guy to talk to about that is Marcel Kok from dB Control. He’s currently doing a part-time PhD at a university in the Netherlands looking into this.

So he’s really, as far as I’m concerned, the authority on this at the moment. But he’ll be the first to tell you that we don’t know much. There just isn’t the research. And when Marcel and I and others from the live event community travel to Geneva for these meetings at the World Health Organization, it’s about four of us in a room of about fifty people. And we’re the only ones who really seem to have any clue what we’re talking about when we’re talking about 140 dB at low frequencies. I think most people don’t believe us.

We get some eye rolls. We get some distinguished professors from Europe basically pooh-poohing the whole thing, saying it’s not a problem. And really we’re saying, look, we go to these events, we work there, we’ve stood there in the audience, and we refuse to accept that this is perfectly safe. But at the same time, I can’t sit here and say anything definitively. I can’t say it’s definitely dangerous, but I can’t say that it’s definitely safe.

I’m saying here are all the questions and we need to do more research on it.
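
The dB(A) versus dB(C) distinction running through this discussion can be made concrete with the standard IEC 61672 weighting curves. The formulas below are the published standard; the 50 Hz example frequency is just an illustration of why an A-weighted figure barely registers the low-frequency energy that dominates near the front rows.

```python
import math

# Standard IEC 61672 frequency weightings, to show why a dB(A) figure
# barely registers low-frequency energy that a dB(C) figure does.
def a_weight_db(f):
    """A-weighting in dB, normalized to 0 dB at 1 kHz."""
    ra = (12194.0**2 * f**4) / (
        (f**2 + 20.6**2)
        * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194.0**2))
    return 20 * math.log10(ra) + 2.00

def c_weight_db(f):
    """C-weighting in dB, normalized to 0 dB at 1 kHz."""
    rc = (12194.0**2 * f**2) / ((f**2 + 20.6**2) * (f**2 + 12194.0**2))
    return 20 * math.log10(rc) + 0.06
```

At 50 Hz the A-weighting discounts the signal by roughly 30 dB, while the C-weighting discounts it by only about 1.3 dB, so a sub-heavy show can look modest in dB(A) yet enormous in dB(C).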

So Elliott would like you to continue to extrapolate into the future concerning the safety. He says, does he think we will reach a sensible compromise for impactful, powerful live shows within safe exposure limits and environmental off-site concerns?

At the moment, those are two separate things. What’s going on with the WHO is purely looking at sound exposure on site. They don’t consider, and they make it a point not to consider, anything off site, because that would get messy. And that’s pretty much my answer: it’s going to get messy. Having gone through this work with the AES and released this big report with what we know and what we don’t know, I’m fairly confident we can achieve the levels we want to achieve and the impact we want to achieve at live events.

I don’t think we’re going to be coming out and saying you have to turn everything down. I hope we don’t have to say that because I like loud sound more than most other people, probably.

But I think it will cause maybe some slight changes to practice. For instance, when you can, try to fly your subwoofers instead of putting them on the ground, or at the very least try to maximize the distance between any loudspeaker and the closest audience member. You don’t want to have someone, you know, within arm’s reach of any loudspeaker, especially looking at the subwoofers. And yeah, front fills are close, but they tend to be at a lower level anyway.

So you can make that work, but effectively it’s protecting the people closest to the PA. That’s the main thing. And if you can maximize the distance they are away from the closest speaker, that means you have less to worry about, and then you can get the level you want at front of house without killing the people down front. So I think there will be some suggested changes of practice where practical. For off site, I think there’s more research to be done along the same lines as what they did in Amsterdam, but with slightly more controlled and in-depth experiments, to make sure that the conclusions are robust.

But again, this is what we try to hammer home with the WHO. We think that it has to start with system design. What we’re trying to avoid is regulations coming out and then causing us to scramble to meet those regulations. What we want is for the discussion to start with finding the best practice for system design and then craft some regulations around that. So that’s what we’re looking for.

But I would really hope that none of this actually impacts the exciting live event experience that we’re delivering to people on a daily basis.

Someone on Facebook is looking for some software suggestions. They say: ask him if there are any 3D acoustic modeling softwares that you can drop your own models into that are priced reasonably. Define reasonably, yes. I wrote back to him on Facebook and said, do you mean different than something like EASE? And he didn’t write back, so I’m not sure exactly what he’s looking for. But maybe you could just list a couple of your favorite software packages that allow you to bring in models from the outside.

Well, for what we do, which is electroacoustics, designing sound systems and putting them in an acoustic environment, as far as I’m concerned there’s only one accurate piece of software out there right now, and that’s EASE. It brings in the proper loudspeaker data from the manufacturers and gives you, from my experience and from talking to others, the most accurate estimation of what that sound system will do in that room. You also have CATT-Acoustic, which I’ve used a little bit, but I lost interest because it doesn’t go down to the low frequencies. Nor does EASE, for that matter.

And you have Odeon, and Odeon’s really great for pure acoustics and designing great concert halls. But I find myself going back to EASE, and that’s what I teach my students with, because it’s the most accurate with loudspeakers. But it’s going to cost you. It is expensive, and you can’t really avoid that. As far as I know, there’s no free software out there at the moment that does the full package.

So a couple of short questions for you to wrap up. What’s the one book that has been immensely helpful to you, Adam?

There’s only one book that I’ve ever read more than once. “I’ve only ever read one book”: if it was my teenage self talking to you, that would probably be a true answer. No, I love reading. I’ve got bookshelf after bookshelf of books; I devour them. But the only book I’ve ever read twice is Fear and Loathing in Las Vegas, for sure. And, you know, I’m not saying that it has taught me how to live my life or anything, because I think I’d be dead by now if I was following those examples.

But it taught me to laugh at life and not take anything too seriously, which is so important. And I think that’s something you get from Thompson’s work: yes, he’s outraged by all these things he’s seeing and commenting on, but at the same time he’s being absolutely ridiculous and having fun in life. And I think that’s important. We talk about these things and we’re serious about audio and all these important things, but ultimately we’re providing entertainment, and we’re here to help other people have a good time.

And we have to have a good laugh all the time, really, and only be a little bit serious.

Adam, where is the best place for people to follow your work?

The best place is probably checking in, maybe on a monthly basis, to my website. My website is where, once I’m allowed to, I’ll post my publications so you can read my papers. It’s usually about six months to a year’s delay before I’m allowed to put them up. Or email me and ask me what I’m working on.

I don’t know. Yeah, I’ll pull you in and you’ll never escape. Well, Adam, thank you so much for joining me on Sound Design Live.

It’s my pleasure. Thanks for having me.

You can’t predict subwoofer behavior in a room at 40Hz, but you can test it

By Nathan Lively

Subscribe on iTunes, SoundCloud, Google Play or Stitcher.

Support Sound Design Live on Patreon.

In this episode of Sound Design Live, I talk with the head of engineering support at Nexo, François Deffarges. We discuss the troubling trend of uncoupled subwoofer arrays left and right of the stage and how to fix it, how to test subwoofer options for better LF results, why asymmetrical horns look upside down, and the time François was publicly shamed (mistakenly) in the French national press.

I ask:

  • What are some of the biggest mistakes you see people making who are new to using Nexo speakers?
  • I’ve always been confused by asymmetrical horns. Why do they look upside down? The bottom part is narrow, where you have wide coverage and the top part is wide where you have narrow coverage.
  • Tell us about the biggest or maybe most painful mistake you’ve made on the job and how you recovered.
  • FB
    • João Lopes Just tell him that N45 is one of the best wedges I had worked with 👍
    • Jason Raboin Why does Nexo opt for a single LF driver in their Geo M series while the rest of the major manufacturers opt for 2? What are the benefits and what are the trade-offs?
    • Devin Sheets I’d be interested in hearing him talk about the physics behind the latest port designs in the STM and Geo series. The design involves a slit in the port which seems to change the phase relationship of the harmonics so that there is linearity throughout the frequency ranges that are normally affected by port resonances.
      • How valuable does he think the cardioid low-mid functionality is in the market? They haven’t been pushing cardioid tops as much. Do they still offer cardioid modes for subs though?

My recommendation: keep 2-3 options open, test them, listen to them, and then make your decision.

François Deffarges

Notes

  1. All music in this episode by “Kosu” – Shock of Daylight.
  2. Tool Box: TruPulse 360R, Class 1 B&K Sound Level Meter, Smaart, SysTune, Lectrosonics wireless, NTI
  3. Books: The Foundations of Acoustics by Eugen J. Skudrzyk; Wind, Sand and Stars (Terre des hommes, “Land of Men”) by Antoine de Saint-Exupéry
  4. Quotes
    1. In life, you have external factors that influence your career.
    2. There are trends for mistakes, but they are mostly on the network side.
    3. One thing I don’t like when I see photos, is left and right stage subwoofers, because from the picture you know that this is not going to work.
    4. There are trends for mistakes, and the trends today are not the same as twenty years ago.
    5. The industry has really formalized/professionalized in the past twenty years from the manufacturer to the engineer. 
    6. When you do stereo design for subwoofers make sure the left goes to the left and the right goes to the right. 
    7. You cannot predict how the room will behave at 30-40Hz.
    8. My recommendation: keep 2-3 options open, test them, listen to them, and then make your decision.
    9. Music is transience and we want full respect of that transience.
    10. In an asymmetrical horn, with a fast expansion you achieve a wider directivity. Where the throat is narrow, you have wider directivity because the expansion is faster.
    11. The smaller the source, the more omnidirectional it is.

Compare 300 Different Microphones with Double-Blind Listening Tests from Your Own Studio

By Nathan Lively

Subscribe on iTunes, SoundCloud, Google Play or Stitcher.

Support Sound Design Live on Patreon.

In this episode of Sound Design Live, I talk with the co-founder of Audio Test Kitchen, Alex Oana. We discuss entrepreneurship and how Audio Test Kitchen enables you to compare 300 microphones through double-blind listening tests from your own studio.

I ask:

  • How did you get so many recommendations on LinkedIn?
  • What is the goal of Audio Test Kitchen? Why would I go there? What problem am I trying to solve with the tool?
  • How you got the idea, where it came from?
  • What are some of the biggest mistakes you see people making who are new to pro audio and microphone choice?
  • How many microphones have you measured? What are one or two of the things you were most surprised to learn from measuring 300 microphones?
  • How were the recordings created?
    • VOCALS: Single vocal performance “bottled” by a laboratory-grade, neutral microphone in an anechoic chamber, re-amplified via a “vocal surrogate” loudspeaker into each product microphone, one at a time. Schoeps MK2 flat omni
  • Tell us about the biggest or maybe most painful mistake you’ve made on the job and how you recovered.
alex oana

Often times that’s the hardest thing, that last 5% of getting it perfect.

Alex Oana

Notes

  1. All music in this episode by Robin Applewood.
  2. Audio Test Kitchen Has Revolutionized the Microphone Shootout
  3. Vocal Surrogate: Schoeps CMC6 body with MK2 capsule, DPA 4011, Adam S3H, ATC SCM45A.
  4. Quotes
    1. Why don’t we just stop this game of telephone and why don’t we just make it so that the person with the vision in their head for how they want to sound can just audition the tools that make the sound themselves.
    2. We had to solve two huge problems: bring it to your environment where you are comfortable and then reduce the variables to the gear itself where the gear never changes.
    3. We have had no complaints from 54 microphone manufacturers.
    4. Often times that’s the hardest thing, that last 5% of getting it perfect. It’s the most expensive. It’s the most time-consuming. That’s when we need a breakthrough.
    5. That decision to do what it takes to deliver what we value was a $20k decision.

Transcript

This transcript was generated automatically. Please let me know if you see any mistakes.

Welcome to Sound Design Live, the home of the world’s best online training in sound system tuning that you can do at your own pace from anywhere in the world. I’m Nathan Lively, and today I’m joined by the co-founder of Audio Test Kitchen, Alex Oana. Alex, welcome to Sound Design Live.

Thank you, Nathan. Really a pleasure to be here with you in real time, first of all, across many thousands of miles. You’re located back where I spent almost two decades going to college and making records and mixing live sound, and now I’m on the West Coast in L.A. And you got me out of bed real early this morning, at a super non-rock-and-roll time of the day, man.

That’s when I should have been going to bed. Yes. And not at the same time, but we’ve switched places, because I spent several years living in Oakland and then moved to Minneapolis only a few years ago. So honestly, yeah, I wish we could trade places sometimes.

Awesome. Well, Alex, I’m really excited to talk to you about Audio Test Kitchen today, and all of the ways that people in the Sound Design Live community can use the tool, and how you built it. But before we do that: let’s say when you get into a studio and you’re trying to figure out how it sounds and what it feels like for the first time, what’s one of the first audio test tracks you like to play to, you know, see how it feels and how it’s going to work for you?

Steely Dan, “Peg.”

I mean, come on. Donald Fagen, The Nightfly. I mean, come on. Daft Punk, everything just feels right.

So, Alex, how did you get your first job in audio? Like what was your first paying gig?

Oh, man. You know, getting a first anything for me, I think has been a result of being passionate. And that can, at some points in one’s career, trump expertise or talent.

Like, I think the first job in audio that I got was because I was there and I kept showing up and I kept being really into it, really intense.

And I was in Northfield, Minnesota, still in college. First of all, I got a job with St. Olaf College’s technical services, so I was hauling around TVs on carts.

That was my first job in audio as well, except it was for the music department, doing recording. But it’s very similar, I think, to what you were doing.

And so I think by my senior year of college, I caught the attention of the big-time audio production company in Northfield, Minnesota, that would do the big gigs in the Twin Cities.

Right. And because the owner and operator of that company, Brent Day of Day Productions, saw me working the shows and down at the local club.

And he was like, hey, kid, let me pull you in. You’re weirdly intense.

Well, there may be some accuracy to that, but, you know, like you, we need people to recognize our ambition and our passion.

I remember Lauren Wicklander of Southern Thunder. Gotta give him a shout out.

Yes. Lauren now works with Axiom, which is an Italian brand of loudspeakers and pro audio gear. But I remember sitting across from Lauren after I had mixed his band.

His wife is the lead singer in a Patsy Cline cover band, and he played drums. And my friend Brent Day, who was hiring me for Day Productions, was like, hey.

This guy could give you a gig someday, you know, so do a really good job.

So I made a cassette recording, and I cared so much about the way they sounded live at that show and the way the recording sounded. As early as I could in my career,

I would record the shows and then that turned into just recording for the sake of recording and really being so interested in that fine tuning process of those audio relationships.

And that’s kind of a different world, between live and the studio. There’s a club in downtown Minneapolis called the Fine Line Music Cafe.

Mm hmm.

And I was a wee lad in college looking through the window of the club, like, look at that.

And I saw the meters going up and down on the console, and thought, oh, my God, I’ve got to get my hands on that console someday. And then eventually I did.

I got my breakthrough. I got my chance from Lauren Wicklander of Southern Thunder. Thank you, Lauren.

And then there was another moment in there. So I was recording the bands; that was a super important thing. And I was doing it either for free or for, like, so little money. And I would record them on stereo.

Digital, you know, DAT tape, like a digital audio tape. Yeah. And then eventually an eight-track ADAT, digital, or sometimes just a cassette.

But I would pay as much attention to the live mix that I was crafting,

and a great monitor mix for them onstage, as I would to this kind of submix out of the buses, the groups that I was creating onto the tape. And it had to be a different balance. You know, what’s going to sound good in the room is not the same; you have to create a different balance for what’s going to sound good on the tape.

So I did that, and then people would come to me saying, like, dude, your board mix sounds better than the album that we just made in the studio. So I knew I was onto something. And then that served as a bridge for me to what I really wanted to do, which was to be in the studio.

You’ve gone on to do more live sound, lots of recording, and you have the software project now. And I know that there are a lot of interesting stories in there, but I was wondering if maybe you could pick one. And what I want you to think about is maybe one decision that you made to get more of the work that you really loved. Can you think of something and sort of take us to that moment?

Yeah. It’s probably around the time of transitioning my career from live sound to the studio. A super important thing for anybody to do, when they imagine the place they want to go and they’re not there yet, is to, number one, allow themselves to imagine it, and then talk about it and put it out there. And something really simple I did changed my whole life. So there I am. I’m the front of house engineer at the Fine Line Music Cafe in downtown Minneapolis, a venue that holds about 600 people at max.

And it’s a bar and a restaurant and a nightclub. I was mixing there, and Prince would sometimes show up. I remember I was up in the spotlight one night when some other band had brought their own front of house engineer.

And Prince was like three feet away from me in the balcony.

And my hands are burning on this giant, incandescent, you know, tinderbox that’s about to burn the venue down. And it’s like, it’s really him. And that feeling that you get, like there’s something there. Fame. Like electricity. Anyway, at this club, the Fine Line, there was a bouncer, just the dude at the door, nothing to do with music or sound or anything like that. But he had a side hustle.

I’m not sure which one was the side hustle.

He was also a realtor, and we were friendly, and I’d chatted with him, and he maybe said something like, hey, are you ever looking for a property to buy?

I’m like, you know what I’m looking for?

I want to buy a property where I can set up a studio.

Oh, really? Okay, cool. And then I was like, I think it’s going to be a barn, and it’s going to be in the country, and it’s going to be this huge thing. And then, you know, we chatted about it, and a couple months later he’s like, dude, I found a place.

And so it was a little house in a part of Minneapolis next to the University of Minnesota, affectionately referred to as Dinkytown, this little community of shops and houses next to the U. And there was a little house where, in the 70s, a record label called Twin Tone Records had started, just a couple of music freaks. And then from that, they’re like, well, we need to record our bands. And so they started recording their bands in the living room.

And eventually, sometime in the early 80s, they even built an addition on the house and sliced a big hole in the side of the house so they could have real control room glass. And so it was a house and a studio. And this is the place that the bouncer from the Fine Line took me to. And it was such a dump. It was moldy; it looked like no one had opened the curtains or windows in a decade.

No, but I was like, okay, I can live here. I can work here. There’s a couple extra rooms I can rent out. I’m taking it. And it was so little money, like 80-something thousand dollars. At the time, I mean, this is the early 90s, that still felt like an astronomical amount of money to me. Right. But somehow I mustered it with some help.

I think my mom loaned me some cash. Sure. And then by the time the sale went through, though, I had the cash myself. And I was like, Mom, I don’t need your cash anymore.

But, Mom, that $4,000 that you loaned me? I’ll keep it and buy some gear with it. So I did that.

So I bought my first recording console, a Mackie 32x8, an 8-Bus. All analog. No compression. So you’re keeping it real. Yeah. I really love hearing that story, because so many times I hear people telling me, I really just need to get myself out there. And to a lot of people that means sending their resumé to a lot of studios or a lot of production houses.

And I think you just have to keep in mind that that is the first thing that everybody thinks about, right? You’re basically doing the coldest form of communication and outreach that you can. But I hear a lot of stories like yours where, because you were so passionate in talking to a lot of people, all the connections in your network start to really engage, and then something like this happens.

You get an opportunity, and then, you know, you threw your hat over the fence and you’re like, I don’t know, I’m going to make this work.

But you made it work. So, yes, light years ahead. How did you get so many recommendations on LinkedIn? You have somewhere north of 20, like 21 or 23? Yeah. And most engineers have maybe one or two.

My thought on that is, you know, it’s your network, and it’s treating people right, and it’s your passion, and it’s giving. And I likely would have written a recommendation in response, or maybe I initiated by writing a recommendation for someone else. And perhaps this happened at a time when LinkedIn itself was more focused on, like, hey, you’re a new member, build your profile. Maybe they still do that.

But I think that’s key to it right there.

In every work scenario, any kind of relationship, you have an opportunity not just to do the job, but to be a great person and to treat people right.

And if the high number of recommendations I have on LinkedIn is an indication of anything, hopefully it’s an indication that not only did I do a good job, but I cared, you know, I cared about the people.

And I think that one of the biggest mistakes I’ve made in my career actually is those moments in time when I put the project before the people.

And dude, every time that happened, it got painful. There was a painful moment where there was a reckoning.

And one thing that’s cool about relationships is that it’s like nature: if there’s an imbalance, it will force you to deal with that imbalance at some point, or else you’re just going to die of stress, which I prefer not to do. So for those of you who are listening to this on the podcast, you’ll want to know that we’re also recording video, so this will be on YouTube. At some point in this conversation, Alex may start to show us websites and things on his computer.

We’ll also try to describe that, but just know that there’s a video on YouTube where you can see what we’re actually doing on his computer.

For people who don’t know, what is the goal of Audio Test Kitchen? Why would I go there? What’s the problem I want to solve with this tool you’ve created?

Audio Test Kitchen lets you hear the gear before you buy it. Audio Test Kitchen lets you hear the gear.

If you have questions about, like, how does this thing sound? Maybe you’re a student, maybe you’re an enthusiast. What’s a better word than that, Nathan? A hobbyist, whatever. You just love sound. You’re curious. Right.

But doesn’t it make sense that, in a world where it’s so easy to transmit, technically speaking, high-quality audio and high-quality video over the Internet to wherever people are,

and there are these products that make sound or shape sound or capture sound, doesn’t it make sense that you would be able to audition how those tools sound before you buy them?

I mean, everything’s in place except for the raw data that you need to be able to adequately audition, in a neutral, unbiased environment, and compare those to other pieces of gear in that same category.

So that’s what Audio Test Kitchen did. Yes, that would be the next question for most people: yes, it makes sense, but how would you do that? That’s impossible. Right. And we’re going to get into how you did that. But first, I just want to jog people’s attention to the way that we have all traditionally done this. In my case, either you’re on a show and you get to try something new, a new microphone.

You’re like, oh, this new microphone, let’s try that. That sounds different. Or you go to a music store, and probably lots of people have had this experience of going into the music store where they have a bunch of different microphones set up, and you can basically pick them up and talk into them, or sing into them, or whatever you want. And they have headphones. And that gives you a particular experience where you get to hear your own voice through a bunch of different microphones.

And that’s eye opening for a lot of people. Right. All of a sudden.

But, you know, in that context, not only do you get to hear your own voice, you get to hear all the background noise in the store, the air conditioning.

And you can turn up the headphones so loud that anything sounds blisteringly amazing. Right. So, kind of two problems there. One, you have to go to this place where they have this, and they only have like six or seven. And then, yeah, there’s all this background noise, and you don’t really get to use the conditions that you want to hear the microphone under, in the studio or in live use. Cool. So let’s get into it a little bit more.

Next, I’d like to just know where you got the idea from, and then maybe we can actually take a look at the site itself. So, dude, I’ve been an audio enthusiast since I was young,

listening to stereos and building hi-fi systems, just in terms of selecting the right parts to go together. And then that evolved into becoming a live sound engineer and a recording engineer. And then that evolved into me writing for some magazines like Pro Audio Review, which doesn’t exist anymore.

Sound On Sound, Pro Sound News, these kinds of things. And coming from the perspective of trying to review gear.

So a review is supposed to serve as a kind of surrogate: hey, let me have an experience with the gear for you, and write about it and describe it clearly enough,

And from some kind of position that allows you to develop an understanding and maybe make conclusions about that piece of gear.

So I was in that role, and then I started working for Vintage King Audio as a salesperson. So here I am on the phone, I’m answering emails, and I’m trying to do the same thing. Eventually I was at the Los Angeles Vintage King showroom, which is like one of the only places on the planet that actually has gear plugged in that you can come check out and audition.

And so I’m surrounded by this stuff, and poor you.

And I really mean that. Poor these people on the other end of the phone or email: they have no access. They can’t come in here and hear the gear for themselves. They can’t touch it and get some kind of experience with it. So I was always in this, you know, fortunate and unfortunate position of being like a translator and an interpreter.

On two levels. Number one, my goal is, like, okay, what problem do you want to solve? What do you want to sound like?

What inspires you?

So I’ve got to translate. Number one, you’ve got to have an ability to translate your desires, your thoughts, your way of talking about sound.

Then I take that in and interpret it. And you saying, like, you know, “warm,” that has a meaning to you. Warm might have a totally different meaning to me. OK, now we’re into a game of telephone, and through this we’ve got a process that breaks down.

It’s like you’ve got an intention.

You’ve got a desire, something that you’re looking for, something you’re trying to do. You get tools that will translate your innermost vision, your desire, and then you’re going through several layers of telephone to try to actually get that piece of gear to translate your thoughts and to convey who you are: your voice as a podcaster, your voice as a singer, your fingers as a guitarist, whatever. And so it just made sense to me: why don’t we just stop this game of telephone?

And why don’t we just make it so that the person with the vision in their head for how they want to sound can just audition the tools that make the sound, shape the sound, and capture the sound themselves. That’s really great.

It was really through all these conversations you were having, trying to describe gear to people or help them make purchasing decisions, that led you to figuring out: how do we stop this game of telephone, and how do I just bring the gear to you without me actually driving to your house and setting up a hundred microphones.

We are the first to admit it, if you could create the perfect scenario where, like, every microphone... Let’s say, for example (because Audio Test Kitchen launched with a library of 300 large-diaphragm condenser microphones, all comparable online, all standardized recordings, and this unbiased interface),

let’s say they, like you, Nathan, are thinking about 20 of them that are in the right price range.

They’re like, that’s on your short list. So if I could set up a scenario in the studio, it’s your studio.

You know the speakers. You know the headphones.

You know the acoustics. Everything is perfectly level matched. And I’m just going to standardize these conditions one step further: I’m going to make it so that the source that we record through every one of those microphones never changes. Well, what the heck source do I mean? So imagine trying to do that in a store or somebody else’s studio. OK, listen, it’s not your space, not your listening environment, not your speakers, not your headphones.

You might feel under pressure if you know you’re paying for that studio time or if you’re like in the store in that environment.

And certainly, even if you just walked around that room where you had all those microphones level matched and set up perfectly, and you talked into each one, or had someone playing to each one, there are variables galore.

And so all of a sudden what happens is you’re going to start making conclusions about the gear that have nothing to do with the gear.

They have to do with, like, oh, the acoustic guitar that you’re auditioning on every one of those mics: it went slightly out of tune on a couple of them, or the tuning changed, whatever, and it changed in such a way that when you’re comparing one mic versus another, you’re like, there’s something I like a little better about this mic.

Well, there may truly be differences between those two microphones, but the source that you recorded on them also changed.

Well, so now what are you comparing?

So we had to solve two really huge problems with the way that people have been able to audition gear in the past. One is that we wanted to bring it to your environment, the one that you’re comfortable with.

You’re familiar with it. That’s the power of that comparison.

And then we also knew that we needed to make it so the only variable is the gear itself.

The source never changes.
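The two constraints Alex keeps returning to, level matching the playback and keeping the source unchanged, can be sketched in a few lines of code. To be clear, this is purely an illustration and not Audio Test Kitchen’s actual pipeline; the function names, mic names, and toy sample values below are all made up. The sketch normalizes each take to the same RMS level so loudness can’t bias the comparison, then hides the mic names behind neutral labels for a blind listening test:

```python
import random

def rms(samples):
    """Root-mean-square level of a list of float samples (-1.0..1.0)."""
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def level_match(takes, target_rms=0.1):
    """Scale every take to the same RMS level.
    Removes loudness as a variable, leaving only the gear's character."""
    return {name: [s * (target_rms / rms(samples)) for s in samples]
            for name, samples in takes.items()}

def blind_shuffle(takes, seed=None):
    """Hide mic names behind neutral labels (A, B, ...) for a blind test.
    Returns the relabeled takes plus the answer key."""
    rng = random.Random(seed)
    names = list(takes)
    rng.shuffle(names)
    key = {chr(ord("A") + i): name for i, name in enumerate(names)}
    return {label: takes[name] for label, name in key.items()}, key

# Toy "recordings" of one unchanging source through two hypothetical mics:
takes = {"mic_x": [0.2, -0.2, 0.2, -0.2],
         "mic_y": [0.05, -0.05, 0.05, -0.05]}
matched = level_match(takes)          # both now sit at the same RMS level
blind, key = blind_shuffle(matched, seed=42)  # listener sees only A and B
```

With real audio you would operate on sample arrays loaded from WAV files, and you would likely prefer a perceptual loudness measure such as LUFS over plain RMS, but the principle is the one Alex describes: change one variable at a time, and never the source.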

So that really was our journey, and that’s our big one-two-punch breakthrough: delivering those comparisons to you in a way that you can easily do them,

and that’s our website, in an unbiased, controlled setting, and then creating the conditions and capturing the data that powers these comparisons. And you know what, man? Another role that I’ve had in my career: I’ve also been a manufacturer. I was the vice president at Slate Digital and Slate Media Technology, and I created a product called the Raven.

It’s a touchscreen controller for DAWs, and although that thing does not make a sound or process a sound,

being in that manufacturer’s seat, the last thing you want is for the users, for retailers, to not get it, to not get what you’ve created.

And so let’s say that I’m a microphone manufacturer. Part of my problem then is needing people to understand what I’ve created. What is the sound of this thing that I’ve created? And having been in that seat, now running Audio Test Kitchen, I know how much passion and blood, sweat, and tears has gone into the creation of these products.

And the last thing we wanted to do was to misrepresent the way these products sound in our comparisons on Audio Test Kitchen. And so we talked with manufacturers. We asked: how do you measure the quality of your microphones? What kinds of tests do you perform to develop the tools that you create?

So we incorporated some of those same techniques, and I can only fast-forward to launch day.

And now, nine months after our initial beta launch and seven months after our public launch, we have had no complaints from 54 microphone manufacturers. Wow. Congratulations. The people who make the most expensive mics, you know, the ten-thousand-dollar Sony C-800 or Telefunken microphones, all the way down to an $80 Sterling mic.

I mean, it’s miraculous that 250 physical microphones could all be represented in a fair and accurate way, true to what those mics really sound like in real life. Sure. And I’ll say one more thing, and that’s because we recorded them in real life. We didn’t simulate anything. We didn’t digitally model anything. We put these microphones up in acoustic environments and created all kinds of different scenarios, from the lowest lows to the highest highs, you know, vocals and guitars and drums and all this stuff.

So you’ve mentioned several times how important the test procedure that you used was. Yeah. I want to talk about that for a minute, because I know that’s the question on people’s minds as they’re listening to this. A lot of people who listen to the Sound Design Live podcast do measurement and work on live shows, and measurement is very important to them. So I know that we could fill a whole other interview with just going through all the methods, because there are multiple instruments, and all the workflows you had to develop to handle those multiple instruments.

And I first just want to direct people to a YouTube video called Audio Test Kitchen Has Revolutionized the Microphone Shootout, because when you watch that, you will be able to watch Alex talk about how they worked with a kick drum, how they set up the microphones, where things went wrong, and the entire signal chain just for that one thing. And when you see half an hour of him just talking about this one instrument, in this one scenario, you realize what he had to go through to figure all this stuff out.

So I thought maybe instead of that, we could talk about the vocals. So let’s talk about the workflow for recording the vocals, and I’ll just read it so you don’t have to repeat it. In the app itself, it says: vocals, a single vocal performance bottled by a laboratory-grade, neutral microphone in an anechoic chamber, re-amplified via a vocal surrogate loudspeaker into each product microphone, one at a time. And then later on, you say that that laboratory-grade neutral microphone, I believe, is a Schoeps MK2.

We were very listening-driven in our R&D process, because my team and I at the time were in this development process of, okay, how can we actually create a system where, if you put a loudspeaker in front of a microphone, it sounds just like a human being to that microphone?

We were really skeptical that that was possible. And so, Nathan, you have thrown me right into the hot seat.

One of the hardest things that we had to do, and one of the most important use cases for microphones, is vocals. Right.

So how do you maintain consistency? Your question taps right into a fundamental principle of Audio Test Kitchen, which is: the source can never change. Because even if we recorded the most amazing vocalist in the world, one capable of singing near-perfectly 250 times, you know, into every microphone...

First of all, that would take a week.

We would have to be serving them the exact same lemon and honey tea, you know, at precise intervals. We actually tried some of that, and it just wasn’t humanly possible.

And as we were comparing microphones, we would get fooled. Like, okay, do I like that mic better?

Because the performance is different? So we had to eliminate all that stuff. We created what we call our vocal surrogate, which is actually capable of re-singing into a microphone.

So, in order to find that Schoeps CMC6 body with an MK2 omni capsule, in order to find the DPA 4011 (those are the two capture mics that we used), in order to find the Adam S3H loudspeaker and the ATC SCM45A loudspeaker that served as our vocal surrogates,

we spent months testing all kinds of different microphones and all kinds of different loudspeakers, in different acoustic environments, at different re-amplification distances. We really relied on our ears to, first of all, steer us. Like, hey, sure, this mic has a really flat response,

but does it sound like music? Does this sound like a person? You know, would I, closing my eyes, believe that that is an acoustic guitar playing live into that microphone? Could I close my eyes and believe that is a person singing into that microphone?

So that was really our first getting over the hump, our next getting over the hump, the last five or 10 percent of getting it perfect. You know, oftentimes that’s the hardest thing, right? That last five percent. It’s the most expensive, it’s the most time consuming. And that’s when we really needed a breakthrough and we had to call in calling a lifeline.

And fortunately, someone picked up on the other end.

What happened? Oh, you want to know?

Hello.

Harman. Harman Laboratories.

Dr. Sean Olive's team. Yeah. Yeah.

We had done as much as we could in our own studio facilities, you know, just traditional recording studios. And we had access to some of the world's greatest loudspeakers and capture microphones, actually, courtesy of us being in a studio that was parked above Vintage King Audio's L.A. showroom. I mean, this was years after I'd already left Vintage King.

I had formerly been a salesperson and marketing person with them, but I still have a great relationship with them.

And they loaned us speakers and stuff. So we got to really test the best of the best. To break through, though, to close that last five or ten percent gap,

we knew we needed the kind of facilities where real audio R&D and development is done.

So I'm talking, like, an anechoic chamber. Right.

And for those of you who know what an anechoic chamber is, you know that it's a place where you can go in and, as I've been talking about, eliminate all the variables except the one that you want to test.

So it's a room where, when you make a sound in it, it's not going to throw anything back at you. No artifacts. No echo. "Echoic," of course, we know what that is, and "an-" meaning a lack of, or none. So we get anechoic: no echoes. Right.

So we knew that we needed to be in an environment like this so we could identify the final differences between what we were capturing,

in that process of bottling a human voice, bottling an acoustic performance from an acoustic guitar, and then re-singing that into these large-diaphragm condenser microphones, because there was still a little bit of a difference.

Our measurement capabilities were tapped out. That was it.

So we got really fortunate: on a tour of the Harman facilities, which we very craftily got ourselves onto, they walked us back through the laboratory part, where all the chambers are.

And we had our laptop in the backpack, ready to show them a demo. And we already had the 250 physical microphones with us.

You got a meeting at Harman by going on a tour? We booked a tour.

Talk about, you know, refusing to be stopped. This is the kind of stuff you've got to do in your career, you know? You can't just send out your resume, like you were talking about, Nathan. You know, you can't just drop the email. Like, we had called these guys.

We thought we had an intro made: the former president of the AES frickin' called them on our behalf, and we got nothing. I mean, come on. We thought it was a no. Right.

So we booked ourselves on this tour. Okay, last-ditch effort. We're out here at Harman's facilities. Luckily, they're nearby where we live in Southern California.

And we walked back through the lab and they're like, oh, hi. We're like, hey, we're Audio Test Kitchen, blah, blah, blah. They're like, oh, interesting.

We were just talking about you.

Interesting. Like what? Serendipity. Talk about network effect.

So NAMM, the trade show in Anaheim, California, had just commenced, and one of our buddies, Ted White, over at Focusrite,

who had also worked at Harman, had mentioned something about what we were doing.

And so when we showed up, not only did we finally have a face-to-face with the very people on the planet who could help us, but we also had a warm introduction made prior to that.

So Dr. Sean Olive: he's a super-duper bigwig in the field of acoustics and has created such things as a headphone target curve. Which is, like, imagine you had the question: if I could correct the response of every headphone, or, as a headphone designer, if I was trying to target a certain frequency response,

what's the target?

Harman is one of only two or three people or organizations on the planet who have defined that, how headphones should sound.

And this is the kind of stuff you can do when you have an anechoic chamber, and they have four of them. So Dr. Sean Olive, Todd Welti, and the rest of the team welcomed us with open arms.

And we spent probably two months there doing research and also then doing a final capture.

Once we had, Nathan, bottled the vocals, we wanted to replay these vocals in different acoustic environments, to represent what someone who actually buys one of these microphones and uses it might experience themselves.

So we did one, the one we re-amplified, or re-sang, in the anechoic chamber,

so there are zero artifacts. We did one in kind of a medium room, kind of bedroom-acoustics-ish. And then we did one in a more lively, larger room.

So it was through being able to be in a controlled environment with some of their test tools, and being able to analyze, okay, here's how close Audio Test Kitchen could get in the traditional recording studio.

The difference between a live vocal and one that's bottled and re-amplified: let's measure that difference.

And then Harman just went like, okay, we're going to take that difference and null it out. That's gone. All of a sudden, we're in a position where the bottled and re-amplified vocal was indistinguishable, when you close your eyes, from the real thing. One of the things that you would probably like to know about Audio Test Kitchen is that it's free. When you first arrive at it, if you've never been there before,

you're gonna have a few microphones you can play with in the taste test. That's the area at the top, right below the place where you hit play and see the waveform. The taste test is where you do your audio comparisons. And below that is what we call the flex box. You can do a lot of stuff in there, like compare frequency response graphs, and you can search for more microphones; the magnifying glass is the search thing.

The second you hit that magnifying glass for search, if you're not already a logged-in user, just create a free account.

And please sign up for, join, the mailing list. And here's why: because we've only released half the content that we've captured on these 300 microphones. We've got tons more that we're gonna put out there, and there's a few really fun Easter eggs that we're gonna put out soon.

The category that we have now is large-diaphragm condenser mics.

But we also recorded an SM7, SM58, and SM57 alongside all these large-diaphragm mics, just to have as a reference point, because everybody knows those.

Right. So you'll be able to find out about that when you join the mailing list.

Good. All right. So we've got, in the search box here, hundreds of microphones. And I think what we should do is, you know, there's some cool ones in the taste test already.

We got one from Lewitt, one from Townsend Labs (that's the modeling mic), one from Sterling, one for a hundred bucks from Gauge.

But I'm going to clear out that taste test right now. I'm just gonna put those microphones in the cupboard. And I'll tell you, cupboard is a kitchen theme, but it's also what the Brits call the place where they store their microphones: their mic cupboard. So I'm gonna click on the cupboard here. It's a tab in the flex box, where you also do search, and it's where you can kind of store your stuff.

You can create a little collection to test, and you can also save stuff for later if you want.

So I'm going to go back to the search box, and I'm going to click, in the upper right-hand corner, on the filters. And I'm just going to, like, what's a question a lot of people have on their mind? I'd say in the studio realm it's, like, hey, do I have to spend three thousand dollars on a U 87?

Yes, that's a really good point. I think one of the biggest first questions is: how much do I need to spend? Like, can I get away with a hundred-dollar or two-hundred-dollar mic, or do I need to jump up to a thousand? Like, what's the difference?

How do those two price ranges compare? Totally.

I mean, it's a totally legit question. And I think another thing to ask yourself is... so we give you, for the first time ever, the ability to do a legitimate, totally unbiased comparison between an eighty-dollar mic and a ten-thousand-dollar mic and anything in between.

So you can do that. You can compare the sound, you can compare the frequency response graphs, and, like, it's mind-blowing. And you will find gems at every price range.

But I have to tell you, you know, what a microphone costs is a reflection of a number of things, some of which might be due to, you know, the brand value.

But now, before you toss that out, just like, oh, it's just a brand name, let's think about this for a second. Why might I want to spend three-thousand-plus dollars on a Neumann U 87?

Well, because if I run a voiceover studio, that might be the very thing that my clients need to see that I have on my gear list to trust that I know what I'm doing.

Yeah, it has an air of credibility. It shares its credibility. Exactly. So.

And also, like, if you want another reason to pay for a Neumann-brand U 87, it might be because you need to maintain a certain consistency and be able to collaborate with others and have the same sound from studio to studio. I mean, that was the reason why, you know, a lot of people installed Solid State Logic consoles in their studios in the 80s and 90s: so that we could have what we have now with DAWs, which is the same sound wherever you go.

So you could work in multiple locations, multiple cities. Anyway.

So there's a reason why a brand, and what you would pay for it, if you feel like it has something more to it, goes beyond the sound.

Another thing that can make one product more expensive than another, that isn't sound-related, is build quality, is component selection.

Again, talking about that last five percent, Nathan: the last five percent is often the most expensive, most time-consuming part. And you've got companies, like, you know, I mentioned Telefunken earlier.

They are scrutinizing every last detail of their microphones to deliver something that is going to last for you.

And, you know, maybe it has a better build quality, and maybe they have better customer support.

So you have to kind of keep all these things in mind when you are talking about getting your tools.

And on the other end of that spectrum, there are also incredible opportunities for value at every price range.

And you will find... Audio Test Kitchen is, I think, the first tool to make it possible, if sound is the most important thing.

And if sound is the top quality by which you are choosing and perhaps buying a piece of gear, Audio Test Kitchen gives you the opportunity to make your comparison solely based on that, for the first time ever. So now that we've talked all about this U 87 use case: in my filtered search I just typed in the number eighty-seven, and up came all the microphones that are U 87s.

There’s only one.

No, I mean 87 and all the clones of U 87s. Still, let me pull up, say, the Warm Audio. So here's the Neumann U 87. Thirty-six hundred bucks.

I've dragged that from the search up into my taste test.

I just dragged the Warm Audio WA-87 from the search into the taste test.

And Nathan, you might notice that when a microphone first loads in the taste test, it's a little bit grayed out and it says loading audio. That's because, in order to make a really snappy interface that makes these comparisons instantaneous, we load a bunch of audio in the background.

We're actually loading about 20 sources per microphone as soon as you add it to the taste test.

So be a little bit patient. But then the reward, the payoff, is when you're using the interface and it just responds instantaneously. And then let's take an example. I mentioned earlier a modeling microphone. Let's take the Townsend Labs. They have a modeling microphone, which is one physical microphone, and then they measure other microphones,

like the classics, and create a digital model of each and then apply it to the sound of their physical microphone. It allows you to select different-sounding microphones right within their software, in your DAW.

So they have one called the LD-87; LD probably stands for large diaphragm.

Let's look at the Peluso P-87. So this is an all-analog clone of a... well, clone is kind of a common term.

I'm not sure the degree to which they were really attempting to clone the U 87, but that one's, uh, eleven hundred sixty-nine dollars. And then we'll pick out one more. Let's do Antelope. Antelope is another modeling-microphone company.

So it's one physical microphone that you buy, and then you can apply lots of different digital models to it. Now, for all that we've talked about sound, I actually want to start with a really fun tool that, like, gives your ears superpowers.

And this was only possible through Harman Laboratories. So I'm going to click on a tab of the flex box on the left and select frequency response. Frequency response graphs, up until this point in time, have been a total mixed bag. A double-edged sword.

They're hated as much as they're loved. And here's why.

It's because the frequency response graphs are measured and reported by each manufacturer individually. They can apply their own smoothing, their own margins of error, their own reporting.

That doesn't necessarily relate to the way another manufacturer's does, or even how their graph looks.

Maybe the X and the Y axes are scaled slightly differently, the lines are in different places, right? So, yeah, exactly.

Even if they had the exact same methodology and the same transparency, the way that they report it, you can't literally line it up.

So thanks to Harman Labs, we were able to measure the frequency response of every one of these microphones in an identical setting in their anechoic chamber and, for the first time ever, make it so that you can do a frequency response comparison

side by side, apples to apples, of three hundred microphones. So here we go again. I'm going to click on that.

So here's our ears' superpower: each microphone has its own color coding. So the Neumann U 87, thirty-six hundred bucks, is the red line. And I click over to the Warm WA-87, the orange line, and we can instantly see that the low frequency is pretty similar, pretty similar through the mids, and then all of a sudden there's some pretty significant deviation in the high-frequency response, centered around six kilohertz. Really different response on those two.

And so all of a sudden I'm going like, well, so what?

Why might that be? If Warm was trying to clone or recreate the U 87, well, what if their model of the U 87 was a vintage one that had a softer top-end response?

Because personally, from my experience, and anecdotally from what I've heard from others, there's a difference between the modern U 87 produced by Neumann and a vintage one.

Okay. And that might be due to aging, but it also might be due to differences in how their production process and their component specs have changed over time. That's so funny.

So they might even argue their clone sounds more like a U 87 than the real thing. That's right.
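As an aside for readers who export measurement data and want to quantify this kind of comparison themselves, here is a minimal sketch of finding where two response curves diverge most. The response values below are invented for illustration; they are not Audio Test Kitchen's measurement data.

```python
import numpy as np

# Two hypothetical microphone magnitude responses in dB, sampled at a
# few frequencies. These numbers are made up for illustration; real
# curves come from anechoic measurements at many more points.
freqs = np.array([100, 500, 1000, 3000, 6000, 10000])   # Hz
mic_a = np.array([0.0, 0.2, 0.0, 1.0, 3.5, 2.0])        # dB
mic_b = np.array([0.1, 0.1, 0.0, 0.8, 1.0, 1.8])        # dB

# Absolute difference between the curves at each frequency,
# and the frequency where they diverge the most.
deviation = np.abs(mic_a - mic_b)
worst_freq = int(freqs[np.argmax(deviation)])
print(f"Largest deviation: {deviation.max():.1f} dB at {worst_freq} Hz")
# prints: Largest deviation: 2.5 dB at 6000 Hz
```

On this toy data the largest gap lands at 6 kHz, mirroring the U 87 versus WA-87 difference described above.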

Fast forwarding.

I've noticed that companies that do modeling microphones, like Townsend Labs, Slate, Antelope, have the liberty of giving you multiple versions of U 87s as models.

So you can take a modern one, you can pick a vintage one, and play with those different profiles.

Audio Test Kitchen is primarily designed to help you focus on the sound. But there are also some cool visual tools, like the frequency response curves, that give your ears superpowers, in that they can help you go like, oh wow,

okay, I see a dip here at six K. I'm gonna start listening for that. It opens up your hearing. But there are some other tools that we built in, too. I'm just gonna go to the upper right-hand corner and click on this "i" button.

We actually have hotkeys that allow you to more quickly navigate your session.

And I'm going to use one of those hotkeys right now. It's the numbers on my keyboard, on my laptop.

Numbers one through six select the corresponding taste test slots, which are also numbered. So with just my hand on the numbers on my laptop, I can easily jump between the Neumann U 87 response, the Warm WA-87 response, the Peluso P-87, the Antelope, the Townsend. And I can really quickly get a visual comparison, and that same principle works really great when you're comparing microphones by ear.

And in that hotkey menu you'll see a few other cool ones, too, like hitting the B key takes you back to the beginning of the audio track. So while we're talking about audio tracks: the sound that you're comparing is hugely important to what we built. And we were super deliberate about designing the content to create something that was actually fun to listen to, good music, and also that would bring out and reveal all the characteristics of the microphones that you need.

So I'm actually going to show you some of the solos here. There are two songs right now.

And again, if you join the mailing list, you'll know over the next couple of months as soon as we release the two other songs that we have, in other genres, so that you can get different types of music and different-sounding sources to compare these microphones on.

So this first song I'll pull up is, like, an old-school L.A. hip-hop tune. And within it you can solo the bass, the guitar, the piano, the vocal, the piano room, and then also a drum stem and a dry mix. And the dry mix is just a different version of the mix that's in the stack effect here, which is a really cool tool. The stack effect was processed with UAD plug-ins. Universal Audio were really cool: about a week or two after we launched, they called up and said, dude, this is amazing, how quickly can you build us an audio test kitchen for our site?

And I hope it's okay for me to talk about that, but that's a work in progress with those guys. They sent us some Satellites and all their plug-ins so that we could take our full-band mixes here and process them as you would if you were making a record.

And so not only can you hear the raw sound of each one of these microphones, but you can hear it in context. The plug-in settings never change. The mix never changes. But we add a little compression to the vocals, a little compression to the drums.

We didn't EQ anything, because that's where the main differences between these microphones are, but we added, like, a touch of reverb and, you know, a little slapback on the vocal, that kind of thing.

So you can kind of contextualize, real-world, how these microphones might be if you were to own them. And that brings me to the stack effect. For those of you who are just listening to the podcast, there's a toggle between solo, so that you'd be able to hear any individual instrument within the track, and, on the opposite side of that toggle, the stack effect. And it's basically a mix.

But why we call it stacked is because, if you record every source using one microphone, its personality stacks up.

If you've got eight sources, ten sources, twelve sources in your track, all of a sudden you get an 8x-ing, a 10x-ing, a 12x-ing of that microphone's personality, its characteristics. And this is something that I observed in the showroom a lot at Vintage King: when I was doing shootouts, comparing a microphone on a single source, sometimes those differences are pretty subtle. So we wanted to create a way to naturally amplify the differences between mics. No processing, no trickery.

It's just literally like, hey, man, and, you know, this is what a lot of people do anyway: you know, you're talking into a Heil PR 40.

What if that were your one and only mic?

Or what if that were your best mic and you just recorded everything with it? Your vocals, but you also stuck it in front of, you know, a guitar and a bass. And you put it in the kick drum, and you put it, you know, you'd have to put it on everything, your tambourine.

So that's what the stack effect simulates. Now, if the sound is really the most important thing, if it's more important than price, and it's more important than looking at a frequency response graph, and more important than how these products look, their visual design, the aesthetics, you can flip the Audio Test Kitchen interface, which is just a free website that works in your browser, into blind mode. One of my favorite features.

Oh, good. Good.

So, you asked me about this. Okay, you flip it into blind mode and the interface automatically shuffles the position of the microphones in the taste test at random, so that you can't, you know, track,

oh, that mic was, you know, the Neumann and that was the Warm. And it makes them anonymous.

So now we can't see the pictures of the microphones anymore. We can only see our little Audio Test Kitchen logo and a letter assigned to each one of those microphone taste test slots.

So the thing that you want to do as you're listening is take notes on each microphone.

So let's say that we were doing a listening session, and rather than trying to get good audio over Zoom and have it sound all funky and, like, hey, it sounds weird to me,

go and do this on Audio Test Kitchen.

It's free. So let's just say this first microphone, I'm like, you know, crisp above 8k. Let's just say my ears are that good.

And then I go to the next one, and I'm doing some listening, and I go: a bit woolly.

And see, these are terms that make sense to me. But, you know, if I said woolly to you, you might be like, oh my God, I've been looking for a woolly sound all my life.

That's the one.

But for me, I'm like, woolly is not what I'm looking for right now. So: a bit woolly in the low mids. Okay. And I submit that note. Make sure you submit every time. And then this next one, I'm listening and I go like, oh man, that's, like, open top end, exclamation point. Hit submit, and on to the next one. So you get the idea. And this last one, I'm going to use one of my favorite terms:

supple. Oh yeah. A supple mid range.

Okay.

And it's the supple mid range that I have been looking for all my life.

Okay, so now I'm done with my blind test. I'm gonna flip it back into normal mode, then go back to the flex box and select the tab, the third one down, the little notepad. Now I can see what my notes are.

And since I'm a logged-in user, it actually records my historical notes from all my listening to the U 87, for example. So I've got notes from today, I've got notes from April 18th, and so on. But if we're looking at just the notes that I put in today, now I can see, like, oh, in my blind taste test, remember, I was looking for the one that had the supple mid range, and it just so happens I can see which microphone had the supple mid range.

It was the Warm Audio WA-87. Five ninety-nine. Oh my gosh. I just saved myself three thousand dollars.

And then, you know, if you really want to, one way that you can, as someone who is hopefully benefiting from using Audio Test Kitchen: you can click the shopping cart. You wouldn't be purchasing a microphone through us, but you would be taken to your favorite dealer to make a purchase.

And our relationships with dealers, which we're actually just getting going, help keep Audio Test Kitchen free.

I was hoping you could tell us about maybe one of the biggest or most painful mistakes you've made in your career.

How that applies to Audio Test Kitchen is that we had to be okay with the idea, and the fact, that developing a product is a journey, and we had to take failures.

And you referred to some videos that are available to watch about Audio Test Kitchen's process online.

We had to be able to... like, here we are in the studio. Picture this: we're at EastWest Studios, and we've got 250 microphones, and we've got a crew of ten that we've hired.

And we're paying, you know, the rate of a world-class top studio every day, and assistant engineers and all that stuff.

And we're three and a half days into recording drums on these microphones, and we realize something is not right.

And that was a real moment. We had to choose at that moment who we are. Are we the people who go like, hey, welcome to Audio Test Kitchen, now, it's not quite perfect, and you're gonna have to ignore the fact that the tuning of the bass drum and the snare drum slightly changes, and the hi-hat is a little off from time to time,

but this is the closest you've ever gotten to being able to do a legitimate, you know, unbiased A/B test under standardized conditions, online, for free?

No. That's not who we wanted to be. And that decision, that decision to stand up for and do what it takes to achieve and deliver what we value, was a twenty-thousand-dollar decision.

We threw away that studio time, the drum rental, all that stuff, and then had to swallow that, like, oh my gosh, our best-laid plans led us to failure.

But now think about a much more epic expression of this. Let's look at a company that, like, makes rockets. Let's look at SpaceX or something like that.

They build in failure as part of their cycle. They're trying desperately to make things fail, because when it comes down to putting human beings in the Dragon capsule, it can't fail then.

So if you flip it and go like, let's find out every place possible we can make this thing fail, and let's embrace that and let failure be our teacher: fail forward.

That's it, man. If you can swallow your ego and reconcile with, like, dang, I thought I was really smart coming up with this idea of how to solve this problem, I guess it didn't work, so I guess I'm not as great as I thought.

But your greatness is in your ability to learn from your mistakes and adapt.

Can you think of one book that has been really helpful to you? The Call of the Wild, by Jack London. Yes.

Wow, that's a good one. Okay, cool. I was just reminded of that.

It's like, you know, sometimes your life really does depend on lighting that match.

What's one or two podcasts that you have to listen to every time they come out? Lewis Howes' The School of Greatness. Okay, that's a good one. Well, Alex, where is the best place for people to follow your work?

So I would invite everybody to go to audiotestkitchen.com and, first thing, actually scroll to the very bottom of the home page, where you can learn about some of the behind-the-scenes stuff and see some lasers and robots in action. Also watch the one-minute video that is Audio Test Kitchen in a nutshell; it shows you some of the processes we went through in the studios and in the anechoic chamber at Harman. Go all the way to the bottom of that home page and join the newsletter.

That way you can hear about the new content that's going to be coming out.

And look for Audio Test Kitchen on Facebook. Look for us on Instagram. And, let's see, me personally:

You can reach me [email protected]

All right, Alex, thank you so much for joining me on Sound Design Live.

Nathan, thank you so much for having me as your guest. And I want to say hey and thank you to everybody who's listening. We are all in this together, making the world a better-sounding place. I admire you. I applaud you.

Be courageous in not compromising your own standards, and bend the world to your vision.

Is your Dante network truly redundant?

By Nathan Lively

Subscribe on iTunes, SoundCloud, Google Play or Stitcher.

Support Sound Design Live on Patreon.

In this episode of Sound Design Live, I talk with the Field Sales Engineering Manager at Yamaha Global, Preston Gray. We discuss the benefits of finding a mentor, redundant Dante networks, and how scuba diving is like pro audio.

I ask:

  • What are some of the biggest mistakes you see people making who are new to Yamaha consoles?
  • When it comes to Dante networks, I have some small problems every time I use them. Could you talk a little bit about proper system connection and boot up procedures?
  • Tell us about the biggest or maybe most painful mistake you’ve made on the job and how you recovered.
  • FB
    • Dave Gammon: How does he see rapid prototyping and the ability of printing parts effectively change the possible configurations in pro audio for the better.
  • What’s in your work bag?

Talking about redundancy we want to look for single-point failures. If we have a network switch and it’s handling both primary and secondary networks, chances are it has one power supply. If that power supply goes down we lose both primary and secondary.

Preston Gray

Notes

  1. All music in this episode by Mile Twenty Four.
  2. Switched mode: the primary and secondary ports are doing the same thing. Redundant mode: primary and secondary are separate networks. If you start with only the primary network connected, you can open Dante Controller and confirm that everything is in redundant mode and check all network settings.
  3. Workbag: Bose noise-canceling headphones, Leica DISTO D810, iSEMcon EMX-7150, Lectrosonics plug-on wireless, TruPulse range finder
  4. Books: Scuba Confidential
  5. Podcast: Breach, 20,000 Hz
  6. Quotes
    1. Being able to find a mentor, and build a relationship with someone who has been in the industry much longer than you and is willing to share their experience with you, is probably the most influential decision I have made.
    2. We need to constantly be updating the gain structure.
    3. I want to make sure I’m hitting the preamp hard enough to make sure I’m getting the characteristics out of the preamp, but I don’t want to overdrive it. I want to ride the line.
    4. We have to be careful in how all of those busses and inputs are summing with respect to latency. We can quickly create comb filtering inadvertently if we don’t pay attention to the audio paths and the time it takes to go to Waves servers and whatnot.
    5. With Rivage, we have a really powerful latency compensation engine.
    6. Let’s get the primary network first and then the secondary, but let’s explore why that is.
    7. Rule #1 when working with redundant systems: Don’t cross the streams.
    8. If we set static IP addresses we want to make sure they are all in the same subnet.
    9. If you're on a touring rig: consistency. If it's a network you're setting up for an installation that's going to be left alone, I like to set a static IP address.
    10. I might have a 192.168.1.x address for all house-left amplifiers, a 192.168.2.x address for house-right amplifiers, and maybe a .3.x address for DSP.
    11. If we have devices that are looking for the DHCP server and it’s the last thing to get powered on, it may have already defaulted to a link-local address while it was waiting for the DHCP server to come along. Whatever device you are using for the DHCP server, that really needs to come on first.
    12. Connect everything from the start, primary and secondary, just don’t power up the secondary network.
    13. Talking about redundancy we want to look for single-point failures. If we have a network switch and it’s handling both primary and secondary networks, chances are it has one power supply. If that power supply goes down we lose both primary and secondary.
    14. If we are going to deploy a redundant network, primary stays on one set of switches and secondary stays on another set of switches.
    15. Fairly inexpensive hunting or golfing range finders can be used.
    16. [Scuba diving] is very technical but at the end of the day, you also get to experience art and connect with emotion.
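The static-IP scheme in quotes 8 through 10 can be sketched in a few lines. The 255.255.0.0 (/16) mask and the specific addresses below are assumptions for illustration, not settings from the episode; the idea is that the third octet labels a device group while every device still lives in one shared subnet, so nothing falls back to a link-local address waiting on DHCP.

```python
import ipaddress

# One shared subnet for the whole rig (255.255.0.0 mask), with the
# third octet used as a group label: .1 = house-left amps,
# .2 = house-right amps, .3 = DSP. All addresses are hypothetical.
network = ipaddress.ip_network("192.168.0.0/16")
groups = {1: "house-left amps", 2: "house-right amps", 3: "DSP"}

devices = {
    "amp-L1": "192.168.1.10",
    "amp-R1": "192.168.2.10",
    "dsp-1": "192.168.3.5",
}

for name, ip in devices.items():
    addr = ipaddress.ip_address(ip)
    # Every static address must fall inside the shared subnet,
    # or devices will not see each other without a router.
    assert addr in network, f"{name} ({ip}) is outside {network}"
    group = groups[int(ip.split(".")[2])]
    print(f"{name}: {ip} -> {group}")
```

A quick check like this catches the classic mistake of mixing a 255.255.255.0 mask with a multi-octet numbering plan, which would silently split the rig into unreachable islands.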



Copyright © 2021 Nathan Lively