Subscribe on iTunes, SoundCloud, Google Play or Stitcher.
Support Sound Design Live on Patreon.
In this episode of Sound Design Live, I talk with the Co-Founder and VP of Products at Sonarworks, Mārtiņš Popelis. We discuss the magic under the hood of Reference 4, why you can only please 17% of people, and the true test of mixing on a linear system.
I ask:
- What’s under the hood of Reference 4 Studio Edition? Can you walk us through how it works? I’m curious what the secret sauce is beyond measurement and corrective EQ.
- What are some of the biggest mistakes you see people making who are new to studio monitor setup and calibration?
- From Facebook:
- Alex Benn – I’d love to know how it generates the phase plot of the measurements. My Genelec 8030As have one full wrap when measured in Smaart, but it doesn’t appear in Reference 4.
- Kyle Marriott – Can you ask if they’re FIR based, and tackle phase or just magnitude? Also what’s the latency? Is it fixed or variable?
- Jon Burton – What is the normal latency on the system? I found it quite long on my MacBook Pro, so I had to keep turning it on and off to do Zoom calls, etc. Is this normal? Can it be reduced?
- Ben Davey – Do they have any plans to add profiles for In-ear monitors to the headphone side of things? Reference 4 is amazing on my closed and open back headphones, but would love to be able to balance out my Ultimate Ears as well.
- Brandon Crowley – I love Reference 4 and use it for both mixing and listening purposes. I’d like to know, what’s next for Reference 4? It seems like it’s already the complete package. Could studio monitor emulation be in its future?

Once you do the calibration and mix your song on this new reference sound, then the real test for bringing value to you is whether it is translating better.
Martins Popelis
Notes
- All music in this episode by Spare Parts.
- Organizational Culture and Leadership
- Martins on Instagram and FB.
- Quotes
- The objective of the reference product is to remove all of the sound coloration.
- The measurement microphone lives in its own reality.
- There is a huge DSP living in your head doing magic tricks with interpreting the world for you.
- The solution to sound quality, we feel, is up to the individual. It’s not one sound fits all.
- Invest your first $1k into getting a pair of speakers. Then invest your next $1-2k into room tuning. But from that point on, Sonarworks will be the thing that gets you to a better place than an additional $2-4k investment into room tuning.
- There is a genuine difference in what sound people like and there is a difference in our hearing.
- No matter how well a headphone company does with their R&D, they are going to hit the sweet spot for no more than 17% of the population.
Transcript
This transcript was automatically generated. Please let me know if you discover any errors.
I’m Nathan Lively, and today I’m joined by the co-founder and VP of Products at Sonarworks, Mārtiņš Popelis. Welcome to Sound Design Live.
Hey, Nathan. I’m excited to be here.
All right. So, Mārtiņš, I definitely want to talk to you about Reference 4 and kind of the amazing stuff you can do with it.
But before we do that, what is one of the first tracks you like to listen to after you get a system set up?
That’s actually a good question, because there is a specific track. My favorite track, and actually the favorite of many on our team, is Rage Against the Machine, “Killing in the Name.”
The important thing about a test track is its ability to very quickly uncover the reality of that particular sound system. So it’s actually very important for the track to be as full of musical information as possible at all times, across the whole frequency range. At the other extreme, if you take some track where there is a female voice singing, maybe without a musical instrument, or maybe with some small sound in the background, that kind of track is a very hard thing to use to understand what the sound system sounds like. This Rage Against the Machine track is kind of all over the place from the first second.
So you can very quickly assess what’s going on, very quickly A/B the impact of the calibration, and very quickly get a feel for the system, besides it being a great song, of course. But its technical ability to show you the frequency response of the system is what matters most when you talk about a test track.
Sure. So, Mārtiņš, how did you get your first job in audio? Like, what was your first paying gig?
Short answer is Sonarworks.
OK, I got into audio, into a paying job in audio, through co-founding Sonarworks. I actually don’t come from a very heavy musical background, and I personally see that as an asset. It’s been extremely exciting to learn what the world of audio looks like from the inside, from the creator’s side. I was excited about music and audio technologies before, but I think the fact that we came from outside the music industry was one of the reasons we could take a fresh look, get a fresh perspective on the problems, and maybe offer some new solutions to them.
But as I was thinking about this question, what came to memory was that before I got into paying jobs in audio, my first gig in audio was in high school. We had this thing called the Radio Center, which was the small place in the school that hosted the school’s audio gear and the intercom and so on. It was kind of run down and forgotten.
But me and a few friends were really excited about it for some reason. We were also sometimes setting up the audio for the school disco parties using the school’s equipment. We uncovered that place, and for a while we had built a computer setup so that when the break hit, it started playing music over the school intercom. So we had a few months of happy time when the break from the lesson was announced by Pink Floyd playing in the school’s corridors.
But then some teachers just didn’t like that their lesson was interrupted, so we got shut down. But that was my first involvement with a P.A. system, I guess.
So, Mārtiņš, looking back on your career so far, a lot of things have happened between your first time playing around with electronics and music in high school and co-founding Sonarworks, and you had to make some choices along the way. What do you think is one of the best decisions you made to get more of the work that you really love?
Leaving my previous job, I guess.
Sure. How did you know that it was the right time to do that? Can you sort of take us to that moment?
I think I’ve been in that situation twice in my life, and it was always sort of a scary choice at the time, but both times I think it was the right thing to do. I was right out of university and had joined an environmental consulting company, and I was working with them for like three years or something. At some point I just realized that, hey, in that particular setup I really don’t feel it’s right for me to work for a company.
I would really like to try building a company rather than working for one. And yeah, I took the risky bet of quitting that job and joining with my friends to start another, non-music-related company. That was, I think, also a very good choice that opened up the entrepreneurship drive in me, so I haven’t really worked a proper job since. I was with that company for something like seven years, I think.
And then at some point I just thought to myself, hey, I think I know the thing about this business, and if I’m still doing the same thing in five to ten years, then I’ll probably hate myself. So I understood that there really was no choice.
Sure. So, Mārtiņš, let’s get into talking about the software. There are a lot of videos out there already, and you’ve done several other interviews about what Reference 4 is and how it works. I saw that there are several videos on YouTube of people walking through, step by step, what it does and what the results are, so I think there’s plenty of material out there about that. But I haven’t seen a whole lot of people talk about what’s under the hood.
And since the Sound Design Live audience includes a lot of live sound engineers and a lot of people who also do measurement and output processing as part of their work, I think they would be interested to know how it works.
So I was wondering if you could walk us through how it works, and maybe what some of the secret sauce is beyond measurement and corrective EQ.
Sure. To begin, the objective of the Reference product is obviously to remove all of the coloration, right? If we talk about the speakers-in-the-room situation: having measured many professional studios and bedroom studios alike, I haven’t really seen a completely neutral studio from the first measurement. Bedroom studios obviously have a ton of problems, because they’re not the right rooms, the set of options is limited, and the budget is limited.
But if you talk about the big studios, they also have their own problems. The gear keeps changing, somebody is bringing in a new sofa or a new shelf, the console has a lot of reflections; there are different issues in the big studios as well. So every studio I have seen so far could benefit, more or less, from removing that unnecessary coloration, to give you a more accurate ability to hear what you have really mixed or produced or created musically in your track.
And that is the objective. But when you start to ask what that excess coloration really is, what that thing is that you’re trying to remove, you very quickly come to the realization that it really depends on the method of measurement. One very important aspect to realize is that a measurement microphone and a human being do not hear alike. The measurement microphone lives in its own reality.
In the measurement microphone’s reality, if you measure in one spot, then move it a couple of inches and measure again, then move it a couple more inches and measure a third time, you will get three completely different measurements, frequency-response-wise, because the frequency response of the room is uneven as seen by the measurement microphone. But for a human being, reality is interpreted by your brain, in the audio domain too, obviously. There is a huge DSP actually living in your head, doing magic tricks with interpreting the world for you.
The way your brain interprets audio, it’s not showing you the world of the measurement microphone. It’s more like taking the average of all the sounds around you, all the reflections coming from your body and from different areas of your room. Your brain is constantly listening in to an area around you and averaging it out for you, so you don’t hear very radical changes in frequency response as you move your head inch by inch.
Sonarworks’ measurement is really built around this insight; it’s one of the cornerstones. We do not try to do a single-point measurement. We really work with the average of an area, calculated intelligently to mimic the way a human being hears. So we’re working with the reality of a human being, not the reality of a measurement microphone. The other important aspect of the measurement software is that we take something like thirty-seven measurement points around the engineer’s listening spot, and our software is unique in that it can actually locate the microphone with each measurement.
So it actually draws itself a map of the relative spacing of these points in the room, and that gives you two important things. One is that it allows the software to understand these measurements in context and do the right calculation of the average profile. The other is that it allows for really easy and consistent user guidance of the method, so that if I do the measurements in this room, or you travel over here and do the measurements in this room, or you do the measurements in your own room, the method applied by Sonarworks Reference will be very consistent, because the software asks you to take measurements at very specific points around very specific places in your studio, and makes sure you actually do that, without asking you to read a very thick and boring manual.
It really enables this consistency of how the method is used and how the average is calculated, and that delivers the same consistent reference across different users and different locations. It enables not just some sort of random improvement, saying, hey, we thought these things might sound better in your room; it actually allows us to talk about driving everything towards the same reference sound standard across different places. So if you work on a track in your room and you send it over to your friend in another room in the city, or maybe even on another continent, you are actually hearing very much the same thing when you’re both listening on calibrated speakers.
So that’s the other important thing. The third important thing is that we apply the same calibration target for speakers as well as for headphones. Headphones we measure in our lab, so the user doesn’t have to measure them, but for headphones too we strive to give you, as the listener, the same frequency response as that of the calibrated speakers. When we were still allowed to travel, we were often doing studio demos where we would measure a set of speakers, then also set up calibrated headphones, and let the user compare and hear that, frequency-response-wise, it really sounds very similar. That enables a lot of portability: the ability to work while traveling, or late at night when you can’t use your speakers, or in places where you don’t have access to speakers at all.
So it’s really this portability between headphone use cases and speaker use cases. That’s also one of the unique things behind Sonarworks.
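[Editor’s note: Sonarworks hasn’t published the math behind its averaging, but the general shape of a multi-point calibration is easy to sketch. In the Python below, the function names, the naive unweighted power average, and the boost limit are all assumptions standing in for the proprietary spatial logic.]

```python
import numpy as np

def average_magnitude_db(measurements_db):
    """Average several measured magnitude responses (in dB) taken at
    different mic positions around the listening spot. Averaging in the
    power domain smooths out the position-to-position swings a single
    mic sees. (Sonarworks' real spatial weighting is proprietary; this
    is the naive version.)"""
    mags = np.asarray(measurements_db, dtype=float)  # shape (points, bins)
    power = 10.0 ** (mags / 10.0)                    # dB -> power
    return 10.0 * np.log10(power.mean(axis=0))       # average -> dB

def correction_db(avg_db, max_boost_db=6.0):
    """Corrective EQ toward a flat target, limiting boosts so the
    system doesn't try to fill deep nulls with raw amplifier power."""
    return np.minimum(-avg_db, max_boost_db)

# e.g. 37 positions x 1024 frequency bins of measured data:
# profile = correction_db(average_magnitude_db(measured))
```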
That’s interesting, this idea of linearity and portability.
I did an interview with Alex Oana from Audio Test Kitchen, and he pointed out to me that one of the reasons lots of studios at one point started installing SSL consoles was so that you could record at one studio, go to another one, and have it pretty much sound the same. Now, I’m not sure how much that SSL console imparts its own sound. It seems like, you know, the room and the position of the speakers would also have a big effect.
But still, I understand people’s pursuit of this idea: let’s figure out how we can create some consistency.
Mm hmm. Yeah, and I think especially nowadays, when everything is moving so fast towards a reality where music is more and more produced over distance and in different types of home studios, this ability to work on the same platform of reference sound is more important than ever. Because, I don’t know, 20 years ago most of the hit songs were still mixed and mastered in high-end studios under the big labels or whatever.
That reality is still real for a few musicians, but for the majority it’s not the reality anymore. And this mobile, portable, global world really asks for consistency in sound, I think.
So from your description of the way Reference 4 works, one of the things I’m understanding is that if I were to try to replicate the same thing at home in my own studio without Reference 4, I could take thirty-seven measurements around my room, average them together, and then create some sort of complementary EQ in a manual way, or use some kind of FIR filter creator with some automation. Anyway, I could do a lot of that manually, but I’m not going to.
And I don’t know how to localize that microphone in the space. How does Reference 4 use that location information in its final result?
So there are two things it uses the localization information for. One is this ability to see the measurement points in context, in this relative map between each other. We talk about an average, but it’s not that simple, in the sense that how you combine the spatial information to actually calculate the profile is one of the secret sauces. The other thing is this consistency of the user guidance.
I mean, sure, there are tools like REW or Smaart that, if you want to geek out about how you measure your room, let you measure all sorts of parameters and do your own thing. There are people who do that, and I don’t hold anything against them, obviously; that’s a perfectly cool thing to do. But then you have to realize that you are in full control of, and taking full responsibility for, the things you are measuring and the things you’re tuning.
One of the analogies I like for our software comes from the visual design world: there is Photoshop and there is Instagram. Photoshop has gazillions of features, you have all the creative freedom you want, there is this thick Photoshop bible, and you can probably spend a good five to ten years actually mastering the tool. Once you do that, you can create wonders.
There is no limit to how you can express your imagination through visual design. But there’s also Instagram, which is focused on people who don’t want to invest five years in mastering that art. They just want to push a button, get a pretty picture, be done with it, and move on with their lives. As I see the music creator world, there are some people who like to geek out about room acoustics, but there are fewer and fewer of those people.
Most of the people I know among our users are people who are very passionate about music, and they want to spend as much time as possible actually thinking about music and creating music. The fact that the music from their studio doesn’t translate that well to the outside world is just a problem they would like to get rid of as quickly and as seamlessly as possible. So our philosophy behind building the product is to give them that ability.
We’re always thinking: hey, how can we ask the user fewer questions? How can we make it even more seamless? How can we smooth out all the workflows? The ideal user experience, from our perspective, would be: you install Sonarworks, you press one button, the system says you’re calibrated, your sound is good, and you can go back to your creativity.
Unfortunately, we have to ask the user to do a bit more than that. But the fewer questions we can ask, and the quicker we can let the user return to creating music, the more successful we think we are. So that’s, I think, a very important aspect to take into account.
Totally. I really appreciate how you guys have simplified the process and also tried to remove the opportunity for me to make errors as the end user. Because I can see a scenario where, if I really wanted to geek out about the measurements myself, I’d get out a Smaart rig and take 37 measurements, but then I’d also need to figure out a way to average them together in a weighted manner, where I’d say, OK, measurements one through ten, I want those to be worth five percent; measurements eleven through twenty, I want those to be worth seven percent; and then, you know, you build up this complex average.
So you guys have thought all that stuff through and probably saved me a couple of days of trying to figure it all out, and I’m able to do it in, you know, ten minutes.
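[Editor’s note: for what it’s worth, the weighted average Nathan describes is a small change to a naive spatial average. The weights and names below are invented for illustration, not Sonarworks’ actual values.]

```python
import numpy as np

def weighted_average_db(measurements_db, weights):
    """Weighted spatial average of magnitude responses (dB), giving
    positions near the mix position more influence. The weights here
    are illustrative only; the real weighting scheme is the secret
    sauce."""
    mags = np.asarray(measurements_db, dtype=float)   # (points, bins)
    w = np.asarray(weights, dtype=float)
    w = (w / w.sum())[:, None]                        # normalize, broadcast
    power = 10.0 ** (mags / 10.0)
    return 10.0 * np.log10((w * power).sum(axis=0))

# e.g. Nathan's scheme: measurements 1-10 worth 5% each,
# 11-20 worth 7% each, and so on:
# weights = [0.05] * 10 + [0.07] * 10 + ...
```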
I would say more. I mean, ultimately, the real test of what we do is: once you really do the calibration and then mix your song on this new reference sound, the real test of whether we’re bringing value to you is, hey, is it translating better now? Are you getting a better result sooner? Are you able to deliver a better-sounding song than you could before? That’s the real test. And the trick was actually finding the right method of measurement and of calculating the profile. There are more psychoacoustic and acoustical things that go into the equation, but it’s really about arriving at the curve where you can say, OK, this actually works.
This actually helps things translate. And with Sonarworks, we really do get users every day; somebody writes to our support saying, hey, thank you, before this I was doing ten cycles back and forth to my car to check my mixes, and now, since installing Sonarworks, I just did one cycle to the car and I liked everything about it. These real user stories keep coming in, so we’re quite confident that the sound we deliver actually helps people get their translation faster.
There was one engineer we met at the NAMM show last year, and he was like, hey guys, guess what? I installed Reference, and the first song I mixed on it, I got a Grammy. So that’s funny. So it works.
So, Mārtiņš, I know you mentioned that you have measured a lot of systems in studios, and you’ve also seen and done a lot of support for people who are using Reference 4.
So I wondered if you could aggregate some of that learning and share with us some of the biggest mistakes you see people making who are maybe new to studio monitor setup and calibration. What are some of the things people are doing wrong in terms of placement and aim, and maybe other key things?
I would say, if you talk specifically about studio systems and the installation of the physical things, the biggest issue is not the way people set their systems up. It’s actually how much this historical attachment to hardware keeps people in a frame of mind that says, hey, it’s all about the speaker, or it’s all about the headphone, or it’s all about the room fine-tuning.
The way I find most people think about it is basically: first I should get the best speakers I can, and if I have more money, I should invest more into speakers. Then at some point they say, OK, I should probably invest in room tuning. For speakers, there are very affordable ones, and there are the thousand-dollars-a-pair to two-thousand-dollars-a-pair speakers.
And then you quickly get into the five-to-ten-thousand-dollar range, right? Usually people think, hey, if I can’t get into the ten-thousand-dollars-a-pair range, then that’s my limitation. And if you talk about room tuning, you can do some small jobs yourself, but then you also quickly get into, I mean, an unlimited amount of dollars you can invest in room tuning.
And I think people really believe that that’s the goal and that’s the limitation. Whereas really, in this day and age, with where software tuning technology like ours is, I mean, by all means, we are not saying we are a replacement for room tuning. You have to get a decent pair of speakers and a decent amount of room tuning. You can’t go into a glass cubicle and expect everything to be solved by calibrating your speakers with Sonarworks.
It wouldn’t work. You have to invest in speakers and you have to invest in room tuning. But the point where the return on investment from Sonarworks becomes your best way to improve your studio sound is closer than people think. I would say: invest your first thousand dollars into getting a pair of speakers, then invest your next thousand or two thousand dollars into room tuning. But from that point on, Sonarworks will really be the thing that gets you to a much, much better place than an additional two-to-four-thousand-dollar investment in room tuning.
We had a real story like this. We visited a friend in L.A. who works from his home studio, producing for a band. Somebody introduced us, and we went there to show what Sonarworks can do. After we set it up and calibrated the studio, he was like, well, guys, you know, I had these four-thousand-dollar speakers and I was thinking that that’s my limit, and that’s why I can’t get my mixes to translate as well as I could.
So I was thinking of selling some gear, selling those speakers, and getting myself ten-thousand-dollar speakers. But now apparently I don’t have to do that, because it’s way better than I thought it would be even with those more expensive speakers. That, I think, is the biggest thing I see. Only very rarely do you see massive errors in placement, like people placing their speakers asymmetrically; most people already know that the equilateral triangle is the best placement and that you have to place the speakers at ear level.
So I haven’t seen those problems too often. But there is this thinking to correct: with a few hundred dollars’ worth of headphones and the calibration, you’re probably better off than with a two-thousand-dollar pair of headphones.
I like what you’re saying about not putting the cart before the horse. But I can’t just go get some speakers with blown drivers that I found in the trash in the alley, install Sonarworks, and expect it to sound amazing, right?
Sure, sure, sure.
I mean, you have to get a decent pair of speakers, but many more people are already there than they think they are.
So, Mārtiņš, let’s look at some questions people sent me from Facebook. Alex Benn says: I’d love to know how it generates the phase plot of the measurements. My Genelec 8030As have one full wrap when measured in Smaart, but it doesn’t appear in Reference 4.
All right, so this is a deep technical question. What he’s asking about is: when you look at the plug-in or the systemwide app of Sonarworks, you can select the different curves it shows. We’re trying to be really transparent about what the plug-in is doing to your audio, so you can turn on the phase response display, which I think is what he’s referring to. But the phase response that the plug-in shows you is only the phase response of the Sonarworks effect.
When we measure the speakers, we do not measure the phase response of the system; we only measure the frequency response. So we don’t attempt to measure the phase response of his pair of Genelecs. What we’re saying is that, depending on the filter mode you select inside the plug-in, whether it’s zero latency or phase-linear, the plug-in may introduce some change to the phase response of the audio signal coming through; in the zero latency mode it does.
So what this graph shows is the phase distortion, if you will, that the plug-in is introducing to your audio, and that’s then layered on top of whatever the phase response of your speakers is. That’s the answer to the question, I guess.
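[Editor’s note: that plot is just the transfer function of the correction filter itself, and you can reproduce one for any FIR filter in a few lines. The 1 ms pure delay below is a made-up example filter, not Sonarworks’ actual correction.]

```python
import numpy as np
from scipy.signal import freqz

def filter_phase(taps, fs=48000):
    """Phase response of the filter alone -- what a plug-in's phase
    plot shows. It says nothing about the speaker's own phase (e.g. a
    Genelec's full wrap seen in Smaart), which is layered on top of
    this acoustically."""
    w, h = freqz(taps, worN=4096, fs=fs)
    return w, np.unwrap(np.angle(h))   # frequencies in Hz, phase in rad

# Example: a pure 48-sample (1 ms at 48 kHz) delay -- linear phase.
taps = np.zeros(96)
taps[48] = 1.0
freqs, phase = filter_phase(taps)
```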
Cool. So Kyle Marriott wants to know: can you ask if they’re FIR-based, and tackle phase or just magnitude? We just covered part of that, so it’s just magnitude, but the plug-in does show the resulting phase change from that magnitude change.
So, yeah, is the way you’re applying the filters FIR-based?
Yes. When it’s working in phase-linear mode, it’s fully FIR. When it’s working in the zero latency mode, technically it’s an IIR-style response that’s implemented through the FIR; the zero latency mode is the minimum-phase filter.
And then he asks about the latency: is it fixed or variable? It sounds like you have two settings: it’s either going to be linear phase or minimum phase.
Yeah, we actually have three, depending on the use case and really the preference of the user. The trade-off is: if you go for the phase-linear mode of the filter, it introduces latency to the system. That’s inevitable; it’s just the way the math of a phase-linear filter works. In some cases that’s OK, and people say, hey, I want this phase linearity from the plug-in.
But if you’re tracking, or working on some other latency-sensitive task, then at the other extreme we have the zero latency mode, which is zero latency in the plug-in but costs you some change in the phase response. As far as we’ve tested, it’s not really audible, but as I said, we’re being transparent about what it does, so you can check the phase response curve in the plug-in and be your own judge about whether it’s OK in your use case.
So in the zero latency mode, the plug-in works at zero latency at the cost of some phase change. And then there is the optimal mode, which introduces a little bit of latency and a little bit of phase distortion, trying to find the middle ground between the other two choices. So there are these three modes.
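[Editor’s note: as a generic DSP sketch of that trade-off, not Sonarworks’ implementation. The target curve, tap count, and sample rate below are invented. A linear-phase FIR realizes the correction symmetrically but delays everything by half its length; SciPy can derive a minimum-phase version with the same magnitude and essentially no delay.]

```python
import numpy as np
from scipy.signal import firwin2, minimum_phase

fs = 48000
freq = [0, 100, 1000, 10000, fs / 2]           # Hz, made-up correction curve
gain_db = np.array([0, 3, -2, 1, 0])
gain = 10.0 ** (gain_db / 20.0)

# Linear-phase FIR: symmetric impulse response, flat group delay of
# (N - 1) / 2 samples -- here about 21 ms, a problem when tracking.
lin = firwin2(2047, freq, gain, fs=fs)
lin_latency_ms = (len(lin) - 1) / 2 / fs * 1000

# Minimum-phase counterpart: same magnitude response, near-zero
# latency, but the phase is no longer linear -- the trade-off above.
minph = minimum_phase(lin, method='homomorphic')
```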
I guess one of the things behind the question might also be that we have two ways you can apply Sonarworks in your system. One is the plug-in. If you have a serious production setup, that’s the most robust way to introduce Sonarworks into your system for your actual mixing job. But we also have the systemwide app, which installs a virtual sound card in your computer, processes all the audio, and then releases it to the real output. That enables you to calibrate all the audio coming from your machine, like YouTube, Spotify, or whatever else you might want to use.
Some users also find it more convenient for their setup to route everything through the systemwide app. But because of the virtual driver sitting in your machine, that costs you additional latency. So the plug-in can run at true zero latency, but the systemwide app, even in the zero latency mode, still costs you the latency of the virtual sound card’s buffer.
And does that have to do with the sample rate? Can that be changed?
The answer is, it has to do with how the operating system handles sound card devices and what its requirements are for buffering audio going to the audio device. We’re looking for ways to optimize it by thinking about new ways to write these virtual drivers, but at the moment it’s limited by the operating system.
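[Editor’s note: the arithmetic behind that virtual-driver cost is simple; the buffer size below is illustrative, not what the actual driver uses.]

```python
def buffer_latency_ms(frames: int, fs: int = 48000) -> float:
    """Latency added by one audio buffer of `frames` samples."""
    return frames / fs * 1000.0

# A hypothetical 512-frame buffer at 48 kHz adds about 10.7 ms every
# time audio is (re)buffered -- which a system-wide virtual device does
# in addition to the hardware device's own buffer.
print(buffer_latency_ms(512))   # ~10.67
```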
OK, so Jon Burton had a question about that systemwide implementation. So, Jon, it sounds like they’re working on it. Then Ben Davey: do they have any plans to add profiles for in-ear monitors on the headphone side of things? Reference 4 is amazing on my closed- and open-back headphones, but I would love to be able to balance out my Ultimate Ears as well.
Generally, we’re constantly working on adding new headphones to the database. Somewhere on the support page we have a place where people can vote and say, hey, I would like you to add this or that headphone, and we aggregate all of that to decide which model we proceed with. Generally, we can calibrate both over-ears and in-ears, so it’s just a matter of where the Ultimate Ears currently are in our pipeline.
So I’ll check it out and add another vote to get Ultimate Ears in the pipeline. And Ben, it sounds like you can go find that place on the support page and vote yourself. Brandon Crowley says: I love Reference 4 and use it for both mixing and listening purposes. I’d like to know, what’s next for Reference 4? It seems like it’s already the complete package. Could studio monitor emulation be in its future?
The short answer is yes. As we speak, actually today I had a couple of meetings; we are currently really intensively working on planning Reference 5.
I hope to be able to share more news in the coming months about exactly what it’s going to be, but this emulation is one of the features we’re now intensively thinking about. So the answer is: yes, it could. I don’t want to make any hard promises yet, but that’s one of the things on the table.
Mārtiņš, you’ve made this great product, a lot of people love it, and it seems like everything is just going great for you. But I’m sure you’ve had some challenges and painful moments in your career, so I wondered if you would tell us about maybe one of the biggest and most painful mistakes you’ve made on the job and how you recovered.
All right. There is actually a very interesting story behind this one. Reference is where we started out as a company: we built this tool for calibrating speakers and headphones for music creators. But since day one, when we started the company, we’ve always dreamed bigger. We’ve always asked ourselves: hey, what is the ultimate answer to the question of perfect sound? What is the ultimate sound that everybody is striving for?
Early on, we realized that there is the creator world, but obviously there’s also the listener world. Once you create the song in the studio, Reference 4 helps you get it to translate better. But what does that translation really mean? It means the song maybe sounds OK on everything, but it still sounds different, and maybe not perfect, on anything, because all the consumer devices out there still have various discrepancies in their frequency responses and the way they sound.
So once you create your piece of art and let it out into the world for people to listen to... I’ve met engineers who say, hey, I can’t really listen to my music outside my studio, because my ears bleed at how wrong it sounds. So even with Reference 4, you still face the problem of, hey, how do people out in the world really hear my music? And early on we dreamed: hey, if we have this software technology that can change and control the way speakers and headphones sound, why don’t we go all the way and solve the translation problem, not only across music creators but also across the end listeners?
Then everybody would be listening to the same reference, everybody would be hearing the same thing, everybody would be perfectly happy, and we would have killed the translation problem, which I think is very much possible technologically in this day and age. Based on that dream, we created a product called True-Fi; maybe you know it, this was three to four years ago. This was the thing we brought to the consumer world, saying, hey, now you can actually listen to the sound the artist heard in the studio.
This is how you should hear it. And we thought it was going to take off and all go great. Long story short, and coming to perhaps the biggest mistake: it ended up with the realization that not enough people in the consumer world really liked that sound. Some people were like, yeah, I always wanted to hear things the way people hear them in the studio, but they turned out to be a minority of the people who listened to it.
That was a low moment for us, because we had invested quite a bit of effort and resources into building that product, and it didn’t take off as well as we had hoped. From the mistakes perspective, I think what happened is we thought that perhaps we could dictate to consumers, teach them what good sound is, what good music is, what good taste is. We thought we could come from the top down and say, hey, listeners, this is how your sound should be.
And it didn’t really turn out that way. That was probably the lesson learned: you can’t really dictate too many things to the market. You can only start from the place where people are, solve the problems that they think they have, and work from there. But then, long story short, how we recovered from it: we’ve always been a data-driven, rational-thinking company.
So we said, hey, this is interesting, let’s double down on that problem and figure out what’s really going on. We launched probably the biggest ever research into consumer sound preference, to discover: OK, so you don’t like this; tell us, what do you really like? We had close to fifty thousand users participating in the preference discovery test, and the long story short is that everybody likes different sounds.
There is a genuine difference in what people like: some people like a lot of bass, some people genuinely hate it; some people like a lot of treble, some people genuinely hate it. And there is also a difference in how we hear physically, and our hearing changes with age. All of these things combined, I think, are the underlying reality of these religious wars about, hey, are Beats a good-sounding headphone, or something nobody should ever listen to? Is Sennheiser the right sound, or,
I don’t know, is Grado the right sound? I think that, beneath all of that, if you unpack it and look at it from a data perspective, there really are these genuine differences in what sound people like, and there is this difference in our hearing. And from that insight we have now built what you can see on our web page, the latest product we’ve released, called SoundID. It’s this idea that you can find your personally perfect sound through discovering your preference and adjusting the sound to your hearing.
We’re still using the studio reference sound as the starting point; we think that’s the right place to start, the way the artist wanted the music to sound. But on top of that, we can discover the user’s preference for sound and adjust it for his or her hearing, so that when they listen to the artist’s music, they’re hearing the interpretation of it, if you will, that most suits their taste and has the best chance of actually emotionally engaging them with the artist.
For comparison, from these research numbers we now see that a single fixed headphone sound, as it comes out of the box, is going to be the best possible sound for no more than 17 percent of the population. So no matter how good a job a headphone company does in their R&D, they’re going to hit the sweet spot for no more than 17 percent of the population, and that’s if they do the best job possible; if they err, they’re going to please a smaller percentage.
Whereas if you personalize the sound with, say, SoundID, we currently see that over 80 percent of people say, hey, I like this sound better than whatever the original sound of my headphones was. So it’s a huge improvement, and as we evolve the technology, it’s actually getting even better. It feels like we’re really onto something. It’s still fresh; we just announced SoundID this year, and we’re really just getting started with it.
So fingers crossed. But in a way, I think we feel we have found the ultimate answer to the question of sound quality, and it is that the answer is individual. It’s not one-size-fits-all; it’s individual for each of us, and the definition of perfect is really a personal matter. We’re working to get that into the consumer reality. And I think through that we can also solve the translation problem for music creators, because then you can create on the reference and be sure that whoever listens to it will not be hearing it through whatever random EQ their headphones happen to have, but through an EQ that’s actually intelligently matched to that person’s preferences.
Wow. Well, I find this idea of personalized listening really interesting. As you were talking, I was reminded of two things. The first: we just wrapped up Live Sound Summit 2020, and one of the presenters was Lorien Bohanon, who mixes in-ear monitors for Michael Bolton and Lizzo, among others. She gave a presentation about how people’s hearing changes not only with age but also with gender. So it’s interesting that she is a younger woman mixing in-ear monitors for this older man.
She was talking about how she can barely stand to listen to the mix for Michael Bolton, because for her it’s way too loud and way too bright, but it makes total sense for him as an aging man who has lost a lot of his high end, and his hearing in general. The other thing it makes me think of is hearing aids. I have a friend who’s an audiologist, and I interviewed her on this podcast a few years ago.
I also worked on a conference for a hearing aid manufacturer about a year ago. What was interesting to me from that conference is that the sales people have commissioned research showing that even people who don’t necessarily have a lot of hearing loss still enjoy an improved experience of their lives with a hearing aid with some filters. The way they tune those things is they measure your hearing first, and then, similar to everything we’ve been talking about today, they apply some corrective EQ kind of processing.
That’s one of the things the hearing aid does. So even people who have just a tiny bit of hearing loss, just from age, not from anything intense, find that they can understand people better, enjoy music more, and things like that. So, yeah, it seems like there’s a big opportunity for this kind of personalized listening experience.
Yes.
We killed the topic of personalized listening.
OK, so I’ve got a few short questions here to wrap up. Mārtiņš, what’s one book that’s been immensely helpful to you?
Yeah, the one book that really comes to mind is Organizational Culture and Leadership by Edgar Schein. I read it, I think, at some point when our team was transitioning from being this six-person little family to growing bigger, and we were more and more getting into different types of situations that didn’t fit into my model of reality at the time.
On one end, the book’s message is really simple: there are different cultures, they’re ultimately based on different beliefs, and that’s why there are different types of miscommunication between those cultures. But I really like how the book goes quite deep and wide in unpacking that subject, and puts it together in a very practical way, in terms of what it means for us in everyday life.
I think at that point it was an eye-opener that helped me better understand how different people grow up, across occupations, nationalities, and backgrounds, where all of that comes from, and how to make life better for different people.
So, Mārtiņš, where’s the best place for people to follow your work?
I have to say that I’m not too active on social media, but I have an Instagram account and a Facebook account, and I post different things there from time to time. So I would say Instagram and Facebook are probably the places; you can find me by my name, and there are certainly not too many Mārtiņš Popelises out there.
Well, Mārtiņš Popelis, thank you so much for joining me on Sound Design Live.
Well, thank you so much for having me.
It’s been fun.