This transcript was automatically generated. Please let me know if you discover any errors.
JF100e.
And I'm just going to make sure that I'm not clipping any of the inputs, and make it a little bit louder.
I think everything is set correctly.
Then we can try noise reduction, which is similar to some of the windowing options that you have in REW, for those of you who use REW.
Okay, so I'm using a lot of noise reduction to see if we can kind of get through some of the noise here, because take a look at this phase graph compared to this one, right? So here's all the noise that we had before. Here's something interesting that you can do in REW and CrossLite and other audio analyzers: look at excess group delay. Let's scale this up a little. Let's go up past the noise. Let's go to 40 Hz.
These places where you see these big spikes in blue, that's another indicator. That's an indicator that your minimum phase doesn't match your measured phase, and therefore, long story short, this could be a lie. So where you see these peaks, that's also where we have a fake wraparound. So we see a lot of peaks here.
And so that’s just a helpful indicator that we should probably ignore that data. We should not try to EQ those dips. There are often dips there, right? We should not try to EQ those dips and we should ignore that phase data. I’m going to hide that.
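The indicator just described, excess group delay exposing where the measured phase departs from minimum phase, can be sketched in code. This is a minimal illustration of the idea, not REW's or CrossLite's actual implementation; the minimum phase is reconstructed from the magnitude via the folded-cepstrum method, and the function and variable names here are my own:

```python
import numpy as np

def excess_group_delay(freqs_hz, tf):
    """Excess group delay of a measured transfer function.

    freqs_hz: uniformly spaced grid from 0 Hz to Nyquist, inclusive.
    tf:       complex transfer function on that grid.
    Returns seconds of group delay beyond the minimum-phase system
    implied by the magnitude response.
    """
    n = len(tf)
    log_mag = np.log(np.maximum(np.abs(tf), 1e-12))
    # Mirror the half-spectrum so the cepstrum comes out real.
    full = np.concatenate([log_mag, log_mag[-2:0:-1]])
    cep = np.fft.ifft(full).real
    # Fold the cepstrum onto its causal part: the FFT of the folded
    # cepstrum is the minimum-phase log-spectrum, whose imaginary
    # part is the minimum phase itself.
    w = np.zeros(len(cep))
    w[0] = 1.0
    w[1:len(cep) // 2] = 2.0
    w[len(cep) // 2] = 1.0
    min_phase = np.fft.fft(cep * w).imag[:n]
    excess_phase = np.unwrap(np.angle(tf)) - min_phase
    # Group delay is the negative slope of phase vs angular frequency.
    return -np.gradient(excess_phase, 2 * np.pi * freqs_hz)
```

A pure delay has flat magnitude, so its minimum phase is zero and its excess group delay is just the delay itself; noise and room reflections show up as the big spikes discussed above.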
So I'm going to import that guy from SubAligner again.
Oh, wait, this is the one I imported from Smaart. Sorry about that. But look how similar they look.
What I wanted is the one from SubAligner.
Okay, this is going to take a second because I don’t know, it’s doing mathematical stuff to convert it. Another thing that’s helpful about this is not only just kind of seeing through the noise and looking at the phase response and anything else, but we can also see how the room is contributing to this. So I did these measurements very close in, right?
And so if we sort of just make the high end match a little bit up here, then we can see oh, this is interesting. Normally the speaker’s response would be down like this, but because of the room interaction, it’s going up like this. So maybe this is a potential for EQ. Maybe I should put a low shelf in here and bring this down a little bit. But at least it gives us some insight into what’s happening in between the speaker and the microphone.
So is that green trace the one that you exported from TraceBook or from SubAligner? SubAligner, okay. Yeah. After you fill in the delay and the polarity inversion. Right?
Well, when you export from SubAligner, it gives you the aligned traces. So it already has the processing applied to it, right? So it's a target for you to hit. So when we go back over here and we use the one with noise reduction and we go to phase, they aren't going to be aligned, because this guy is way up here.
So let’s go auto zoom high. This guy’s way up here. So we need to get him delayed back. So let’s go. We can automatically align these guys.
Okay, so pretty different, right? If you guys remember, I should have left Smaart open, but when we were in Smaart, we were looking at this kind of information, right? And so having some windowing, having some control over noise reduction like we do in REW and CrossLite and others can be helpful. It doesn't always work in my experience, but here we see it's a pretty significant help. Now, this is really the one last place, here at 126 Hz, where we really can't read it.
Okay, so we've got the main and sub setup. I'm sorry. We've got the main, and the information from SubAligner. Let's get our sub in here.
So this is the 600E, something like that. And go.
Now, the noise reduction procedure will also be affected by reflections. So it might not work, but we'll give it a shot.
Looks like it found the peak. Okay.
And again, I’m going to make this really extreme so we can try to really remove a lot of that noise.
So as you can see, it removed quite a bit of it. But look at all this. I think this is all still just noise. And I bet if we look at the excess group delay, we will see that.
And we've got to kind of move up past this noise. So let's go up to 40, 50. Okay. Yeah. Look at this big peak.
So we can be pretty sure that this line going through here is some noise. So let's put a marker there just for fun. And all these peaks here, I think, also indicate that this is kind of unreliable, right? So we'll reset this, we'll take this away, hidden. We will import our data from SubAligner, add channel below, and it's going to import that, and hopefully we can do just what we did with the other guy, which is it will help provide us with sort of a target.
Okay, so we need to add the same delay, so we can go to multi-channel edit. So we delayed this guy from SubAligner by this amount. Can we copy it? I don't know, probably not. But we'll just type the same number in here, 25.0625 milliseconds.
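For anyone curious, typing the same delay in by hand is just a phase rotation of the imported trace. A small sketch (my own helper, not the analyzer's internals):

```python
import numpy as np

def apply_delay(freqs_hz, tf, delay_ms):
    """Delay a measured trace by delay_ms: a delay of t seconds
    multiplies the spectrum by exp(-j * 2 * pi * f * t)."""
    t = delay_ms / 1000.0
    return tf * np.exp(-2j * np.pi * freqs_hz * t)

# e.g. matching the 25.0625 ms typed into the multi-channel edit:
# delayed_sub = apply_delay(freqs, sub_trace, 25.0625)
```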
And if we're lucky, these two should be... oh, and there's a polarity inversion. Right? Okay. So those are on top of each other now.
Yeah. So check this out.
This is the marker showing where we had a lot of excess group delay. So we know 100% that this is a lie here, right? So we do not want to try to align to this guy. Don’t align to a room reflection. Right.
But we can see that some of this data here at least is in agreement, and so we should end up seeing some of these things on top of each other.
So I'm going to zoom in here a little bit to the low end. Where is our crossover region? Well, I could go back here, but I'm just going to switch over to this plot real quick. So where these lines turn bold, that is where you are within 10 dB, and that is what I generally consider the crossover region. It's the most critical area, right?
So that would start at 87 and end at 134. So if we go here and put our markers at 87 and 134, then it's this little area here where we really need to have some critical data. And notice that for the data we measured in the room, it's tough to see. So let's go back to, sorry, multi-channel edit.
So through that area, we’d really like to see these guys be within 60 degrees like they are here, but let’s see what we actually got in the room.
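Those two rules of thumb, the crossover region as the band where the traces are within 10 dB of each other, and wanting the phase difference inside 60 degrees through it, can be sketched as code. These are hypothetical helpers of my own, assuming magnitudes in dB and phases in degrees:

```python
import numpy as np

def crossover_region(freqs_hz, main_db, sub_db, window_db=10.0):
    """Band where main and sub magnitudes are within window_db of
    each other: the 'bold' region on the plot. Returns (lo, hi) in
    Hz, or None if the traces never come that close."""
    mask = np.abs(main_db - sub_db) <= window_db
    if not mask.any():
        return None
    f = freqs_hz[mask]
    return float(f.min()), float(f.max())

def phase_agrees(freqs_hz, main_deg, sub_deg, lo, hi, limit_deg=60.0):
    """True if the wrapped phase difference stays within limit_deg
    everywhere between lo and hi (Hz)."""
    band = (freqs_hz >= lo) & (freqs_hz <= hi)
    diff = (main_deg[band] - sub_deg[band] + 180.0) % 360.0 - 180.0
    return bool(np.all(np.abs(diff) <= limit_deg))
```

The modulo trick wraps the difference into -180..180 degrees so a 350-degree trace and a 30-degree trace correctly read as 40 degrees apart, not 320.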
Did I lose my delay somehow?
Did anybody see what I did wrong? All of a sudden, our stuff is not in the right place.
What about the gain?
Looks like the gains all match.
Close this for a second. I'll just go through here one by one. So mute all. Here's our main reference. Okay, here we go. Yeah.
So I can adjust this a little bit. Try to get all the data on the screen here. Okay. So through this area, it’s really hard to tell, right? And I think that’s because of this coupling with the room.
So I think this is one of those times where we're not going to be able to prove this with our audio analyzer. And so we can look at this data that we have from SubAligner. Inspect channels, unmute everything.
Yeah, again, something is messed up here. Anyway, there's some extra blue line in there. Sorry about that. Anyway, the point is that even with all these tools like noise reduction and windowing, we might not be able to really get actionable data in this room. And so it's really nice to have a second tool, which is SubAligner, to corroborate. Or, if I wanted to and I had time, since it's so quick to use SubAligner, I could just move to a different spot.
So I could walk over here, take my distance measurements again, and then just listen here, or walk over there, or walk over there. So it makes it really fast to try different alignment positions to see what you can hear. So that’s what I was going to share today. Any questions about what we looked at?
First of all, many thanks. You’re welcome.
SubAligner is an actively growing project. I still feel like I haven't quite made it into something that's super valuable. It does solve a problem, but as you guys are using it and trying it out, please let me know what ideas you have for features that could be useful. One person recently was telling me that they want an alternative solution here. So imagine that you don't want to delay the main, but instead you want to process the sub, and maybe put an all-pass filter on the sub so that you can effectively delay it.
That's not the same as a delay, but let's say that you don't want to add any more delay to your main system. I could maybe provide you with a solution where you use all-pass filters on the sub, but not everybody has access to all-pass filters, so I don't know if that's the best way.
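To illustrate the all-pass idea: a second-order all-pass has flat magnitude but rotates phase around its center frequency, which adds group delay there without touching the frequency response. A sketch using the RBJ cookbook coefficients (the center frequency, Q, and sample rate below are just example values, not a recommended alignment):

```python
import numpy as np
from scipy.signal import freqz

def allpass_biquad(f0_hz, q, fs_hz):
    """Second-order all-pass (RBJ cookbook): unity magnitude at all
    frequencies, with the phase rotating 360 degrees around f0."""
    w0 = 2.0 * np.pi * f0_hz / fs_hz
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1.0 - alpha, -2.0 * np.cos(w0), 1.0 + alpha])
    a = np.array([1.0 + alpha, -2.0 * np.cos(w0), 1.0 - alpha])
    return b / a[0], a / a[0]

# Example: nudge the sub's phase around a 100 Hz crossover.
b, a = allpass_biquad(100.0, 0.7, 48000.0)
w, h = freqz(b, a, worN=1024, fs=48000.0)
# abs(h) stays at 1.0 everywhere; only the phase, and hence the
# group delay near f0, changes.
```

Because the numerator is the reversed denominator, the magnitude is exactly flat; only phase (delay) is affected, which is what makes it usable on the sub without changing its level.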
And then, as I told you, I just made some feature improvements here that include more information about your location. Some pictures will show up here if you put pictures or text into the show log, and then you can search through these. So there's an opportunity here: I could even put a map on this page so you could see everywhere you've been in the world. And if you upload it with pictures, then you really have a show log from the past years. And you can always, as I mentioned, search through this and see alignments that you measured in the past.
There is a question: is it possible to add speakers which are not in SubAligner into the database, so we can use it? Yes. So if you go home and you go to Add New, then you can upload your own measurements. So follow these instructions and either upload them here or just email them to me and I will add them to SubAligner, or upload them to TraceBook.
So if you go to TraceBook, sign up, then go to Upload, and you have the same kind of form here, just with a little bit more detail. The benefit of uploading your measurement to TraceBook versus SubAligner is that on TraceBook, all of your data will be public. So you could link it to other people and say, hey, here's my measurement, and it's moderated by two other people, so your measurement has to be higher quality. I'll basically accept anything for SubAligner, because I only need those.
Okay. So I'll tell you guys the way I did these measurements. None of these speakers were in SubAligner before I got here. I came in on Friday, I took the QSC speaker and the dB Technologies speaker out into the parking lot, and I measured those following the guidelines from TraceBook, and I put those in SubAligner. But with this speaker here, I didn't want to get it down. I started getting it down and I was like, this could kill me, and it's a pain in the ass.
I'd have to take the amp outside. So you cannot upload in-situ measurements to TraceBook yet, but you can send them to me. So what I did was I just put a ladder up and I attached this microphone to the ladder, and I just put the microphone really close, like a meter away from the speaker, and you can see.
Let’s go back to channel two and let’s go to magnitude and let’s mute everything.
So you can see in this measurement that I have some ripple here from the walls and the ceiling and everything, but it's still good enough for SubAligner. We've got some nice... so weird that the delay is messed up. Anyway.
We have actionable data here, even with some amount of ripple and a little bit of comb filtering. You can do that too. If you have speakers in your venue that you can't get down, or they're too heavy, then either call me and I'll come help you, or you can do what I did: put a microphone on a tall stand or a ladder and just get it up so that it's on-axis with the speaker. Solo one element.
Don't measure an entire array if possible; it's much better if you can just do a single element. Take a measurement of that, make sure that you get the distance, then send me your native measurement from Smaart or REW or whatever, and the processing preset that you used, and I can use that. It doesn't need to be super clean.
There are specific requirements to get into TraceBook, but for SubAligner all I need is actionable data in the low end. And right now you have to send them to me and then I upload them. But here's where this is heading: if SubAligner generates some demand and people start signing up for it, then eventually I'll be able to have it all online as a real-time processing web app, where you can take measurements in a room, upload them to SubAligner, and it will spit out an alignment with your own DIY data. Now, that's a little bit farther in the future, because I would need to devote some significant development resources to that, and that's going to take a little while. But that is the goal, assuming that this is the kind of thing people want. Should I aim to build a DSP processor, with all those libraries and all the information built in, like a DSP? That's another possibility that I've already explored a little bit, and I couldn't really tell.
You see, the limiting factor is whether or not people can use it. So for example, SubAligner can spit out FIR filters for you that would give you a perfect alignment. Do you have a way to load FIR filters? I don't know.
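For context on what "FIR filters for a perfect alignment" could involve: one common way to turn a desired complex correction per frequency into FIR taps is an inverse FFT with a shift and a window. This is a generic sketch of that technique, not SubAligner's actual export:

```python
import numpy as np

def fir_from_correction(half_spectrum):
    """Turn a desired correction (complex values on a uniform grid
    from 0 Hz to Nyquist) into FIR taps: mirror to a full
    conjugate-symmetric spectrum, inverse FFT, shift to make the
    impulse causal, then window."""
    full = np.concatenate([half_spectrum,
                           np.conj(half_spectrum[-2:0:-1])])
    taps = np.fft.ifft(full).real
    taps = np.roll(taps, len(taps) // 2)   # center the impulse
    return taps * np.hanning(len(taps))    # tame truncation ripple
```

The half-length shift adds pure latency so the filter is causal; the tap count sets the trade-off between low-frequency resolution and that latency, which is exactly why loading FIR filters is not practical on every processor.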
It could also potentially be an online processor that you could pass audio through and it would do all of it for you. So it could not only spit out the preset but process the audio. I don’t know if that’s what people need. I think most people are doing their processing in an output processor or in their consoles. We’ll see where it goes, we’ll see what people say.
But I like that idea too. It can do more. All right, going back to Zoom, for those of you who are still here, I would love it if you would turn your cameras on and we'll do a quick class photo. Those of you who stuck around 9, 10, 11, 12, 1, for four hours, it's amazing. So if you want to be in the class photo, then you should turn your camera on.
And Feta, you come over here so you can be in the camera. And I've got to close the chat and the participants. Oh, wait, you'll be in the camera. Well, just stand next to me.
Wow. Okay. Twelve people made it from the original 20. That’s great. All right, here we go with the class photo.
Three, two, one. Thank you. All right, thanks, everybody. Whatever questions come up, please email me at nathansonlive.com. Otherwise, we'll see you at the next workshop.