I’m bringing together some of my favorite teachers for two full days of online live sound training and networking that will cover topics from mixing to sound system tuning. Sign up here.
#1 – Laughing with Jon
One of my favorite sessions from Live Sound Summit in 2018 was How I Approach a Soundcheck with Jon Burton. His technical training was great, but the best part was that Jon has stories to back up everything he teaches, and most of them are hilarious.
#2 – Networking with Bodo
Networking is definitely a weak point in my training. I have some templates for how a few things work, but if there are ever problems I'm not good at troubleshooting.
Why does console control need to be a static IP address, but Dante should use DHCP? Can I run them both on the same network? Will my little D-link switch do the job? Do I need a redundant backup for every step in the signal chain?
Bodo’s session is titled Don’t panic! …it’s only a network, which is perfect for my mild anxiety.
#3 – Console prep with Pooch
I'm a fan of DiGiCo consoles, so I'm looking forward to learning how Ken 'Pooch' Van Druten sets up his SD7 for the Iron Maiden tour. We're also working on the audio connections, so hopefully you'll even be able to hear what's happening in the console.
#4 – Gain Structure with Pat
At the end of some training events the only takeaway you have is, “I have a lot more to learn.” That’s a good thing, but I’m always a bit disappointed if everything was so far over my head that I leave with no new actions I can try in the field.
This is not so when training with Pat Brown. One of my biggest takeaways from attending SynAudCon's Sound System Design seminar is that Pat is an experienced teacher. He knows how to walk you through a complex subject in a way that is not overwhelming, pedantic, or unnecessarily scholarly.
#5 – Wireless with Stephen
Last year Stephen Pavlik blew me away with his demos. I had no idea that you could (and should) test a coaxial antenna cable for transmission problems. His presentation opened my eyes to how much more I can do to create a rock solid wireless system and I’m looking forward to this year’s session called Digital RF: The Future of Wireless?.
#6 – Pyro with Aleš
Aleš Štefančič has a lot of great stories about shows gone wrong. One of my favorites is when a pyro surge made him think that all of the speakers in one array were blown. There are a lot of laughs and surprises and of course, lessons to be learned with each.
This is a list of the best free and paid training for live sound engineers online today. I included all webinars, courses, and certifications I could find that required a deep time commitment from the student and excluded shorter demos and casual videos.
Did I miss something important? Comment below and let me know.
Audinate – Dante certification.
DiGiCo – SD9 training.
Get Started with Sound System Tuning – Everything you need to know to take your first measurements with an audio analyzer (like Smaart or SATlive) using tools you already own.
Harman – Introduction to Audio System Design, Audio System Design for AV Professionals, BSS Audio Signal Processors, Crown Amplifiers, DBX Signal Processors, JBL Loudspeakers, Soundcraft Mixers.
Martin Audio – Free webinar trainings on their system optimization software, control software, and speaker deployment.
QSC – Q-SYS Level 1.
Shure – Ethernet Networking for Audio, Wired Microphones 101, Microflex Wireless Training, Wireless Workbench 6 Video Tutorials, Advanced Techniques for RF Coordination, Microphone Techniques for Theatrical Productions.
Sound Design Live – Intro to Sound System Tuning, How To Make Money as a Sound Engineer.
SoundGym – A curated collection of YouTube videos including Live Sound Basics and Worship Sound Tools.
Yamaha – Training on their CL, QL, and TF consoles.
Association of Sound Designers – Acoustics & System Design, Vocal reinforcement & Radio Mics, Computer and Audio Networking.
Pro Audio Workshop: Seeing Sound – Insider secrets to quickly and effectively maximize sound system tuning in any room.
SynAudCon – How Sound Systems Work, Principles of Audio, Transformer-Distributed Loudspeaker Systems, Digital Signal Processing, Audio Applications, Sound Reinforcement for Designers.
Hypothesis: Through 10 minute sessions of daily ear training I will increase the speed and accuracy of my pitch detection and EQ application.
Why: I’m sick of guessing and sweeping with the parametric EQ. I want to nail it every time. I don’t want to fear microphone feedback anymore. Ringing out stage monitors is a waste of time. I want to quickly remove any feedback as it occurs.
Results: After 30 days I made a 13% increase in my EQ ability and tripled the speed and accuracy of my pitch detection.
When it comes to listening abilities, I have always had a growth mindset. I don’t think I’ll ever have golden ears, but I do believe that I can train my ability up to a useful level. This is very important for me and all live sound engineers because we need shortcuts to survive.
During my Live Mix Mastery pilot course last year, I talked to a lot of sound engineers about their biggest problems out in the field. I got a variety of answers, but the common trend among all of them was the need for speed. Everyone I talked to was confident that they could overcome any obstacle thrown at them if only they had enough time. As a result, I put together all of the best time-saving techniques I've learned over the years and taught them to 20 students over four weeks.
Every technique I taught has been field tested to deliver results except for two things: pitch memory for feedback detection and EQ training for faster mixing.
With Live Mix Mastery I had a great opportunity to test this with a group of professional audio engineers. Here are the steps we took:
- Use Ear Doctor in SoundGym to test your hearing.
- Schedule 10 minutes of daily ear training in your calendar. Three minutes playing Audio Frequency Trainer and seven minutes cycling through games on SoundGym.
Logically, it makes sense that playing these games would train our ears and increase our speed in the field. But I had never really taken the time to practice with a system and measure my results.
What is pitch memory?
You have listened to your favorite song so many times that you can start singing it right now with pitch accuracy. Unless you were born with perfect pitch (yes, this exists) then you memorized those pitches through repetition. This is how the kid at guitar camp with me was able to identify almost any pitch. Songs were his reference. He had learned to play so many of them that playing any note would trigger his memory of a song and then its location on his fretboard. For me at the age of 18, this was mind-blowing.
I had my first taste of this in college when I set my wake up alarm to the song How It Feels to be Something On by Sunny Day Real Estate. One day I was walking into a piano rehearsal room, humming that song, sat down, and realized that I was singing a perfect A. By accident, I had taught myself pitch memory.
One of the first things you learn in music school is the interval relationships between notes on a scale in western music. Once you’ve got the pitch of any note, you can find the pitch of any other note through the memorized interval or by simply following a chromatic scale. The good news for musicians is that there are only 12 notes. The bad news for sound engineers is that microphone feedback could potentially happen at any frequency. And I guarantee you that it will never happen at the exact frequency of one of the sliders of your graphic EQ.
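That reference-plus-interval trick is easy to see in numbers. Here's a minimal Python sketch of equal-temperament pitch, assuming the standard A4 = 440 Hz reference:

```python
# Equal temperament: each semitone multiplies frequency by 2**(1/12).
# Once you have one memorized reference pitch (A4 = 440 Hz), every other
# note is just a memorized interval away.
A4 = 440.0

def note_freq(semitones_from_a4):
    """Frequency of the note n semitones above (or below) A4."""
    return A4 * 2 ** (semitones_from_a4 / 12)

print(round(note_freq(3), 2))    # C5, 3 semitones up: 523.25 Hz
print(round(note_freq(-9), 2))   # middle C, 9 semitones down: 261.63 Hz
# Only 12 pitch classes per octave -- but feedback can land anywhere.
```

Twelve memorized anchors cover every note a musician will meet; a feedback frequency has no such grid.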
The only thing that graphic EQs are really good for is ear training, which is exactly what we used them for in Live Mix Mastery. Why did we use 1/3 octave spaced frequencies instead of 1/12 octave, which would relate more to our musical experience up until now? Three reasons:
- I didn’t think of it at the time.
- Audio Frequency Trainer was the best game I could find.
- Audio engineers are more familiar with the whole numbers seen on a graphic EQ. It’s a lot easier to work with 1K, 1.25K, and 1.6K than it is to work with 987.77, 1046.5, and 1174.66.
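For what it's worth, those whole numbers on the faders are nominal labels for exact 1/3-octave centers. A quick sketch using simple base-2 spacing (the ISO preferred-frequency standard actually uses base-10 spacing, which differs slightly):

```python
# 1/3-octave centers, base-2 spacing from a 1 kHz reference.
def third_octave(k, ref=1000.0):
    """Center frequency k third-octave steps above the reference."""
    return ref * 2 ** (k / 3)

for k in range(3):
    print(round(third_octave(k), 1))   # 1000.0, 1259.9, 1587.4
# ...which get labeled 1K, 1.25K, and 1.6K on a graphic EQ.
```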
Audio Frequency Trainer will allow you to set a minimum and maximum test frequency, which is why we visited the Ear Doctor first. There are four levels that increase in difficulty by adding more frequencies to identify. I quickly moved out of Beginner, spent about two weeks on Intermediate, but never graduated from Pro. That shit is hard!
A technique that I used here, which I found helpful, was to move quickly and get emotional. My intention was to send signals to my brain’s pleasure and pain centers that this was important stuff.
Unfortunately, I haven't yet had a chance to test feedback detection in the field. I will come back and update this article when I do.
Another important thing I learned is that pitch memory either improves or deteriorates. I stopped practicing after the course ended, and while I haven't slid all the way back down to Beginner, I also haven't been able to maintain a perfect score on Intermediate.
The mystery of EQ
For many people, EQ is a big mystery. It's one of the most difficult skills to train because we are always under enormous pressure. Wouldn't we all love to have 30 minutes to listen to a kick mic while searching for the perfect frequencies to boost or cut? Those of you who have tried this have either never done so again or moved into lighting.
EQ training at home is another thing that always made sense, logically, but that I had never sat down to prove. Although none of the games we used are the same as working in the field with all of the chaos of a live room, they do provide the next best solution in terms of variety and tracking. Any time I have a few minutes I can log into SoundGym and play a game. At the moment, unfortunately, the games are not available for mobile, which is why I schedule my practice sessions for times when I know I will be home.
The great thing about this experiment that we embarked on together is that we didn't have to worry about how to EQ. We just played the game and watched our results improve. The most enjoyable discovery for me was connecting the sounds I have known for years to specific frequencies. Previously I may have known where I needed to hear a filter, but I would have had to guess and sweep up to it. The game Peak Master helped me to finally connect those sounds to frequencies. Here's what one of my students, Sergio, said about it:
I had a big improvement detecting bothering or missing frequencies by ear.
And here’s what Martin said:
I was able to improve my skills to identify and remove distracting elements in my mix in less time. I no longer think, “Hmm, the electric guitar sounds weird somehow.” Now I can identify that the problem is in the low mids and make a dip at 300Hz, for example.
So it looks like we hit our goal in terms of increasing speed.
My big takeaway from this whole experience is to stop wondering how to EQ and improve my hearing instincts instead through ear training. Everyone knows when they hear a problem. The skill is finding it fast.
Did I prove my hypothesis?
Although my students did see improvements in the field in increased speed and accuracy of pitch detection and EQ application, I personally haven’t done enough work to give a firm Yes. That being said, I’m really happy to have discovered a method I can track instead of just hoping for golden ears.
Subscribe on iTunes, SoundCloud, Google Play or Stitcher.
Support Sound Design Live on Patreon.
#24 Timothy: Polarity issues
Hi Timothy, I don't know exactly what your polarity issues are, but polarity is a very important step in our verification process and it is pretty easy to check. Set the delay locator in your audio analyzer to update continuously. In SATlive it's called Auto adjust delay, and you can find it in the pop-up menu of the delay display in the lower menu bar. In Smaart it's called delay tracking, and you can start it by hitting the d key. Then, take a solo measurement of each speaker and/or driver in the near field to verify that everyone's polarity matches.
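If you're curious what the analyzer is doing under the hood, here's a minimal sketch of the idea (not Smaart's or SATlive's actual algorithm): once the delay has been found and removed, the sign of the dominant peak in the impulse response tells you the driver's polarity.

```python
# Minimal polarity check on a measured impulse response.
# The IR values below are hypothetical, just to show the idea.
def polarity(impulse_response):
    """Return +1 or -1 depending on the sign of the largest peak."""
    peak = max(impulse_response, key=abs)
    return 1 if peak > 0 else -1

normal   = [0.0, 0.1, 0.9, -0.3, 0.05]   # positive-going main peak
inverted = [-x for x in normal]           # same speaker wired backwards
print(polarity(normal), polarity(inverted))   # 1 -1
```

Run this mental model against each solo near-field measurement and every driver should report the same sign.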
#24 Eric: Is a spread out speaker array better than brute force towers blasting from the stage?
So Eric, it’s hard for me to know what kind of system you are imagining in your head, but like everything, it’s a balance, right? So speakers from the stage might have a better sonic image, but a more distributed approach might give you less variance across the frequency response. But in audio, brute force is almost never a good thing. When you say brute force towers I’m imagining just a pile of speakers on stage trying to cover a giant range ratio. The people in the front rows are getting blasted and those in the back can barely hear. I’m definitely not in favor of that. The best solution is going to deliver a similar result to everyone in the audience with a minimum of complexity.
#25 Jim: How do you avoid microphone feedback?
So Jim, I would like to do another webinar training on microphone feedback, but I’ll start by telling you this: don’t focus on ringing out the monitors and ringing out the system. That’s our last line of defense in guerilla audio.
The way that you avoid microphone feedback is by improving the headroom of your sound system. And that isn’t improved at one place. It’s improved at every step in the chain.
So if I were to make one suggestion to you, Jim, it would be to not assume you need to ring out the system. In fact, don’t assume anything. Look at every step in the signal chain and see where you can make improvements.
This is what I did a few years ago and it made people kind of nervous. My colleague would say, “Did you ring out the system?” And I would say, “No, but here’s what I did instead.”
I listened to it. I placed and aimed my speakers for the best GBF, and after everything was set up, I ran some tests to see if I was getting enough microphone gain before taking any further action. Nine times out of ten, I already had enough. I realized that I had only been doing the ringing-out step out of fear that I hadn't set things up properly.
But I didn't just go cold turkey. I pushed the system into feedback and made note of those frequencies on my equalizer so that if anything started to ring during the soundcheck or the event, I could quickly put in a filter. That also put people at ease, since they had no way of knowing whether I knew what I was talking about.
Here's a quick and easy test to see how effective your ringing out is. Ring out your system as you normally would. Then, bypass whatever EQ you inserted and move the microphone a foot. If you get different results, then your EQ is not going to be effective, because you know that that microphone is going to move as soon as it has to interact with a performer or the ambient conditions change. So Jim, I hope that answers part of your question.
#26 Marcelino: How do I place stage monitors and set their delay?
So Marcelino, my 3 best tips on monitor placement and delay are 1) get the monitor as close to the performer’s head as possible, 2) aim it at the null point of the microphone, and 3) don’t use delay. If I understand what you are asking, it’s kind of an unproven feedback fighting tip, which is to try adding delay to a microphone or stage monitor output for better GBF. My experience is that that does not work, it only moves the feedback to different frequencies. If you have had success with that strategy, let me know how you did it.
#27 Black: How do you avoid phase issues with microphone placement?
First, isolation. We use directional microphones, close miking, and gating to avoid as much bleed as possible. The signals might be out of phase, but if there is a 20 dB difference in level, we win by isolation. And if you can't beat 'em, join 'em: that's why I like a coincident pair for my drum overheads. Everything arrives in time.
Second is polarity inversion and delay. Everybody knows that if you have your top snare mic and your bottom snare mic 1″ away from the head on each side, then one of those mics is going to need a polarity inversion. Then, if you want to fine-tune every other mic on the kit, you can pick a reference point, like the OH, and delay every other mic back to those.
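That fine-tuning step is just path-length arithmetic: delay each close mic by the extra time the sound takes to reach the reference mic. A sketch with hypothetical distances, assuming 1130 ft/s for the speed of sound:

```python
# Align a close mic to a reference mic (e.g. the overhead) by delaying
# the closer one. Distances are hypothetical; 1130 ft/s assumed.
SPEED_OF_SOUND_FT_S = 1130.0

def align_delay_ms(mic_dist_ft, ref_dist_ft):
    """Delay (ms) to add to the closer mic so it lines up with the reference."""
    return (ref_dist_ft - mic_dist_ft) / SPEED_OF_SOUND_FT_S * 1000

# Overhead (reference) 3 ft from the snare, top snare mic 1 inch away:
print(round(align_delay_ms(1 / 12, 3.0), 2))   # ~2.58 ms
```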
#28 Mark: How do I place speakers for least reflections off of walls?
In the horizontal plane, start by using the right coverage shape and placing it at the center of its coverage area. That way you make it to the extents of the coverage area without overlap onto the wall.
Another strategy is to sharpen the edges of your coverage shape using subdivision. So if you would normally use a single 100º speaker, use two 50º speakers. Or better yet, use three 30º speakers.
In the vertical plane, make sure that you are aiming at the edge of coverage and not at the back wall. If you’re stuck with speakers on sticks, you can get a speaker tilt adapter from K&M.
#29 Greg: Fast measurement/adjustment in a portable church setup where we have 45 minutes to setup and start soundcheck
So Greg, what I'm wondering is whether you are setting up in a new location every time or in the same one. My guess is that it's the same location, since you're probably not on tour. Either way, here's my thought: develop a streamlined verification and optimization process where all you have to do is check each step against the last known good data. Keep your reference traces stored in your analyzer and compare today's setup against them. You would have your speaker positions mapped out along with aim angles, splay angles, and measurement microphone positions. Then you take a measurement at location A1 and compare it to reference trace A1. I know you don't have much time, but this should be really fast if you are doing it the same way every time, just verifying that things are as they should be.
#30 Robert: Best placement for Sub hung or ground stack
Hey Robert, so two things for you to think about: 1) If your goal is even coverage, getting the sub up in the air is going to improve your front-to-back distance ratio. 2) If your goal is power, ground stacked is going to give you half-space loading of +6 dB for free.
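You can put rough numbers on that first point with the inverse square law. The room dimensions below are hypothetical, just to show how flying the sub shrinks the front-to-back level variance:

```python
from math import log10

# Inverse-square level difference between nearest and farthest listener.
def front_to_back_variance_db(d_near_ft, d_far_ft):
    return 20 * log10(d_far_ft / d_near_ft)

# Hypothetical room: first row 10 ft from the sub, last row 100 ft.
ground = front_to_back_variance_db(10, 100)

# Same sub flown 30 ft up: Pythagoras stretches the near distance much
# more than the far one, improving the distance ratio.
flown = front_to_back_variance_db((10**2 + 30**2) ** 0.5,
                                  (100**2 + 30**2) ** 0.5)
print(round(ground, 1), round(flown, 1))   # 20.0 vs 10.4 dB of variance
```

Same room, same sub: flying it cuts the coverage variance roughly in half in this example, which is the trade you're weighing against the free +6 dB of half-space loading on the ground.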
#31 Samuel: Where and how to place multiple speakers?
So Samuel, I want to call you Samwise Gamgee from Lord of the Rings, but my wife said you might not get the reference, so I definitely won't do that. We talked a bit about how to estimate placement for multiple speakers in the last podcast, but this gives me the opportunity to approach it from a different angle. The reason to use more than one speaker is that one speaker will not cover the entire audience, either because it's too big or the shape won't allow it. In either case, we need to find those points in the audience where the main isn't cutting it anymore and bring in another speaker to restore the sound back to its original glory, as it was on-axis with the main speaker. And two of the most useful tools to do that are range ratio and forward and lateral aspect ratio.
#32 Mark: How to arrange 2 subs so they are useful?
So Mark, my question to you is: what do you mean by useful?
Does useful mean more power? If so, put those subs together and get 6 dB from their coupling. Push them into a corner for even more power.
Does useful mean more even coverage? If so, get’em up in the air for an improved distance ratio.
Does useful mean directional? If so, setup either an inline gradient or inverted stack for up to 20 dB of broadband cancellation in the rear.
#33 Alexander: How do LF radiation patterns change when you place subwoofers under the stage or close to boundaries?
So Alexander, as long as that boundary is long enough, like a wall, a floor, or a stage, it will change the LF radiation pattern in much the same way as another speaker. For example, if you have a speaker on the ground, that's just like having two speakers stacked one on top of the other. Think of the boundary as a mirror, with another speaker on the other side.
This is why you can’t put a cardioid sub array below a stage. The stage acts as a mirror and ruins the coverage pattern.
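You can estimate where that mirror image starts causing trouble. The extra path length below is hypothetical, and 1130 ft/s is an assumed speed of sound:

```python
# Boundary-as-mirror: the reflection behaves like a second source on the
# far side of the surface. The first comb-filter notch lands where the
# extra path equals half a wavelength (180 degrees of phase offset).
SPEED_OF_SOUND_FT_S = 1130.0

def first_notch_hz(extra_path_ft):
    """Frequency of the first cancellation between direct and mirrored paths."""
    return SPEED_OF_SOUND_FT_S / (2 * extra_path_ft)

# Hypothetical: the mirrored path is 4 ft longer than the direct path.
print(first_notch_hz(4.0))   # 141.25 Hz -- right in sub territory
```

Short path differences push the first notch up out of the sub range, which is one more reason to treat boundaries as part of the array rather than fight them.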
#34 Ricky: Can you talk about determining the distance between speakers as it relates to comb filtering where the speakers combine?
So Ricky, I’m going to assume you are thinking about the low end since that is the hardest to control with aim. First of all, if possible, put the speakers right next to each other for the smallest contrast in path length at all positions. Problem solved.
If you can't do that, keep in mind that you are always going to have some amount of subtraction when two correlated signals meet beyond 120º of phase offset. So one thing you can do is make sure that your speakers are within 2/3 wavelength of the highest frequency at which they are going to combine. Imagine a subwoofer whose operational range goes up to 120Hz. 120Hz has a wavelength of 9.4 ft, so we'll want to keep those subs within 2/3 of that, which is about 6.3 ft. As long as those subs are within 6.3 ft of each other, we will have some amount of summation at all positions through the operational range.
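That spacing rule is easy to compute for any crossover point. A sketch assuming 1130 ft/s for the speed of sound:

```python
# Maximum spacing between combining sources: 2/3 of the wavelength at
# the highest frequency they share. Speed of sound assumed 1130 ft/s.
SPEED_OF_SOUND_FT_S = 1130.0

def max_spacing_ft(highest_freq_hz, fraction=2 / 3):
    wavelength = SPEED_OF_SOUND_FT_S / highest_freq_hz
    return fraction * wavelength

# Subs running up to 120 Hz:
print(round(max_spacing_ft(120), 1))   # ~6.3 ft
```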
#35 Bigman: Does phase cancellation occur when two point source speakers are placed side by side? mostly when they are connected in parallel
So Bigman, the coupled point source array, when properly splayed, is one of the most efficient and stable arrays because we are close enough that we get summation in the low end, but splayed in a way that we get isolation in the high end. And as long as they are symmetrical, meaning same make and model, you can run them in parallel.
#36 Carmine: How to compensate for phase problems caused by reflected waves in a live room?
So Carmine, if room reflections are negatively affecting your show, my first thought is to try to remove the reflections. Can you do anything about the architecture? Ideally, knock out a wall. If not, add absorption.
If not, your next line of defense is speaker position and aim. I’m not sure what your situation is like, but maybe you can aim those speakers away from the wall for fewer reflections. Maybe you can move your subs closer to whatever is causing the reflection to reduce path length differences and minimize their destructive interaction.
One thing is for sure: it is impossible to counteract acoustic problems with electronic solutions. Once you have a 12dB cut from comb filtering, you can’t put that back with EQ.