Takeaway: This is a great overview of almost every major tool and concept you will run into in live audio. Don’t expect to walk away ready to take Scott Adamson’s job mixing for Passion Pit, but if you didn’t go to school for audio or just want a refresher, I wholeheartedly recommend this course. At $29/month for over 100 video lessons, it’s a steal. Take a look at the curriculum and sample lessons here.
I was excited to get into Adamson’s new course. I had already heard from one of my students that it was worthwhile, especially around using effects. I was also nervous that Adamson was going to make the same mistake I see other books make, which is to talk about lots of concepts in general terms but never show their specific application with context.
For example, Adamson gives a great overview of FX routing, but what I really want to know is how he handles his FX returns when he works with Passion Pit, Haim, Matt and Kim, St. Vincent, and Sleater-Kinney. I know he has lots of great insight and experiences to share because I have interviewed him on Sound Design Live and I visited him when he came through Minneapolis with Haim. I got to poke around his board a bit, and it was interesting to find his FX returns immediately following the inputs they are associated with. Kick 91, Kick Out, then Kick Verb. Snare Top, Snare Bottom, then Snare Verb.
Luckily, Adamson does not disappoint. There are lots of concepts discussed in general terms, but there are also plenty of specific examples, plus some additional Q&A videos and a webinar that share more experiences and stories. For example:
I tend to use 800ms decay for drums and 1.5s for vocals.
This is gold for me. Hearing what specific settings a successful FOH touring mixer uses means a lot.
Here are a few of the general things I really liked about Essential Live Sound Training:
- I can jump to any lesson.
- Graphics and explanations are clear.
- Site is fast and easy to navigate.
- The listening demos are really helpful.
I also really liked Adamson’s attitude. He is obviously someone who cares about great sound, but doesn’t obsess about the details unnecessarily. This is really good for someone like me, who can develop an unhealthy obsession with the details and lose sight of the overall picture.
Here are a few things I thought could be better:
- More field demos. I wanted Adamson to show me how he uses each lesson in the field, but I realize that with over 100 videos, this may be unrealistic.
- More specific examples. I wanted Adamson to say, “Here’s what a gate does and here are the gate settings I used on the kick drum on the last show I worked on.”
- Where are the mixing examples? A few times in the course, Adamson refers to mixing examples that aren’t there yet. He plans to have these available by the end of the summer, though. Yay!
Things I do differently
I automatically compare any new information with old information. Watching new training videos gives me the opportunity to reexamine old information and past experiences in a new light. Below I’ll mention a few of the things Adamson said that got me thinking. If you are one of my students or are considering taking one of my courses, you should know that these are things that I do differently. I had a chance to talk about these with Adamson, so I’ll include his responses in blue.
Let’s flip the phase and see what it sounds like.
You can’t flip phase. It needs a time component. What you can flip is polarity. I know that this still gets confused, though, since some console manufacturers like DiGiCo still label their polarity switch as a phase switch.
*Technically you’re right, but this is still common language, so don’t be surprised if you hear this on stage.
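To make the distinction concrete, here is a small sketch of my own (not from the course). Polarity inversion just negates every sample, with no time component. A phase shift, on the other hand, requires time, and a fixed delay only lines up with an inversion at one particular frequency:

```python
import math

# Hypothetical illustration: a 1 kHz sine sampled at 48 kHz
SAMPLE_RATE = 48000
FREQ = 1000.0
signal = [math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE) for n in range(96)]

# Polarity inversion: every sample is negated -- no time involved.
inverted = [-s for s in signal]

# A phase shift needs time. Delaying by half a period (0.5 ms here)
# happens to look like an inversion for THIS one frequency...
delay_samples = int(SAMPLE_RATE / FREQ / 2)  # 24 samples = 180 degrees at 1 kHz
delayed = [0.0] * delay_samples + signal[:-delay_samples]

# ...but a broadband signal contains many frequencies, and a fixed delay
# shifts each of them by a different number of degrees. That is why a
# switch can flip polarity but cannot "flip phase."
```

For the single sine wave above, the inverted and delayed versions match after the delay settles in, but play music through the same delay and every frequency lands somewhere different.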
The speaker system is the end of the signal chain.
I wasn’t going to be picky about this one, but then it was a question in the quiz for lesson 9.2. Anyway, you know what I’m going to point out here: Sound still has to go through the air and your ear, which are part of the signal chain.
*It is the end of the electrical signal chain, which is still commonly referred to as the end of the chain.
To really get even sound coverage for a large crowd, line arrays are pretty much key.
I don’t want to get into the line array vs point source debate here (please, no hate mail), but it would be more accurate to say that line arrays (aka asymmetrical coupled point source proportional beam width arrays) are good at solving the problem of deep audiences with high front-to-back distance ratios with a single array instead of multiple relay or delay arrays. They are just another tool, not the only tool.
*You’re right. They are not the only tool. But I’m talking about crowds of 10,000 people. For practical purposes, you’re not going to cover that audience with point source arrays.
If your horns sound super harsh, you can EQ that in the processing.
Adamson is suggesting using the system processor to EQ the output to specific drivers. I would never do this unless I really know what I’m doing. Processing of individual drivers is the domain of the speaker manufacturer.
*I have done it, but I wouldn’t suggest that beginners do it. It’s pretty advanced and not something you want to do on someone else’s PA that you are walking into for the day.
The most important development in the last 25 years has been the line array.
Hmm, Harry Olson published Acoustical Engineering in 1957, and L-Acoustics came out with the first commercial line array, V-DOSC, in 1992, so…sure.
*I was talking more about their common implementation. You didn’t see V-DOSC out on shows until the mid-90s.
If you still need [stage monitors] to be loud, the other option is EQ.
What about microphone choice/placement and speaker choice/placement? The battle for GBF (gain before feedback) is not won at any single point in the signal chain. In live sound, the sound quality off-axis is just as important as on-axis. I also wouldn’t use a GEQ (graphic EQ) to fight feedback unless I absolutely had to. I can never find the frequency I actually need; I always have to choose one lower or higher.
*I talk about this in the polar patterns section [Lesson 2.4]. Also, you can’t guarantee that a stage monitor will stay exactly where you put it, so you can’t rely on that entirely. In practice, the GEQ is still the first thing people go to. Most people are working with very limited time and resources so they won’t have the opportunity to change the mic or speaker. In an ideal world, you wouldn’t need a GEQ, but in practice, it’s GEQ first.
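Here is a quick sketch of my own (not from the course) of why a GEQ fader so rarely lands on the frequency you need. A 31-band graphic EQ gives you fixed 1/3-octave centers, and a feedback ring can easily sit between two of them:

```python
import math

# Standard 1/3-octave center frequencies (Hz) of a 31-band graphic EQ
geq_bands = [20, 25, 31.5, 40, 50, 63, 80, 100, 125, 160, 200, 250,
             315, 400, 500, 630, 800, 1000, 1250, 1600, 2000, 2500,
             3150, 4000, 5000, 6300, 8000, 10000, 12500, 16000, 20000]

def nearest_band(feedback_hz):
    """Return the closest GEQ fader to a ringing frequency,
    plus how far off it is in cents (100 cents = 1 semitone)."""
    band = min(geq_bands, key=lambda f: abs(math.log2(f / feedback_hz)))
    error_cents = 1200 * math.log2(band / feedback_hz)
    return band, error_cents

# Hypothetical feedback ringing at 1.4 kHz: the nearest faders are
# 1.25 kHz and 1.6 kHz, so neither is centered on the problem.
band, cents = nearest_band(1400)
```

Running this for 1.4 kHz picks the 1.25 kHz fader, which is nearly two semitones below the actual ring, so you end up cutting a lot of program material that was never feeding back. A parametric EQ, by contrast, lets you dial the center frequency right onto the ring.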
Personally, I use an aux send to do this.
I avoid using subs on an aux, but when I do, I use a group instead. The problem is that if you are sending different content to the sub channel than to the main channel, those are no longer coherent sources; they are separate, and our measurement system doesn’t know what to do with them. This can be a problem during measurements at soundcheck or during the show. So when I have the option, I’ll choose simplicity and objectivity.
*If it doesn’t sound right to me, I need to make a change. I have found that changing the sub level, EQ, and mix is key. The way I mix (and many other people mix) is with a separate mix going to the subs. The only way to do that is with subs on an aux. At minimum I advise some kind of separate send out of the console for more control, which is why I discuss using a matrix. The most important thing is to understand that the option exists.
In this course, Adamson helps us dip our toes into many topics that raise further questions. For example:
If the amps overload the circuit they are plugged into, the circuit will trip and you will lose all the sound in your PA.
What? That’s terrifying! How do I know if I have enough power?
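One rough answer is a back-of-envelope power budget, which I’ll sketch here (my own approach, not from the course; the amp names and wattages below are hypothetical). Add up the rated power draw of everything on the circuit, convert to current with I = P / V, and compare against the breaker, leaving headroom since breakers shouldn’t run at 100% of their rating on a continuous load:

```python
CIRCUIT_VOLTAGE = 120     # volts, typical North American circuit
BREAKER_RATING = 20       # amps

# Hypothetical amp rack: (name, rated power draw in watts)
amps = [
    ("mains amp", 900),
    ("sub amp", 1200),
    ("monitor amp", 600),
]

total_watts = sum(w for _, w in amps)
total_current = total_watts / CIRCUIT_VOLTAGE  # I = P / V

# Common rule of thumb: keep continuous loads at or below 80% of the
# breaker rating.
safe_limit = 0.8 * BREAKER_RATING

print(f"Total draw: {total_current:.1f} A vs safe limit {safe_limit:.1f} A")
if total_current > safe_limit:
    print("Too much load on one circuit -- split the amps across circuits.")
```

In this made-up rack, 2700 W on a 120 V circuit is 22.5 A, well past the 16 A comfort zone of a 20 A breaker, so you’d want a second circuit. Actual draw varies with program material, so check the current or power figures on each amp’s spec sheet rather than guessing.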
It’s great to know that I still have lots to learn about live audio and there is a big opportunity here for Adamson to answer many of these questions in future courses and updates, which I look forward to.