Interview Series (4) – Bob Olhsson

Bob, you have been a professional recording and mastering engineer for over thirty years and are already a legend. What is more important: the ability to hear, or the ability to choose the right device for the context and apply the right adjustment?

You need to BOTH be able to hear AND find the right device and settings! Probably the most important thing to understand is that just raising the volume a tenth of a dB will always sound better. All comparisons must be checked with the average level compensated; otherwise it’s easy to wind up with something that’s louder but far worse sounding.
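The point about level-compensated comparisons can be illustrated with a small sketch. This is a hypothetical helper (the function names are mine, not a real tool), using plain RMS as the "average level": before A/B-ing a processed version against the original, compute the gain that brings its RMS back to the reference's, so the comparison is not won by loudness alone.

```python
import math

def rms(samples):
    """Root-mean-square level of a block of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def match_gain(reference, candidate):
    """Linear gain to apply to `candidate` so its average (RMS)
    level matches `reference` before an A/B comparison."""
    return rms(reference) / rms(candidate)

def to_db(gain):
    """Express a linear gain in decibels."""
    return 20 * math.log10(gain)

# A "processed" version that is simply 1 dB hotter than the original:
original = [0.5, -0.25, 0.1, -0.4]
louder = [s * 10 ** (1 / 20) for s in original]

compensation = match_gain(original, louder)
print(round(to_db(compensation), 2))  # -1.0: undo the level advantage, then listen
```

In practice mastering engineers use loudness meters (LUFS per ITU-R BS.1770) rather than raw RMS, but the principle is the same: equalize the average level first, then judge the sound.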

Another challenge is mastering for what the recording needs and not what your monitors want. Our goal is for it to sound great everywhere. The best way I’ve found to tackle this problem is sticking to broad strokes unless I’m removing distractions. If half a dB sounds different but not better, I’ll generally leave it alone.

I hope you’ll agree: delivering across a variety of media and listening environments is the challenge for mixing and mastering today. What is your strategy for dealing with that, and what demands on technical equipment will arise?

Mastering is where technical knowledge intersects with creativity, manufacturing requirements and the record business. It requires doing one’s homework and understanding how the decisions that get records exposed are made. This is all in flux right now. Homework was the first thing I was taught at Motown. Our benchmark was the top five singles in the Billboard charts. They sat on our mixing console and we picked them apart technically in the mastering room.

Today with the internet, it’s easy to both learn and listen to what’s on the air and what’s being pitched to radio. If an artist is obviously seeking airplay, I need to make sure their master makes the best possible impression in the sales and programming meetings that might otherwise trip them up. If it’s not for radio, a more hi-fi approach that serves their fans more than their career is called for. I never forget that an artist is always betting their career on their new recording. It’s really a more serious responsibility than many people seem to realize.

Technically, we need to prepare for both selling lossy-compressed files and internet streaming. I think the Sonnox real-time codec is an absolute must in the monitoring chain. In fact, people mixing should also check this from time to time. The deepest dark secret of the record business has always been that it’s largely about the word of mouth created by people’s impression upon hearing the recording. Today that first listen is likely to be internet streaming or an MP3 file.

Dealing with loudness today: we’ve had quite a number of really bad examples in recent years, but then the Daft Punk album was released recently, providing a rather relaxed, easy-to-listen loudness experience. A new hope?

I hope more major artists who are virtually guaranteed airplay will step up and do the right thing by the folks who buy their recordings. We can’t ask an unknown artist to take a chance on not being loud enough in a programming meeting, but the big names really can help move the bar back down to a better-sounding place. It just requires having enough guts. The irony is that over-compressed music sounds really wimpy on the air, which is the last thing any pop artist is looking for. You can hear this by comparing music videos from the ‘80s and early ‘90s to many of today’s.

Concerning mastering: Analog, digital or hybrid? What is the role of compression during mastering?

Digital is challenging because there are far more bad-sounding settings available than with analog gear! I’ve actually had a Renaissance EQ sound better than a Sontec or any more expensive plug-in on a song or two. Most people today don’t realize that the Manley Massive Passive is the Renaissance EQ executed in hardware by the same design engineer, Craig “Hutch” Hutchinson. His essay in the Renaissance EQ documentation is priceless.

I think the limitations of analog gear remove a lot of the temptations for excess and speed up finding the right settings. The Waves Q-Clone controlled by my API equalizer was quite a revelation about how important ergonomics are. I always keep an open mind because I’ve learned there is never really any “best” anything. I’ll try a few likely analog or digital suspects and go with what I think feels the best and distracts least from the music.

Ideally, I think overall compression and EQ belong in mixing, where the individual tracks can still be tweaked to fit. Mastering should never be “finishing the mix.”

I use compression and EQ largely to correct some of what the limiter is doing to the balance. Certainly there can sometimes be an opportunity for enhancement, but it’s often mostly correction for mix-room monitoring. I like to think in terms of what the mixer might have done in my room, listening on my monitor system. I’ll always begin with the assumption that the mix sounded the way everybody liked it in the room where it was mixed. In fact, the very first words out of Bob Ludwig’s mouth at a mastering session were “What were your monitors?” As a mastering engineer, I never want to step on a mix.

EQ and compression – in which order is that in your chain typically?

Today it’s whichever sounds best!

Back in the ‘60s and ‘70s I was convinced it was best to EQ after compression because it brought the sound back to life. I later learned that a lot of what I had been hearing in different orderings was how the different output stages react to different loads. I think this is probably why there are endless conflicting opinions about which gear sounds best. Taken outside the context of the entire signal path, it’s all pretty meaningless. I think most people actually agree about sound-quality issues, but many don’t realize how frequently they are comparing apples with oranges.

Is mastering with stems still increasing, or was that just a temporary fashion?

Stem “mastering” is really mixing and not mastering. A mix needs to be checked in a variety of listening environments before it should ever be considered “done.” Saving that process for after mastering is a mistake, and sometimes a costly one. The only time stems are appropriate is when the mixer is in no position to have even a concept of the context the mix will be heard in. Even then, the mixer and producer really need to attend the mastering/stem-mixing session.

Talking about dynamic range: is there a specific target in mastering today? How has that changed over the years?

The target has always been whatever is most needed by the artist. Every recording is a massive investment of time and money in their career.

We cut some of the hottest 45 singles in history at Motown because we knew the decision makers were listening to the first thirty seconds of a dozen singles to decide which would go in the wastebasket and which would be given further consideration. This paranoia still really drives the “loudness wars” today. That and the endless posers who want to sound exactly like the “big” artists.

What do you think could be improved in audio dynamic processors for mastering?

I think we’ll all know it when we hear it! What I generally don’t like is the sound of two multiband compressors in a row. The ones at the broadcast stations are more than enough!
