Fabien, can you tell us a little about yourself and how you arrived in the music biz?
I’m a half French, half German music enthusiast and professional software/media development consultant. I’ve been running Tokyo Dawn Records for nearly two decades together with my colleague Marc Wallowy. We began as an Amiga demo/tracking group and later became one of the very first net-labels on the web (we used to distribute .mod files before mp3 conquered the world!). The label has always had a strong link to music technology, and since I used to master our records “in-house”, the idea of creating our own set of audio processing tools was just waiting around the corner!
So the label has its own studio?
Yes, at least when it comes to mastering. We like to handle these things on our own.
If it’s not a secret, what’s typically in your mastering chain?
It’s no secret at all.
My primary tools are a simple EQ and my monitoring system. The main concern in “real world” mastering is the spectral balance between different tracks, especially at transitions (instead of wasting time trying to fix or restore what the mixer messed up). This is where I spend most of my time. Call me old-fashioned, but I love the Waves Ren EQ (REQ). I’m sure most other EQs can deliver similar or even better performance, but I particularly like its usability.
However, even the best EQ behaves completely counter-productively without reliable monitoring conditions, so the latter is where most of the investment goes. I run a pair of B&W 805s via Linn amplification and my main monitoring DAC (Benchmark DAC1) in a room filled with huge Basotect bass-traps and panels. My headphone chain consists of Grado PS 1000 headphones driven by an SPL Phonitor. I take this subject very seriously.
The main limiter and dynamics are all my own creations. I also like to use a K-Stereo clone. Most producers already compress their material like crazy, so it’s not uncommon for me to mute the compressor and use subtle upward expansion instead, often in parallel.
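To illustrate the idea of subtle upward expansion blended in parallel (this is only a minimal static sketch, not Fabien’s actual algorithm; all parameter names and values are made up for the example, and a real detector would smooth the level over time rather than work per sample):

```python
import numpy as np

def upward_expand(x, threshold_db=-30.0, ratio=0.7, mix=0.3):
    """Static upward expansion, mixed in parallel with the dry signal.

    Samples whose level falls below `threshold_db` are gently raised toward
    it (a ratio below 1 boosts quiet material); the wet path is then blended
    with the dry path. Illustrative parameters only.
    """
    eps = 1e-12
    level_db = 20 * np.log10(np.abs(x) + eps)         # crude per-sample level
    under = np.minimum(level_db - threshold_db, 0.0)  # dB below threshold (<= 0)
    gain_db = under * (ratio - 1.0)                   # positive gain for quiet samples
    wet = x * 10 ** (gain_db / 20)
    return (1 - mix) * x + mix * wet                  # parallel blend
```

Instead of pushing loud parts down as a compressor would, this lifts the quiet detail; the parallel mix keeps the effect subtle.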
On the “color” side of things, I like to use the Triode/Pentode/Tape function on the HEDD (Crane Song) and rarely analogue compression (Crane Song STC-8 in the rack).
But to be honest, the “how” doesn’t really matter that much in audio, it’s the “why” one should worry about! The chain itself doesn’t matter much to me.
A mixed hardware/software chain – interesting! Is the EQing done before compression or after (or both), and why? And since you mentioned the already over-compressed sources: do you think the application of compression in music production has changed dramatically over the years? Also, is mastering with stems a sustainable trend?
I usually end up with post-compression EQ (if any compression at all). I sometimes use pre EQ for severe corrective purposes, maybe two or three times a year. My typical post-EQ workflow probably relates to the fact that the label is blessed with talented musicians and producers. A public mastering service probably faces less well-selected material and needs to spend more time on corrective pre-compression EQing. As a label, it’s easy for us to correct or circumvent such issues long before someone decides to release them on a record.
Without any doubt, the whole dynamics processing field is extremely overrated. Compressors aren’t creative; they don’t make sound, they don’t make music. Dance floors don’t fill because someone compressed his music; compressors don’t make people dance and sing. After all, compression isn’t a particularly impressive sound effect in 2013 anymore. It’s a very useful dynamics shaping and overload protection tool, but one still needs a very good reason to use one!
The stem trend probably simply relates to today’s semi-professional studios and their horrible acoustics. It seems to be more efficient for most people to ask the mastering engineer to “remix” their worst issues. Is it sustainable? Definitely. The audio market somehow became democratized and amateurish over the last decade, and that’s certainly here to stay. Personally, I doubt the effectiveness of stem mastering, especially quality-wise. But it really depends; all the classic roles, such as the recording engineer, the mixing engineer and so on, have shifted dramatically over the years. Nowadays, a mastering engineer is more and more becoming an audio restoration specialist, mixing engineer and sort of broadcast-“ish” processor. Radio and even TV have lost their weight in large parts of the music biz, so people begin to add their own super-processed “radio” sound during mastering. Is it “right”? I don’t know. Let the music customer decide; after all, he’s the one who finances all these things.
From your point of view, what makes a dynamics processor stand out these days, especially (but not limited to) in mastering?
My main concerns are usability, musicality and effectiveness. I expect a rewarding user/algorithm interaction. A music processor should be fun to use! Even chain-saws are designed in such a way. 😉
Digital dynamics quickly gained a horrible reputation. The early and theoretically flawed “text-book” compressor implementations sounded bad. But digital audio dynamics are now seeing a golden age, as they are just a minute away from the very best analogue technology has to offer. The technical improvements of digital dynamics processing we saw over the last years are quite obvious. However, there’s still room for improvement. For example, it’s still very difficult to find plug-ins built around the classic analogue “costs don’t matter” idea (in this case, CPU cycles and memory). That’s the reason why I decided to put effort into my own dynamics tools.
IMHO, we’re just on the verge of seeing DSP surpass the qualitative performance of the very best analogue dynamics processors. The most exciting news and developments in the high-end “analogue” compressor market are all about high-rate digital side-chains (the new GML compressor, Dave Hill’s “Titan”; the legendary STC-8 is arguably digital too). And it probably won’t take long until the last VCA gets replaced by a few bit-shifts, just because it sounds and acts better. To prevent misunderstandings, I am talking about the technical high end, not the 956th overpriced Neve EQ clone.
You’ve recently released the TDR Feedback Compressor II – how long have you been working on it and what was the challenge? Was it designed for specific tasks or is it a rather general purpose device?
The Feedback Compressor began as a promising prototype about 6-7 years ago and was constantly improved over the years. I designed it for my little world, which mostly consists of mastering: a “no compromise” dynamics compressor plug-in which offers well-thought-out access to the truly relevant compression parameters, embedded in a carefully designed processing structure, similar to the solutions Weiss, Crane Song or GML have to offer. It took a while until I was satisfied enough to release the first public version. Version II is an elegant re-design aimed at optimizing usability, flexibility and sound. I am very satisfied with the latest version: it can be very fast, very slow or both at the same time, while still providing reasonably musical-sounding results under most circumstances. There’s no “sweet spot”, it’s a “sweet everything”. I’m really proud of that detail.
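For readers unfamiliar with the term, a feedback-topology compressor derives its control signal from its own output rather than its input. The toy sketch below only illustrates that topology; it is in no way TDR’s algorithm, and every parameter value is invented for the example:

```python
import numpy as np

def feedback_compress(x, threshold=0.5, ratio=4.0, attack=0.9, release=0.999):
    """Toy feedback-topology compressor.

    The level detector listens to the *output*, so the envelope and gain
    computation sit inside a feedback loop, which tends to give the smooth,
    self-regulating behaviour feedback designs are known for.
    Illustrative only; parameters are made up.
    """
    y = np.zeros_like(x)
    env = 0.0
    gain = 1.0
    for n, s in enumerate(x):
        y[n] = s * gain                        # apply the current gain first
        level = abs(y[n])                      # detector taps the OUTPUT (feedback)
        coeff = attack if level > env else release
        env = coeff * env + (1 - coeff) * level  # one-pole envelope follower
        if env > threshold:                    # static curve above threshold
            gain = (threshold + (env - threshold) / ratio) / env
        else:
            gain = 1.0
    return y
```

A feedforward design would compute `level` from the input `s` instead; moving the tap to `y[n]` is the whole topological difference.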
How would you describe your technical design process? Is there also modelling involved? And where comes the inspiration from?
There is no clear design process. It’s all over the place: I sometimes spend months on obscure non-linear filter modules watching the scope for interesting things, sit for hours in front of Fireworks (Adobe) thinking about new control schemes, or just harvest the libraries and the web for interesting papers and patents, even if they just describe thermostats or washing-machine control circuits. The final “compilation” of all these elements is the really tough part; the Feedback Compressor easily went through about 30 prototypes and countless beta iterations until it felt right. It was a very intensive process.
I am particularly interested in all the “pleasant” sounding results of analogue processing. As mentioned before, it’s the “why”, not the “how”, that catches my attention. In my humble opinion, DSP can handle the “how” in a much more flexible manner than analogue circuit design can. I call it “music modelled”, for lack of a better term. The modern compressor masterpieces by Crane Song, GML and Weiss inspired me most. But I equally respect the broadcast processing scene, which seems to be technologically 3-4 years ahead of the classic music engineering scene (see Orban, Omnia, Aphex and the early Dorrough processors).
What do you think could be the next step in developing audio dynamic processors?
That’s a good question. I see an increasing emancipation from old concepts, interfaces and workflows toward a much more perception-oriented focus. FabFilter’s highly accessible user interfaces or Valhalla DSP’s minimalistic “Bauhaus” UIs are good examples of this positive development. At the same time, I’m sure audio developers will sooner or later learn to use the power and reliability of DSP to solve true perception-based issues instead of cloning analogue electronics concepts to death. There is no reason why a piece of plastic and copper should be more musical than a binary state! Analogue audio is already at its peak; digital audio has just begun!