epicPLATE released

epicPLATE delivers an authentic recreation of classic plate reverberation. It covers the fast and consistent reverb build-up as well as the distinct tonality the plate reverb is known for and still so beloved today. Its unique reverb diffusion makes it a perfect companion for all kinds of delay effects and a natural fit not only for vocals and drums.

delivering that unique plate reverb sound

  • Authentic recreation of classic plate reverberation.
  • True stereo reverb processing.
  • Dedicated amplifier stage to glue dry/wet blends together.
  • Lightweight state-of-the-art digital signal processing.

Available for Windows VST in 32 and 64bit as freeware. Download your copy here.

The former epicVerb audio plugin is discontinued.

how I listen to audio today

Developing audio effect plugins involves quite a lot of testing. While this appears to be an easy task as long as it’s all about measurable criteria, it gets way more tricky beyond that. Then there is no way around (extensive) listening tests, which must be structured and follow some systematic approach to avoid ending up in fluffy “wine tasting” categories.

I’ve spent quite some time with such listening tests over the years, and some of the insights and principles are distilled in this brief article. They are not only useful for checking mix qualities or judging device capabilities in general but also give some essential hints about developing our hearing.

No matter what specific audio assessment task one is up to, it’s always about judging the dynamic response of the audio (dynamics) versus its distribution across the frequency spectrum (tonality). Both dimensions are best tested with transient-rich program material like mixes containing several acoustic instruments – e.g. guitars, percussion and so on – but which have sustaining elements and room information as well.

Drums are also a good starting point but they do not offer enough variety to cover both aspects we are talking about or to easily spot modulation artifacts (IMD), just as an example. A rough but decent mix should do the job. Personally, I prefer raw mixes which are not yet processed that much, to minimize the influence of flaws already burned into the audio content – but more on that later.

Having such content in place allows us to focus the hearing and to listen along a) the instrument transients – instrument by instrument – and b) the changes and impact within particular frequency ranges. Let’s have a look at both aspects in more detail.

a) The transient information is crucial for our hearing because it is used not only to identify instruments but also to perform stereo localization. Transients basically determine how well we can separate different sources and how they are positioned in the stereo field. So if something “lacks definition”, it might just be caused by not having enough transient information available and not necessarily by flaws in equalizing. Transients tend to mask other audio events for a very short period of time, and when a transient decays and the signal sustains, it unveils its pitch information to our hearing.

b) For the sustaining signal phases it is more relevant to focus on frequency ranges, since our hearing is organized in bands across the entire spectrum and is not able to distinguish different events within the very same band. For most comparison tasks it’s already sufficient to consciously distinguish between the low, low-mid, high-mid and high frequency ranges, and to drill down further only if necessary, e.g. to identify specific resonances. Assigning specific attributes to the according ranges is the key to improving our conscious hearing abilities. As an example, one might spot something “boxy sounding” as reflecting just in the mid frequency range at first. But focusing on the very low frequency range might also expose effects contributing to the overall impression of “boxiness”. This reveals further, previously unseen strategies to properly manage such kinds of effects.
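
As a practical ear-training aid, here is a minimal band-soloing sketch (assuming SciPy and soundfile are available, and using a hypothetical input file name); the band edges are just one plausible split, not any kind of standard:

```python
# Band-soloing sketch for ear training: listen to one range at a time.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfilt

BANDS = {            # rough edges in Hz, adjust to taste
    "low":      (20, 250),
    "low-mid":  (250, 2000),
    "high-mid": (2000, 6000),
    "high":     (6000, 16000),
}

def solo_band(x, fs, lo, hi, order=4):
    """Return only the content of x between lo and hi (Butterworth bandpass)."""
    sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfilt(sos, x, axis=0)

x, fs = sf.read("rough_mix.wav")          # hypothetical input file
for name, (lo, hi) in BANDS.items():
    sf.write(f"solo_{name}.wav", solo_band(x, fs, lo, hi), fs)
```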

Overall, I cannot recommend highly enough educating the hearing in both dimensions to enable a more detailed listening experience and to get more confident in assessing certain audio qualities. Most kinds of compression/distortion/saturation effects present a good learning challenge since they can impact both audio dimensions very deeply. On the other hand, using already mixed material to assess the qualities of e.g. a new audio device turns out to be a very delicate matter.

Let’s say an additional HF boost now sounds unpleasant and harsh: Is this a flaw of the added effect or was it already there and just pulled out of the mix? During all the listening tests I’ve done so far, a lot of tainted mixes unveiled such flaws not visible at first sight. In the case of the given example you might find root causes like too much mid frequency distortion (coming from compression IMD or saturation artifacts) mirrored into the HF, or just inferior de-essing attempts. The recent trend to grind away each and every frequency resonance is also prone to unwanted side-effects, but that’s another story.

Further psychoacoustics-related hearing effects need to be taken into account when we perform A/B testing. While comparing content at equal loudness is a well-known subject (nonetheless ignored by lots of reviewers out there), it is also crucial to switch back and forth between sources instantaneously and not with a break. This is due to the fact that our hearing system is not able to memorize a full audio profile for much longer than a second. Then there is the “confirmation bias” effect, which basically means we always tend to be biased concerning the test result: just having that button pressed or knowing the brand name already has to be seen as an influence in this regard. The only solution for this is utilizing blind testing.
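
For the curious, here is a minimal sketch of a loudness-matched blind comparison (assuming NumPy and soundfile, with hypothetical file names). Plain RMS is used as a crude stand-in for a proper loudness measure such as LUFS:

```python
# Loudness-matched blind A/B sketch: match RMS levels, then hide which
# file is which behind random labels.
import random
import numpy as np
import soundfile as sf

def rms(x):
    return np.sqrt(np.mean(np.square(x)))

a, fs = sf.read("version_a.wav")          # hypothetical input files
b, _  = sf.read("version_b.wav")

b *= rms(a) / rms(b)                      # equal average level for both

pair = [("A", a), ("B", b)]
random.shuffle(pair)                      # blind: you don't know which is which
for label, (name, data) in zip(("X", "Y"), pair):
    sf.write(f"blind_{label}.wav", data, fs)

with open("reveal.txt", "w") as f:        # peek only after deciding
    f.write(", ".join(f"{label} = {name}" for label, (name, _) in zip(("X", "Y"), pair)))
```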

Most of the time I listen through nearfield speakers and rarely through cans. I’ve been sticking to my speakers for more than 15 years now, and it was very important for me to get used to them over time. Before that, I “upgraded” speakers several times unnecessarily. Having said that, using a coaxial speaker design is key for nearfield listening environments. After ditching digital room correction here in my studio, the signal path is now fully analog right after the converter. The converter itself is high-end, but today I think proper room acoustics right from the start would have been a better investment.

a brilliant interview

FlavourMTC “Mixbus Tone Control” released

FlavourMTC follows classic “passive” equalizer designs where the EQ circuit itself is not able to amplify signals but a dedicated amplifier stage takes care of that. Those EQ designs are well known for allowing very transparent frequency changes, while their amplifier stages quite often add some icing on the cake.

mixbus tone control – closest to analog

FlavourMTC implements this by utilizing 1st order shelving filter designs, avoiding unwanted resonances, and takes advantage of “zero delay” implementations for most accurate higher order filtering without introducing curve warping near the Nyquist frequency. The output amplifier stage of the plugin can be calibrated according to specific mixing levels, provides a distinct “box tone” and glues everything together. Parts of the plugin are oversampled internally for maximum transparency and sound quality.
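
To illustrate what a resonance-free 1st order shelf looks like in the digital domain, here is a textbook bilinear-transform low shelf – explicitly not the FlavourMTC implementation, just a sketch. Matched or “zero delay” designs go one step further and also correct the response cramping such classic designs show towards the Nyquist frequency:

```python
# First-order low shelf via the bilinear transform - a textbook design.
# A 1st order shelf has no resonance by construction.
import numpy as np
from scipy.signal import freqz

def low_shelf_coeffs(fc, gain_db, fs):
    """b, a for a first-order low shelf: gain_db below fc, unity above."""
    g = 10.0 ** (gain_db / 20.0)
    k = np.tan(np.pi * fc / fs)          # prewarped cutoff
    b = np.array([1 + g * k, g * k - 1]) / (1 + k)
    a = np.array([1.0, (k - 1) / (1 + k)])
    return b, a

fs = 44100.0
b, a = low_shelf_coeffs(fc=200.0, gain_db=4.0, fs=fs)
w, h = freqz(b, a, worN=2048, fs=fs)
print(f"gain @ 20 Hz:  {20*np.log10(abs(h[np.argmin(abs(w-20))])):.2f} dB")
print(f"gain @ 20 kHz: {20*np.log10(abs(h[np.argmin(abs(w-20000))])):.2f} dB")
```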

Available for Windows VST in 32 and 64bit as freeware. Download your copy here.

that unique plate reverb sound

Unlike digital reverberation, the plate reverb is one of the true analog attempts at recreating convincing reverberation, built right into a studio device. It is basically an electro-mechanical device containing a plate of steel, transducers and a contact microphone to pick up the induced vibrations from that plate.

The sound is basically determined by the physical properties of the plate and its mechanical damping. It’s not about waves reflecting off the plate’s surface but about the propagation of waves within the plate. While the plate itself has a fixed, regularly shaped size and can be seen as a flat (two-dimensional) room, it does not produce early reflection patterns as we are used to from real rooms with solid walls. In fact, there are no such reflections distinguishable by human hearing. On the other hand, the onset is rather instant and the reverb build-up already has a very high modal density.
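
To give an idea of how dense those modes are: standard thin-plate theory for an idealized, simply supported steel plate gives the mode frequencies below. The dimensions and thickness are assumed, roughly EMT-140-like values, not measurements:

```python
# Modal density of an idealized, simply supported steel plate (thin-plate theory).
import numpy as np

E, rho, nu = 2.0e11, 7850.0, 0.3       # steel: Young's modulus, density, Poisson ratio
Lx, Ly, h  = 2.0, 1.0, 0.5e-3          # assumed plate size (m) and thickness (m)
D = E * h**3 / (12 * (1 - nu**2))      # bending stiffness

def mode_freq(m, n):
    """Natural frequency (Hz) of mode (m, n) of a simply supported plate."""
    return (np.pi / 2) * ((m / Lx)**2 + (n / Ly)**2) * np.sqrt(D / (rho * h))

f_limit = 1000.0
modes = [mode_freq(m, n) for m in range(1, 100) for n in range(1, 100)]
count = sum(f < f_limit for f in modes)
print(f"fundamental: {mode_freq(1, 1):.1f} Hz")
print(f"modes below {f_limit:.0f} Hz: {count} (~{count / f_limit:.1f} per Hz)")
```

With these assumed values the sketch lands at roughly one mode per Hz or more, which is why the onset is perceived as instant rather than as a series of discrete echoes.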

Also, reverb diffusion appears to be quite unique within the plate. Wave propagation through metal behaves differently compared to air (e.g. speed- and frequency-wise), and the plate itself – being a rather regular shape with a uniform surface and material – also defines the sound. This typically results in a very uniform reverb tail, although the higher frequencies tend to resonate a little bit more. Also, due to the physics and the damping of the plate, we usually do not hear very long decay times.
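
The speed difference can also be put into numbers: for bending waves in a thin plate the propagation speed grows with the square root of frequency, whereas in air it is constant. A small sketch using the same assumed plate values as above:

```python
# Bending-wave dispersion in a thin steel plate vs. sound in air.
import numpy as np

E, rho, nu, h = 2.0e11, 7850.0, 0.3, 0.5e-3    # same assumed steel plate as above
D = E * h**3 / (12 * (1 - nu**2))              # bending stiffness
c_air = 343.0                                   # speed of sound in air, m/s

def bending_speed(f):
    """Phase velocity (m/s) of bending waves in the plate at frequency f (Hz)."""
    return np.sqrt(2 * np.pi * f) * (D / (rho * h)) ** 0.25

for f in (100, 1000, 10000):
    print(f"{f:>5} Hz: plate {bending_speed(f):6.1f} m/s   air {c_air:.0f} m/s")
```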

All in all, the fast and consistent reverb build-up combined with its distinct tonality defines that specific plate reverb sound and explains why it is still so beloved even after decades. The lack of early reflections can easily be compensated for just by adding some upfront delay lines to improve stereo localization if a mix demands it. The other way around, the plate reverb makes a perfect companion for all kinds of delay effects.

the album is dead, long live the album

Just enjoyed listening to the new Röyksopp album as a whole. The album concept was declared dead ever so often during the last decade, but I hope we will see more music releases like this again.

lost & found

Stuff to share from the interwebs. All related to music making, sound design, audio production.

You know you’re getting old if you once were used to those:

  • The Museum of Endangered Sounds: Imagine a world where we never again hear the symphonic startup of a Windows 95 machine. Imagine generations of children unacquainted with the chattering of angels lodged deep within the recesses of an old cathode ray tube TV …
  • »Conserve the sound« is an online museum for vanishing and endangered sounds. The sound of a dial telephone, a walkman, an analog typewriter, a pay phone, a 56k modem, a nuclear power plant or even a cell phone keypad are partially already gone or are about to disappear from our daily life …
  • And also this:

Modern scoring: ‘Tenet’: Ludwig Göransson Put Chris Nolan’s Breath in His Score — and Rethought Composing Altogether

About the future of audio mastering: How AI is solving one of music’s most expensive problems

FerricTDS is about glueing things together and not about distortion (I already told you):

Weird sound devices:

everything just fades into noise at the end

When I faced artificial reverberation algorithms for the very first time, I just thought: why not simply dissolve the audio into noise over time to generate the reverb tail? It turned out to be not that easy, at least with the DSP knowledge and tools of that time. Today, digital reverb generation has come a long way and the research and toolsets available are quite impressive and diverse.
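
That naive idea can be spelled out in a few lines: build the tail by convolving the dry signal with exponentially decaying noise. A rough sketch, assuming SciPy and soundfile and a hypothetical mono input file:

```python
# The naive idea, literally: a reverb "tail" from exponentially decaying noise.
# No early reflections, no frequency-dependent damping - just a crude dissolve.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

def noise_tail_ir(fs, rt60=2.0, length_s=3.0, seed=0):
    """Stereo impulse response: white noise shaped by an exponential decay."""
    n = int(length_s * fs)
    t = np.arange(n) / fs
    decay = 10.0 ** (-3.0 * t / rt60)              # -60 dB after rt60 seconds
    rng = np.random.default_rng(seed)
    return rng.standard_normal((n, 2)) * decay[:, None]

x, fs = sf.read("dry.wav")                         # hypothetical mono input
ir = noise_tail_ir(fs)
wet = np.stack([fftconvolve(x, ir[:, ch]) for ch in range(2)], axis=1)
wet /= np.max(np.abs(wet))                         # crude normalization
sf.write("wet.wav", wet, fs)
```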

While the classic feedback delay network approaches got way more refined through improved diffusion generation, today’s increase in computational power can also smooth things out further just by brute force. Some HW vendors are still going this route. Sampling impulse responses from real spaces has also evolved over time, and some DSP convolution drawbacks like latency management have been successfully addressed and can be handled more easily given today’s CPUs.
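
For reference, the feedback delay network idea in its most bare-bones form might look like the sketch below: four delay lines fed back through an orthogonal Householder matrix, with per-line gains setting the decay. This is only a toy illustration; real designs add input/output diffusion, damping filters and modulation:

```python
# A bare-bones feedback delay network (FDN).
import numpy as np

def fdn(x, fs, delays_ms=(29.7, 37.1, 41.1, 43.7), rt60=2.0):
    N = len(delays_ms)
    lengths = [int(d * fs / 1000) for d in delays_ms]
    # per-line feedback gain so each loop decays by 60 dB in rt60 seconds
    gains = np.array([10.0 ** (-3.0 * L / (rt60 * fs)) for L in lengths])
    A = np.eye(N) - 2.0 / N                      # orthogonal Householder feedback matrix
    bufs = [np.zeros(L) for L in lengths]
    idx = [0] * N
    out = np.zeros(len(x))
    for n, s in enumerate(x):
        reads = np.array([bufs[i][idx[i]] for i in range(N)])
        out[n] = reads.sum()
        feedback = A @ (gains * reads)
        for i in range(N):
            bufs[i][idx[i]] = s + feedback[i]    # write input plus mixed feedback
            idx[i] = (idx[i] + 1) % lengths[i]   # advance the circular buffer
    return out
```

Even this toy version makes the point of the refinements mentioned above: with only four undiffused lines, the echo density builds up audibly slowly at first.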

Also, convolution is still the ticket whenever modeling a specific analog device (e.g. a plate or spring reverb) appears to be difficult, as long as the modeled part of the system is linear time invariant. To achieve even more accurate results there is still no way around physical modeling, but this usually requires a very sophisticated modeling effort. As in practice everything appears to be a tradeoff, it’s not that unusual to just combine different approaches: e.g. a reverb onset gets sampled/convolved but the reverb tail gets computed conventionally, or – the other way around – early reflections are modeled but the tail just resolves into convolved noise.

So, as we’ve learned now that everything just fades into noise at the end, it comes as no surprise that the almost 15-year-old epicVerb plugin now becomes legacy. However, it remains available to download for some (additional reverb) time. Go grab your copy as long as it’s not completely decayed; you’ll find it in the downloads legacy section here. There won’t be a MkII version, but something new is already in the making and will probably see the light of day in the not so far future. Stay tuned.

BootEQ mkIII released

BootEQ mkIII – a musical sounding Preamp/EQ

BootEQ mkIII is a musical sounding mixing EQ and pre-amplifier simulation. With its four parametric and independent EQ bands it offers specially selected, musical sounding asymmetric and proportional EQ curves capable of reproducing several ‘classic’ EQ curves and tones.

It provides further audio coloration capabilities utilizing pre-amplifier harmonic distortion as well as tube and transformer-style signal saturation. Within its mkIII incarnation, the Preamp itself contains an opto-style compression circuit providing a very distinct and consistent harmonic distortion profile over a wide range of input levels, all based now on a true stateful saturation model.
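
To make the term “stateful” a bit more tangible – and explicitly not as a description of BootEQ’s actual model – here is a generic sketch where an opto-like envelope follower scales the drive into a waveshaper, so the distortion depends on signal history rather than on the instantaneous sample alone:

```python
# Generic illustration of a "stateful" saturator (not BootEQ's actual model):
# a slow envelope follower scales the drive into a tanh waveshaper.
import numpy as np

def stateful_saturator(x, fs, attack_ms=10.0, release_ms=500.0, drive=2.0):
    a_att = np.exp(-1.0 / (attack_ms * 1e-3 * fs))
    a_rel = np.exp(-1.0 / (release_ms * 1e-3 * fs))
    env = 0.0
    y = np.empty_like(np.asarray(x, dtype=float))
    for n, s in enumerate(x):
        level = abs(s)
        coeff = a_att if level > env else a_rel      # asymmetric, opto-ish smoothing
        env = coeff * env + (1.0 - coeff) * level
        g = drive / (1.0 + drive * env)              # less drive when the envelope is hot
        y[n] = np.tanh(g * s) / np.tanh(g)           # normalized waveshaping
    return y
```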

Also, the EQ curve slopes have been revised, plugin calibration now takes place for better gain-staging and metering, and the plugin offers zero latency processing.

Available for Windows VST in 32 and 64bit as freeware. Download your copy here.

sustaining trends in audio land, 2022 edition

Forecasts are difficult, especially when they concern the future – Mark Twain

In last year’s edition about sustaining trends in audio land I covered pretty much everything from mobile and modular, DAW and DAW-less, up to retro outboard and ITB production trends. From my point of view, all points made so far are still valid. However, I neglected one or another topic, which I’ll now just add to that list.

The emergence of AI in audio production

What we can already see in the market is the emergence of some clever mixing tools aiming to solve very specific mixing tasks, e.g. resonance smoothing and spectral balancing. Tools like that might be based on deep learning or other smart and sophisticated algorithms. There is no common/strict “AI” definition, and we will see an increasing use of the “AI” badge even if it is only a marketing claim of superiority.

Some other markets are ahead in this area, so it might be a good idea to just look into them. For example, AI applications in the digital photography domain already range from smart assistance while taking the photo itself up to completely automated post-processing. There is AI eye/face detection in-camera, skin retouching, sky replacement and even complete picture development. Available for all kinds of devices, assisted or fully automated, and in all shades of quality and pricing.

Such technology not only shapes the production itself but also a market and business as a whole. For example, traditional gatekeepers might disappear because they are no longer necessary to create, edit and distribute things, but the market might also get flooded with mediocre content. To some extent we can see this already in the audio domain, and the emergence of AI within our production will just be an accelerator for all that.

The future of audio mastering

Audio mastering demands have already shifted slightly over the recent years. We’ve seen new requirements coming from streaming services, the album concept has become less relevant and there was (and still is) a strong demand for an increased loudness target. Also, the CD has been losing relevance but vinyl is back and has become a sustaining trend again, surprisingly. Currently Dolby Atmos is gaining some momentum, but the actual consumer market acceptance remains to be proven. I would not place my bet on that since this has way more implications (from a consumer point of view) than just introducing UHD as a new display standard.

Concerning the technical production, a complete ITB shift – as we’ve seen it in the mixing domain – has not been completed yet, but the new digital possibilities like dynamic equalizing or full spectrum balancing are slowly being adopted. All in all, audio mastering slowly evolves along with the ever-changing demands but remains surprisingly stable and sustaining as a business, and this will probably continue for the next (few) years.

Social Media, your constant source of misinformation

How To Make Vocals Sound Analog? Using Clippers For Clean Transparent Loudness. Am I on drugs now? No, I’ve just entered the twisted realm of social media. The place where noobs advise you on pro mixing tips and the reviews are paid. Everyone is an engineer here but it’s sooo entertaining. Only purpose: Attention. Currency: Clicks&Subs. TikTok surpassed YT regarding reach. Content half-life measured in hours. That DISLIKE button is gone. THERE IS NO HOPE.

The (over-) saturated audio plugin market and the future of DSP

Over the years, a vast variety of vendors and products has flooded the audio plugin market, offering literally hundreds of options to choose from. While this appears to be a good thing at first glance (increased competition leads to lower retail prices), it indeed has a number of implications to look at. The issues we should be most concerned about are the lack of innovation and the drop in quality. We will continue to see a lot of “me too” products as well as retro brands gilding their HW brands with yesterday’s SW tech.

Also, we can expect a trend of market consolidation which might appear in different shapes. Traditionally, this is about mergers and acquisitions, but today it’s way more prominently about successfully establishing a leading business platform. And this is why HW DSP will be dead in the long run: those vendors just failed at creating competitive business platforms. Other players have stepped in here already.