sustaining trends in audio land, 2023 edition
So, the year 2023 is slowly getting underway – time to take another look at the sustaining trends in audio land. Two of the 2022 themes have already been further confirmed and manifested – so let’s take a quick look. A third topic, however, has developed at an incredible speed and in an unbelievable way, to the surprise of all of us. But one thing at a time.
The (over-) saturated audio plugin market
A continuing trend towards market consolidation was to be expected as a result of a constantly oversaturated market, and indeed last year saw a whole series of merger and acquisition activities as well as new alliances. Involved in such activities were brands such as Slate Digital, Sonnox, Focusrite, Brainworx, Plugin Alliance, NI, iZotope and many more.
This is something quite normal in saturated markets and not a bad thing per se, but we might worry about a lack of innovation and diversity as a result. Alongside this, we will continue to see many companies late to the party offering “me too” products and retro brands gilding their HW brands with yesterday’s SW technology. The smarter companies will continue their efforts to successfully establish leading business platforms.
The future of HW DSP as a plugin platform
Since the HW DSP market has not succeeded in creating such a competitive (plug-in) business platform, we are currently witnessing the decline of this domain, and in the long run everything will be offered natively. Last year we saw some late movers, e.g. UA, also starting such transformations.
The emergence of AI in audio production
Of course, this was not only predictable but also announced, yet no one expected the extent and speed of its emergence over the past year. This applies first and foremost to its appearance in general, but also to its impact on the music domain in particular. This impact will be immense and dramatic, affecting not only tools and work processes, but also music culture and its economy. The effects will be very, very profound, similar to the way the internet entered all areas of our lives.
The current trend of emulating effect devices with deep learning seems less exciting in this context, as it is just yet another form of effect sampling where we might see little innovation. Much more exciting will be the impact on areas such as composition, mixing and mastering, but also music distribution and value creation in general. But that will be the subject of another detailed article in this Blog.
We live in exciting times.
Stay tuned!
sidechain linking techniques
How an audio compressor responds to stereo content depends largely on how the channel linking is implemented in the sidechain. This has a major influence on how the spatial representation of a stereo signal is preserved or even enhanced. The task of the compressor designer is to decide which technical design is most suitable for a given overall concept and to what extent the user can control the linkage when using the device.
In analog compressor designs, in addition to unlinked “dual mono” operation, one usually finds simple techniques such as summing both stereo channels (corresponding to the center of the stereo signal) or the extraction of the maximum levels of both channels using a comparator circuit implementing the mathematical term max(L,R).
More sophisticated designs improve this by making the linking itself frequency dependent, e.g. by linking the channels only within a certain frequency range. It is also common to adjust the amount of coupling from 0 to 100%, and the API 2500 hardware compressor serves as a good example of such frequency dependent implementation. For the low and mid frequency range, simple summing often works slightly better in terms of good stereo imaging, while for the mid to high frequency range, decoupling to some degree often proves to be a better choice.
The channel coupling can also be realized as RMS (or vector) summing, easily implemented as sqrt(L^2+R^2). As a bonus, this also elegantly solves the rectification problem and results in very consistent gain reduction across the actual level distributions that occur between two channels.
If, on the other hand, one wants to focus attention on correlated and uncorrelated signal components individually (both of which together make up a true stereo signal), then a mid/side decomposition in the sidechain is the ticket: a straightforward max(mid(L,R), side(L,R)) on the already rectified channels L and R responds to any kind of correlated signal in a very balanced way and also enhances its spatial representation.
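As a rough sketch (function names and the envelope smoother are my own; real designs add ballistics, weighting and frequency dependence), the linking strategies above can be written as per-sample detector functions on numpy arrays:

```python
import numpy as np

def detector_max(l, r):
    """Comparator-style linking: max(|L|, |R|)."""
    return np.maximum(np.abs(l), np.abs(r))

def detector_sum(l, r):
    """Simple summing -- responds to the mid (center) of the stereo signal."""
    return np.abs(l + r) * 0.5

def detector_rms(l, r):
    """RMS / vector summing: sqrt(L^2 + R^2).
    Squaring is sign-free, so this also handles rectification."""
    return np.sqrt(l ** 2 + r ** 2)

def smooth(env, a=0.01):
    """One-pole envelope smoother (my own addition; the coefficient
    is an arbitrary choice, real compressors use tuned ballistics)."""
    out = np.empty_like(env)
    acc = 0.0
    for i, e in enumerate(env):
        acc += a * (e - acc)
        out[i] = acc
    return out

def detector_ms(l, r):
    """Mid/side linking on the rectified channels: max of the smoothed
    mid and side envelopes responds to correlated and uncorrelated
    content individually."""
    la, ra = np.abs(l), np.abs(r)
    mid, side = 0.5 * (la + ra), 0.5 * np.abs(la - ra)
    return np.maximum(smooth(mid), smooth(side))
```

In a real compressor the detector output would feed the level computer; the point here is only how each strategy weighs the two channels.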
More advanced techniques usually combine the methods already described.
TesslaPRO mkIII released
the magic is where the transient happens
The Tessla audio plugin series once started as an homage to classic transformer-based circuit designs of the 50s and 60s, but without being just a clone stuck in the past. The PRO version has been made for mixing and mastering engineers working in the digital domain but missing that extra vibe delivered by some high-end analog devices.
TesslaPRO brings back the subtle artifacts from the analog right into the digital domain. It slightly colors the sound, polishes transients and creates depth and dimension in the stereo field to get that cohesive sound we’re after. All the analog goodness in subtle doses: it’s a mixing effect intended to be used here and there, wherever the mix demands it.
The mkIII version is a technical redesign, further refined to capture all those sonic details while reducing audible distortions at the same time. It further blurs the line between compression and saturation and also takes aural perception based effects into account.
Available for Windows VST in 32 and 64bit as freeware. Download your copy here.
epicPLATE released
epicPLATE delivers an authentic recreation of classic plate reverberation. It covers the fast and consistent reverb build up as well as that distinct tonality the plate reverb is known for and still so much beloved today. Its unique reverb diffusion makes it a perfect companion for all kinds of delay effects and a perfect fit not only for vocals and drums.

delivering that unique plate reverb sound
- Authentic recreation of classic plate reverberation.
- True stereo reverb processing.
- Dedicated amplifier stage to glue dry/wet blends together.
- Lightweight state-of-the-art digital signal processing.
Available for Windows VST in 32 and 64bit as freeware. Download your copy here.
The former epicVerb audio plugin is discontinued.
everything just fades into noise at the end
When I first encountered artificial reverberation algorithms, I thought: why not just dissolve the audio into noise over time to generate the reverb tail? It turned out not to be that easy, at least with the DSP knowledge and tools of that time. Today, digital reverb generation has come a long way and the research and toolsets available are quite impressive and diverse.
While the classic feedback delay network approaches have become way more refined through improved diffusion generation, today’s increase in computational power can also smooth things out further just by brute force. Some HW vendors are still going this route. Sampling impulse responses from real spaces has also evolved over time, and some drawbacks of DSP convolution, like latency management, have been successfully addressed and can be handled more easily given today’s CPUs.
Also, convolution is still the ticket whenever modeling a specific analog device (e.g. a plate or spring reverb) appears to be difficult, as long as the modeled part of the system is linear and time-invariant. To achieve even more accurate results there is still no way around physical modeling, but this usually requires a very sophisticated modeling effort. As everything in practice appears to be a tradeoff, it’s not unusual to just combine different approaches: e.g. the reverb onset gets sampled and convolved while the reverb tail gets computed conventionally, or – the other way around – early reflections are modeled but the tail just resolves into convolved noise.
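The "tail resolves into noise" idea can be sketched in a few lines (a toy example of my own, not any particular product’s algorithm): use exponentially decaying white noise as an impulse response and convolve it with the dry signal.

```python
import numpy as np

def noise_tail_reverb(x, sr=44100, rt60=2.0, seed=0):
    """Toy 'everything fades into noise' reverb: convolve the input
    with exponentially decaying white noise. rt60 is the time for
    the tail to decay by 60 dB."""
    rng = np.random.default_rng(seed)
    n = int(sr * rt60)
    t = np.arange(n) / sr
    # -60 dB after rt60 seconds -> amplitude factor 10^(-3 t / rt60)
    ir = rng.standard_normal(n) * 10.0 ** (-3.0 * t / rt60)
    ir /= np.sqrt(np.sum(ir ** 2))   # normalize impulse response energy
    return np.convolve(x, ir)
```

Without any diffusion or early-reflection modeling this sounds rather static, which is exactly why the hybrid approaches above exist.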
So, as we’ve now learned that everything just fades into noise at the end, it comes as no surprise that the almost 15-year-old epicVerb plugin now becomes legacy. However, it remains available to download for some (additional reverb) time. Go grab your copy as long as it’s not completely decayed – you’ll find it in the downloads legacy section here. There won’t be a MkII version, but something new is already in the making and will probably see the light of day in the not so far future. Stay tuned.
BootEQ mkIII released
BootEQ mkIII – a musical sounding Preamp/EQ
BootEQ mkIII is a musical sounding mixing EQ and pre-amplifier simulation. With its four parametric and independent EQ bands it offers specially selected, musical sounding asymmetric and proportional EQ curves capable of reproducing several ‘classic’ EQ curves and tones.
It provides further audio coloration capabilities utilizing pre-amplifier harmonic distortion as well as tube and transformer-style signal saturation. Within its mkIII incarnation, the Preamp itself contains an opto-style compression circuit providing a very distinct and consistent harmonic distortion profile over a wide range of input levels, all based now on a true stateful saturation model.
The EQ curve slopes have also been revised, the plugin is calibrated for better gain-staging and metering, and it now offers zero-latency processing.
Available for Windows VST in 32 and 64bit as freeware. Download your copy here.
sustaining trends in audio land, 2022 edition
Forecasts are difficult, especially when they concern the future – Mark Twain
In last year’s edition about sustaining trends in audio land I covered pretty much everything from mobile and modular, DAW and DAW-less, up to retro outboard and ITB production trends. From my point of view, all points made so far are still valid. However, I’ve neglected one or another topic, which I’ll now just add to that list.
The emergence of AI in audio production
What we can already see in the market is the emergence of some clever mixing tools aiming to solve very specific mixing tasks, e.g. resonance smoothing and spectral balancing. Tools like that might be based on deep learning or other smart and sophisticated algorithms. There is no common or strict definition of "AI", and we will see an increasing use of the "AI" badge merely as a marketing claim of superiority.
Some other markets are ahead in this area, so it might be a good idea to just look into them. For example, AI applications in the digital photography domain are already ranging from smart assistance during taking a photo itself up to complete automated post processing. There is AI eye/face detection in-camera, skin retouching, sky replacement and even complete picture development. Available for all kinds of devices, assisted or fully automated and in all shades of quality and pricing.
Such technology not only shapes the production itself but a market and business as a whole. For example, traditional gatekeepers might disappear because they are no longer necessary to create, edit and distribute things, but the market might also get flooded with mediocre content. To some extent we can see this already in the audio domain, and the emergence of AI within our production will just be an accelerator for all that.
The future of audio mastering
Audio mastering demands have already shifted slightly over recent years. We’ve seen new requirements coming from streaming services, the album concept has become less relevant, and there was (and still is) a strong demand for an increased loudness target. Also, the CD has been losing relevance, but vinyl is back and has, surprisingly, become a sustaining trend again. Currently Dolby Atmos is gaining some momentum, but actual consumer market acceptance remains to be proven. I would not place my bet on that, since this has way more implications (from a consumer point of view) than just introducing UHD as a new display standard.
Concerning the technical production, a complete ITB shift – as we’ve seen in the mixing domain – has not happened yet, but new digital possibilities like dynamic equalizing or full spectrum balancing are slowly being adopted. All in all, audio mastering slowly evolves along the ever changing demands but remains surprisingly stable and sustaining as a business, and this will probably continue for the next (few) years.
Social Media, your constant source of misinformation
How To Make Vocals Sound Analog? Using Clippers For Clean Transparent Loudness. Am I on drugs now? No, I’ve just entered the twisted realm of social media. The place where noobs give you pro mixing tips and the reviews are paid. Everyone is an engineer here, but it’s sooo entertaining. Only purpose: attention. Currency: clicks & subs. TikTok surpassed YT regarding reach. Content half-life measured in hours. That DISLIKE button is gone. THERE IS NO HOPE.
The (over-) saturated audio plugin market and the future of DSP
Over the years, a vast variety of vendors and products has flooded the audio plugin market, offering literally hundreds of options to choose from. While this appears to be a good thing at first glance (increased competition leads to lower retail prices), it has a number of implications to look at. The issues we should be most concerned about are the lack of innovation and the drop in quality. We will continue to see a lot of "me too" products as well as retro brands gilding their HW brands with yesterday’s SW tech.
Also, we can expect a trend of market consolidation, which might appear in different shapes. Traditionally this is about mergers and acquisitions, but today it’s way more prominently about successfully establishing a leading business platform. And this is why HW DSP will be dead in the long run: those vendors have simply failed to create competitive business platforms. Other players have already stepped in here.
the twisted world of guitar pedals II
Meanwhile I had the opportunity to put my hands on some of the Fairfield Circuitry effect pedal stuff mentioned earlier here, and the "Meet Maude" analog BBD delay was right here on my desk for a deeper inspection. My actual experience was rather mixed.
Focusing on a rather dark and LoFi sound quality on the one hand, plus a rather simplistic feature set concept-wise on the other, they do not appear to be very flexible in practice, and this at a rather steep price point. They appear to be very noisy, featuring all kinds of artifacts even when integrated into the mixing desk via reamping. One may call this a feature in itself, but in the end it makes it a one-trick pony. If you need exactly that, here you have it, but you get nothing beyond that. To me this tradeoff was too big, and so I sent it back.
However, I found their nifty low pass gate implementation (featured very prominently within their "Shallow Water") so unique and interesting that I replicated it as a low pass filter alternative in software, to have it available e.g. for filtering delay lines in my productions. The "Shallow Water" box almost made me pull the trigger, but all in all I think this stuff is a little bit over-hyped thanks to the interwebs. This pretty much sums it up for now – end of this affair.
Timeline & BigSky – The new dust collectors?
Going into the exact opposite direction might be a funny idea and so I grabbed some Strymon stuff which aims to be the jack of all trades at least regarding digital delay and reverb in a tiny stomp box aka desktop package. To be continued …
Further readings about BBD delays:
interview series (12) – Daniel Weiss
First of all, congrats on your Technical Grammy Award this year! Daniel, you’ve once started DSP developments during the early days of digital audio. What was the challenge to that time?
Thank you very much, Herbert.
Yes, I started doing digital audio back in 1979 when I joined Studer-Revox. In that year Studer started their digital audio lab with a group of newly employed engineers. At that time there were no DSPs or CPUs with enough power to do audio signal processing. We used multiplier and adder chips from the 74 chip series and/or those large multiplier chips they used in military applications. We applied the "distributed arithmetic" technique – very efficient, but compared to today’s processors very inflexible.
The main challenges regarding audio applications were:
- A/D and D/A converters had to be designed with audio in mind.
- Digital audio storage had to rely on video tape recorders with their problems.
- Signal processing was hardware coded, i.e. very inflexible.
- DAWs as we know them today were not feasible due to the lack of speedy processors and the lack of large hard disks. (The size of the first hard disks started at about 10 MByte…)
- Lack of any standards. Sampling frequencies, word lengths and interfaces were not standardized back then.
Later the TMS32010 DSP from TI became available – a very compromised DSP, hardly usable for pro audio.
And a bit later I was able to use the DSP32 from AT&T, a floating point DSP which changed a lot for digital audio processing.
What makes such a converter design special with regard to audio, and was the DSP math as we know it today already in place, or was that also something rather emerging at that time?
The A/D and D/A converters back then had the problem that they either were not fast enough for audio sampling frequencies (like 44.1 kHz) and/or their resolution was not high enough, i.e. not 14 bits or higher.
There were some A/D and D/A modules available which were able to do digital audio conversion, but those were very expensive. One of the first (I think) audio-specific D/A converters was the Philips TDA1540, a 14 bit converter whose linearity is better than 14 bit. So we were able to enhance the TDA1540 by adding an 8 bit converter chip to generate two more bits, for a total of about 16-bit conversion quality.
The DSP math was the same as it is today – mathematics is still the same, right? And digital signal processing is applied mathematics using the binary numbering system. The implementation of adders and multipliers differed from today’s approaches to some extent, though. The "distributed arithmetic" I mentioned, for instance, worked with storage registers, shift registers, a lookup table in ROM and an adder/storage register to implement a complete FIR filter. The multiplication was done via the ROM content, with the audio data being the addresses of the ROM and the output of the ROM being the result after the multiplication.
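A software sketch of the distributed arithmetic scheme Daniel describes (my own illustration, not Studer’s actual hardware): the ROM holds every possible partial sum of the filter taps, and the input samples’ bits, one bit position per step, address it while an accumulator shifts and adds.

```python
def da_fir_output(coeffs, samples, bits=8):
    """One output sample of an FIR filter via distributed arithmetic.
    coeffs: the filter taps (floats); samples: the last len(coeffs)
    inputs as signed integers fitting into `bits`-bit two's complement.
    Note the ROM grows as 2^len(coeffs), which is why the technique
    suits short filters or partitioned ones."""
    k = len(coeffs)
    # ROM entry a = sum of coeffs[i] for every bit i set in address a.
    rom = [sum(c for i, c in enumerate(coeffs) if (a >> i) & 1)
           for a in range(1 << k)]
    acc = 0.0
    for b in range(bits):
        # Collect bit position b of every sample into one ROM address.
        addr = 0
        for i, s in enumerate(samples):
            addr |= ((s >> b) & 1) << i
        weight = 2 ** b
        if b == bits - 1:      # two's complement sign bit: negative weight
            weight = -weight
        acc += rom[addr] * weight
    return acc
```

The result is exactly the dot product sum(c_i * s_i), computed with nothing but table lookups, shifts and additions.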
An explanation is given here: http://www.ee.iitm.ac.in/vlsi/_media/iep2010/da.pdf
Other variants to do DSP used standard multiplier and adder chips which were cascaded for higher word lengths. But the speed of those chips was rather compromised compared to today’s processors.
Was there still a need to work around such word-length and sample rate issues when you designed and manufactured the very first digital audio equipment under your own brand? The DS1 compressor already introduced 96kHz internal processing right from the start, as far as I remember. What were the main reasons for 96kHz processing?
When I started at Studer the sampling frequencies were all over the place. No standards yet. So we did a universal sampling frequency converter (Studer SFC16) which also had custom-built interfaces, as those hadn’t been standardized either. No AES/EBU, for instance.
Later when I started Weiss Engineering the 44.1 and 48 kHz standards had already been established. We then also added 88.2 / 96kHz capabilities to the modular bw102 system, which was what we had before the EQ1, DS1 units. It somehow became fashionable to do high sampling frequencies. There are some advantages to that, such as a higher tolerance to non-linearly treated signals or less severe analog filtering in converters.
The mentioned devices were critically acclaimed not only by mastering engineers over the years. What makes them so special? Is it the transparency or some other distinct design principle? And how to achieve that?
There seems to be a special sound with our devices. I don’t know exactly what the reason for that is. Generally we try to make the units technically as good as possible, i.e. low noise, low distortion, etc.
It seems that this approach helps when it comes to sound quality….
And maybe our algorithms are a bit special. People sometimes think that digital audio is a no brainer – there is that cookbook algorithm I implement and that is it. But in fact digital offers as many variants as analog does. Digital is just a different representation of the signal.
Since distortion is such a delicate matter within the design of a dynamics processor: can you share some insights about managing distortion in such a (digital) device?
The dynamic processor is a level controller where the level is set by a signal which is generated out of the audio signal. So it is an amplitude modulator which means that sidebands are generated. The frequency and amplitude of the sidebands depend on the controlling signal and the audio signal. Thus in a worst case it can happen that a sideband frequency lies above half the sampling frequency (the Nyquist frequency) and thus gets mirrored at the Nyquist frequency. This is a bad form of distortion as it is not harmonically related to the audio signal.
This problem can be solved to some extent by raising the sampling frequency (e.g. doubling it) before the dynamic processing is applied, such that the Nyquist frequency is also doubled.
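A small numpy experiment (my own illustration of the effect described, with an idealized sinusoidal gain signal) makes the mirroring visible: modulate a 20 kHz tone with a 6 kHz gain signal at 48 kHz sampling rate, and the upper sideband at 26 kHz folds back down to 22 kHz. Running the same modulation at a doubled sampling rate would keep that sideband below the new Nyquist frequency.

```python
import numpy as np

fs = 48000
n = 4800                      # 0.1 s, so all tones fall on exact FFT bins
t = np.arange(n) / fs
f_audio, f_mod = 20000.0, 6000.0

# Amplitude modulation as an idealized gain-control signal:
audio = np.sin(2 * np.pi * f_audio * t)
gain = 1.0 + 0.5 * np.sin(2 * np.pi * f_mod * t)
y = gain * audio

spec = np.abs(np.fft.rfft(y)) / n
freqs = np.fft.rfftfreq(n, 1 / fs)

def level(f):
    """Spectrum magnitude at the bin closest to frequency f."""
    return spec[np.argmin(np.abs(freqs - f))]

# Sidebands appear at f_audio +/- f_mod; the upper one (26 kHz) exceeds
# Nyquist (24 kHz) and mirrors down to 48 - 26 = 22 kHz, inharmonically.
print(level(14000.0), level(22000.0))
```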
Another problem in dynamics processors is peak detection. With high frequency peaks, the actual peak can be positioned between two consecutive samples and thus go undetected, because the processor only sees the actual samples. This problem can be solved to some extent by upsampling the sidechain (where the peak detection takes place) to e.g. 2 or 4 times the audio sampling frequency. This then allows for a kind of "true peak" measurement.
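The inter-sample peak problem is easy to reproduce (a numpy sketch under my own assumptions, not the DS1’s actual detector): a sine at a quarter of the sampling rate, phase-shifted so that every sample reads about -3 dBFS while the true waveform peaks at 0 dBFS. Band-limited upsampling in the sidechain recovers the hidden peak.

```python
import numpy as np

def upsample_fft(x, factor):
    """Band-limited upsampling by zero-padding the spectrum
    (a simple stand-in for a polyphase upsampler)."""
    n = len(x)
    spec = np.fft.rfft(x)
    padded = np.zeros(factor * n // 2 + 1, dtype=complex)
    padded[: len(spec)] = spec
    return np.fft.irfft(padded, factor * n) * factor

fs = 48000
t = np.arange(480) / fs
# Sine at fs/4 with 45 degrees phase offset: every true peak of 1.0
# falls exactly between two samples, which all read ~0.707.
x = np.sin(2 * np.pi * (fs / 4) * t + np.pi / 4)

sample_peak = np.max(np.abs(x))                  # what a naive detector sees
true_peak = np.max(np.abs(upsample_fft(x, 4)))   # after 4x sidechain upsampling
```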
Your recent move from DSP hardware right into the software plugin domain should not have been that much of a thing. Or was it?
Porting a digital unit to a plug-in version is somewhat simpler compared to the emulation of an analog unit.
But the porting of our EQ1 and DS1 units was still fairly demanding, though. The software of five DSPs and a host processor had to be ported to the computer platform. The Softube company did that for us.
Of course we tried to achieve a 1:1 porting, such that the hardware and the plugin would null perfectly. This is almost the case. There are differences in the floating point format between DSPs and computer, so it is not possible to get absolutely the same – unless one would use fixed point arithmetic; which we do not like to use for the applications at hand.
The plugin versions in addition have more features because the processing power of a computer CPU is much higher than the five (old) DSPs the hardware uses. E.g. the sampling frequency can go up to 192kHz (hardware: 96kHz) and the dynamics EQ can be dynamic in all seven bands (hardware: 4 bands maximum).
Looking into the future of dynamic processing: Do you see anything new on the horizon or just the continuation of recent trends?
We at Weiss Engineering haven’t looked into the dynamics processing world recently. Probably one could do some more intelligent approaches than the current dynamics processors use. Like e.g. look at a whole track and decide on that overview what to do with the levels over time. Also machine learning could help – I guess some people are working in that direction regarding dynamics processing.
From your point of view: Will the loudness race ever come to an end and can we expect a return of more fidelity back into the consumer audio formats?
The streaming platforms help in getting the loudness race to a more bearable level. Playlists across a whole streaming platform should have tracks in them with a similar loudness level for similar genres. If one track sticks out it does not help. Some platforms luckily take measures in that direction.
Daniel, do you use any analog audio equipment at all?
We may have a reputation in digital audio, but we do analog as well. A/D and D/A converters are mostly analog and our A1 preamp has an analog signal path. Plus more analog projects are in the pipeline…
Related Links
- Weiss Engineering Ltd.
- interview series (1) – Fabien from TDR
- interview series (2) – Nico from BigTone
- interview series (3) – Tony from Klanghelm
- interview series (4) – Bob Olhsson
- interview series (5) – Dave Hill
- interview series (6) – Christopher Dion
- interview series (7) – Dave Gamble
- interview series (8) – Sascha Eversmeier
- interview series (9) – D.W. Fearn
- interview series (10) – Vladislav Goncharov
- interview series (11) – Andreas Eschenwecker