why the Thrillseeker compressors complement each other so well

Audio compressors use either a “feed forward” or a “feedback” design to control the gain of an audio signal. In a feed-forward compressor, the input signal is used directly to control the gain of the output signal: the compressor compares the input signal to a threshold and reduces the output gain whenever the input exceeds that threshold. In a feedback compressor, the output signal is fed back into the detector and used to control the gain applied to the input signal: the compressor compares the output signal to a threshold and reduces the gain whenever the output exceeds it. Both topologies can be effective at controlling the dynamic range of an audio signal, but they respond in slightly different ways and have different characteristics in terms of sound and behavior.
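To make the distinction concrete, here is a minimal Python sketch of the two topologies, assuming a simple static threshold/ratio gain computer with instantaneous detection and no attack/release smoothing – purely illustrative, not the code of any particular device:

    import numpy as np

    def gain_computer(level_db, threshold_db=-20.0, ratio=4.0):
        # static gain computer: dB of gain reduction for a detected level
        over = np.maximum(level_db - threshold_db, 0.0)
        return -over * (1.0 - 1.0 / ratio)

    def feed_forward(x, **kw):
        # feed forward: the gain is derived from the INPUT signal
        level_db = 20 * np.log10(np.abs(x) + 1e-12)
        return x * 10 ** (gain_computer(level_db, **kw) / 20)

    def feedback(x, **kw):
        # feedback: the gain is derived from the already processed OUTPUT,
        # so the detector sees its own gain reduction (one sample late here)
        y = np.zeros_like(x)
        g_db = 0.0
        for n in range(len(x)):
            y[n] = x[n] * 10 ** (g_db / 20)
            g_db = gain_computer(20 * np.log10(abs(y[n]) + 1e-12), **kw)
        return y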

However, the specific sound of a device depends largely on other aspects of the circuit design and its components. For example, an opto-electrical compressor uses a photoresistor or photodiode to detect the signal and control the degree of gain reduction, but the make-up amplifier that follows may contribute the most to the sound, depending on its design (tube or solid state). A variable-mu (variable gain) tube compressor, on the other hand, uses a vacuum tube to control the gain: the tube amplifies the signal, and the gain is controlled by changing the bias voltage of the tube. This alone produces a distinctive sound that is rich in harmonic overtones.

Both opto-electrical and variable-mu tube compressors are commonly used in audio production to control the dynamic range of a signal, but they operate in different ways and can produce different tonal characteristics. Opto-electrical compressors are known for their fast attack times and smooth release characteristics, while variable-mu tube compressors are known for their warm and smooth sound.

what is a “box tone”?

“Box tone” is a term that is often used to describe the characteristic sound of a particular piece of audio equipment, particularly when it comes to classic analog effects devices such as equalizers and compressors.

The box tone of an effect is often described as the unique timbre or tonal coloration that the device imparts to the audio signal as it passes through it. This can be due to a variety of factors, including the type and quality of the components used in the device, the design of the circuitry, and the way the device processes the signal.

Some audio engineers and producers may seek out specific box tones for their recordings and mixes, as they can add character and depth to the sound. Others may prefer a more neutral or transparent sound, in which case they may choose equipment that has a more subtle or less noticeable box tone.

It’s important to note that the term “box tone” is often used informally and can be somewhat subjective, as different people may have different opinions on what constitutes a distinctive or desirable box tone.

ThrillseekerLA mkII released

ThrillseekerLA mkII – bringing mojo back

ThrillseekerLA is an optical stereo compressor optimized for gentle mix bus coloring. It combines the smoothest optical compression with vibrant coloration options that deliver a unique box tone in their own right, including thrilling bass and an elegant top end free of any harshness in the mids. Its compression not only glues things together effortlessly but also enhances the stereo image by increasing depth and dimension.

10 years after – new in version 2:

  • Technical redesign with advanced opto cell emulation
  • Simplified gainstaging including automatic output gain compensation
  • Streamlined coloring options: Interstage, Tube and Loudness
  • New compress/limit option and reworked sidechain filtering

The mkII update is available for Windows VST in 32 and 64bit as freeware. Download your copy here.

the beauty of opto-electrical compression – volume 2

When I was looking for a sophisticated stereo compressor for the outboard studio rack a year ago, I was surprised to see how many of the more interesting models now use opto-electric compression technology. Whether transparent or coloring, tube or solid-state amplifiers, transformer or transformerless, even two-channel layouts in mid/side encoding: far advanced compared to all the classic mono replicas.

Optical compressors are usually characterized by their distinct program-dependent compression behavior, mainly based on a physical memory effect in the detector itself. Other subtle nuances are found across the frequency spectrum that affect timing and curve characteristics, creating a complexity that cannot be reduced to simple two-stage controlled release curves, and which is the beauty of opto-electrical compression in its entirety.
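As a toy illustration of such a memory effect – explicitly not ThrillseekerLA’s actual opto model – here is a Python sketch in which the release time grows with how long and how deeply the cell has recently been driven, producing a program-dependent release:

    import numpy as np

    def toy_opto_gr(target_gr_db, fs=48000.0):
        # toy opto detector: fast attack, while a slow "memory" of recent
        # gain reduction stretches the release the harder the cell is driven
        target_gr_db = np.asarray(target_gr_db, dtype=float)
        out = np.zeros_like(target_gr_db)
        state, memory = 0.0, 0.0
        attack = 1.0 - np.exp(-1.0 / (0.010 * fs))    # ~10 ms attack
        mem_c = 1.0 - np.exp(-1.0 / (2.0 * fs))       # ~2 s memory integrator
        for n, target in enumerate(target_gr_db):
            memory += mem_c * (state - memory)             # remember recent GR
            t_rel = 0.060 + 2.0 * min(memory / 10.0, 1.0)  # 60 ms ... ~2 s release
            release = 1.0 - np.exp(-1.0 / (t_rel * fs))
            coeff = attack if target > state else release
            state += coeff * (target - state)
            out[n] = state
        return out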

Significant audio signal colorations, however, are shaped not by the gain reduction circuitry but by the make-up gain amplifier, whether it is tube or solid-state. Here, the audio transformer also plays an important role in polishing the transients and creating a cohesive sound.

ThrillseekerLA was designed from the beginning in 2012 as a modern stereo compressor with exciting sound coloring possibilities. It is a compressor with authentic opto-electric control behavior in feed-forward circuit topology.

The upcoming mkII update is a technical redesign dedicated solely to improving the sound. It delivers a unique box tone with thrilling bass and elegant top end void of any harshness in the mids. The compression not only glues everything together effortlessly, but also enhances the stereo image by adding depth and dimension.

The release is scheduled for mid-December.

bringing mojo back – volume 2

ThrillseekerVBL is an emulation of a vintage broadcast limiter design that follows the classic Variable-Mu design principles from the early 1950s. These tube-based devices were initially used to prevent audio overloads in broadcast transmission by managing sudden level changes in the audio signal. From today’s perspective, and compared to digital dynamics processors, they appear rather slow and can be considered more of a gain-structure leveler. However, they still shine when it comes to gain riding in a very musical way – they have warmth and mojo written all over them.

ThrillseekerVBL is a modded version that not only features basic gain control but also gives detailed access to both the compression behavior and the characteristics of the tube circuit’s saturation effects. Used in subtle doses, this provides the analog magic we so often miss when working in the digital domain, while overdriving the circuit yields much more drastic musical textures as a creative effect.

ThrillseekerVBL offers an incredibly authentic audio transformer simulation that models not only the typical low-frequency harmonic distortion, but also all the frequency- and load-dependent subtleties that occur in a transformer-coupled tube circuit and that contribute to the typical mojo we know and love from the analog classics.

new in version 2

Conceptually, the mkII version has been refined so that peak limiting itself is no longer the main task but rather versatile, musically expressive gain control and a thrilling saturation experience. The saturation is now an integral part of the compression and is perfectly suited for processing transient-rich material. Both compression and saturation can be individually activated and controlled.

The circuit-related frequency loss in the highs has been almost eliminated and the brilliance control – originally intended just for compensation – can now also perform exciter-like tasks. The bias control has been extended to shape the harmonic spectrum in much greater detail by allowing the contribution of second order harmonics as well as the adjustment of the saturation behavior in the transient area of the signals. The transformer circuit has also been technically revised not only to resolve all the subtleties realistically but also to reproduce an overall tighter sound image.

ThrillseekerVBL has become a real tonebox, able to reproduce a wide range of tonalities. It provides access to the attack and release behavior and all compression controls can also affect the saturation of the signal, even when the compression function is turned off. This allows specific textures of signal saturation to be realized. As with the good old outboard devices, the desired sound colorations can be achieved just by controlling the working range. And if too much of a good thing is used, the DRY/WET control simply shifts down a gear.

To further improve the user experience, some additional UI elements have been added to give more visual feedback. Although oversampling has been added, the actual CPU load was significantly reduced thanks to efficient algorithms and assembler code optimizations.

ThrillseekerVBL mkII will be released October 14th for Windows VST in 32 and 64bit as freeware.

sidechain linking techniques

How an audio compressor responds to stereo content depends largely on how the channel linking is implemented in the sidechain. This has a major influence on how the spatial representation of a stereo signal is preserved or even enhanced. The task of the compressor designer is to decide which technical design is most suitable for a given overall concept and to what extent the user can control the linkage when using the device.

In analog compressor designs, in addition to unlinked “dual mono” operation, one usually finds simple techniques such as summing both stereo channels (corresponding to the center of the stereo signal) or the extraction of the maximum levels of both channels using a comparator circuit implementing the mathematical term max(L,R).

More sophisticated designs improve on this by making the linking itself frequency dependent, e.g. by linking the channels only within a certain frequency range. It is also common to adjust the amount of coupling from 0 to 100%, and the API 2500 hardware compressor serves as a good example of such a frequency-dependent implementation. For the low and mid frequency range, simple summing often works slightly better in terms of good stereo imaging, while for the mid to high frequency range, decoupling to some degree often proves to be a better choice.

The channel coupling can also be implemented as RMS (or vector) summing, which can easily be realized by sqrt(L^2+R^2). As an added bonus, this also elegantly solves the rectification problem and results in very consistent gain reduction across the actual level distributions that occur between the two channels.
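As a small Python illustration of the linking rules mentioned so far – summing, max(L,R) and RMS summing – together with a simple 0–100% link amount; the function and parameter names are just placeholders:

    import numpy as np

    def sidechain_level(L, R, mode="rms", link=1.0):
        # per-sample detector levels for a stereo pair
        # link = 0.0 -> dual mono, link = 1.0 -> fully linked
        L, R = np.abs(L), np.abs(R)
        if mode == "sum":                  # summing both channels (the "center")
            linked = 0.5 * (L + R)
        elif mode == "max":                # comparator-style max(L, R)
            linked = np.maximum(L, R)
        elif mode == "rms":                # vector sum sqrt(L^2 + R^2)
            linked = np.sqrt(L**2 + R**2)
        else:
            raise ValueError(mode)
        # crossfade between unlinked (per-channel) and linked detection
        det_L = (1.0 - link) * L + link * linked
        det_R = (1.0 - link) * R + link * linked
        return det_L, det_R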

If, on the other hand, one wants to focus attention on the correlated and uncorrelated signal components individually (both of which together make up a true stereo signal), then a mid/side decomposition in the sidechain is the ticket: a straightforward max(mid(L,R), side(L,R)) on the already rectified channels L and R is able to respond to any kind of correlated signal not only in a very balanced way but also to enhance its spatial representation.
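And a minimal sketch of that mid/side variant, again purely illustrative:

    import numpy as np

    def ms_sidechain_level(L, R):
        # max(|mid|, |side|) computed on the already rectified channels
        L, R = np.abs(L), np.abs(R)
        mid = 0.5 * (L + R)
        side = 0.5 * (L - R)
        return np.maximum(mid, np.abs(side))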

More advanced techniques usually combine the methods already described.

next level saturation experience & still missing VoS plugins

The magic is where the transient happens.

For a year or so now I have not just been updating my audio plugin catalog but also focusing on bringing the original Stateful Saturation approach to the next level. That concept was introduced back in 2010, embracing the fact that most analog circuit saturation is not static but a frequency- and load-dependent matter, which can best be modeled by describing a system state – hence the name Stateful Saturation.

The updated 2022 revision is now in place and has been further refined regarding the handling of audio transient states while reducing audible distortion at the same time. It further blurs the line between compression and saturation and also takes effects based on aural perception into account. This was profoundly influenced by working with audio exciters over recent years, but also by diving deeper into the field of psychoacoustics.
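To make the general idea tangible – and only that, this is a generic toy, not the actual Stateful Saturation algorithm – here is a sketch contrasting a static waveshaper with a saturator whose drive depends on a smoothed system state, so that transients and sustained material saturate differently:

    import numpy as np

    def static_saturator(x):
        # memoryless waveshaper: output depends only on the current sample
        return np.tanh(np.asarray(x, dtype=float))

    def stateful_saturator(x, fs=48000.0, drive=1.0):
        # toy state-dependent saturator: the drive into the shaper is
        # modulated by a low-passed "load" state (~5 ms), so transients
        # saturate differently than sustained material
        x = np.asarray(x, dtype=float)
        y = np.zeros_like(x)
        load = 0.0
        coeff = 1.0 - np.exp(-1.0 / (0.005 * fs))
        for n, s in enumerate(x):
            load += coeff * (abs(s) - load)           # the system state
            y[n] = np.tanh(drive * s / (1.0 + load))  # less drive under load
        return y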

This important update was also the reason why I held back some of the plugin updates, namely TesslaPRO and the Thrillseeker compressors, since they rely heavily on that framework. Meanwhile, TesslaPRO has already been rewritten on top of the framework update and will be released in early September. ThrillseekerLA and VBL are in the making and scheduled for Q4.

interview series (12) – Daniel Weiss

First of all, congrats on your Technical Grammy Award this year! Daniel, you started out with DSP development during the early days of digital audio. What were the challenges at that time?

Thank you very much, Herbert.

Yes, I started doing digital audio back in 1979 when I joined Studer-Revox. In that year Studer started their digital audio lab with a group of newly employed engineers. At that time there were no DSPs or CPUs with enough power to do audio signal processing. We used multiplier and adder chips from the 74 chip series and/or those large multiplier chips used in military applications. We applied the “distributed arithmetic” technique – very efficient, but very inflexible compared to today’s processors.

The main challenges regarding audio applications were:

  • A/D and D/A converters had to be designed with audio in mind.
  • Digital audio storage had to rely on video tape recorders with their problems.
  • Signal processing was hardware coded, i.e. very inflexible.
  • DAWs as we know them today were not feasible due to the lack of fast processors and large harddisks. (The size of the first harddisks started at about 10 MByte…)
  • Lack of any standards: sampling frequencies, wordlengths and interfaces had not been standardized back then.

Later the TMS32010 DSP from TI became available – a very compromised DSP, hardly useable for pro audio.

And a bit later I was able to use the DSP32 from AT&T, a floating point DSP which changed a lot for digital audio processing.

What makes such a converter design special with regard to audio, and was the DSP math as we know it today already in place, or was that also something that was still emerging at that time?

The A/D and D/A converters back then had the problem that they were not fast enough for audio sampling frequencies (like 44.1 kHz) and/or their resolution was not high enough, i.e. not 14 bits or higher.

There were some A/D and D/A modules available which were able to do digital audio conversion, but those were very expensive. One of the first (I think) audio-specific D/A converters was the Philips TDA1540, which is a 14-bit converter but has a linearity better than 14 bits. So we were able to enhance the TDA1540 by adding an 8-bit converter chip to generate two more bits, for a total of about 16 bits of conversion quality.

The DSP math was the same as it is today – mathematics is still the same, right? And digital signal processing is applied mathematics using the binary numbering system. The implementation of adders and multipliers differed to some extent from today’s approaches, though. The “distributed arithmetic” I mentioned, for instance, worked with storage registers, shift registers, a lookup table in ROM and an adder/storage register to implement a complete FIR filter. The multiplication was done via the ROM content, with the audio data being the addresses of the ROM and the output of the ROM being the result after the multiplication.

An explanation is given here: http://www.ee.iitm.ac.in/vlsi/_media/iep2010/da.pdf
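As a rough Python model of the idea – limited to a small filter with unsigned integer samples; the real hardware handled two’s complement and used shift registers – the multiplications indeed disappear into a ROM lookup plus shift-and-add:

    import numpy as np

    def build_da_rom(h):
        # 2^N-entry lookup table: each address selects a subset of taps,
        # the entry stores the sum of those coefficients
        N = len(h)
        rom = np.zeros(2 ** N)
        for addr in range(2 ** N):
            rom[addr] = sum(h[k] for k in range(N) if (addr >> k) & 1)
        return rom

    def da_fir(x, h, bits=8):
        # bit-serial distributed-arithmetic FIR for unsigned integer input:
        # one ROM lookup and one shift/add per input bit, no multipliers
        N, rom = len(h), build_da_rom(h)
        y = np.zeros(len(x))
        for n in range(len(x)):
            acc = 0.0
            for b in range(bits):              # one bit plane at a time
                addr = 0
                for k in range(N):             # bit b of each delayed sample
                    sample = int(x[n - k]) if n - k >= 0 else 0
                    addr |= ((sample >> b) & 1) << k
                acc += rom[addr] * (1 << b)    # shift-and-add accumulation
            y[n] = acc
        return y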

Other variants for doing DSP used standard multiplier and adder chips, which were cascaded for higher word lengths. But the speed of those chips was rather compromised compared to today’s processors.

Was there still a need to work around such word-length and sample-rate issues when you designed and manufactured the very first digital audio equipment under your own brand? The DS1 compressor already offered 96kHz internal processing right from the start, as far as I remember. What were the main reasons for 96kHz processing?

When I started at Studer the sampling frequencies were all over the place – no standards yet. So we built a universal Sampling Frequency Converter (Studer SFC16), which also had custom-built interfaces, as those hadn’t been standardized either. No AES/EBU, for instance.

Later, when I started Weiss Engineering, the 44.1 and 48 kHz standards had already been established. We then also added 88.2/96kHz capabilities to the modular bw102 system, which was what we had before the EQ1 and DS1 units. It somehow became fashionable to do high sampling frequencies. There are some advantages to that, such as a higher tolerance to non-linearly treated signals or less severe analog filtering in converters.

The mentioned devices have been critically acclaimed over the years, and not only by mastering engineers. What makes them so special? Is it the transparency or some other distinct design principle? And how do you achieve that?

There seems to be a special sound with our devices. I don’t know what exactly the reason is for that. Generally we try to build the units technically as well as possible, i.e. low noise, low distortion, etc.
It seems that this approach helps when it comes to sound quality….
And maybe our algorithms are a bit special. People sometimes think that digital audio is a no brainer – there is that cookbook algorithm I implement and that is it. But in fact digital offers as many variants as analog does. Digital is just a different representation of the signal.

Since distortion is such a delicate matter within the design of a dynamic processor: can you share some insights about managing distortion in such a (digital) device?

The dynamic processor is a level controller where the level is set by a signal which is generated out of the audio signal. So it is an amplitude modulator which means that sidebands are generated. The frequency and amplitude of the sidebands depend on the controlling signal and the audio signal. Thus in a worst case it can happen that a sideband frequency lies above half the sampling frequency (the Nyquist frequency) and thus gets mirrored at the Nyquist frequency. This is a bad form of distortion as it is not harmonically related to the audio signal.
This problem can be solved to some extent by raising the sampling frequency (e.g. doubling it) before the dynamic processing is applied, so that the Nyquist frequency is also doubled.
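A sketch of that idea in Python using scipy’s polyphase resampler (a generic illustration, not the DS1 implementation): upsample, apply the time-varying gain at the higher rate, then downsample again so that the decimation filter removes sidebands that would otherwise fold back:

    import numpy as np
    from scipy.signal import resample_poly

    def apply_gain_oversampled(x, gain, oversample=2):
        # apply a time-varying gain at a higher sampling rate so that
        # modulation sidebands land below the raised Nyquist frequency
        # and are filtered out again on the way back down
        x_up = resample_poly(x, oversample, 1)
        g_up = resample_poly(gain, oversample, 1)   # smooth the control signal too
        y_up = x_up * g_up                          # amplitude modulation at the high rate
        return resample_poly(y_up, 1, oversample)   # anti-alias filter + decimation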

Another problem in dynamics processors is the peak detection. With high-frequency peaks, the actual peak can fall between two consecutive samples and thus go undetected, because the processor only sees the actual samples. This problem can be solved to some extent by upsampling the sidechain (where the peak detection takes place) to e.g. 2 or 4 times the audio sampling frequency. This then allows for a kind of “true peak” measurement.
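And a matching sketch for the sidechain, again just an illustration: peak-picking on a 4x oversampled copy approximates such a “true peak” measurement. A sine at a quarter of the sampling rate with a 45° phase offset is the classic case where the per-sample peak underestimates the real peak:

    import numpy as np
    from scipy.signal import resample_poly

    def true_peak(x, oversample=4):
        # approximate the inter-sample ("true") peak level by peak-picking
        # on an oversampled copy of the signal
        return np.max(np.abs(resample_poly(x, oversample, 1)))

    fs = 48000
    n = np.arange(256)
    x = np.sin(2 * np.pi * (fs / 4) * n / fs + np.pi / 4)  # fs/4 tone, 45 deg phase
    print(np.max(np.abs(x)), true_peak(x))                 # ~0.707 vs close to 1.0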

Your recent move from DSP hardware right into the software plugin domain should not have been that much of a thing. Or was it?

Porting a digital unit to a plug-in version is somewhat simpler compared to the emulation of an analog unit.
The porting of our EQ1 and DS1 units was still fairly demanding, though. The software of five DSPs and a host processor had to be ported to the computer platform. The Softube company did that for us.

Of course we tried to achieve a 1:1 porting, such that the hardware and the plugin would null perfectly. This is almost the case. There are differences in the floating-point formats between the DSPs and the computer, so it is not possible to get absolutely the same result – unless one used fixed-point arithmetic, which we do not like to use for the applications at hand.
The plugin versions in addition have more features, because the processing power of a computer CPU is much higher than that of the five (old) DSPs the hardware uses. E.g. the sampling frequency can go up to 192kHz (hardware: 96kHz) and the dynamics EQ can be dynamic in all seven bands (hardware: 4 bands maximum).

Looking into the future of dynamic processing: Do you see anything new on the horizon or just the continuation of recent trends?

We at Weiss Engineering haven’t looked into the dynamics processing world recently. One could probably take more intelligent approaches than current dynamics processors do, e.g. look at a whole track and decide from that overview what to do with the levels over time. Machine learning could also help – I guess some people are working in that direction regarding dynamics processing.

From your point of view: Will the loudness race ever come to an end, and can we expect a return of more fidelity to consumer audio formats?

The streaming platforms help in getting the loudness race to a more bearable level. Playlists across a whole streaming platform should have tracks in them with a similar loudness level for similar genres. If one track sticks out it does not help. Some platforms luckily take measures in that direction.

Daniel, do you use any analog audio equipment at all?

We may have a reputation in digital audio, but we do analog as well. A/D and D/A converters are mostly analog and our A1 preamp has an analog signal path. Plus more analog projects are in the pipeline…

interview series (11) – Andreas Eschenwecker

Andy, your Vertigo VSC compressor has already become a modern classic. What drove you to create such a device?

I really like VCA compressors. VCA technology gives you a lot of freedom in design and development, and the user gets a very flexible tool in the end. I was very unhappy with all the VCA compressors on the market around 2000. Those were not very flexible for different applications; these units worked well in one particular setting only. Changing the threshold or other parameters was fiddly, and so on. But the main point in starting the VSC project was that the new IC-VCA-based compressors sounded one-dimensional and boxy.

Does this mean your design goal was a more transparent-sounding device, or does the VSC also add a certain sound, just in a different/better way?

Transparency without sounding clean and artificial. The discrete Vertigo VCAs deliver up to 0.6% THD. Distortion can deliver depth without sounding muddy.

Does this design favour certain harmonics or – the other way around – suppress some unwanted distortions?

The VSC adds a different distortion spectrum depending on whether you increase the input level or add make-up gain. The most interesting fact is that most of the distortion and artifacts are created in the release phase of the compressor. The distortion is not created on signal peaks. It becomes obvious when the compressor returns from gain reduction to zero gain reduction – similar to a reverb swoosh… after the peak that was leveled.

Where does your inspiration for such technical designs come from?

With my former company I repaired and did measurements on many common classic and sometimes ultra-rare compressors. Some sounded pretty good but were unreliable – some were very intuitive in a studio situation, some not…
Over that time I slowly developed an idea of what kind of compressor I would like to use on a daily basis.

From your point of view: To what extent have compressor design principles changed over the years?

The designs have changed a lot: fewer discrete parts, fewer opto compressors (because a lot of essential parts are no longer produced), tube compressors suffer from poor new tube manufacturing, and some designers nowadays go more for RMS detection and feed-forward topology. With modern components there is no need for a feedback sidechain arrangement anymore. I think RMS is very common now because of its easy use at first glance. For most applications I prefer peak detection.

Having also a VSC software version available: Was it difficult to transfer all that analog experience into the digital domain? What was the challenge?

In my opinion the challenge is to sort out what to focus on. What influence does the input transformer or the output stage have? Some, of course. Indeed, most of the work went into emulating the detection circuit.

What advantages did you experience with the digital implementation, or do you consider analog to be superior in general?

I am more of an analog guy, so I still prefer the hardware. What I like about the digital emulations is that some functions are easy to implement digitally but would cost a fortune to produce in the analog unit.

Any plans for the future you might want to share?

At the moment I am struggling with component delays; 2021/22 is not the right time for new analog developments. I guess some new digital products will come first.

interview series (10) – Vladislav Goncharov

Vlad, what was your very first DSP plugin development, how did it start, and what was your motivation behind it?

My first plugin was a simple audio clipper, but I decided not to release it. So my first publicly released plugin was the Molot compressor. I was a professional software engineer but with zero DSP knowledge (my education was about databases, computer networks and stuff like that). I played guitar as a hobby, recorded demos at home, and one day I found out that such a thing as audio plugins exists. I was amazed by their sheer number and also by the fact that there are free plugins too. And I realised that one day I could build something like this myself. I just had to open a DSP book and read a chapter or two, and that was enough to start. So my main motivation was curiosity, actually.

Was the Molot compressor concept inspired by existing devices or rather by a plain DSP textbook approach?

In those days there was a rumour that it’s impossible to make a good-sounding digital compressor because of aliasing and stuff. I tried to make the digital implementation as fluid as possible, without hard yes/no logic, believing this is how a perfect digital compressor should sound. And the way I implemented the algorithm made the compressor sound unlike anything I had heard before. I didn’t have any existing devices in my head to match, and I didn’t look at textbook implementations either. The sound was just how I made it. I did 8 versions of the algorithm, trying to make it as usable as possible from a user’s perspective (for example, a “harder” knee should sound “harder”; I removed the dual-band implementation because it was hard to operate), and the last version of the project was named “comp8”.

Did you maintain that specific sound in Molot when you relaunched it under the TDR joint venture later on? And while we are at it: When and how did the cooperation with Fabien start?

TDR Molot development started with the same core sound implementation as the original Molot had. But then I tried to rework every aspect of the DSP to make it sound better while keeping the original feel at the same time. It was very hard, but I think I succeeded. I’m very proud of how I integrated the feedback mode into TDR Molot, for example. About Fabien: He wrote me to discuss faults he thought my implementation had (I’m not sure whether it was Molot or Limiter 6), and we also discussed the TDR Feedback Compressor he released in those days. We argued against each other, but strangely, the next day we had both changed our minds and agreed with each other’s opposite opinions. It was like “You were right yesterday. No, I think you were right”. Then there was the “KVR Developer Challenge” and Fabien suggested collaborating to create a product for this competition. That was 2012.

And the Feedback Compressor was the basis for Kotelnikov later on, right?

No, Kotelnikov is 100% different from the Feedback Compressor. Fabien tried to make the sound of the feedback compressor more controllable and found that the best way to achieve this was just to change the topology to a feed-forward one. It’s better to say the Feedback Compressor led to Kotelnikov. Also an interesting fact: an early version of Kotelnikov had an additional feedback mode as well, but I asked Fabien to remove it because it was the most boring compressor sound I had ever heard. I mean, if you add more control to a feedback circuit, it just ruins the sound.

Must have been a challenge to obtain such a smooth sound from a feed-forward topology. In general, what do you think makes a dynamic processor stand out these days, especially (but not limited to) in mastering?

I think it’s intelligent control over reactions. For example, Kotelnikov has some hidden mechanisms working under the hood; users don’t have access to them, but they help to achieve a good sound. I don’t think it’s a good idea to expose all internal parameters to the user. There must be hidden helpers just doing their job.

I so much agree on that! Do you see any new and specific demands concerning limiting and maximizing purposes? I’m just wondering how the loudness race will continue and whether we are ever going to see a retro trend towards a more relaxed sound again …

I think even in a perfectly loudness-normalized world most music is still consumed in noisy environments. Processing that allows the quietest details to be heard and to cut through background noise, and that retains the feel of punch and density even at low volumes, is in demand these days. Loudness maximizers can do all this, but in this context they act like old broadcast processors. In my opinion the loudness war will continue, but it’s no longer about overall mix loudness – it’s about how loud and clear each tiny detail of the mix should be.

Can we have a brief glimpse of what you are currently focused on, DSP development-wise?

You may take a look at the Tokyo Dawn Labs Facebook posts. We shared a couple of screenshots some time ago. That’s our main project, to be released someday. But we are also working on a couple of dynamic processors in parallel. We set a high mark for the quality of our products, so we have to keep it that high, and that’s why development is so slow. We develop for months and months until the product is good enough to be released. That’s why we usually don’t have estimated release dates.
