interview series (12) – Daniel Weiss

First of all, congrats on your Technical Grammy Award this year! Daniel, you started DSP development during the early days of digital audio. What were the challenges back then?

Thank you very much, Herbert.

Yes, I started doing digital audio back in 1979 when I joined Studer-Revox. In that year Studer started their digital audio lab with a group of newly employed engineers. At that time there were no DSPs or CPUs with enough power to do audio signal processing. We used multiplier and adder chips from the 74 chip series and/or those large multiplier chips used in military applications. We applied the “distributed arithmetic” technique: very efficient, but very inflexible compared to today’s processors.

The main challenges regarding audio applications were:

  • A/D and D/A converters had to be designed with audio in mind.
  • Digital audio storage had to rely on video tape recorders with their problems.
  • Signal processing was hardware coded, i.e. very inflexible.
  • DAWs as we know them today were not feasible due to the lack of speedy processors and the lack of large hard disks. (The size of the first hard disks started at about 10 MByte…)
  • Lack of any standards. Sampling frequencies, word lengths and interfaces had not been standardized back then.

Later the TMS32010 DSP from TI became available – a very compromised DSP, hardly usable for pro audio.

And a bit later I was able to use the DSP32 from AT&T, a floating point DSP which changed a lot for digital audio processing.

What makes such a converter design special with regard to audio, and was the DSP math as we know it today already in place, or was that also something still emerging at that time?

The A/D and D/A converters back then had the problem that they were either not fast enough for audio sampling frequencies (like 44.1 kHz) or their resolution was not high enough, i.e. below 14 bits – or both.

There were some A/D and D/A modules available which were able to do digital audio conversion, but those were very expensive. One of the first (I think) audio-specific D/A converters was the Philips TDA1540, a 14 bit converter with a linearity better than 14 bit. So we were able to enhance the TDA1540 by adding an 8 bit converter chip to generate two more bits, for a total of about 16 bit conversion quality.

The DSP math was the same as it is today – mathematics is still the same, right? And digital signal processing is applied mathematics using the binary numbering system. The implementation of adders and multipliers differed to some extent from today’s approaches, though. The “distributed arithmetic” I mentioned, for instance, worked with storage registers, shift registers, a lookup table in ROM and an adder / storage register to implement a complete FIR filter. The multiplication was done via the ROM content, with the audio data serving as the ROM addresses and the ROM output being the multiplication result.

An explanation is given here: http://www.ee.iitm.ac.in/vlsi/_media/iep2010/da.pdf
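
To make the trick a bit more concrete, here is a minimal Python sketch of a distributed-arithmetic FIR kernel. It is purely illustrative (unsigned samples, no two’s-complement handling, everything in software), whereas the real hardware did the same shift-and-accumulate loop in TTL logic:

```python
import numpy as np

def da_fir_rom(coeffs):
    """Precompute the 2^N-entry lookup table: for every possible bit
    pattern across the N taps, store the sum of the selected filter
    coefficients. This table plays the role of the ROM."""
    n = len(coeffs)
    rom = np.zeros(2 ** n)
    for pattern in range(2 ** n):
        rom[pattern] = sum(c for k, c in enumerate(coeffs)
                           if pattern & (1 << k))
    return rom

def da_fir_output(samples, rom, n_bits=8):
    """One FIR output sample, computed bit-serially: the j-th bits of
    all N input samples form the ROM address, the ROM output is the
    partial product, accumulated with a shift per bit position."""
    acc = 0.0
    for j in range(n_bits):                    # one clock cycle per bit
        pattern = 0
        for k, x in enumerate(samples):        # gather bit j of every tap
            pattern |= ((x >> j) & 1) << k
        acc += rom[pattern] * (1 << j)         # shift-and-accumulate
    return acc / (1 << n_bits)                 # interpret samples as fractions

coeffs = [0.25, 0.5, 0.25]                     # tiny 3-tap smoothing filter
rom = da_fir_rom(coeffs)
print(da_fir_output([100, 120, 90], rom))      # = (0.25*100 + 0.5*120 + 0.25*90) / 256
```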

Other variants to do DSP used standard multiplier and adder chips which were cascaded for higher word-lengths. But the speed of those chips was rather limited compared to today’s processors.

Was there still a need to work around such word-length and sample rate issues when you designed and manufactured the very first digital audio equipment under your own brand? The DS1 compressor already introduced 96 kHz internal processing right from the start, as far as I remember. What were the main reasons for 96 kHz processing?

When I started at Studer the sampling frequencies were all over the place. No standards yet. So we did a universal Sampling Frequency Converter (Studer SFC16) which also had custom-built interfaces, as those hadn’t been standardized either. No AES/EBU, for instance.

Later, when I started Weiss Engineering, the 44.1 and 48 kHz standards had already been established. We then also added 88.2 / 96 kHz capabilities to the modular bw102 system, which was what we had before the EQ1 and DS1 units. It somehow became fashionable to use high sampling frequencies. There are some advantages to that, such as a higher tolerance to non-linearly treated signals or less severe analog filtering in converters.

Over the years the mentioned devices have been critically acclaimed, and not only by mastering engineers. What makes them so special? Is it the transparency or some other distinct design principle? And how do you achieve that?

There seems to be a special sound to our devices. I don’t know what exactly the reason for that is. Generally we try to make the units technically as good as possible, i.e. low noise, low distortion, etc.
It seems that this approach helps when it comes to sound quality…
And maybe our algorithms are a bit special. People sometimes think that digital audio is a no-brainer: there is that cookbook algorithm I implement and that is it. But in fact digital offers as many variants as analog does. Digital is just a different representation of the signal.

Since distortion is such a delicate matter in the design of a dynamic processor: Can you share some insights about managing distortion in such a (digital) device?

The dynamic processor is a level controller where the level is set by a signal which is generated out of the audio signal. So it is an amplitude modulator, which means that sidebands are generated. The frequency and amplitude of the sidebands depend on the controlling signal and the audio signal. In the worst case a sideband frequency can lie above half the sampling frequency (the Nyquist frequency) and thus gets mirrored at the Nyquist frequency. This is a bad form of distortion, as it is not harmonically related to the audio signal.
This problem can be solved to some extent by raising the sampling frequency (e.g. doubling it) before the dynamic processing is applied, such that the Nyquist frequency is also doubled.
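
The fold-back is easy to make visible in a few lines. In this toy example the 5 kHz gain ripple is wildly exaggerated compared to any real compressor’s control signal, purely so that one sideband lands above Nyquist:

```python
import numpy as np

fs = 48_000                                            # Nyquist = 24 kHz
n = np.arange(4096)
tone = np.sin(2 * np.pi * 20_000 * n / fs)             # 20 kHz test tone
gain = 1.0 + 0.5 * np.sin(2 * np.pi * 5_000 * n / fs)  # fast gain ripple
y = gain * tone                                        # amplitude modulation

spec = np.abs(np.fft.rfft(y * np.hanning(len(y))))
freqs = np.fft.rfftfreq(len(y), 1 / fs)
print(freqs[spec > spec.max() * 0.1])
# energy near 15 kHz, 20 kHz and 23 kHz: the upper sideband at
# 20 + 5 = 25 kHz folds back to 24 - (25 - 24) = 23 kHz
```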

Another problem in dynamics processors is the peak detection. For high-frequency peaks the actual peak can lie between two consecutive samples and thus go undetected, because the processor only sees the actual samples. This problem can be solved to some extent by upsampling the sidechain (where the peak detection takes place) to e.g. 2 or 4 times the audio sampling frequency. This then allows for a kind of “true peak” measurement.
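
The inter-sample peak problem is easy to reproduce: a sine at exactly fs/4 with a 45-degree phase offset only ever gets sampled at ±0.707, although the waveform peaks at 1.0. A quick sketch of the upsampled detector idea using polyphase resampling (a generic illustration, not Weiss’ actual implementation):

```python
import numpy as np
from scipy.signal import resample_poly

def true_peak(x, oversample=4):
    """Approximate the inter-sample ('true') peak by upsampling the
    detector path; the interpolation reveals peaks that fall between
    the original samples."""
    return np.max(np.abs(resample_poly(x, oversample, 1)))

fs = 48_000
t = np.arange(480) / fs
x = np.sin(2 * np.pi * 12_000 * t + np.pi / 4)  # samples only hit +/-0.707
print(np.max(np.abs(x)))   # 0.707 - the sample peak underestimates
print(true_peak(x))        # ~1.0  - close to the real waveform peak
```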

Your recent move from DSP hardware right into the software plugin domain should not have been that much of a thing. Or was it?

Porting a digital unit to a plug-in version is somewhat simpler compared to emulating an analog unit.
But the porting of our EQ1 and DS1 units was still fairly demanding. The software of five DSPs and a host processor had to be ported to the computer platform. The Softube company did that for us.

Of course we tried to achieve a 1:1 port, such that the hardware and the plugin would null perfectly. This is almost the case. There are differences in the floating point formats between the DSPs and the computer, so it is not possible to get absolutely the same result – unless one used fixed point arithmetic, which we do not like to use for the applications at hand.
In addition, the plugin versions have more features, because the processing power of a computer CPU is much higher than that of the five (old) DSPs the hardware uses. E.g. the sampling frequency can go up to 192 kHz (hardware: 96 kHz) and the dynamic EQ can be dynamic in all seven bands (hardware: 4 bands maximum).

Looking into the future of dynamic processing: Do you see anything new on the horizon or just the continuation of recent trends?

We at Weiss Engineering haven’t looked into the dynamics processing world recently. Probably one could take some more intelligent approaches than the current dynamics processors use, e.g. look at a whole track and decide based on that overview what to do with the levels over time. Also machine learning could help – I guess some people are working in that direction regarding dynamics processing.

From your point of view: Will the loudness race ever come to an end, and can we expect a return of more fidelity to consumer audio formats?

The streaming platforms help in getting the loudness race to a more bearable level. Playlists across a whole streaming platform should have tracks with a similar loudness level for similar genres. If one track sticks out, it does not help. Some platforms luckily take measures in that direction.

Daniel, do you use any analog audio equipment at all?

We may have a reputation in digital audio, but we do analog as well. A/D and D/A converters are mostly analog and our A1 preamp has an analog signal path. Plus more analog projects are in the pipeline…


Audio analyzers currently in use here

During tracking, mixing and mixdown I’m utilizing different analyzers, whether freeware or commercial, hardware or software. Each of them does a decent job in its very own area:

VU Meter

Always in good use during tracking and mixing, mainly for checking channel levels and gainstaging all kinds of plugins. I also love to have a VU right on the mixbus to get a quick visual indication of Peak vs RMS dynamic behaviour.

TBProAudio mvMeter2 is freeware and actually meters not only VU but also RMS, EBU LU as well as PPM. It is also resizeable (VST3 version) and supports different skins.

Spectrum Analyzer I

To me, the Voxengo SPAN is an all-time classic analyzer and ever so reliable. I’ve always used it to get a quick indication of an instrument’s frequency coverage or the overall frequency balance on the mixbus. There is always one running at the very end of the summing bus in the post-fader section.

Voxengo SPAN is also freeware and highly customizable regarding the analyzer FFT resolution, slope smoothing and ballistics.

Spectrum Analyzer II

Another spectrum analyzer I’m using is Voxengo TEOTE, which is actually not only an analyzer but a full-spectrum dynamic processor. However, the analyzer alone (fully working in demo mode!) is an excellent assistant when it comes to assessing the overall frequency balance. The analyzer does this against a full-spectrum noise profile, which is adjustable with a tilt EQ, basically. Very handy for judging deviations (over time) from an ideal frequency response.

Voxengo TEOTE demo version available on their website.
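
The basic idea behind such a profile-matching analyzer is simple to prototype: average the spectrum of the program material and compare it against a tilted noise reference (pink noise falls at roughly -3 dB per octave). The sketch below only illustrates that general concept with made-up defaults; it is in no way Voxengo’s actual algorithm:

```python
import numpy as np
from scipy.signal import welch

def balance_deviation(x, fs, tilt_db_per_oct=-3.0, f_ref=1_000.0):
    """Long-term spectrum vs. a tilted noise target, in dB.
    Positive values mean the material sits above the target curve."""
    f, pxx = welch(x, fs, nperseg=8192)
    f, pxx = f[1:], pxx[1:]                  # drop the DC bin
    spectrum_db = 10 * np.log10(pxx)
    target_db = tilt_db_per_oct * np.log2(f / f_ref)
    dev = spectrum_db - target_db
    return f, dev - np.median(dev)           # ignore absolute level

fs = 48_000
x = np.random.randn(10 * fs)                 # white noise test signal
f, dev = balance_deviation(x, fs)
# white noise is flat, i.e. +3 dB/oct relative to the pink target,
# so the deviation curve rises towards high frequencies as expected
```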

Loudness Metering

I’m leaving all EBU R128 related business to the TC Electronic Clarity M. Since it is a hardware based monitoring solution it is always active here on my desktop no matter what, and it also serves for double-checking equal RMS levels (for A/B comparisons) and a quick look at the frequency balance from time to time. The hardware is connected via USB (could be S/PDIF as well) and is driven by a small remote plugin sitting at the very end of the summing bus in my setup here. It also offers a vectorscope and provides audio correlation information. It supports a vast variety of professional metering standards.


Image Courtesy of Music Tribe IP Ltd.
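
For a quick software cross-check of such R128 numbers, the third-party pyloudnorm library implements the ITU-R BS.1770 measurement the EBU spec builds on (“mix.wav” below is just a placeholder file name):

```python
import soundfile as sf        # pip install soundfile pyloudnorm
import pyloudnorm as pyln

data, rate = sf.read("mix.wav")            # placeholder audio file
meter = pyln.Meter(rate)                   # BS.1770 K-weighted meter
loudness = meter.integrated_loudness(data)
print(f"integrated loudness: {loudness:.1f} LUFS")
```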


A brief 2021 blogging recap and 2022 outlook

Currently on my desk, awaiting further analysis: The Manultec Orca Bay EQ

Rebuilding my studio and restarting blogging activities one year ago has been pretty much fun so far. Best hobby ever! To get things started in Jan/Feb this year, I did a short summary of the recent trends in audio, and I might revise and update that in January again. Quite some audio gear caught my attention over the year and some found its way into the blog or even into my humble new studio setup, e.g. the unique SOMA Lyra-8 and the Korg MS-20 remake as well as the Behringer clone of the ARP 2600.

I also went into more detail on how to get the most out of the SPL Tube Vitalizer, and on the renaissance of the Baxandall EQs, just to name two topics, and also had a more realistic look at the Pultec style equalizer designs, which might be something I will continue to dig into a little bit further in 2022. Lately I’m also intrigued by some analog effect pedal designs out there, namely the Fairfield Circuitry stuff. And as always, I’m highly interested in everything psychoacoustics-related.

By the end of August I started re-releasing my very own plugins and also did mkII versions of FerricTDS, ThrillseekerXTC and TesslaSE. I will continue down that route, and at the top of my list is to have the whole Thrillseeker plugin series complete and available again. Some are asking me if I will develop brand new audio plugins as well. While I’m already doing that just for my very own use, at this point in time it remains unclear if any of that stuff will ever make it into a public release. But you never know – the TesslaSE remake was not planned at all either.

Something I will continue for sure is that special developer interview series I’ve been doing over the years. This year I already had the chance to talk to Vladislav Goncharov from Tokyo Dawn Labs and Andreas Eschenwecker from Vertigo Sound, who gave some detailed insights into creating analog and digital audio devices, especially dynamic processors. The very next interview has also been done already and will be published in January – this time with this year’s Technical Grammy Award winner, Daniel Weiss.

I’m looking forward to 2022!

Stay tuned
Herbert

interview series (11) – Andreas Eschenwecker

Andy, your Vertigo VSC compressor has already become a modern classic. What drove you to create such a device?

I really like VCA compressors. VCA technology gives you a lot of freedom in design and development, and the user gets a very flexible tool in the end. I was very unhappy with all the VCA compressors on the market around 2000. Those were not very flexible for different applications. These units worked well in one certain setting only. Changing the threshold or other parameters was fiddly and so on. But the main point in starting the VSC project was that the new IC-VCA-based compressors sounded one-dimensional and boxy.

Does this mean your design goal was to have a more transparent sounding device, or does the VSC also add a certain sound, just in a different/better way?

Transparency without sounding clean and artificial. The discrete Vertigo VCAs deliver up to 0.6% THD. Distortion can deliver depth without sounding muddy.

Does this design favour certain harmonics or – the other way around – suppress some unwanted distortion?

The VSC adds a different distortion spectrum depending on whether you increase the input level or add make-up gain. The most interesting fact is that most of the distortion and artifacts are created in the release phase of the compressor. The distortion is not created on signal peaks. It becomes obvious when the compressor settles back from gain reduction to zero gain reduction. Similar to a reverb swoosh… after the peak that was leveled.

Where does your inspiration for such technical designs come from?

With my former company I repaired and did measurements on many common classic and sometimes ultra-rare compressors. Some sounded pretty good but were unreliable – some were very intuitive in a studio situation, some not…
During that time I slowly developed an idea of what kind of compressor I would like to have in daily use.

From your point of view: To what extent did compressor design principles change over the years?

The designs changed a lot. Fewer discrete parts, fewer opto compressors (because a lot of essential parts are no longer produced), tube compressors suffer from poor new tube manufacturing, and some designers nowadays go more for RMS detection and feed-forward topology. With modern components there was no need for a feedback sidechain arrangement anymore. I think RMS is very common now because of its easy use at first glance. For most applications I prefer peak detection.
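
For readers wondering what the peak vs. RMS distinction means in code terms, here is a textbook envelope follower sketch with the usual one-pole ballistics – generic detector math, not Vertigo’s circuit:

```python
import numpy as np

def detector(x, fs, attack_ms=5.0, release_ms=100.0, mode="peak"):
    """Sidechain level detector. 'peak' rectifies and follows with a
    fast attack / slow release; 'rms' smooths the squared signal with
    a single ~300 ms time constant and takes the square root."""
    env = np.zeros(len(x))
    state = 0.0
    if mode == "rms":
        avg = 1.0 - np.exp(-1.0 / (0.3 * fs))      # ~300 ms integrator
        for n, d in enumerate(x ** 2):
            state += avg * (d - state)
            env[n] = state
        return np.sqrt(env)
    a = 1.0 - np.exp(-1.0 / (attack_ms * 1e-3 * fs))
    r = 1.0 - np.exp(-1.0 / (release_ms * 1e-3 * fs))
    for n, d in enumerate(np.abs(x)):
        state += (a if d > state else r) * (d - state)  # attack when rising
        env[n] = state
    return env

fs = 48_000
t = np.arange(fs) / fs
sine = np.sin(2 * np.pi * 1_000 * t)
print(detector(sine, fs, mode="peak")[-1])  # settles near the waveform peak
print(detector(sine, fs, mode="rms")[-1])   # ~0.7, the RMS of a full-scale sine
```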

You also have a VSC software version available: Was it difficult to transfer all that analog experience into the digital domain? What was the challenge?

In my opinion the challenge is to sort out what to focus on. What influence do the input transformer or the output stage have? Some, of course. But indeed most of the work went into emulating the detection circuit.

What advantages did you experience with the digital implementation, or do you consider analog to be superior in general?

I am more of an analog guy, so I still prefer the hardware. What I like about the digital emulations is that some functions are easy to implement in digital but would cost a fortune in the production of the analog unit.

Any plans for the future you might want to share?

At the moment I struggle with component delays. 2021/22 is not the right time for new analog developments. I guess some new digital products will come first.


effect pedal affairs

Quite recently I had a closer look into the vast amount of (guitar) effect pedals out there. Most are already DSP based, which surprised me a little bit since I still expected more discrete analog designs after all. While looking for some neat, real analog BBD delay I finally stumbled across Fairfield Circuitry’s “Meet Maude”, which got me intrigued: a rather rough look&feel at first sight, but some very delicate implementation details under the hood.

Their delay modulation circuit has some randomness built in, and there is also a compression circuit in the feedback loop – both design choices I had also made for NastyDLA, and which definitely make a big impact on the overall sound. But the real highlight is the VCF in the delay feedback path, which actually appears to be a low-pass gate – a quite unique design, soundwise also different but appealing in its very own regard.

They employed very similar concepts in their vibrato/chorus box “Shallow Water”, which also features random delay modulation and a low-pass gate, this time a little bit more prominent on the face plate. On top, their JFET op-amp adds some serious grit to any kind of input signal. All in all, I did not expect such a bold but niche product to exist. If I ever own such a thingy, there will be a much more detailed review here for sure.

TesslaSE mkII released

TesslaSE mkII – All the analog goodness in subtle doses

TesslaSE was never meant to be a distortion box but rather focused on bringing all those subtle saturation and widening (side-) effects from the analog right into the digital domain. It slightly colors the sound, polishes transients and creates depth and dimension in the stereo field. All the analog goodness in subtle doses. It’s a mixing effect intended to be used here and there where the mix demands it. It offers a low CPU profile and (almost) zero latency.

With its 2021 remake, TesslaSE mkII sticks to exactly that by just polishing what’s already there. The internal gainstaging has been reworked so that everything appears gain compensated to the outside and is dead-easy to operate within a slick, modernized user interface. Also the transformer/tube circuit modeling got some updates to appear more detailed and vibrant, while all non-linear algorithms got oversampled for additional aliasing suppression.

Available for Windows VST in 32 and 64bit as freeware. Download your copy here.

myths and facts about aliasing

Written 10 years ago, more relevant than ever.

Variety Of Sound

A recent trend in the audio producer scene seems to be judging an audio effect plug-in solely by analyzing the harmonic spectrum, which is usually done by throwing a static sine-wave into the plug-in and then looking at the output with an FFT spectrum analyzer. In this article I’m going to talk about what this method is capable of, where its limitations and problems lie, and how aliasing quite often gets confused with a lot of other phenomena. I’m also clearly showing that this method alone is not sufficient to judge an audio plug-in’s quality in a blackbox situation.

a spectrum plot showing noise, harmonic distortion and aliasing


The TesslaSE Remake

There were so many requests to revive the old and rusty TesslaSE, which I had already moved into the legacy folder. In this article I’m going to talk a little bit about the history of the plugin and its upcoming remake.

The original TesslaSE audio plugin was one of my first DSP designs aiming at a convincing analog signal path emulation, and it was created already 15 years ago! Its release info stated that it would “model pleasant sounding ‘electric effects’ coming from transformer coupled tube circuits in a digital controlled fashion”, which basically refers to adding harmonic content and some subtle saturation as well as spatial effects to the incoming audio. In contrast to static waveshaping approaches quite common at that time, those effects were already inherently frequency dependent and managed within a mid/side matrix underneath.

(Later on, this approach evolved into a true stateful saturation framework capable of modeling more than just memoryless circuits, and the TesslaPro version took advantage of audio transient management as well.)

This design was also utilized to suppress unwanted aliasing artifacts, since flawless oversampling was still computationally expensive at that time. And offering zero latency on top, TesslaSE always had a clear focus on being applied across the entire mixing stage, providing all those analog signal path subtleties here and there. All later revisions also stuck to the very same concept.

With the 2021 remake, TesslaSE mkII won’t change that either, just polishing what’s already there. The internal gainstaging has been reworked so that everything appears gain compensated to the outside and is dead-easy to operate within a slick, modernized user interface. Also the transformer/tube circuit modeling got some updates to appear more detailed and vibrant, while all non-linear algorithms got oversampled for additional aliasing suppression.
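
As a side note for the DSP-curious: the classic recipe for taming such aliasing is to run the non-linearity at a multiple of the sampling rate. A minimal sketch of that general upsample/shape/decimate pattern (a plain tanh stand-in, not the actual TesslaSE saturation):

```python
import numpy as np
from scipy.signal import resample_poly

def saturate_oversampled(x, factor=4, drive=1.5):
    """Apply a static waveshaper at `factor` times the sampling rate:
    upsample, distort, downsample. Harmonics generated above the
    original Nyquist are largely removed by the decimation filter
    instead of folding back into the audio band."""
    up = resample_poly(x, factor, 1)      # polyphase upsampling
    shaped = np.tanh(drive * up)          # the non-linear stage
    return resample_poly(shaped, 1, factor)

fs = 48_000
x = np.sin(2 * np.pi * 5_000 * np.arange(4096) / fs)
y = saturate_oversampled(x)               # far fewer aliased components
```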

I myself really enjoy the elegant sound of the update now!

TesslaSE mkII will be released by the end of November for PC/VST under a freeware license.

Dynamic 1073/84 EQ curves?

Yes we can! The 1073 and 84 high shelving filters feature that classic frequency dip right before the HF boost itself. Technically speaking they are not shelves but bell curves with a very wide Q. But anyway, wouldn’t it be great if that response were program dependent, expanding and compressing according to the curve shape and giving a dynamic frequency response to the program material?
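
For the curious, that static dip-plus-wide-bell shape is easy to approximate with two standard RBJ cookbook peaking filters. The frequencies, gains and Q values below are illustrative guesses, not measured 1073 data:

```python
import numpy as np
from scipy.signal import freqz

def peaking(f0, gain_db, q, fs=48_000):
    """RBJ cookbook peaking-EQ biquad coefficients."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a])
    den = np.array([1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a])
    return b / den[0], den / den[0]

fs = 48_000
f = np.logspace(np.log10(200), np.log10(22_000), 256)
resp = np.ones_like(f, dtype=complex)
for f0, g, q in [(3_500, -1.5, 0.7), (12_000, 6.0, 0.4)]:  # dip, then wide bell
    b, a = peaking(f0, g, q, fs)
    resp *= freqz(b, a, worN=2 * np.pi * f / fs)[1]
# 20 * np.log10(np.abs(resp)) shows the dip right before the HF boost
```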

Again, dynamic EQs make this an easy task today, and I just created some presets for the TDR Nova EQ which you can copy right from here (see below after the break). Instructions: Choose one of the 3 presets (one for each specific original frequency setting – 10/12/16 kHz) and just tune the Threshold parameter for band IV (dip operation) and band V (boost operation) to fit the actual mix situation.

They sound pretty much awesome! See also my Nova presets for the mixbus over here and the Pultec ones here.

[Read more…]

Dynamic Pultec EQ curves?

Wouldn’t it be great if the Pultec boost/cut performance were program dependent? Sort of expanding and compressing according to the boost/cut settings and giving a dynamic frequency response to the program material.

Well, dynamic EQs make this an easy task today, and I just created some presets for the TDR Nova EQ which you can copy right from here (see below after the break). Instructions: Choose one of the 4 presets (one for each specific original frequency setting – 20/30/60/100 Hz) and tune the Threshold parameter for band II (boost operation) and band III (cut operation) to fit the actual mix situation.

See also my presets for the mixbus over here.

[Read more…]