everything just fades into noise at the end

When I faced artificial reverberation algorithms for the very first time, I just thought: why not simply dissolve the audio into noise over time to generate the reverb tail? It turned out not to be that easy, at least with the DSP knowledge and tools of that time. Today, digital reverb generation has come a long way, and the research and toolsets available are quite impressive and diverse.

While the classic feedback delay network approaches have become way more refined through improved diffusion generation, today's increase in computational power can also smooth things out further just by brute force, and some hardware vendors still go this route. Sampling impulse responses from real spaces has also evolved over time, and some drawbacks of DSP convolution, such as latency management, have been successfully addressed and can be handled more easily given today's CPUs.
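Just to make the feedback delay network idea a little more concrete (leaving aside all the diffusion refinements mentioned above), a minimal sketch with four delay lines mixed through an orthogonal Hadamard matrix could look like the following. The delay lengths and feedback amount are arbitrary placeholder values, not taken from any actual product.

```cpp
#include <array>
#include <cstddef>
#include <vector>

// minimal 4-line feedback delay network (illustrative values only)
struct SimpleFDN {
    // mutually prime delay lengths in samples (arbitrary example values)
    const std::array<std::size_t, 4> lengths{1499, 1867, 2113, 2617};
    std::array<std::vector<float>, 4> lines;
    std::array<std::size_t, 4> pos{};
    float feedback; // overall decay, 0..1

    explicit SimpleFDN(float fb = 0.8f) : feedback(fb) {
        for (std::size_t i = 0; i < 4; ++i)
            lines[i].assign(lengths[i], 0.0f);
    }

    float process(float in) {
        // read the current output of each delay line
        std::array<float, 4> d;
        for (std::size_t i = 0; i < 4; ++i)
            d[i] = lines[i][pos[i]];

        // 4x4 Hadamard mixing matrix: orthogonal, spreads energy between the lines
        const float s = 0.5f * feedback;
        const std::array<float, 4> mixed{
            s * (d[0] + d[1] + d[2] + d[3]),
            s * (d[0] - d[1] + d[2] - d[3]),
            s * (d[0] + d[1] - d[2] - d[3]),
            s * (d[0] - d[1] - d[2] + d[3])
        };

        // write the input plus the mixed feedback back into each line
        for (std::size_t i = 0; i < 4; ++i) {
            lines[i][pos[i]] = in + mixed[i];
            pos[i] = (pos[i] + 1) % lengths[i];
        }
        // sum of the delay outputs as the wet reverb signal
        return 0.25f * (d[0] + d[1] + d[2] + d[3]);
    }
};
```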

Also, convolution is still the ticket whenever modeling a specific analog device (e.g. a plate or spring reverb) appears to be difficult, as long as the modeled part of the system is linear and time invariant. To achieve even more accurate results there is still no way around physical modeling, but this usually requires a very sophisticated modeling effort. As everything in practice appears to be a tradeoff, it's not that unusual to just combine different approaches: e.g. the reverb onset gets sampled and convolved while the reverb tail gets computed conventionally, or, the other way around, the early reflections are modeled but the tail just resolves into convolved noise.
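To make the "tail resolves into convolved noise" idea a bit more tangible: an exponentially decaying noise burst can serve as an impulse response which then gets convolved with the dry signal. The sketch below is purely illustrative (direct convolution, arbitrary parameters); a real-time implementation would of course use partitioned FFT convolution to keep CPU load and latency in check.

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

// build an impulse response: white noise with an exponential decay envelope
std::vector<float> makeNoiseTail(float seconds, float rt60, float sampleRate) {
    const std::size_t n = static_cast<std::size_t>(seconds * sampleRate);
    // per-sample gain so that the envelope drops by 60 dB over rt60 seconds
    const float decayPerSample = std::pow(10.0f, -3.0f / (rt60 * sampleRate));
    std::vector<float> ir(n);
    float env = 1.0f;
    for (std::size_t i = 0; i < n; ++i) {
        const float noise = 2.0f * (std::rand() / static_cast<float>(RAND_MAX)) - 1.0f;
        ir[i] = env * noise;
        env *= decayPerSample;
    }
    return ir;
}

// direct convolution: fine for illustration, far too slow for real-time use
std::vector<float> convolve(const std::vector<float>& x, const std::vector<float>& ir) {
    std::vector<float> y(x.size() + ir.size() - 1, 0.0f);
    for (std::size_t i = 0; i < x.size(); ++i)
        for (std::size_t j = 0; j < ir.size(); ++j)
            y[i + j] += x[i] * ir[j];
    return y;
}
```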

So, as we've now learned that everything just fades into noise at the end, it comes as no surprise that the almost 15-year-old epicVerb plugin is becoming legacy now. However, it remains available for download for some (additional reverb) time. Go grab your copy as long as it's not completely decayed; you'll find it in the downloads legacy section here. There won't be a MkII version, but something new is already in the making and will probably see the light of day in the not so distant future. Stay tuned.

towards stateful saturation

the static waveshaper y = tanh(x)

Still today, most developers stick to static waveshaping algorithms when it comes down to digital saturation implementations. This wasn't very convincing to me from the very beginning, and in fact it was one of the motivations for starting my own audio effect developments: to come a little bit closer to what I thought saturation, and non-linearity in general, is all about.
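For reference, the static waveshaper y = tanh(x) shown above boils down to just this: the output depends on nothing but the current input sample, so the very same input value always produces the very same output value, no matter what happened before.

```cpp
#include <cmath>

// static (memoryless) waveshaper: the output depends only on the current sample
float saturateStatic(float x, float drive = 1.0f) {
    return std::tanh(drive * x);
}
```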

And so the Rescue audio plug-in was born in summer 2007, already an approach to relate audio transient events to the signal saturation itself. Not much later TesslaSE appeared, a different exercise leaning towards a frequency dependent non-linearity implementation coupled in a feedback structure. I still really love this plug-in and how it sounds, and in quite some cases I prefer it over much more sophisticated designs even today. The pre-amp stage in BootEQmkII then focused on “transformer style” low-end weirdness and featured oversampling on the non-linear sections of the device. A really great combination with the EQ: smooth and very musical sounding. The TesslaPRO thingy sums all this up and puts it into one neat little device with an easy to use “few knob” interface. Don't let this simplistic (but so beautiful) design fool you: it already features everything that makes a saturator stand out from the crowd today, namely transient awareness, frequency dependency and dedicated low-end treatment. Sound-wise this results in a way smoother saturation experience and, en passant, better stereo imaging.

With FerricTDS, not only did the notion of subtle frequency dependent compression get extended into a core saturator algorithm: since revision 1.5 I've also ditched the oversampling based core and included a version which premiered the notion of memory in the non-linearity, transforming it from a stateless into a stateful algorithm. One could basically see this as a system which reacts differently to the very same input signal depending on the recent history of events (on a very microscopic level). The input stage algorithms I've included in NastyVCS and NastyDLA (both are actually the same) are a CPU- and feature-wise stripped down version of that, so the basic flavour of it is already available as an option while mixing the individual tracks and their according effects.
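Just to illustrate what memory applied "from the outside" of a non-linearity could look like in principle (this is a rough sketch of the general idea, not the actual FerricTDS algorithm): an envelope follower tracks the recent signal history and modulates the drive of an otherwise static tanh() shaper, so the very same input value gets treated differently depending on what came right before it.

```cpp
#include <cmath>

// hypothetical stateful saturator: an envelope follower (the "memory")
// modulates the drive of an otherwise static tanh() shaper
struct StatefulSaturator {
    float envelope = 0.0f;    // recent signal history
    float attack   = 0.2f;    // per-sample smoothing coefficients (example values)
    float release  = 0.0005f;
    float depth    = 0.7f;    // how strongly the history backs off the drive

    float process(float x) {
        const float level = std::fabs(x);
        const float coeff = (level > envelope) ? attack : release;
        envelope += coeff * (level - envelope);                // one-pole envelope follower
        const float drive = 1.0f / (1.0f + depth * envelope);  // hotter history -> softer drive
        return std::tanh(drive * x) / drive;                   // roughly level-compensated
    }
};
```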

Quite recently I've started to look into implicit stateful models, where memory is not applied from the outside of the algorithm but the algorithm itself contains a sort of memory. As an example, I've implemented a stateful version of the well-known tanh() function so that it is aware of recently occurred events but provides the very same harmonic structure as the original. On some analyzer plots it even shows the very same transfer curve, but in fact it does not limit strictly anymore and allows minor overshoots on some peak signals. Interestingly, the sound appears a little bit brighter (without this showing up in the analyzer plots) and the low-end appears not as hard “brickwalled” but a little bit smoother. Rest assured that I'm going to follow this path further, and let's see where it leads in 2011.
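Without going into the actual implementation details, one simplified, purely illustrative way to picture such an implicit state is a tanh() whose ceiling is itself a slowly recovering state variable: steady-state material still traces the familiar static transfer curve, while fast peaks may momentarily push slightly past it.

```cpp
#include <algorithm>
#include <cmath>

// illustrative "stateful tanh": the ceiling is a slowly recovering state,
// so steady-state signals see the classic curve while fast peaks may
// overshoot it slightly (a sketch of the idea, not the actual algorithm)
struct StatefulTanh {
    float ceiling  = 1.0f;   // current clipping ceiling (the state)
    float recovery = 0.001f; // how fast the ceiling relaxes back to 1.0
    float give     = 0.1f;   // how much a loud peak may push the ceiling up

    float process(float x) {
        const float level = std::fabs(x);
        // loud peaks momentarily raise the ceiling, afterwards it relaxes back
        ceiling = std::max(ceiling + recovery * (1.0f - ceiling),
                           1.0f + give * (level - 1.0f));
        return ceiling * std::tanh(x / ceiling);
    }
};
```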