epicCLOUDS public beta testing started

A public beta version of epicCLOUDS is now available for testing. As already announced, the beta test will be supported by the new VoS Discord server to gather feedback and discuss issues as effectively as possible. Feel free to join – it's open to everyone!

Click here to join the VoS Discord server. A Discord login is required. Once on my server, you must accept the welcome message shown to you by reacting with a “thumbs up” to get full rights for posting etc.

VoS on Discord!

Now that I’ve set up a brand new Variety of Sound Discord server, you are of course all welcome to join in. Naturally, the main focus will be on the plugins and everything related to them, but I’m also looking forward to getting to know some of you a bit better.

I hope everything is working so far, but if you have any problems logging in, just let me know here with a short message – thanks in advance! Thanks also to rktic for helping me set up the server!

The upcoming public beta will be available on the server in the next few days and I’m looking forward to many downloads and diligent testing and feedback, as always 🙂

Click here to join the VoS Discord server. A Discord login is required. Once on my server, you must accept the welcome message shown to you by reacting with a “thumbs up” to get full rights for posting etc.


VoS on Discord?

Brian Eno – Imaginary Landscapes 1989

He has always been a true source of inspiration …

what I’m currently working on – Vol. 13

1. Finalizing epicCLOUDS

When epicVerb was discontinued in April last year, it was already clear that, after the brand-new plate reverb, there would have to be a worthy successor for really large rooms at some point – and it was also clear what it should deliver: the representation of large rooms up to epic dimensions, but with accurate stereo imaging. The reverb should also surround the source signal as much as possible, avoiding the feeling of merely being attached to it. On the other hand, the so-called ambient reverbs have impressed me very much and influenced my work – I also really like their simplicity. But how do you realise all this and bring it all together? Last year that was still completely unclear, and there were only a few sketches and experiments. Yet this challenge is also the most fulfilling part of the whole thing, and it excites me anew every time. Now it has taken shape, it already has a beautiful UI, and I'm polishing the last bits. If only it weren't for the manual!

2. NastyDLA mkIII

Can you do any better than the mkII? Not really – so there will be no changes to the concept or the feature set, just an update of the technology. The internal routing will be fixed, the filters optimised and internal oversampling added. The biggest change under the hood will be an update of the input stage. All in all, the changes will bring subtle improvements in sound, but this also breaks backwards compatibility, sound-wise. The mkIII version therefore gets its own plugin ID and can be run in parallel with the old one.

3. VST3, public beta via Discord

VST3 versions could be next, and to ensure the stability of these updates, a public beta will be held. The beta will be available shortly via a new Discord server.

Stay tuned!

iconic design: 1960s BRAUN Hifi

Just seen in an exhibition downtown.

artificial reverberation: from mono to true stereo

“True stereo” is a term used in audio processing to describe a stereo recording, processing or playback technique that accurately represents the spatial location of sound sources in the stereo field. In true stereo, the left and right channels contain distinct, separate audio information that reflects where the sound sources actually sit in the recording environment.

This is in contrast to fake/pseudo stereo, where the stereo image is created through artificial means, such as by applying phase shifting techniques to create the impression of stereo. True stereo is generally considered to be superior to fake stereo, as it provides a more natural and immersive listening experience, allowing the listener to better locate and identify sound sources within the stereo field. In the domain of acoustic reverberation, this is essential for the perception of envelopment.
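To make the distinction tangible, here is a minimal numpy sketch (purely illustrative, not taken from any of the plugins) of the classic pseudo-stereo trick: both channels are derived from one and the same mono signal via a short Haas-style delay, so no genuine spatial information is added – and the construction gives itself away as a comb filter when summed back to mono:

```python
import numpy as np

def pseudo_stereo(mono, delay_samples=20):
    """Fake/pseudo stereo: both channels derive from the SAME mono
    signal; the right channel is just a delayed copy (Haas trick)."""
    left = mono.copy()
    right = np.concatenate([np.zeros(delay_samples), mono[:-delay_samples]])
    return left, right

rng = np.random.default_rng(0)
mono = rng.standard_normal(48000)      # 1 s of noise at 48 kHz
left, right = pseudo_stereo(mono)

# The giveaway: summing back to mono acts as a comb filter, since the
# delayed copy cancels frequencies at odd multiples of fs / (2 * delay).
summed = 0.5 * (left + right)
```

A true stereo processor, by contrast, would produce left and right channels that carry genuinely independent information about the source positions rather than two copies of one signal.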

Artificial reverberation has come a long way since its early beginnings. The first mechanical devices for generating artificial reverberation, such as spring or plate reverbs, were initially only available as mono devices. Even when two-channel variants emerged, they usually summed to mono internally or processed the channels in separate signal paths, known as dual-mono processing. Typically, in a plate reverb, a two-channel output was achieved simply by mounting two transducers on the very same reverb plate.

The first digital implementations of artificial reverberation did not differ much from the mechanical ones in this respect. It was quite common to sum the inputs to mono and to tap two signals independently from a single reverb tank to obtain a two-channel output. Later, explicit early reflection models were added, typically processed separately for left and right and merged into the outputs to preserve a basic representation of spatial information. Sometimes even the early reflections were just derived from a (summed) mono signal. The Ursa Major 8×32 from 1981 is a good example of this design pattern. Over time the designs became more sophisticated, and even today it is common to distinguish between early and late reverberation in order to create a convincing impression of immersion.
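That early-digital design pattern can be sketched in a few lines of numpy (a toy Schroeder-style comb tank with assumed delay and feedback values, not the actual topology of any hardware unit): the inputs are summed to mono, run through one shared reverb tank, and two outputs are simply tapped at different positions:

```python
import numpy as np

def comb(x, delay, feedback):
    """Feedback comb filter: y[n] = x[n] + feedback * y[n - delay]."""
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = x[n] + (feedback * y[n - delay] if n >= delay else 0.0)
    return y

def tap(x, offset):
    """Tap the tank output at a given delay offset (zero-padded)."""
    return np.concatenate([np.zeros(offset), x[:len(x) - offset]])

def mono_tank_stereo_out(left_in, right_in):
    """Early-digital pattern: sum the inputs to mono, run ONE shared
    comb-filter tank, tap two signals independently for the output."""
    mono = 0.5 * (left_in + right_in)      # sum inputs to mono
    tank = sum(comb(mono, d, 0.7) for d in (113, 127, 149, 163)) / 4.0
    return tap(tank, 23), tap(tank, 67)    # two taps, two channels
```

The two outputs differ only by their tap position in the same decaying tank – which is exactly why such designs produce a two-channel, but not a true stereo, image.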

However, ensuring proper sound localisation through early reflection models is a delicate matter. First and foremost, a real room does not have a single reflection pattern but a vast variety of them, depending on the actual location of the sound source and the listening position in that room. A true-to-life representation would therefore require a whole set of individual reflection patterns per sound source and listening position in the virtual room. As far as I know, the VSL MIR solution is currently the only one that takes advantage of this, and it does so with enormous technical effort.

Another problem is that early reflections can also be detrimental to the sound experience. Depending on their frequency content and delay relative to the direct signal, they can mask the direct signal and compromise its phase coherence, so that the overall sound becomes muddy and lacks clarity. This is one of the reasons why a real plate reverb is loved so much for its clarity and immediacy: it simply has no early reflections in this range. As a side note, the epicPLATE implementation accurately models this behaviour by utilising a reverberation technique that completely avoids discrete reflections (delays).
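The comb-filtering side of this masking effect is easy to verify numerically. A small sketch (with an assumed sample rate and reflection delay): adding a single delayed copy to the direct signal, y[n] = x[n] + x[n - delay], carves deep notches into the spectrum at odd multiples of fs / (2 · delay):

```python
import numpy as np

fs = 48000        # sample rate (assumed for this illustration)
delay = 48        # one early reflection, 1 ms after the direct sound

# Magnitude response of y[n] = x[n] + x[n - delay]:
#   |H(f)| = |1 + exp(-j * 2*pi * f * delay / fs)|
# -> a comb with notches at odd multiples of fs / (2 * delay).
freqs = np.linspace(0.0, fs / 2.0, 2049)
mag = np.abs(1.0 + np.exp(-2j * np.pi * freqs * delay / fs))

first_notch = fs / (2 * delay)   # 500 Hz; further notches at 1500, 2500 Hz ...
```

With a 1 ms reflection, the first cancellation lands at 500 Hz – squarely in the musically relevant range, which is precisely why such reflections can make a mix sound hollow and muddy.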

Last but not least, in a real room there is no clear separation between the first reflections and the late reverberation. It is all part of the same reverberation that gradually develops over time, starting with just an auditory event. This also means that there is no clear distinction between events that can be located in space and those that can no longer be identified – this also continuously evolves over time.

A good example of how to realise digital reverb without this kind of separation between early and late reverberation – and in “true stereo” at the same time – was impressively demonstrated by the Quantec QRS as early as the beginning of the 80s. Its ability to reproduce stereo accurately was one of the reasons why it became an all-time favourite not only in music production, but also in post-production and broadcasting.

Artificial reverberation is full of subtleties and details, and one might wonder why we can perceive them at all. In the end, it comes down to the fact that in the course of evolution there was a need for such fine-tuning of our sensory system. It was a matter of survival, and important for all animal species, to immediately recognise at all times: what is it, and where is it? The entire sensory system is designed for this and even combines the different sensory channels to answer these two questions at all times. Fun fact: this is exactly why some visual cues can have a significant impact on what is heard, and why blind tests (in both senses of the word) are so important for assessing certain audio qualities. See also the “McGurk effect” if you are interested.

Have fun listening!

good question

dear acoustics researchers!

Thank you for all the latest research papers in acoustics and especially the basics of acoustic design for concert halls. I have learned so much about the ambivalence of early reflections, auditory proximity, the critical timing of sound distribution, but also amazing things about natural frequency-dependent compression in real rooms. Thanks also for making many contributions available as an easy introduction via YT.

However, many of these YT contributions come with the poorest audio quality imaginable. Recorded in bad acoustic environments (!), poorly miked, badly placed headsets, hissing, booming, humming, dropouts – the whole lot. The result: recordings with the lowest audio quality and speech intelligibility, sometimes so bad that you can hardly follow the content. Seriously, guys, the kids over at TikTok can do better. So next time, please do your homework and walk the talk, okay?

Sincerely,
Herbert

let's talk about multi-channel production

Multi-channel production has been pushed strongly again for some time, and not only by Dolby and Apple. But what does “multi-channel” actually mean in your music production? Is it already relevant for recording and mixing or rather a downstream production step? Does it play a role at all or is it irrelevant for you as a music producer?

something epic is coming

Stay tuned!