artificial reverberation: from mono to true stereo

“True stereo” is a term used in audio processing to describe a stereo recording, processing or playback technique that accurately represents the spatial location of sound sources in the stereo field. In true stereo, the left and right channels carry distinct and separate audio information that reflects where each sound source was actually located in the recording environment.

This is in contrast to fake/pseudo stereo, where the stereo image is created through artificial means, such as phase-shifting techniques applied to a mono source. True stereo is generally considered superior to fake stereo, as it provides a more natural and immersive listening experience, allowing the listener to better locate and identify sound sources within the stereo field. In the domain of acoustic reverberation, this is essential for the perception of envelopment.
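For illustration, one classic pseudo-stereo trick can be sketched in a few lines of Python/NumPy. This is a generic textbook technique, not code from any product mentioned here: a mono signal is split into two channels via complementary comb filters, so each channel gets interleaved spectral peaks and notches.

```python
import numpy as np

def pseudo_stereo(mono, sr, delay_ms=12.0):
    """Fake a stereo image from a mono signal via complementary comb filters.

    One channel adds a short delayed copy in phase, the other subtracts it,
    giving the two channels interleaved spectral peaks and notches. Note the
    built-in mono compatibility: left + right folds back to the dry signal.
    """
    d = int(sr * delay_ms / 1000.0)                       # delay in samples
    delayed = np.concatenate([np.zeros(d), mono[:-d]]) if d > 0 else mono
    left = 0.5 * (mono + delayed)                         # comb with peaks
    right = 0.5 * (mono - delayed)                        # complementary comb
    return left, right
```

Because the spatial impression comes purely from spectral differences between the channels, no actual source position is encoded – which is exactly what separates this kind of trickery from true stereo.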

Artificial reverberation has come a long way since its early beginnings. The first mechanical devices for generating artificial reverberation, such as spring or plate reverbs, were initially only available as mono devices. Even when two-channel variants emerged, they usually summed to mono internally or processed the channels in entirely separate signal paths, known as dual mono processing. In a plate reverb, for example, a two-channel output was typically achieved simply by mounting two pickup transducers on the very same reverb plate.

The first digital implementations of artificial reverberation did not differ much from the mechanical ones in this respect. Quite common was summing the inputs to mono and independently tapping two signals from a single reverb tank to obtain a two-channel output. Later, explicit early reflection models were added, which were typically processed separately for left and right and merged into the outputs afterwards to preserve a basic representation of spatial information. Sometimes even the first reflections were just taken from a (summed) mono signal. The Ursa Major 8×32 from 1981 is a good example of this design pattern. Over time the designs became more sophisticated, and even today it is common to distinguish between early and late reverberation in order to create a convincing impression of immersion.

However, ensuring proper sound localisation through early reflection models is a delicate matter. First and foremost, a real room does not have a single reflection pattern, but a vast variety of patterns that depend on the actual location of the sound source and the listening position in that room. A true-to-life representation would therefore require a whole set of individual reflection patterns per sound source and listening position in the virtual room. As far as I know, the VSL MIR solution is the only one that currently takes advantage of this, and with enormous technical effort.

Another problem is that first reflections can also be detrimental to the sound experience. Depending on their frequency content and delay relative to the direct signal, they can mask the direct signal and degrade its phase coherence, so that the overall sound becomes muddy and lacks clarity. This is one of the reasons why a real plate reverb is loved so much for its clarity and immediacy: it simply has no initial reflections in this range. As a side note, in the epicPLATE implementation, this behaviour is accurately modeled by utilizing a reverberation technique that completely avoids reflections (delays).

Last but not least, in a real room there is no clear separation between the first reflections and the late reverberation. It is all part of the same reverberation that gradually develops over time, starting from the initial auditory event. This also means there is no sharp boundary between events that can be located in space and those that can no longer be identified; this, too, evolves continuously over time.

A good example of how to realise digital reverb without this kind of separation between early and late reverberation, and in “true stereo” at the same time, was impressively demonstrated by the Quantec QRS as early as the beginning of the 1980s. Its ability to accurately reproduce stereo was one of the reasons why it became an all-time favourite not only in the music production scene, but also in post-production and broadcasting.

Artificial reverberation is full of subtleties and details, and one might wonder why we can perceive them at all. In the end, it comes down to the fact that in the course of evolution there was a need for such fine-tuning of our sensory system. It was a matter of survival, important for all animal species, to immediately recognise at all times: What is it and where is it? The entire sensory system is designed for this and even combines the different sensory channels to answer these two questions. Fun fact: this is exactly why some visual cues can have a significant impact on what is heard and why blind tests (in both senses of the word) are so important for assessing certain audio qualities. See also the “McGurk effect” if you are interested.

Have fun listening!

dear acoustics researchers!

Thank you for all the latest research papers in acoustics and especially on the basics of acoustic design for concert halls. I have learned so much about the ambivalence of early reflections, auditory proximity and the critical timing of sound distribution, but also amazing things about natural frequency-dependent compression in real rooms. Thanks also for making many contributions available as an easy introduction via YouTube.

However, many of these YouTube contributions come with the poorest audio quality imaginable. Recorded in bad acoustic environments (!), poorly miked, badly placed headsets, hissing, booming, humming, dropouts – the whole lot. The result: recordings of the lowest audio quality and speech intelligibility, sometimes so bad that you can hardly follow the content. Seriously, guys, the kids over at TikTok can do better. So next time, please do your homework and walk the talk, okay?


something epic is coming

Stay tuned!

the world of sound localization according to psychoacoustics

Sound localization refers to the ability of the human auditory system to determine the location of a sound source in space. This is done by analyzing the differences in the arrival time, intensity, and spectral content of the sound waves that reach the two ears. The human ear is able to localize sounds both horizontally (azimuth) and vertically (elevation) in the auditory space.

The brain processes the incoming sound signals from both ears to calculate the interaural time difference (ITD) and interaural level difference (ILD), which are used to determine the location of the sound source. Interaural time difference refers to the difference in the time it takes for a sound wave to reach each ear, while interaural level difference refers to the difference in the level of the sound wave that reaches each ear.

The auditory system uses ITD and ILD as complementary cues that work together to allow accurate sound localization in the horizontal plane, aka the stereo field. For example, a sound arriving from straight ahead produces virtually no interaural differences at all, while a sound coming from one side arrives both earlier and louder at the nearer ear, producing a distinct ITD and ILD.

It’s also worth noting that the relative importance of ITD and ILD can vary depending on the frequency of the sound. At low frequencies, ITD is the dominant cue for sound localization, while at high frequencies, ILD becomes more important. Research has suggested that the crossover frequency between ILD and ITD cues for human sound localization is around 1.5 kHz to 2.5 kHz, with ITD cues being more useful below this frequency range and ILD cues being more useful above this range.
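The geometric part of the ITD cue is often approximated with Woodworth's spherical-head formula. A minimal sketch, assuming a textbook head radius of about 8.75 cm and a speed of sound of 343 m/s (both illustrative defaults, not measured values):

```python
import math

def itd_woodworth(azimuth_deg, head_radius=0.0875, c=343.0):
    """Approximate interaural time difference in seconds for a far-field
    source, using Woodworth's spherical-head formula:
        ITD = (r / c) * (theta + sin(theta))
    azimuth_deg: 0 = straight ahead, 90 = fully to one side.
    """
    theta = math.radians(azimuth_deg)
    return (head_radius / c) * (theta + math.sin(theta))
```

For a source fully to the side this yields roughly 0.65 ms – a useful sanity check, since the often-quoted maximum ITD for an average human head lies in the 0.6 to 0.7 ms range.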

In addition to ITD and ILD, the auditory system also uses spectral cues – shaped by the outer ear and the filtering effects of the head and torso – to determine the location of sounds in the vertical plane and to distinguish sources in front of the listener from those behind.

The temporal characteristics of an audio event, such as its onset and duration, can have an impact on sound localization as well. Generally speaking, sounds with a distinct onset, such as a drum hit, are easier to localize than sounds with a more sustained character, such as white noise. This is because the onset of a sound provides a more salient cue for the auditory system to use in determining the location of the sound source, especially with regard to ITD.

In the case of a drum hit, the sharp onset creates a more pronounced difference in the arrival time and intensity of the sound at the two ears, which makes it easier for the auditory system to use ITD and ILD cues to locate the sound source. In contrast, with a more sustained signal like white noise, the auditory system may have to rely more on spectral cues and reverberation in the environment to determine the location of the sound source.
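The two cues can also be imposed artificially to place a mono source in the stereo field. The following Python/NumPy sketch is deliberately crude – an integer-sample delay plus a broadband level drop on the far ear, with illustrative maximum values of 0.66 ms ITD and 10 dB ILD rather than anything HRTF-accurate:

```python
import numpy as np

def place_source(mono, sr, azimuth_deg, max_itd_s=6.6e-4, max_ild_db=10.0):
    """Crude stereo placement of a mono signal using ITD and ILD only.

    The far ear receives the signal delayed (ITD) and attenuated (ILD);
    the near ear receives it dry. Both cues scale with sin(azimuth).
    """
    frac = np.sin(np.radians(azimuth_deg))   # -1 = hard left .. +1 = hard right
    delay = int(round(abs(frac) * max_itd_s * sr))
    gain = 10.0 ** (-abs(frac) * max_ild_db / 20.0)
    near = mono
    far = gain * np.concatenate([np.zeros(delay), mono[:len(mono) - delay]])
    # positive azimuth: source on the right, so the right ear is the near ear
    return (far, near) if frac >= 0 else (near, far)
```

Feeding a click through this and listening on headphones demonstrates the point above nicely: the sharp onset makes the imposed ITD clearly audible, while the same treatment applied to stationary noise localizes far less precisely.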

epicPLATE released

epicPLATE delivers an authentic recreation of classic plate reverberation. It covers the fast and consistent reverb build-up as well as that distinct tonality the plate reverb is known for and still so much beloved today. Its unique reverb diffusion makes it a perfect companion for all kinds of delay effects and a perfect fit not only for vocals and drums.

delivering that unique plate reverb sound

  • Authentic recreation of classic plate reverberation.
  • True stereo reverb processing.
  • Dedicated amplifier stage to glue dry/wet blends together.
  • Lightweight state-of-the-art digital signal processing.

Available for Windows VST in 32- and 64-bit as freeware. Download your copy here.

The former epicVerb audio plugin is discontinued.

that unique plate reverb sound

Unlike digital reverberation, the plate reverb is one of the true analog attempts at recreating convincing reverberation, built right into a studio device. It is basically an electro-mechanical device containing a steel plate, transducers and a contact microphone to pick up the vibrations induced in that plate.

The sound is basically determined by the physical properties of the plate and its mechanical damping. It is not about waves reflecting off the plate's surface but about the propagation of waves within the plate. While the plate itself has a fixed, regular shape and can be seen as a flat (two-dimensional) room, it does not produce the early reflection patterns we are used to from real rooms with solid walls. In fact, there are no such reflections distinguishable by human hearing. Instead, the onset is almost instant and the reverb build-up already shows a very high modal density.

Reverb diffusion also appears to be quite unique within the plate. Wave propagation through metal behaves differently from propagation through air (e.g. the propagation speed depends on frequency), and the plate itself – a rather regular shape with a uniform surface and material – further defines the sound. This typically results in a very uniform reverb tail, although the higher frequencies tend to resonate a little more. Due to the physics and the damping of the plate, we also usually do not hear very long decay times.
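A dense, reflection-free build-up of this kind is often approximated digitally with a series of short allpass diffusers. Here is a generic Schroeder-style sketch – not the actual epicPLATE algorithm; delay times and the coefficient are arbitrary demo values:

```python
import numpy as np

def allpass_chain(x, sr, delays_ms=(4.3, 7.1, 11.3, 17.9), g=0.6):
    """Smear a signal through a series of Schroeder allpass filters.

    Short, mutually non-harmonic delay times turn an impulse into a dense
    burst without audible discrete echoes, mimicking the near-instant,
    high-density onset of a plate. Each stage is a true allpass, so the
    chain leaves the overall energy and frequency response flat.
    """
    y = x.astype(float)
    for ms in delays_ms:
        d = max(1, int(sr * ms / 1000.0))
        buf = np.zeros(d)                    # circular delay line
        out = np.empty_like(y)
        idx = 0
        for n, s in enumerate(y):
            delayed = buf[idx]               # v[n - d]
            v = s + g * delayed              # allpass feedback node
            out[n] = delayed - g * v         # y[n] = v[n - d] - g * v[n]
            buf[idx] = v
            idx = (idx + 1) % d
        y = out
    return y
```

Plotting the impulse response of such a chain shows exactly the behaviour described above: no isolated early taps, just an immediately dense and quickly thickening burst.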

All in all, the fast and consistent reverb build-up combined with its distinct tonality defines that specific plate reverb sound and explains why it is still so much beloved even after decades. The lack of early reflections can easily be compensated for by adding some upfront delay lines to improve stereo localization if a mix demands it. Conversely, the plate reverb makes a perfect companion for all kinds of delay effects.

epicVerb vstpresets up again

The epicVerb vstpresets had not been available for download for quite a while because the eV 1.5 update had broken compatibility. The stuff is back online again now.

What is that file about? It only matters for Cubase 4 (or higher) users and provides the original factory preset bank, but with additional 25:75 and 50:50 dry/wet mix levels, so the presets can be used more easily on an insert bus. I hope someday eV will have a “wet only” switch (or something similar) to make this obsolete.

To download the vstpresets archive, just go to the downloads page here. Credits go to user susiwong.

epicVerb 1.5 major update available

epicVerb 1.5

epicVerb digital reverberation simulator

[Read more…]

epicVerb release 1.5 is on the way



An important major update for the epicVerb digital reverberation simulator is (almost) done and will be released at the beginning of December 2009. It not only features some bugfixes and major stability improvements but also some reworked algorithms that support much denser and more three-dimensional reverberation processing.

in the studio – some gear pics


Some impressions from the studio yesterday. There is one device which adds some warmth to the production but can’t be emulated in digital – can you spot it (easy)? Can you spot all brands (hard)? [Read more…]