“True stereo” is a term used in audio processing for a stereo recording, processing, or playback technique that accurately represents the spatial location of sound sources in the stereo field. In true stereo, the left and right channels of a recording carry distinct and separate audio information that reflects where each sound source was located in the recording environment.
This is in contrast to fake or pseudo stereo, where the stereo image is created artificially, for example by applying phase-shifting techniques to create the impression of stereo. True stereo is generally considered superior to fake stereo, as it provides a more natural and immersive listening experience and allows the listener to better locate and identify sound sources within the stereo field. In the domain of acoustic reverberation, this is essential for the perception of envelopment.
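To make the distinction concrete, here is a minimal sketch of one classic pseudo-stereo trick: mixing a short delayed copy of a mono signal into the two channels with opposite polarity, which produces complementary comb filters. The function name, delay time, and gain are illustrative assumptions, not a recipe from any particular device.

```python
import numpy as np

def pseudo_stereo(mono, sr=44100, delay_ms=12.0, gain=0.7):
    """Derive a fake stereo pair from a mono signal (illustrative sketch).

    A short delayed copy is added to one channel and subtracted from the
    other, giving complementary comb-filter responses that the ear reads
    as 'width' -- but no real spatial information is present.
    """
    d = int(sr * delay_ms / 1000.0)
    delayed = np.concatenate([np.zeros(d), mono])[: len(mono)]
    left = mono + gain * delayed   # comb peaks on the left
    right = mono - gain * delayed  # complementary notches on the right
    return left, right
```

Note a telltale property of this approach: summing left and right cancels the delayed copy exactly, so the mono fold-down is just the original signal — the “stereo” content disappears.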
Artificial reverberation has come a long way since its early beginnings. The first mechanical devices for generating artificial reverberation, such as spring or plate units, were initially only available in mono. Even when two-channel variants emerged, they usually summed the inputs to mono internally or processed the channels in entirely separate signal paths, known as dual-mono processing. In a plate reverb, for example, a two-channel output signal was typically achieved simply by mounting two pickup transducers on the very same reverb plate.
The first digital implementations of artificial reverberation did not differ much from the mechanical ones in this respect. A common approach was to sum the inputs to mono and tap two independent signals from a single reverb tank to obtain a two-channel output. Later, explicit early reflection models were added, typically processed separately for left and right and merged into the outputs afterwards to preserve a basic representation of spatial information. Sometimes even the first reflections were simply taken from a (summed) mono signal. The Ursa Major 8×32 from 1981 is a good example of this design pattern. Over time the designs became more sophisticated, and even today it is common to distinguish between early and late reverberation in order to create a convincing impression of immersion.
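The topology described above can be sketched in a few lines. This is a deliberately simplified illustration of the general design pattern, not a model of any specific unit: the inputs are summed to mono, a single feedback delay line stands in for the “tank” and is tapped at two different points for a two-channel late output, while explicit early reflections are kept per channel. All delay times, gains, and the function name are assumptions chosen for the sketch.

```python
import numpy as np

def vintage_stereo_reverb(left_in, right_in, sr=44100):
    """Sketch of the early-digital topology: mono sum -> one tank,
    two taps for the late output, per-channel early reflections."""
    mono = 0.5 * (left_in + right_in)          # sum inputs to mono

    # "Tank": a single feedback comb filter, tapped at two points.
    tank_len = int(0.050 * sr)
    tank = np.zeros(tank_len)
    fb = 0.7                                   # feedback gain
    tapL, tapR = int(0.013 * sr), int(0.029 * sr)
    outL = np.zeros_like(mono)
    outR = np.zeros_like(mono)
    w = 0
    for n, x in enumerate(mono):
        fb_sample = tank[w]                    # delayed by tank_len
        outL[n] = tank[(w - tapL) % tank_len]  # late tap, left
        outR[n] = tank[(w - tapR) % tank_len]  # late tap, right
        tank[w] = x + fb * fb_sample           # write with feedback
        w = (w + 1) % tank_len

    # Explicit early reflections, processed per input channel.
    def early(x, delays_ms=(11.0, 19.0, 27.0), gains=(0.5, 0.35, 0.25)):
        y = np.zeros_like(x)
        for ms, g in zip(delays_ms, gains):
            d = int(sr * ms / 1000.0)
            y[d:] += g * x[: len(x) - d]
        return y

    return early(left_in) + 0.5 * outL, early(right_in) + 0.5 * outR
```

Because both late taps read the same mono tank, the “stereo” of the late reverb is merely decorrelation between two copies of one signal; only the early-reflection stage retains any left/right distinction from the input.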
However, ensuring proper sound localisation through early reflection models is a delicate matter. First and foremost, a real room does not have a single reflection pattern but a vast variety of them, depending on the actual location of the sound source and the listening position in that room. A true-to-life simulation would therefore require a whole set of individual reflection patterns, one per sound source and listening position in the virtual room. As far as I know, the VSL MIR solution is currently the only one that takes advantage of this, and it does so with enormous technical effort.
Another problem is that first reflections can also be detrimental to the listening experience. Depending on their frequency content and their delay relative to the direct signal, they can mask the direct signal and degrade its phase coherence, so that the overall sound becomes muddy and lacks clarity. This is one of the reasons why a real plate reverb is loved so much for its clarity and immediacy: it simply has no discrete initial reflections in this critical range. As a side note, in the epicPLATE implementation, this behaviour is accurately modeled by utilizing a reverberation technique that completely avoids discrete reflections (delays).
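The phase-coherence problem mentioned above is easy to quantify: mixing the direct signal with a single delayed reflection yields the comb-filter magnitude response |1 + g·e^(−jωd)|, with deep notches at regular frequency intervals. The following sketch (parameter values are illustrative assumptions) computes that response for a 5 ms reflection at −3 dB, whose first notch lands near 100 Hz.

```python
import numpy as np

def comb_response(delay_ms, gain, freqs, sr=44100):
    """Magnitude response of direct signal plus one reflection:
    |1 + g * exp(-j * w * d)|. The notches this produces are one
    source of the 'muddiness' described in the text."""
    d = delay_ms * 1e-3 * sr                  # delay in samples
    w = 2 * np.pi * np.asarray(freqs) / sr    # normalised frequency
    return np.abs(1.0 + gain * np.exp(-1j * w * d))

# 5 ms reflection at -3 dB: notches at 100 Hz, 300 Hz, ...,
# peaks in between at 200 Hz, 400 Hz, ...
freqs = np.array([100.0, 200.0, 300.0])
mags = comb_response(5.0, 10 ** (-3 / 20), freqs)
```

At the notch frequencies the direct signal and reflection nearly cancel, while in between they reinforce, which is exactly the colouration a plate reverb avoids by having no such discrete delay in its response.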
Last but not least, in a real room there is no clear separation between the first reflections and the late reverberation. It is all part of one reverberation process that gradually develops over time, starting from a single auditory event. This also means there is no sharp boundary between reflections that can still be located in space and those that can no longer be identified individually; this, too, evolves continuously over time.
A good example of a digital reverb that avoids this separation between early and late reverberation, and operates in “true stereo” at the same time, was impressively demonstrated by the Quantec QRS as early as the beginning of the 1980s. Its ability to reproduce stereo accurately was one of the reasons why it became an all-time favourite not only in music production, but also in post-production and broadcasting.
Artificial reverberation is full of subtleties and details, and one might wonder why we can perceive them at all. Ultimately, it comes down to the fact that evolution demanded such fine-tuning of our sensory system. It was a matter of survival for all animal species to be able to recognise immediately, at all times: what is it, and where is it? The entire sensory system is designed for this and even combines the different sensory channels to answer these two questions. Fun fact: this is exactly why some visual cues can have a significant impact on what is heard, and why blind tests (in both senses of the word) are so important for assessing certain audio qualities. See also the “McGurk Effect” if you are interested.
Have fun listening!