Note: Some of the tips rely on features from the GE version.
Mixing against HP/LP combo
A good generic practice when EQing several tracks in a mix is to start by dialing in HP/LP combinations at an appropriate level and then do further EQing/mixing against those settings. The tilt filter is also a good tool for a very first, rough tonal correction; the details can then be worked out afterwards with the three EQ bands.
Preserving low-end energy when high-pass filtering
A cool trick to preserve some low-end energy when high-pass filtering is to boost the low end while the EQ-SAT feature is engaged. As the routing diagram shows, the HPF comes after the main EQs and EQ-SAT. This way, harmonic overtones are generated from the fundamentals before the HPF is applied.
Decoupling the low-end
The low-end EQ features a “Phi” option switch which decouples the low end via an allpass filter network. The crossover can be freely adjusted with the normal frequency control of this band, while the gain control has no effect in this mode. This may work great on mellow bass drums, just as an example, but in other cases it might lose some definition as a trade-off.
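SlickEQ’s actual filter topology isn’t published, but the general idea behind such phase decoupling can be sketched with a generic first-order allpass stage – unity magnitude at all frequencies, with a frequency-dependent phase shift around the chosen crossover. The function name and structure here are purely illustrative, not the plug-in’s internals:

```python
import numpy as np

def first_order_allpass(x, fc, fs):
    """Generic first-order allpass: flat magnitude response,
    phase ~0 deg far below fc and approaching -180 deg far above it.
    H(z) = (c + z^-1) / (1 + c*z^-1)"""
    t = np.tan(np.pi * fc / fs)
    c = (t - 1.0) / (t + 1.0)
    y = np.zeros_like(x, dtype=np.float64)
    x1 = 0.0  # previous input sample x[n-1]
    y1 = 0.0  # previous output sample y[n-1]
    for n in range(len(x)):
        y[n] = c * x[n] + x1 - c * y1
        x1, y1 = x[n], y[n]
    return y
```

Because the magnitude stays flat, only the phase relationship between the low end and the rest of the spectrum changes – which is exactly why such a network can soften a bass drum without any gain adjustment.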
Compare different settings
SlickEQ contains two effect settings slots, A and B. Use them in combination with the automatic output gain control to A/B test different settings. Within the plug-in you can move settings between A and B, and copy & paste lets you freely transfer settings between different plug-in instances. Undo/redo also comes in handy here.
Adjusting precise values
The gain/frequency displays can also be used to enter specific values, and shortcuts are accepted as well – e.g. “5k” can be entered to set a value to 5000. And did you know that SlickEQ has mouse-wheel support?
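In code terms, such a shortcut parser is a small thing. This is only an illustrative sketch – the plug-in’s actual parsing rules aren’t documented here, `parse_value` is a hypothetical name, and “k” for kilo is the only shortcut assumed:

```python
def parse_value(text):
    """Parse a parameter entry like '5k' -> 5000.0 or '250' -> 250.0.
    Only the 'k' (kilo) suffix is handled in this sketch."""
    text = text.strip().lower()
    if text.endswith("k"):
        return float(text[:-1]) * 1000.0
    return float(text)
```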
This article explores how techniques akin to HDR imaging can be adapted to the audio domain.
The early adopters – game developers
The recently cross-linked article “Finding Your Way With High Dynamic Range Audio In Wwise” gives a good overview of how the HDR concept has already been adopted by game developers over recent years. Mixing in-game audio poses its very own challenge: arbitrarily occurring audio events have to be mixed in real time, while the game is actually being played. Opposed to that, when we mix off-line (as in a typical song production) we have a static output format and, of course, no such issues.
So it comes as no surprise that the game developers’ approach turned out to be a rather automatic/adaptive in-game mixing system, capable of gating quieter sources depending on the overall volume of the entire audio, plus performing some overall compression and limiting. The “off-line mixing audio engineer” can always do better, and if a mix is really too difficult, even the arrangement can be fixed by hand during the mixing stage.
There is a further shortcoming, and from my point of view that is the overly simplistic and reduced translation from “image brightness” into “audio loudness”. This might work to some extent, but since the audio loudness race emerged we already have clear proof of how utterly bad that can sound in the end. At the very least, many more details and effects have to be taken into account to do better in terms of dynamic range perception. [Read more…]
Doug, when and how did you arrive in the music business?
I have had an interest in electronics ever since I was a kid growing up in the 1950s and 1960s. I built a crystal radio receiver when I was 8 and my first audio amplifier (tubes, of course) when I was 10. I passed the test for an amateur radio license when I was 12 and that experience of communicating using Morse code was excellent training for learning to hear. I built a lot of my own radio equipment, and experimented with my own designs.
The high school I attended had an FM broadcast station. Most of the sports and musical events were broadcast, and I learned about recording orchestras, marching bands, choirs, and plays. Friends asked me to record their bands, which was my first experience working with non-classical music.
Another major factor was that my father was a French horn player in the Philadelphia Orchestra. As a kid, I would attend concerts, rehearsals, and sometimes recording sessions and broadcasts. I learned a lot about acoustics by walking around the Academy of Music in Philadelphia during rehearsals.
It would seem logical that my musical exposure and my interest in electronics would combine to make the career in pro audio I have had for over 40 years now.
I was a studio owner for many years before starting the D.W. Fearn manufacturing business, which started in 1993. [Read more…]
This comprehensive and in-depth article about HDR imaging was written by Sven Bontinck, a professional photographer and a hobby-musician.
A matter of perception.
To be able to use HDR in imaging, we must first understand what dynamic range actually means. Sometimes I notice people mistake contrast in pictures for dynamic range. The two concepts are related, but they are not the same. Let me start by briefly explaining how humans receive information with our eyes and ears. This is important because it influences the way we perceive what we see and hear, and how we interpret that information.
We all know about the retina in our eyes, where we find the light-sensitive sensors, the rods and cones. The cones provide us with daytime vision and the perception of colours. The rods allow us to see at low light levels and provide us with black-and-white vision. However, there is a third kind of photoreceptor, the so-called photosensitive ganglion cells. These cells give our brain information about length-of-day versus length-of-night, but also play an important role in pupillary control. Every sensor needs a minimum amount of stimulation to be able to react. At the same time, every kind of sensor has a maximum amount it may be exposed to. Above that limit, certain protection mechanisms kick in to prevent damage to the sensors. [Read more…]
Back at university, my very first DSP lectures were actually not about audio but image processing. Due to my interest in photography I have followed this amazing and ever-evolving domain over time. Later on, High Dynamic Range (HDR) image processing emerged and, besides its high impact on digital photography, I immediately started to ask myself how such techniques could be translated into the audio domain. And to be honest, for quite some time I didn’t have a clue.
This image shows a typical problem digital photography still suffers from: the highlights are completely washed out, and the lowlights turn into black abruptly without containing further nuances – the dynamic range performance is pretty poor. This is not what the human eye would perceive, since the eye features both a higher dynamic range per se and a better adaptation to different (and maybe difficult) lighting conditions.
On top of that, we have to expect severe dynamic range limitations in the output media, whether that’s a cheap digital print, a crappy TFT display or the limited JPG file format, just as an example. Analog film and prints have such problems in principle as well, but not to the same extent, since they typically offer more dynamic resolution and their saturation behavior is rather soft, unlike digital hard clipping. And this is where HDR image processing chimes in.
It typically distinguishes between single- and multi-image processing. In multi-image processing, a series of Low Dynamic Range (LDR) images is taken at different exposures and combined into one single new image which contains an extended dynamic range (thanks to some clever processing). Afterwards, this version is rendered back into an LDR image by utilizing special “tone mapping” operators, which perform a sort of dynamic range compression to obtain a better dynamic range impression, but now in an LDR file.
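The two stages can be sketched in a few lines. This is a deliberately simplified version: it assumes a linear sensor response (real pipelines also estimate the camera response curve, e.g. Debevec-style) and uses the classic Reinhard global operator x/(1+x) for tone mapping; the function names are illustrative:

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge LDR exposures (values in 0..1, assumed linear) into one
    HDR radiance map via a weighted average. A hat-shaped weight
    de-emphasises clipped shadows and blown highlights."""
    acc = np.zeros_like(images[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # 0 at the extremes, 1 at mid-grey
        acc += w * img / t                 # divide by exposure -> scene radiance
        wsum += w
    return acc / np.maximum(wsum, 1e-9)

def tone_map(hdr):
    """Reinhard-style global tone mapping operator: compresses high
    radiance values back into the 0..1 range of an LDR output."""
    return hdr / (1.0 + hdr)
```

The weighting is what makes the merge “clever”: each pixel is taken mostly from the exposure in which it is neither clipped nor buried in noise.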
In single-image processing, one single HDR image must already be available, and then just tone mapping is applied. As an example, the picture below takes advantage of single-image processing from a RAW file, which typically has a much higher bit depth (12 or even 14 bits with today’s sensor tech) as opposed to JPG (8 bits). As a result, a lot of dynamic information can be preserved even if the output file is still just a JPG. As added sugar, such a processed image also translates far better across a wide variety of output devices, displays and viewing light conditions.
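The bit-depth advantage is easy to quantify: each extra bit of a linear encoding doubles the ratio between the largest and smallest representable non-zero value, i.e. roughly one photographic stop or about 6 dB per bit. A quick back-of-the-envelope check (which ignores the gamma curve a JPG applies, redistributing its 8 bits non-linearly):

```python
import math

def linear_dynamic_range_db(bits):
    """Ratio between the largest and the smallest non-zero code value
    of a linear n-bit encoding, expressed in dB: 20*log10(2**n)."""
    return 20.0 * math.log10(2.0 ** bits)

# compare typical JPG vs. RAW sensor bit depths
for bits in (8, 12, 14):
    print(f"{bits} bit -> {linear_dynamic_range_db(bits):.1f} dB")
```

So a 14-bit RAW file carries roughly 36 dB (six stops) more linear headroom than an 8-bit file, which is exactly the margin the tone mapping step can exploit.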
Sascha, are you a musician yourself, or do you have some other sort of musical background? And how did you get started developing your very own audio DSP effects?
I started learning to play bass guitar in early 1988, when I was 16. Bass is still my main instrument, although I also play a tiny bit of 6-string, but I’d say I suck at that.
The people I played with in a band in my youth were mostly close friends I grew up with, and most of us kept on making music together when we finished school a couple of years later. I still consider that period (the mid-nineties) as sort of my personal heyday, music-wise. It’s when you think you’re doing brilliant things but the world doesn’t take notice. Anyway. Although we all started out doing Metal, we eventually did Alternative and a bit of Brit-influenced Wave Rock back then.
That was also the time when more and more affordable electronic gear came up, so apart from doing the usual rock-band lineup, we also experimented with samplers, DATs, click tracks and PCs as recording devices. While that in fact made the ‘band’ context more complex – imagine loading in a dozen disks into the E-MU on every start of the rehearsal until we equipped it with an MO drive – we soon found ourselves moving away from writing songs through jamming and more to actually “assembling” them by using a mouse pointer. In hindsight, that was really challenging. Today, the DAW world and the whole process of creating music is so much simpler and intuitive, I think.
My first “DAW” was a PC running at 233 MHz, and we used PowerTracks Pro and Micro Logic – a stripped-down version of Logic – although the latter never clicked with me. In 1996 or ’97 – can’t remember – I purchased Cubase and must have ordered right within a grace period, as I soon got a letter from Steinberg saying they had now finished the long-awaited VST version and I could have it for free, if I wanted. WTF? I had no idea what they were talking about. But Virtual Studio Technology, that sounded like I was given the opportunity to upgrade myself to being “professional”. How flattering, you clever marketing guys. Yes, gimme the damn thing, hehe.
When VST arrived, I was blown away. I had a TSR-8 reel machine, a DA-88 and a large Allen & Heath desk within reach, and was used to running the computer mainly as a MIDI sequencer. And now I could do it all inside that thing. Unbelievable. Well, the biggest challenge then was finding an affordable audio card, and I bought myself one that only had S/PDIF inputs and outputs, was developed by a German electronics magazine and sold in small quantities exclusively through a big retail store in Cologne. 500 Deutschmarks for 16 bits on an ISA card. Wow.
The first plugin I bought was Waves Audio Track, sort of a channel strip, which was a cross-promotion offer from Steinberg back then, 1997, I guess. I can still recall its serial number by heart.
Soon, the plugin scene lifted off, and I collected everything I could, like the early mda stuff, NorthPole and other classics. As our regular band came to nothing, we gathered our stuff and ran sort of a small project studio where we recorded other bands and musicians and started using the PC as the main recording device. I upgraded the audio hardware to an Echo Darla card, but one of my mates soon brought in a Layla rack unit so that we had plenty of physical ins and outs.
You really couldn’t foresee where the audio industry would go, at least I couldn’t. I went fine with this “hybrid” setup for quite a long time, and did lots of recording and editing back then, but wasn’t even thinking of programming audio software myself at all. I had done a few semesters of EE studies, but without really committing myself much.
Then the internet came along. In 1998, I made a cut and started taking classes in Informatics. Having finished in 2000, I moved far away, from West Germany to Berlin, and had my first “real” job in one of those “new economy” companies, doing web-based programming and SQL. That filled the fridge and was somehow fun to do, but wasn’t really challenging.

As my classes had included C, C++ and also Assembler, and I still had a copy of Microsoft’s Visual Studio, I signed up for the VST SDK one day. At first, I probably did pretty much the same thing as everybody: compile the “gain” and “delay” plugin examples and learn how it all fits together. VST was still at version 1 at that time, so there were no instruments yet, but I wasn’t much interested in those anyway – or at least I couldn’t imagine writing myself a synthesizer.

What I was more interested in was how to manipulate audio so that it could sound like a compressor or a tube device. I was really keen on dynamics processing at that time, perhaps because I always had too few of those units. I had plenty available when I was working part-time as a live-sound engineer, but back in my home studio, a cheap Alesis, dbx or Behringer was all I could afford. So why not try to program one? I basically knew how to read schematics, I knew how to solder, and I thought I knew how things should sound, so I just started hacking things together – probably in the most ignorant and naive way, from today’s perspective.

I had no real clue, and no serious tool set apart from an old student’s copy of Maple and my beloved Corel 7. But there were helpful people on the internet and a growing community devoted to audio software, and that was perhaps the most important factor. You just weren’t alone. [Read more…]
Dave, can you tell us a little about how you got into music, and your professional career as an audio effects developer so far?
Started writing trackers as a child, then wrote some code to allow me to DJ with trackers. By 14 I was writing commercial software. Had some great teachers and lecturers who helped me a lot. Did my final-year project with Focusrite. Won the project prize. Spent 4.5 years at Focusrite (I was employee 12 or 13) to add DSP to the company, during which time we acquired Novation and grew quite a lot. We made a lot of money from audio interfaces, so that kinda took over, and I wanted to get back to the DSP (at Focusrite I did the Forte suite, helped with Liquid Channel/Mix, the Saffire suite, plus other non-DSP projects). Left for Sonalksis, built all their shipping products (except CQ1 and DQ1), although I’d built tbk1 years before and they’d been selling it. Was fun but chaotic. Left to go freelance so I could start my own outfit, during which time I worked with Neyrinck, TAC System, Focusrite, Novation, Studio Devil, FXpansion, Brainworx/Plugin Alliance, etc. Then started dmgaudio. And here we are now. [Read more…]
Chris, you are the man behind the Canada-based Quantum-Music studio. What was your journey towards this venture?
My father (Alain Dion) was an internationally renowned live sound engineer and technical producer (Nat King Cole, Sting, Celine Dion, Cirque du Soleil, and many locally famous artists). I therefore grew up in an environment where high-fidelity audio was the standard. My father hated everything that sounded less than perfect; unconsciously, he trained my ears. I owe him a lot for that. Nowadays, every time we see each other, we spend much of our time talking about compressors, consoles and techniques. [Read more…]
Dave, some of your Crane Song devices are already legend – how did that affair get started?
Before I started Crane Song I had been designing the Summit Audio gear, up through and including the DCL-200, plus some gear that did not get finished. At the start of the Summit thing I was teaching electronics at a two-year technology school, and I was also part owner of a small studio that had a 1” 8-track, an Ampex MM1000. The studio grew into what is now Inland Sea Recording, owned by me, which is a full commercial room with a lot of nice microphones and other gear. It now serves as a design environment and has a number of customers who help keep it going. Developing in a real studio environment helps make sure that what you are working on works correctly and sounds good. When doing a session, if one needs to mess with the gear, that calls the design into question; but if you can turn a knob and it makes something sound good, that tells you something about the design. [Read more…]