Plugins have improved drastically in quality over the last 10 years. We are now at the point where innovative developers are creating truly remarkable-sounding plugins: not just digital emulations of classic analog gear, but also new types of processors that wouldn’t be possible in the real physical world.
Unlike hardware, plugins rely on complex algorithms, and the sound of a plugin depends on the developer’s coding. A better coder can achieve a better-sounding plugin, much like a better electrical engineer can design a better circuit for a compressor. Trained ears matched with talented developers allow software companies to turn out some very high-quality plugins.
So, what is oversampling or upsampling?
Oversampling is when a plugin converts the audio to a higher sample rate for processing. Processing at the higher rate usually removes some of the negative artifacts associated with processing digital audio, chiefly aliasing. Aliasing happens when processing (saturation or clipping, for example) generates frequency content above the Nyquist limit, which is half the sample rate; instead of simply disappearing, that content folds back down into the audible range and is interpreted as different, unrelated frequencies.
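The folding behavior described above is easy to compute. As a rough sketch (the function name here is my own, not anything from a real plugin or library), this shows where a frequency above Nyquist ends up after sampling:

```python
def aliased_frequency(f, sample_rate):
    """Return the frequency actually observed when a tone at `f` Hz
    is sampled at `sample_rate` Hz (folding across the Nyquist limit)."""
    nyquist = sample_rate / 2.0
    f = f % sample_rate          # the sampled spectrum repeats every sample_rate Hz
    if f > nyquist:
        f = sample_rate - f      # content above Nyquist folds back down
    return f

# A 30 kHz harmonic created by distortion at a 44.1 kHz sample rate
# does not disappear -- it shows up as an audible 14.1 kHz tone.
print(aliased_frequency(30_000, 44_100))   # 14100.0
```

That 14.1 kHz tone is harmonically unrelated to the original signal, which is why aliasing tends to sound harsh and unmusical.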
Oversampling mitigates these issues and will usually yield smoother, more pleasant-sounding results at the cost of more CPU power. But not all oversampling algorithms are created equal, and some are better than others. You may even find that you prefer the sound of a plugin with oversampling turned off; it’s not guaranteed that oversampling will make the audio sound “better.” If you see a plugin or DAW that offers oversampling and you have the CPU power to spare, try it out and see if you prefer the way it sounds. If you are short on CPU power, you’ll probably want to keep oversampling off unless you decide to freeze the tracks.
When I started my first internship at Sabella Studios, the place was littered with strange things I’d never seen before. But nothing looked as strange as this black and red box that looked more like a spaceship control panel than recording equipment. That box was our EMT 251, which had been sitting in the corner of the control room and had built up an impressive collection of dust. We’re a small studio with a lot of vintage equipment, so it’s not uncommon for a piece of gear to be temporarily out of service, but this was different. No one was sure if we’d ever get this thing to work again. We could send it out to the one specialist in California who knew how to fix it, but it would cost us $1,500 just to have it looked at. As a small studio, we pride ourselves on doing just about everything in house, including the maintenance and repairs for all of our equipment. It’s how we’ve been able to survive for so long.
Opening the front of the unit to find hundreds of ICs didn’t make the task of repairing it seem any easier. To make things even more difficult, EMT, the manufacturer of the box, had scratched off any identifying part numbers to keep the ingredients of their mystical digital reverb a secret. Help came from an unexpected place: an intern led us to his father, an electronics technician originally from Russia who didn’t speak any English. Fast forward a week or two, and he was at the studio with his oscilloscope, trying to figure out what was wrong with our 251. He decided to take it home to look at it further, and within a week, we had it back up and running.
It’s hard to imagine that the first version of any digital technology could be the best. It’s easy to see why earlier analog gear sounds better: better manufacturing techniques, lower cost of goods, and easier availability of materials all contributed to better overall build quality. In today’s digital world, everything eventually has a newer, bigger (or smaller), better, and more powerful upgraded model. The first version is never the best. So how is it that the sound of the first digital reverb unit can still surpass even the most complex and expensive modern units?
I didn’t know, but I needed to find out…
Digital audio is something everyone uses, from the home recording hobbyist to the professional recording studio. Recording digitally is built into the standard workflow when creating every genre of music. There was a time when nothing was digital, so how did the world go from entirely analog to just about completely digital? In modern music production, you don’t need to use any analog audio processing at all if you don’t want to.
People originally started to explore digital audio for one reason: time-based effects. Early in the history of recorded music, there was no easy way to create delays and reverbs except with large, expensive tools like reverb chambers, plates, and magnetic tape machines. There was very limited flexibility when it came to time-based effects, which had been paramount to every single song on the radio since 1947, when Bill Putnam decided to put a speaker and microphone in his studio’s bathroom. Nowadays, we fire up whatever plugin we want, but before digital audio, you had to run the signal through a piece of hardware or mic up a physical space. So how did it go from microphones in bathrooms to recording 48 tracks simultaneously into your laptop with a different digital effect on every track?
The EMT 250 was essentially one of the first plugins. It’s like if a Waves or Slate plugin you just bought came with a computer, interface, and converters and was all built into one box with the sole purpose of running that plugin, and with a $20k price tag, it certainly wasn’t cheap.
A Conversation with Dr. Barry Blesser
I had the pleasure of speaking with Dr. Barry Blesser, who is considered one of the grandfathers of digital audio. In 1974, Dr. Blesser oversaw the creation of the algorithm, and some of the hardware, for the first-ever digital reverb unit.
Dr. Blesser was kind enough to speak with me and explain the history of digital audio and his involvement in it. He began the interview by telling me about Manfred R. Schroeder, a German physicist who worked at Bell Labs during the 1950s. Schroeder was the very first person to attempt digital signal processing. At the time, computer technology was so slow that digital audio was completely impractical: processing a 3-minute piece of audio could take 24 hours. Although Schroeder’s experiments were of no practical use and were done purely out of curiosity and as a proof of concept, they did show that digital audio was possible.
Dr. Blesser then spoke of a chance encounter with Francis F. Lee, who would become the founder of Lexicon. “I was working in the MIT Labs at 3 in the morning because that was when I could get access to the minicomputers, and Francis Lee walked in. He was in the computer world; he didn’t know anything about digital audio. And I was in the [analog] audio world, so we bumped into each other at 3 in the morning and started brainstorming about how to merge these two. That’s how Francis Lee got started with Lexicon.”
The result of this encounter was the first-ever digital signal processor, the Delta T-101, released in 1971. Lee had been working on a digital heart monitor and, at Dr. Blesser’s suggestion, experimented with running audio through it. After a lot of experimentation, the result was a 100 ms audio delay line, which could be used to help overcome live sound propagation delays or as a pre-delay for plate reverbs. You put audio in, and 100 ms later, it comes out. That was it. It was revolutionary at the time, but by today’s standards it seems like just a step above useless. Steve Temmer, owner of Gotham Audio, commissioned Lexicon to make 50 units that he could release under the Gotham Audio name. A second version, the T-102, was eventually released under the Lexicon name with an improved signal-to-noise ratio.
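Functionally, a fixed delay line like the T-101’s is one of the simplest structures in digital audio: a buffer that holds the last N samples. The sketch below is my own illustration of the concept (the function name and parameters are hypothetical, not anything from the actual hardware):

```python
from collections import deque

def make_delay_line(delay_ms, sample_rate):
    """Fixed delay line in the spirit of the Delta T-101: audio in,
    the same audio out `delay_ms` later. Until the buffer fills,
    the output is silence (0.0)."""
    n = int(sample_rate * delay_ms / 1000)   # delay length in samples
    buf = deque([0.0] * n, maxlen=n)
    def process(sample):
        out = buf[0]        # oldest sample: written n steps ago
        buf.append(sample)  # newest sample pushes the oldest out
        return out
    return process

delay = make_delay_line(100, 48_000)   # 100 ms at 48 kHz = 4,800 samples
```

The 1971 hardware had to implement this same idea with racks of then-expensive digital memory, which is part of why a box that “just” delays audio by 100 ms cost so much.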
Throughout the 1960s, Dr. Blesser worked with EMT on many of their analog audio products. “They rejected the idea of doing real digital audio until Francis Lee started Lexicon. After Lexicon was successful with the T-101 they got pissed, and they said, ‘ok, we want to be in that business.’”
Peter Bermes, an industrial designer working for EMT, recalls that the initial meeting to plan and brainstorm the EMT 250 involved nine people seated at a round table. That meeting, which Bermes says took only four hours, went on to be the catalyst for the first digital reverb. It occurred in 1974 at the EMT plant in Kippenheim, Germany. Among the group were Erich Vogl, Karl Bäder, Barry Blesser, and Peter Bermes. Dr. Blesser, along with a team of engineers, went to work on developing an algorithm they could use for practical digital reverberation. With only the 100 ms delay box and Manfred Schroeder’s experiments to go on, Dr. Blesser’s team built a simulator that could be programmed to run different reverb algorithms for testing purposes. After about two years of research and development, the EMT 250 was ready, and 250 units were produced.
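EMT famously kept the 250’s algorithm secret, so the sketch below is not it. But the kind of building block Schroeder’s early experiments established, and that any reverb simulator of the era would have run, is the feedback comb filter: the input plus a decaying echo of the past. A minimal illustration, with my own function name:

```python
def feedback_comb(samples, delay_samples, feedback=0.7):
    """Schroeder-style feedback comb filter, a basic building block of
    early digital reverbs (not the EMT 250's actual secret algorithm).
    Implements y[n] = x[n] + feedback * y[n - delay_samples]."""
    buf = [0.0] * delay_samples   # ring buffer of past outputs
    out = []
    for i, x in enumerate(samples):
        y = x + feedback * buf[i % delay_samples]
        buf[i % delay_samples] = y
        out.append(y)
    return out
```

Feed it an impulse and you get an exponentially decaying train of echoes; classic Schroeder reverbs combine several such combs in parallel, followed by allpass filters, to thicken those echoes into something resembling a room.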
Still, none of that explains why the reverb holds up in today’s world of endless digital options, newer upgraded algorithms, and more advanced convolution technology. It comes down to the sound. It just sounds good. Forget all of the pioneering and innovation that took place to develop this device: even if this unit were introduced tomorrow instead of in 1976, it would still hold up as a great-sounding reverb, and that’s a testament to the designers. Above all else, they made sure it sounded good.
Luckily, you no longer have to spend $20k to get the amazing sound of a 250. Universal Audio had Dr. Blesser reverse engineer the whole algorithm so they could model it in their 250 plugin. Although Dr. Blesser said he had not heard it for himself, he was told it is completely bit-accurate.