
What the f*ck is dither?


To dither or not to dither that is the question…

The real question is, how far down the rabbit hole do you want to go? I’m sure there are plenty of incredibly talented engineers producing exceptional-sounding music who don’t know the first thing about dither. Dither isn’t going to make or break you, and even the most technical engineers will tell you that when working in 24-bit fixed point, the majority of people aren’t going to hear a difference. But the difference is there, and in our world, we are trying to accumulate small wins that increase the quality of our work. Individually these changes don’t make a huge difference, but as they accumulate with other small victories they can start to make a dramatic difference, and ultimately that is what separates a good engineer from a great one.

Unfortunately, there is a lot of misinformation out there on dither. It’s much more difficult than it should be to figure out what the hell dither is, whether you should use it, and why. I like articles that get straight to the point and can sum up the topic in a practical way, quickly, efficiently, and without getting too complicated. That’s how I prefer to look at the technical details of audio.

The truth is there is nothing I’d rather talk about less than the technicalities of digital audio, file formats, truncating bits, zeros, ones and the rest of that nonsense. Unfortunately, as an audio engineer and music producer, we have a responsibility to learn new technology if there is a chance it can improve the quality of our productions.

What is dither?

Dither is a specific type of low-level noise that is added when converting the bit depth of an audio file in order to reduce quantization distortion.

Why do you use dither?

Dither is required when reducing the number of bits in an audio file to help mask any quantization errors. Dither works by randomizing the quantization errors; the added noise has the effect of spreading the errors across the audio spectrum which in turn makes them less noticeable.

If you’re converting a file from 24-bit to 16-bit, you need to get rid of, or truncate, the 8 extra bits of information. The truncation of bits causes quantization distortion, which can make the audio sound brittle and gritty, as well as shrink the stereo image. A correctly dithered music file compared to one that wasn’t is night and day; the dithered version will always sound better, even to untrained ears.
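The idea can be sketched in a few lines of Python (a simplified illustration of TPDF dither, not any particular plugin’s algorithm; the signal level and sample rate here are arbitrary choices for the example):

```python
import math
import random

def quantize_16bit(sample, dither=False):
    """Quantize a float sample in [-1.0, 1.0] to a 16-bit integer.

    With dither=True, TPDF (triangular) noise about 1 LSB wide is added
    before rounding, which randomizes the quantization error so it behaves
    like steady noise instead of distortion correlated with the signal.
    """
    scaled = sample * 32767.0
    if dither:
        # Sum of two uniform randoms -> triangular probability distribution
        scaled += random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)
    # Round to the nearest integer and clamp to the 16-bit range
    return max(-32768, min(32767, int(math.floor(scaled + 0.5))))

# A very quiet sine wave, where quantization error is most audible
signal = [0.0005 * math.sin(2 * math.pi * 1000 * n / 44100) for n in range(441)]
truncated = [quantize_16bit(s) for s in signal]              # plain quantization
dithered = [quantize_16bit(s, dither=True) for s in signal]  # with TPDF dither
```

The undithered version collapses the quiet sine onto a handful of integer steps, producing an error that tracks the signal (distortion), while the dithered version varies randomly around those steps, trading that distortion for low-level noise.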

To hear what truncation distortion sounds like, you can watch Ian Shepherd’s video “Dither or distort? Listen and decide for yourself.”

Should you be using dither?

A lot of people say to only dither once; dither is noise, and we shouldn’t constantly be adding noise to our audio, right? Well, accomplished mastering engineer Ian Shepherd of The Mastering Show podcast says, “After all, it’s just noise – and very quiet noise, at that.” Quiet noise never hurt anybody, but hearing the effects of quantization errors can be quite jarring and painful. In order to minimize the quantization errors introduced when converting fixed bit depth audio files, you should use dither every time you process a file in your DAW (more on this later).

All modern DAWs process and calculate in 32-bit floating point regardless of the source bit depth. This means bit depth conversions are happening every time you process the audio in any way, like freezing tracks, bouncing in place, and consolidating regions. To my knowledge, most if not all DAWs do not add dither automatically, which means bits are being truncated every time these processes happen. You can avoid the need to dither entirely by always working in 32-bit floating point. I would even recommend sending a 32-bit floating point wav file to your mastering engineer and leaving the dithering to them. You won’t need to dither until you finalize your master into a fixed point bit depth.


Even though the audible artifacts of quantization errors are negligible when working at 24 bits, working in 32-bit floating point eliminates the need to add dither during mixing and leaves that one-time decision for mastering.
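A quick way to convince yourself that 32-bit floating point can hold 24-bit audio without loss is the round trip below (a sketch using Python’s struct module; float32 has a 24-bit mantissa, so every 24-bit sample value survives exactly):

```python
import struct

def roundtrip_through_float32(sample_24bit):
    """Scale a 24-bit integer sample to [-1.0, 1.0), store it as a
    32-bit float, then convert back to a 24-bit integer."""
    as_float = sample_24bit / 8388608.0   # divide by 2**23
    packed = struct.pack('<f', as_float)  # 4 bytes of IEEE 754 float32
    restored = struct.unpack('<f', packed)[0]
    return int(round(restored * 8388608.0))

# Every 24-bit value comes back bit-for-bit identical
for value in (-8388608, -1, 0, 1, 44100, 8388607):
    assert roundtrip_through_float32(value) == value
```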

Related articles:
20 quick and easy tips that will improve your productions
5 mixing mistakes that I used to make… and how to avoid them
Things I wish I learned sooner about audio engineering
The “your mixes sound bad in the car” phenomenon


20 quick and easy tips that will improve your productions (part 2)


You can read the first 20 quick and easy tips that will improve your productions here!

Quick tip 21

Try automating the master fader 1 or 2 decibels in the chorus to add excitement.

Quick tip 22

Decide what takes you like and make comps right away. Don’t leave important decisions for later!

Quick tip 23

Treat music like a job, even if it’s not your job!

Quick tip 24

Make a template and constantly keep refining it.

Quick tip 25

If you want your chorus to sound big, don’t make it bigger; make what comes before it smaller.

Quick tip 26

Separate the times you focus on sound design and sample organization from your actual production sessions.


Quick tip 27

Practice shortcuts and hotkeys! Your DAW is an instrument, treat it like one. Learning all of the hotkeys will drastically speed up your workflow.

Quick tip 28

Put each vocal section on its own track (lead verse, lead chorus, lead bridge) and send to a vocal bus. This makes it easy to change processing for the vocal between sections.

Quick tip 29

Don’t bounce and repeatedly listen to songs you are still working on. If you repeatedly listen to a work in progress, your brain will start to get used to it, and it will become more difficult to make changes or add new tracks later.

Quick tip 30

Try adding chorus before a reverb to help widen and give movement to your return.

Quick tip 31

Avoid loopitis by following the structure of a reference track.

Quick tip 32

CONTRAST! CONTRAST! CONTRAST! Stereo sounds the best when there’s contrast. For the widest sound, make sure what is playing in the left speaker is different than what’s playing in the right.

Quick tip 33

ADD SOME AIR! For bright and shiny pop vocals make sure to add some “air” or frequencies over 14kHz.

Quick tip 34

Submix similar sounds and parts prior to mixing so things stay organized and you can easily process multiple tracks at once.

Quick tip 35

De-essing vocals allows you to get rid of harsh sibilance while maintaining clarity. I like to use one at the beginning of my vocal chain to make sure the sibilance is tamed before the vocal is compressed or equalized.

Quick tip 36

Small amounts of compression on multiple channels can really add up and help glue things together.

Quick tip 37

Don’t set it and forget it! Those faders need to move! Performing volume automation will give your mixes life and make you a lot cooler.

Quick tip 38

If a reverb isn’t working on a vocal, try a short delay. It can give you the sense of space you need without cluttering things up.

Quick tip 39

Modern digital recordings pick up transients very cleanly. Add saturation before compression to help tame some of those sharp transient peaks so your compressor will not have to work as hard.

Quick tip 40

Clean up that low end! Use plenty of high-pass filters on instruments that don’t have fundamental bass frequencies to get a tight and punchy low end.

Related articles:
20 quick and easy tips that will improve your productions
5 mixing mistakes that I used to make… and how to avoid them
Things I wish I learned sooner about audio engineering
The “your mixes sound bad in the car” phenomenon


5 of the rarest and most unique synths ever made


Disclosure: Audio Hertz is supported by its audience. When you purchase through links on our site, we may earn an affiliate commission.

The following five rare and unique synthesizers weren’t very popular, but their scarcity and uniqueness make them even more valuable today. Being a newly self-declared synth geek, I’m enjoying the process of learning a new instrument. It’s nice to know there is even more money to spend on audio gear than I had previously thought.

We’ve all heard of Junos, Prophets, Arps, and DX7s, but what about the synthesizers that weren’t as popular? I think it’s fascinating to think about the smaller run projects and how they were all, at one point, the culmination of someone’s imagination.

Unfortunately, not all synths had as much widespread appeal as some of the industry staples I mentioned earlier. There are many reasons a synthesizer might not do well commercially, whether it be faulty components, a bad user interface, or just failed marketing efforts from the company; most of the time, it has nothing to do with the actual sound of the instrument.

Wersi EX20


Wersi is a German digital organ maker that is still in business today. They were one of the first organ manufacturers to embrace digital technology. After ten years of producing organs, they decided to try their hand at synthesizers and entered the market with the EX20 module, a digital 20-note polyphonic synth. The sounds from this synth are built from an arrangement of modules such as wave, pitch envelope, amplitude envelopes, and analog effects, combining Fourier synthesis, waveform samples, and modular-style synthesis.

There’s also an EX-10 model that looks similar and uses cartridges; however, the cartridges are not cross-compatible and are extremely difficult to find. Owners and users of this unit say it is incredibly challenging to program and nearly impossible without the manual, which is entirely in German.

Wersi made some other unique synths that are just as rare, including the Bass Synthesizer and the Stage Performer Mk1.

Wersi Bass Synth

In this interview with talented synth programmer Wolfram Franke of Waldorf Synthesizers, he talks about the ES20 and some of its unique features.

When asked what synth started his journey into programming his own, he directly mentions the ES-20 from Wersi:

“It’s from the German organ company Wersi and it is called MK1 (Series III). It was a 20 voice, 8 part multitimbral additive synth with up to 32 harmonics, an integrated chorus/ensemble effect, and only one VCF, but that one was a copy of the Moog 24dB VCF plus a good-sounding overdrive.

It had a lot of very interesting features that you won’t find in any other synth like modular envelopes with 8 stages where each stage could hold a module that did something like generating random steps, vibrato, linear or exponential ramps or simply holding the level for a certain time.

If you ask why we didn’t put something like this into our Waldorf synths, I can easily answer that I was probably the only person outside of Wersi who could program this thing!”

Beilfuss Step Synth

This synth is part folklore, part reality. There are videos of it, so it must exist, but mentions of it on the internet go back to 1996, and others claim to have seen advertisements in Keyboard Magazine even years before that. The company self-declared this unit “the first ever Step Synthesizer.”

The first unique thing that sticks out about this synth is its 93 keys, which are simply two 4-octave synths connected together.

The Beilfuss Step Synth was developed by Keith Williams, who also goes over the instrument in depth in this video posted to YouTube in 2011.

Using the Wayback Machine, I was able to find the now-defunct single-page Beilfuss website, which gives some more info on the synth:

“Unlike any analog or digital synthesizer’s controls, the patented tone control consists of sixteen steps simply outlining the waveform as set by the Signal Controls you see at the left of the control panel. Similarly, the envelope and filter contour transients and their time intervals are also set by the Signal Controls for rhythms or extended notes. You may easily add prompted, parallel voicings, combining settings of both sides for complex notes.

There are 32 controls and 142 switches for direct programming. Dedicated LED switches always read out their side of the full eight octaves, split keyboard. Five octaves of transposition is possible.”

ASI Synthia

Early promotional material for an ASI Synthia

Synthia was a very rare “all-in-one” high-end synthesizer released in 1982 by Adaptive Systems, Inc. These units started at $20,000, which may be why very few people have ever seen, let alone played, a working one. There aren’t even any sound clips or video demos posted online anywhere. I wish there were videos or recordings of this one so I could verify this claim, but I guess I’ll just need to use my imagination.

These premiered in the form of two prototypes at the 1982 NAMM show. The prototypes did not sell, and soon after, manufacturing ceased. The synthesizers are currently not functioning and are sitting in the basement of the inventor, Mark E. Faulhaber.

One unique feature that would warrant the incredibly high price tag was the touch-responsive plasma screen, which at the time was very new technology but is now seen in just about every modern all-in-one keyboard.

The plasma screen followed the user’s finger, which could be used to adjust bar graphs that controlled the different adjustable parameters on the synth like harmonic content, envelope parameters, controller assignments, and more.

Most of the information on this synthesizer is from the book Vintage Synthesizers by Mark Vail.

Hammond Novachord

The Novachord is considered to be the godfather of the modern polyphonic synthesizer. The Hammond company debuted the instrument at the 1939 World’s Fair, only four years after the invention of the tonewheel organ. These synths included many features that are now standard in polyphonic synthesizers, like the use of subtractive synthesis.

This behemoth of an instrument was truly a marvel, including more than 150 vacuum tubes, over 1,000 custom-made capacitors, two 12” speakers, and a power amp. Everything was housed beautifully in a wood cabinet similar to that of the more popular B3.

The Hammond team found a way to derive a full keyboard from just 12 tuned chromatic oscillators. They accomplished this by using a divide-down architecture which became standard in later synthesizers. This principle gives the Novachord its 72 notes of polyphony.
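The divide-down principle is simple enough to sketch in code (a hypothetical illustration using equal-tempered frequencies; the Novachord’s actual tuning circuits obviously worked in analog hardware, not software):

```python
# Twelve master oscillators tuned to the top octave, starting at C8 (Hz)
top_octave = [4186.01 * 2 ** (n / 12) for n in range(12)]

def divide_down(top_octave, num_octaves):
    """Each divide-by-two stage produces the same pitch one octave lower,
    so 12 oscillators yield the entire keyboard."""
    keyboard = []
    for octave in range(num_octaves):
        divisor = 2 ** (num_octaves - 1 - octave)  # deepest division = lowest octave
        keyboard.extend(f / divisor for f in top_octave)
    return keyboard

notes = divide_down(top_octave, 6)
assert len(notes) == 72  # matches the Novachord's 72 notes of polyphony
```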

Due to the demands of World War II and the lack of sales and parts, production of Novachords ceased in 1942. One of the reasons it did not translate well at the time was that organists and pianists had a hard time playing it, as it was better at making ethereal sounds than recreating the more familiar sounds of a piano or organ.

An estimated 200 Novachords are still in existence, and even fewer are operational.

Ensoniq Fizmo


Ensoniq was an American synthesizer and sampler manufacturer that was founded in 1983, later acquired by and merged with E-Mu, and eventually discontinued entirely in 2002. One of the more unique synthesizers released by the company was the Fizmo. Developed in 1998, the synth uses a “digital acoustic simulation Transwave with 4 MB of ROM, up to 4 voices per preset, each voice with two oscillators, independent LFOs and FX: 48 voices maximum, with three separate FX units built in.”

The name F-I-Z-M-O comes from the five real-time control knobs on the unit, “F” adjusted the effect modulation, “I” adjusted wave modulation, “Z” adjusted filter cutoff, “M” adjusted the detuning, and the “O” parameter changed based on the preset.

Problems that led to the synth’s demise were its unfinished operating system, clunky user interface, and issues with the power supply. Another problem was that the look of the unit made people think it was similar to the analog synthesizers of the period, yet it was challenging to create classic “analog” sounds with it. The Fizmo was mainly known for its atmospheric and ethereal sounds.

An estimated 500 to 2,000 of these synthesizers were produced.

Related articles:
5 mixing mistakes that I used to make… and how to avoid them
[Even more] Things I wish I learned sooner about audio engineering
Things I wish I learned sooner about audio engineering
The “your mixes sound bad in the car” phenomenon


20 quick and easy tips that will improve your productions


I recently started a new series called “Quick tips” on the Audio Hertz Instagram account and decided it would be a good idea to compile them all together and put them up as a single post.

Here are my first 20 quick and easy tips that are guaranteed to improve your productions.

Quick tip 1

Studio monitors should be the last thing turned on, and the first thing turned off.

Quick tip 2

Read the damn manual!

Quick tip 3

The zero point is where the positive and negative sides of a wave meet. By cutting there you can avoid clicks and pops in your audio, although it never hurts to add a short crossfade.
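As a rough sketch of what “cutting at the zero point” means in code (assuming audio is held as a list of float samples; the sample positions here are arbitrary):

```python
import math

def next_zero_crossing(samples, start):
    """Return the first index at or after `start` where the waveform
    crosses zero (the sign changes between adjacent samples)."""
    for i in range(start, len(samples) - 1):
        if samples[i] == 0 or (samples[i] > 0) != (samples[i + 1] > 0):
            return i
    return None

# A sine wave with a 100-sample cycle; an edit placed at sample 37
wave = [math.sin(2 * math.pi * n / 100) for n in range(400)]
cut_point = next_zero_crossing(wave, 37)  # snaps the cut to the zero point at sample 50
```

A DAW’s “snap to zero crossing” option is doing essentially this search so the edit lands where the waveform is at (or passing through) zero.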

Quick tip 4

To get a larger sounding floor tom, place foam pads under the feet so you don’t lose any of the low resonances through the floor.

Quick tip 5

Trying to get the vocals to pop? Duplicate the lead vocal track, add distortion, a sh*t ton of compression, boost EQ in the 1-5kHz range and automate the new track back in during the parts you need the vocal to cut through more.

Quick tip 6

Don’t be afraid to add processing to your effects returns.

Quick tip 7

Your goal when mixing shouldn’t be to make it sound the best, your goal should be to make it feel the best.

Quick tip 8

Automate the tempo of your track to go up a few BPM in the chorus. It adds excitement and life, just like when real musicians play together.

Quick tip 9

Don’t be afraid of using an extremely wide Q setting on an equalizer. Wider boosts at smaller increments tend to sound less intrusive and more musical.

Quick tip 10

The best engineers know when to tweak a sound and when to leave it alone. Listen before you decide to add EQ or compression.

Quick tip 11

Remember to mix at lower volumes! Everything sounds better when it’s louder, so make sure your track still retains the balance and punch at lower volumes. I also love adjusting compressors at low volumes, it makes it easier to hear what it’s really doing to the transient.

Quick tip 12

After mixing for a long time try flipping the left and right to listen from a new perspective.

Quick tip 13

Having trouble balancing the low end? Try cutting 60-80 Hz in the bass and boosting 60-80 Hz in the kick.

Quick tip 14

Try using a transient designer on your drum reverb return. Turning up the attack should yield a tighter and punchier sound.

Quick tip 15

Your groove is driven by the interplay between your kick, snare, hi-hat, and bass. If your rhythm section doesn’t groove, ain’t nobody gonna move.

Quick tip 16

Commit to a sound, save CPU power and bounce your virtual instruments to audio. Don’t get stuck in the habit of leaving yourself a ton of decisions to make while mixing.

Quick tip 17

Try turning off tempo-sync on your time based effects. Subtle timing discrepancies are a cool way to loosen up the groove and give a more human feel to your programmed tracks.

Quick tip 18

You don’t always have to mix sounds loud enough to hear them. Sometimes an instrument or effects return only needs to be felt and not heard.

Quick tip 19

Use a room mic on an electric guitar and blend it in with the amp’s microphone to add space, depth, and attack to your guitar tracks.

*I need to make a correction on this one. If you’re looking for more attack, you can try adding a close mic on the guitarist’s picking hand; this also works well for bass guitar.

Quick tip 20

Organize your plugins! Just about every DAW these days will allow you to rearrange the way your plugins are laid out.

Related articles:
20 quick and easy tips that will improve your productions (part 2)
5 mixing mistakes that I used to make… and how to avoid them
[Even more] Things I wish I learned sooner about audio engineering
Things I wish I learned sooner about audio engineering


What the f*ck is audio clipping?


I’ve been interested in learning more about audio clipping for quite some time, but it wasn’t until recently that I was able to get my hands on a dedicated clipper plugin. I remember years ago hearing about mastering engineers clipping their converters for an extra 1-2 dB of gain. That made complete sense, but I’d never found a reason this would be applicable in any of my productions. “Clipper” has become a common term thrown around on message boards and Facebook groups, so I took more interest and decided to do some research and experiment for myself.

We’ve all heard of clipping. From the very beginning of learning to record, we are taught to avoid clipping at all costs. Many would think it’s synonymous with digital distortion and is, in every way, shape, and form, a negative artifact of digital audio. But they would be wrong. Not all clipping is a bad thing. Clipping that sounds bad is bad; clipping that sounds good and helps us achieve louder volume levels is good.

The highest possible point before the audio starts to distort inside your DAW is 0 dBFS. If you push a source past this threshold, the top of the waveform gets shaved off, so it looks more like a square wave than a standard round sine wave. Your waveform is effectively clipped.

There are two different types: hard clipping and soft clipping.

Hard clipping

If you look at a sine wave on an oscilloscope and raise the level into a clipper, the round sine wave gets squared off at the top, effectively shaving off the rounded edge of the waveform.

Sine Wave
Clipped Sine Wave

Hard clipping distorts the sound while adding additional harmonics to the original source. This can sound cool on its own as just an effect. But it can also be particularly useful when used on subfrequency instruments, such as the ever popular 808 kick drum. These additional harmonics will make the subfrequencies more audible on smaller speakers.
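Hard clipping is nothing more than a min/max on the signal. A minimal sketch (the drive amount and ceiling here are arbitrary example values):

```python
import math

def hard_clip(sample, ceiling=1.0):
    """Anything beyond the ceiling is squared off completely."""
    return max(-ceiling, min(ceiling, sample))

# Push a sine 6 dB past full scale: the top and bottom get flattened,
# which is what generates the additional harmonics
drive = 2.0
clipped = [hard_clip(drive * math.sin(2 * math.pi * n / 64)) for n in range(64)]
```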

Another common use of hard clipping is to make things louder. Many engineers will put one at the end of their mastering chain after the limiter. Here you can use a clipper to get a few extra dB of gain, similar to the mastering engineers I’d heard about that purposely clip their converters.

Soft clipping

Unlike hard clipping, where waves are completely squared off, with soft clipping the waves are rounded to create a smooth transition between the clipped and unclipped sections of the waveform. This makes for a more pleasing-sounding distortion that isn’t as harsh as hard clipping. Analog gear and magnetic tape do this naturally when transformers and circuits are overdriven. Many compressors and limiters also have soft clipping built in as a standard feature.
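One common soft-clipping curve is the hyperbolic tangent (a sketch, not any specific plugin’s transfer function): quiet signals pass through almost unchanged, while peaks get rounded into the ceiling instead of chopped off.

```python
import math

def soft_clip(sample):
    """tanh rounds the wave into the ceiling rather than squaring it off,
    loosely mimicking the way tape and transformers saturate."""
    return math.tanh(sample)

assert abs(soft_clip(0.1) - 0.1) < 0.001  # near-linear for quiet signals
assert 0.99 < soft_clip(5.0) < 1.0        # loud peaks rounded toward +/-1
```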

Soft Clipped Sine Wave

Clipper Plugins

Usually, when levels are pushed past the ceiling, they simply get squared off. A clipper plugin uses an algorithm that detects the overage and, rather than completely chopping off the waveform, shapes it. A good clipper plugin keeps the transient impact while still adding harmonic content and volume.

You can also use them to “clip” or shave off the top peaks of your waveform so you can tame transients, add harmonic content, and effectively achieve a higher perceived loudness. It seems simple enough; however, there’s an art to using one, as too much will leave you with dull, transient-less material.

Related articles:
What the f*ck is Linear Phase EQ?
What the f*ck is 32 bit floating?
[Even more] Things I wish I learned sooner about audio engineering
The “your mixes sound bad in the car” phenomenon


Why you need to stop arguing about audio gear online

Boy screaming into microphone

I love talking about audio and music. I love having discussions with other people who enjoy the same things I do. This is why I found myself frequenting not only forums, message boards, and Facebook groups but also live events and other places where people who enjoy recording audio and making music gather. Since I was a kid, I was always looking for a way to join a community while not actually having to join a community. When I started playing the bass in the 6th grade, I joined a bass guitar message board where I could talk to other bassists about ways to improve. Later, when I got into competitive paintball and poker, I did the same thing. By joining these communities, I was clued into a network of people who were all interested in the same thing. It became a vital way for me to find people with similar interests who were also looking to grow. Message boards and similar communities are also a great place to educate yourself, as they are great resources for information and staying current on industry trends.

Now, with all good can come bad, and there can be a lot of bad when you get a lot of nerds together online. This is why I enjoy Facebook groups more than message boards. Facebook is less anonymous than some of the other communities, which holds users accountable (to an extent). Anonymity can bring out the worst in people.

Some of you who follow me on social media may know that I like to create and post a lot of memes that are designed to be funny and poke fun at certain aspects of audio, music, or production. Some people confuse these jokes with serious statements about audio engineering or music production beliefs and philosophies.

I recently posted a meme depicting a Fairchild 670, the Waves plugin version of the same compressor (the Puigchild 670), and a stock photo of people in a crowd with the words “Who would win?” at the top. Almost immediately, comments began flooding in, arguing one side or the other.

That meme is meant to be a joke. It’s intended to be funny, but I’ve found that people online seem to take things too seriously and love to take any opportunity to argue something they believe is 100% right. If you’re one of those people, let me save you some time and aggravation. It’s not worth it, and you’re not always right, even if you always think you are. Why are you trying to change strangers’ opinions on the internet?

It doesn’t make any sense to me. I understand having a proper debate about something that may be controversial, but there’s no right or wrong in these scenarios. If you like analog gear, great, use it. If you like plugins, great, use them. Why is anyone trying to prove that one is better than the other?

Related articles:
Cultivating new habits and why you shouldn’t wait for motivation
The most embarrassing audio mistake I’ve ever made
[Even more] Things I wish I learned sooner about audio engineering
The “your mixes sound bad in the car” phenomenon


Cultivating new habits and why you shouldn’t wait for motivation

I’m just as likely to be reading this type of article as I am to be writing it. I’ve struggled with getting rid of bad habits my entire life. I wish I could tell you I’m the hardest working engineer in the biz, but unfortunately, that’s not the case.  I’m always looking for ways to work harder and smarter. That doesn’t mean I’m lazy or don’t work hard, but there’s always room for improvement. It all comes down to discipline. You can lose weight, quit smoking, and get better at basketball if you can improve your self-discipline and are smart about your approach.

To give yourself the best chance of getting rid of a bad habit, it’s important to understand your tendencies and adjust accordingly. It’s hard not to feel like the pot calling the kettle black, considering I’ve been “waiting” to start going to the gym for the last five years. But that’s not to say I haven’t improved in other areas or displayed self-discipline in other aspects of my life (humble brag: I just celebrated my first anniversary of not smoking cigarettes).

I started writing the “confessions of an audio engineer” series because I wanted to write about issues I am currently struggling with or have struggled with in the past. Even if admitting some of these things may be embarrassing and writing them out may be difficult, putting my thoughts on paper helps me be self-reflective and gives me a sort of third-party perspective. Ultimately, writing my thoughts down helps me better decipher what they mean and how I can fix them.

Lately, I’ve been disappointed that I haven’t been writing as much music as I’d like. I haven’t finished a song in years. I work on other people’s music, and I enjoy it, but like most audio engineers, I got into this field because I enjoy creating music. I want to start finishing more songs, and I want to begin cultivating better habits when it comes to writing and practicing music, and this is where the idea for this article stems from. I hope it helps you as much as it helped me.

Figure out what you want to change

The first step to fixing habits is figuring out what you want to fix. I think it’s hard for many people to step back, look at themselves unbiasedly, and figure out what they would like to change. It’s important to focus on what you can change and stop worrying about what you can’t.

The majority of people go through life without stopping to figure out how they can improve themselves on a more significant level. People have egos; some feel they are better off thinking that everything they are doing is fantastic and that the way they feel or think is always right. The best advice I can give myself and others is to stay humble; nothing is written in stone. It’s okay to have an opinion; it’s not okay to be ignorant. Be open to changing yourself, your mind, the way you think, and the way you live.

Make a list

The best way to figure out what you want to change is by making a list. Write out what you like about yourself. Then, write down what you don’t like about yourself. Now look at the list of things you don’t like and figure out the best course of action for fixing them. For instance, if you’re trying to get in better shape, put down a few different ways you can work towards doing that: go to the gym twice a week for 30 minutes, or walk 10,000 steps a day. Make sure the task is achievable and not overwhelming. It’s important to keep the goal reasonable, or you’ll be more inclined to procrastinate or talk yourself out of doing it.

Also, make sure you’re taking your tendencies into account. This is the difficult part, and some will be better at it than others. If I hate running on a treadmill or going to the gym, then I should try walking in the park or finding exercises I can do at home. If I know I’m not a morning person, then I should schedule my exercise in the afternoon. If you know exercising isn’t easy for you, start slow; start with your diet.

Find out and get rid of temptations and other distractions

Are you constantly going on Facebook? Staring at your phone?

Turn off the wifi! Put your phone in airplane mode! Mitigate your distractions by recognizing what they are before you even start working!

It’s common to find your attention drifting. If you see this happening, immediately recognize it and consciously redirect your attention back to what you should be focusing on. I know, it’s easier said than done, but the good thing is that if you keep at it, you’ll get better over time.

Schedule a time

Set aside a specific time to do a task. Make sure that it is written down on a calendar. Writing it down and putting it on your calendar helps hold you responsible. As I mentioned before, it’s easy to procrastinate, and it’s easy to give yourself excuses not to do something. My biggest problem is that when the time comes to do something, I tell myself I’ll have time to do it tomorrow, or that tomorrow’s a better day anyway, and I find a way to convince myself it’s okay not to do it. I’ll talk myself out of doing something that at one point was important enough to put down on a to-do list in the first place.

You can’t wait for the motivation or inspiration to start doing something. You have to show up every day and get to work. James Clear, a behavior science expert, says, “The work of top creatives isn’t dependent upon motivation or inspiration, but rather it follows a consistent pattern and routine. It’s the mastering of daily habits that lead to creative success, not some mythical spark of genius.”

If you know you have trouble holding yourself accountable when it comes to completing a task, schedule a time to do it. If you don’t do the task at the time specified, you failed. Once you start fulfilling these obligations you put down on your calendar, you’ll begin to enjoy the feeling of getting shit done. Small wins add up to big gains.

Don’t give yourself a choice

There’s no excuse for not giving all your effort. If you find yourself always thinking of ways to get out of doing something, realize this is an unfortunate trick your brain is playing on you. If you say you’re going to do something, you have to do it!

A technique I like to use is to think of myself as an optimized robot. If I were a perfectly programmed music-making robot, what would I do? How would I spend my time? This allows me to take a step back and look at the situation from the outside. Robots don’t have a choice. If you schedule a robot to do something at a specific time, it does it. To sum everything up: be cool, and be a robot.

Related articles:
The most embarrassing audio mistake I’ve ever made
How to survive as a working audio engineer
[Even more] Things I wish I learned sooner about audio engineering
The “your mixes sound bad in the car” phenomenon

6 audio effects you’re not using enough


Saturation

If I could only have one effect or plugin, it would be a good saturator. You can use saturation to do just about everything, from compressing to distorting to just adding color or harmonics to enhance the overall tone.

One of the major problems with digital recording is that everything is captured so “cleanly.” Because the signal never passes through actual hardware components, super fast transients are preserved in an almost unnatural way. Super clean digital recordings tend to sound “sterile” compared to those recorded to a good magnetic tape machine. The analog components inevitably soften transients and add what most engineers describe as warmth and color. Whatever word you want to use to describe it, the end result usually makes things sound better and, more importantly, blend more easily.

To compensate for these super clean recordings, many software companies that are emulating classic analog gear have added saturation stages into the algorithms. It is common to see a saturation stage in compressors, equalizers, and even some delay units such as the Waves H-Delay and Soundtoys Echoboy. The reason for this is that the units they are modeling had transformers, transistors, capacitors, and other components that the audio would have to pass through before being outputted to your speakers. In order to accurately model these classic processors, programmers needed to make sure the coloration that is happening inside the unit is there as well.
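To see what a saturation stage is actually doing, here’s a minimal tanh soft clipper, a common waveshaping choice. This is a sketch assuming NumPy, not the algorithm of any plugin mentioned above:

```python
import numpy as np

def saturate(x, drive=2.0):
    """Soft-clip with tanh: higher drive pushes the signal harder into
    the curve, rounding off peaks and generating odd harmonics.
    Normalized so a full-scale input still peaks at +/-1."""
    return np.tanh(drive * x) / np.tanh(drive)

# Run a pure 1 kHz sine (1 second at 48 kHz) through the shaper.
sr = 48000
t = np.arange(sr) / sr
dry = np.sin(2 * np.pi * 1000 * t)
wet = saturate(dry, drive=4.0)

# With 1 Hz bins, index 3000 is the 3rd harmonic the dry sine lacked.
spectrum = np.abs(np.fft.rfft(wet))
```

Running a pure sine through it produces energy at the odd harmonics (3 kHz, 5 kHz, and so on) that the dry signal didn’t have; that added harmonic content is the “color” these emulations are modeling.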

A few favorite saturation plugins of mine are the Kush Omega series, Soundtoys Decapitator and URS Saturation.

M/S Processors

It’s common to hear other audio engineers talking about “width” or “how wide a mix” is. There’s a reason for it. Something that sounds wider usually sounds “better” because it sounds fuller. There may be a select few scenarios where a narrow mix might be preferable, but for the majority of music, you’ll want to actively try to keep the stereo field as wide as possible.

Normally when adjusting a stereo track, you’re either adjusting a single mono channel individually or both stereo channels together. If you’re adjusting a stereo signal, whatever you’re doing to one side will affect the other side exactly the same way.

Mid/side takes a stereo signal and decodes it into a mid and a side channel rather than the standard left and right. This lets you control the mid and side channels separately.

The mid channel lets you control the center of the stereo image. If the mid-channel is boosted, the listener will perceive the source as narrower or more mono.

The side channel is the outer edge of the stereo field. Boosting the side channel will give the listener the perception of a wider sound. M/S decoding stems from the microphone technique of the same name, patented by Alan Blumlein in the early 1930s.
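The encoding itself is simple sum-and-difference arithmetic. A minimal sketch, assuming NumPy (the function names and the width parameter are my own):

```python
import numpy as np

def encode_ms(left, right):
    """Standard sum-and-difference encode: mid = (L + R) / 2, side = (L - R) / 2."""
    return (left + right) / 2.0, (left - right) / 2.0

def decode_ms(mid, side, width=1.0):
    """Decode back to L/R. `width` scales the side channel:
    >1 widens, <1 narrows, 0 collapses the image to mono."""
    side = side * width
    return mid + side, mid - side

# A tiny made-up stereo signal: similar content, slightly different per side.
left = np.array([0.5, -0.3, 0.8])
right = np.array([0.4, -0.1, 0.7])
mid, side = encode_ms(left, right)
out_l, out_r = decode_ms(mid, side)  # width=1.0 reconstructs the input exactly
```

Because decode is the exact inverse of encode at width 1.0, an M/S plugin is transparent until you actually process the mid or side channel.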

Mid/Side can be a very powerful tool, but a little definitely goes a long way, and you need to be careful of going too far as more extreme settings can cause phase and imbalance issues.

Automating Mid/Side effects is a fun way to add width to synths or guitars during a particular section of a song. Another cool trick is to raise the mid-channel on the overheads to get more focus and punch out of the kick and snare.

My favorite M/S plugin for decoding is Brainworx BX Control. I like the Waves Scheps 73 for M/S EQing.

Wideners and Stereo Imagers

In the past, I wish I had been more aware of the stereo field and just how important it is. A good mix engineer is very aware of giving every element of a production its own deliberate space. No two elements can be competing, and everything has to be put in a place that makes sense. That’s what makes a good mix. Many people underestimate how important and powerful panning is as a tool for engineers. There are many problems that can be solved with just a small move of the pan knob. Instead of trying to cut a conflicting frequency with an equalizer, adjusting the panning can often be a better solution.

The center of a stereo image is where the important stuff is going to happen. The vocals are arguably the most important element, and you’ll almost always find them located smack dab in the center. You’ll also need to make sure both channels are balanced. This can sometimes be difficult during the mixing phase, as there are times when you’ll want a mono sound playing through both speakers that conflicts too much with more important elements.

A widener is a perfect tool for getting an element out of the way of the center while still keeping the stereo image balanced. I hate pulling a guitar or keyboard over to one side without another sound on the other side to balance it (unless I’m specifically going for that effect or want something to really stick out). There are multiple ways widening plugins make the image seem wider, and most involve some phase manipulation.
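One classic example of that phase-and-time manipulation is the Haas effect: duplicate a mono source and delay one side by a few milliseconds. A minimal sketch, assuming NumPy (the function name and default delay are my own):

```python
import numpy as np

def haas_widen(mono, sr, delay_ms=15.0):
    """Dry signal on one side, a short delay on the other. Under
    roughly 30 ms the ear hears width rather than a distinct echo."""
    d = int(sr * delay_ms / 1000.0)
    left = np.concatenate([mono, np.zeros(d)])    # dry side
    right = np.concatenate([np.zeros(d), mono])   # delayed side
    return left, right
```

Always check the result in mono: summing the two channels comb filters, which is exactly the phase trade-off mentioned above.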

My favorite wideners are the stock Stereo Spread in Logic Pro X and also the Waves S1 Stereo Imager.

Transient designers

SPL Transient Designer

It’s hard to know you need something when you’ve never tried it. No, I’m not talking about drugs (stay away from those, kids)! I’m talking about transient designers. I waited a little too long before I decided to try one out. It’s a tool that serves an important purpose when it comes to newer, more transient-heavy electronic music, and it cannot be replaced by any other tool in an audio engineer’s toolbox.

I think the reason I went so long without using them is that if there was ever a sample or track that would benefit from adjusting the transient, instead of opening up a transient designer, I would opt to replace the sample or find another way to fix whatever issue I was having. That’s the beautiful part about recording and art in general: there’s always more than one way to do something, and if it works, it works, no matter whether anyone else considers it “right.”

Replacing the sample is one way to fix a problem, but using a transient designer is a much quicker and more efficient way to adjust the attack or sustain of a particular sound.
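Under the hood, most transient designers compare a fast envelope follower against a slow one and use the difference as a gain control; the gap between the two is large only during an onset. Here’s a simplified sketch of that idea, assuming NumPy (the time constants and function names are my own, not SPL’s actual algorithm):

```python
import numpy as np

def envelope(x, sr, attack_ms, release_ms):
    """One-pole envelope follower on the rectified signal."""
    a = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    r = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(x)
    level = 0.0
    for i, s in enumerate(np.abs(x)):
        coeff = a if s > level else r
        level = coeff * level + (1.0 - coeff) * s
        env[i] = level
    return env

def transient_shaper(x, sr, attack_gain=1.0):
    """The fast envelope outruns the slow one only during onsets, so
    their difference isolates the attack portion of the signal."""
    fast = envelope(x, sr, attack_ms=1.0, release_ms=50.0)
    slow = envelope(x, sr, attack_ms=30.0, release_ms=50.0)
    return x * (1.0 + attack_gain * (fast - slow))
```

A positive attack_gain exaggerates onsets while leaving the sustain almost untouched; a negative one softens them, which is roughly what the attack knob on a hardware unit does.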

My favorites are SPL Transient Designer and Waves Smack Attack.

Harmonic and Subharmonic generators

These processors synthesize harmonics and allow you to blend them into the dry source. Harmonic generators like the Aphex Aural Exciter have been a long-standing staple of recording studios and are commonly used on vocals to help add extra shine. They ultimately help a sound pop, adding excitement that you mostly notice in the high frequencies.

Subharmonic generators like the Peavey Kosmos and Waves R-Bass allow engineers to add synthesized subharmonics to their bass, kick, and synthesizer tracks. Sub frequencies below 40 Hz are difficult to capture with a standard microphone, which is why you may have seen engineers recording a kick drum with a dedicated sub mic. A subharmonic generator lets you synthetically add these frequencies back in to give tracks some extra low-end oomph.

My favorites are Waves R-Bass and Vitamin.


Bit crushers

Maybe you’ve bitcrushed a synth track, but have you ever tried bitcrushing a hi-hat? Or a shaker? Or a vocal double, blending it back in for a lo-fi texture? Most bit crushers actually combine two processes: bit depth reduction, which raises the quantization noise floor and adds grit, and sample rate reduction, which lowers the highest frequency the signal can reproduce. Crushing percussion tracks or other sounds to 16 bit, 8 bit, or even lower can add a texture that no other processor can.
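To make both stages concrete, here’s a toy bitcrusher, assuming NumPy (the function name and defaults are my own, not any particular plugin’s):

```python
import numpy as np

def bitcrush(x, bits=8, downsample=1):
    """Toy bitcrusher. Quantizing to `bits` raises the noise floor and
    adds grit; holding every `downsample`-th sample (zero-order hold)
    lowers the effective sample rate, which is what cuts the top end."""
    scale = 2 ** (bits - 1) - 1
    y = np.round(x * scale) / scale               # bit depth reduction
    if downsample > 1:
        y = np.repeat(y[::downsample], downsample)[: len(x)]
    return y
```

Fewer bits means coarser quantization steps and more audible error, which is the grit; the hold stage is where the lo-fi dullness comes from.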

You’ll find bit crushers are great for adding harsh distortion. They’re a go-to for engineers who want to absolutely destroy a sound, but they can also add subtler color when used more tamely.

My favorites include Klanghelm SDRR and the stock Bitcrusher in Logic Pro X and Ableton Live.

Related articles:
5 mixing mistakes that I used to make… and how to avoid them
How to calibrate your studio monitors
Things I wish I learned sooner about audio engineering
The “your mixes sound bad in the car” phenomenon

How to mix faster with the pink noise mixing trick

You’ve probably used your favorite commercially produced song to check mixes and reference overall levels, but have you ever tried using pink noise? With this trick, you’ll use pink noise as a reference to check your tracks and make sure the overall balances are relatively even. You may think this sounds crazy and there’s no way it could work, but read this article, try it out yourself, and if you’re still not convinced, you can move on with your life and never think about it again. Whether or not you end up using this trick in your mixing process, there is something to learn from why it works and how we can implement these kinds of ideas into our mixing workflow.

The Pink Noise Mixing Trick

  1. Import a pink noise sample or insert a signal generator on your mix bus. I like to set the output to around -12 dBFS, which will leave plenty of headroom for mastering.
  2. Solo the pink noise and the first track.
  3. Bring the track down until you can barely hear it above the pink noise.
  4. Repeat steps 1-3 with every other track in the session.

Tada! Like magic.

Can this quick and easy yet powerful trick save you time on every mix?

If you’re like me, then you’re always trying to figure out ways to get to the fun stuff faster, like adding compressors, reverbs, and automation. The pink noise trick isn’t perfect, but it does help establish a basic balance of all the tracks quickly and easily, which gives you a good starting point to further develop your mix.

What is pink noise, and why would we decide to use it as a reference?

Pink noise is a generated signal used for audio measurement. The significant difference between pink noise and white noise is that its power decreases as frequency increases: it falls off at 3 dB per octave, so every octave band contains the same total energy. Because our hearing also judges loudness across octave-wide bands, pink noise sounds roughly even across the spectrum, even though the energy at any single frequency is not the same. In that sense, it takes psychoacoustics into account.
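To make the 3 dB-per-octave idea concrete, here’s a minimal pink noise generator that shapes white noise in the frequency domain and scales it to a target RMS level. It assumes NumPy, and the -12 dBFS default simply matches the level suggested in the steps above:

```python
import numpy as np

def pink_noise(n, rms_dbfs=-12.0, seed=0):
    """Shape white noise with a 1/sqrt(f) amplitude curve so power
    falls 3 dB per octave, then scale to the target RMS level."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                      # sidestep division by zero at DC
    spectrum /= np.sqrt(f)           # amplitude ~ 1/sqrt(f) => power ~ 1/f
    pink = np.fft.irfft(spectrum, n)
    target = 10 ** (rms_dbfs / 20.0)
    return pink * target / np.sqrt(np.mean(pink ** 2))
```

Summing the power in successive octave bands of the result gives roughly equal totals, which is the property the trick leans on.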

Pink Noise Frequency Spectrum

If you look at pink noise through a spectrum analyzer, the shape is very similar to that of a well mixed modern pop song, which is likely the reason this trick works. In both cases, the low frequencies start off the most prominent and then taper down the rest of the way. By referencing pink noise, we can land on the same desirable frequency curve on the spectrum analyzer that we see in a well mixed pop song.

The trick is pretty easy and painless. I’m sure many of you might even argue that depending on the number of tracks, this trick will likely take longer than just mixing it by ear. If you feel that way, then this trick probably isn’t for you, and you can disregard everything you read so far and move on with your life. This is for the people that want a useful trick that will teach you a few things.

This trick won’t get you the most exciting mix, that’s not the point. The point is for you to be able to achieve a roughly balanced mix quickly and with little to no worry about room acoustics, monitoring, and other factors that can negatively affect your mix decisions.

Related articles:
5 mixing mistakes that I used to make… and how to avoid them
How to calibrate your studio monitors
Things I wish I learned sooner about audio engineering
The “your mixes sound bad in the car” phenomenon

What the f*ck is Linear Phase EQ?

What the f*ck is Linear Phase EQ?
Linear Phase Equalizer
FabFilter’s Pro-Q 2 and Waves’ Linear Phase EQ are two of the most popular linear phase equalizers.

A minimum phase EQ is just another name for your standard, everyday equalizer: your Neve 1073, API 550, your Pultec EQP-1A. All of these equalizers shift the phase of the signal around the frequency bands they boost or cut; it’s an unavoidable byproduct of how their filters work. This frequency-dependent delay causes what’s known as phase smear. Smearing can leave audible artifacts in the signal, which may be undesirable. Often you can’t hear smearing at all, and sometimes you may even like what it’s doing, but in certain scenarios you may want an equalizer that keeps the phase consistent (more on those later).

In the analog world, phase smear was just something that product designers tried to minimize or mold into something that sounded pleasing. In the digital world, all bets are off. When plugin coding and processing power started to become more advanced, developers realized they could finally do what engineers have wanted to do this whole time. Linear phase equalizers are impossible in the analog world, but in plugin land, anything is possible. Linear Phase EQ is equalization that does not alter the phase relationship of the source— the phase is entirely linear.

The irony of Linear Phase EQs is that they were initially conceived because of an engineer’s desire to eliminate phase smearing, which was thought to be a negative byproduct of using analog hardware equalizers. Once software programmers were able to develop a Linear Phase EQ, they soon realized that there were new problems and artifacts to overcome.

Pre-ringing is a negative artifact commonly associated with Linear Phase EQs, and it affects the initial transient. Instead of starting with a sharp attack, the waveform builds in a short but often audible crescendo before the transient hits. Because it happens before the transient, it sounds unmusical and displeasing.
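The linear phase and the pre-ring are two sides of the same coin: a linear phase filter has a symmetric impulse response, so any ringing that trails the peak is mirrored ahead of it. A minimal windowed-sinc sketch, assuming NumPy (the tap count and cutoff are arbitrary choices of mine):

```python
import numpy as np

def linear_phase_lowpass(cutoff, numtaps=101):
    """Windowed-sinc FIR lowpass. The impulse response is symmetric
    about its center, which is exactly what makes the phase linear,
    and also what puts ringing *before* the main peak.
    `cutoff` is normalized (0 to 0.5, as a fraction of the sample rate)."""
    n = np.arange(numtaps) - (numtaps - 1) / 2.0
    h = 2 * cutoff * np.sinc(2 * cutoff * n)   # ideal lowpass, truncated
    h *= np.hamming(numtaps)                   # window to tame the truncation
    return h / h.sum()                         # unity gain at DC

h = linear_phase_lowpass(0.1)
center = len(h) // 2
# Everything in h[:center] is energy that arrives BEFORE the peak:
# that's the pre-ring that smears a transient's attack.
```

Note the same symmetry also means the filter delays everything by half its length, which is why linear phase plugins report high latency that the DAW has to compensate for.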

The next obvious question… When are you supposed to use one? What are they good for?

Well, that answer depends on the engineer you ask. There are a lot of engineers that might tell you there is never a good reason to use a Linear Phase EQ. There were thousands and thousands of records made before plugins and Linear Phase EQs existed, and a lot of them sound pretty damn good. I can’t fault an engineer who doesn’t even bother with ever using one.

Other than never, there are a few scenarios where you might want to try a Linear Phase EQ. One of those is when boosting or cutting on sources that were multi mic’d. Since the phase relationships between each mic are so important, a Linear Phase EQ will ensure the phase coherence stays intact even with processing.

Another time you may want to pull out the ol’ Linear Phase EQ is when equalizing parallel tracks. When you duplicate a track and insert a standard equalizer on one copy, the phase of the processed signal shifts, which changes how it sums with the unprocessed channel. That may, in fact, make it sound better, and you may like it, or it may make it sound worse; in that case, you can reach for a Linear Phase EQ to retain the phase relationship while still boosting and cutting frequencies on the parallel channel.

Related articles:
What the f*ck is 32 bit floating?
5 mixing mistakes that I used to make… and how to avoid them
Things I wish I learned sooner about audio engineering
The “your mixes sound bad in the car” phenomenon