
What the f*ck is an audio oscillator?


What is an audio oscillator?

Moog Voyager Oscillator Section

Audio oscillators are most commonly used in music production as tone generators and measurement tools. Technically, an oscillator is a circuit that converts direct current (DC) into alternating current (AC), producing a continuous, repeating, alternating waveform. In audio, oscillators generate waveforms at a desired frequency within the audible range, roughly 16 Hz to 20,000 Hz, while a low-frequency oscillator, or LFO, generates waveforms below 20 Hz. The electrical current alternates very quickly between two states, much like a string resonating on a guitar; this produces a waveform that can be amplified and shaped with various other audio processors. Beyond music, electronic oscillators are widely used across many industries as equipment clocking, measurement, and calibration tools.

The most common types of oscillators found in analog synthesizers are voltage-controlled. A voltage-controlled oscillator, or VCO, is an oscillator whose frequency depends on a control voltage: it produces a different frequency depending on the voltage it is fed. When a key on an analog synth is pressed, it sends a specific voltage to the oscillator, which generates a waveform at the corresponding pitch. If the oscillator is tuned properly, the pitch will match the note of the key that was pressed. Synthesizers also usually offer the ability to modulate the control voltage with other signals; for instance, modulating a VCO with another oscillator produces frequency modulation, better known as FM synthesis.
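As a rough illustration, many analog synths follow the common one-volt-per-octave convention, where each additional volt of control voltage doubles the oscillator's frequency. Here's a minimal sketch of that mapping (the base frequency is an assumption for the example, not something from this article):

```python
def vco_frequency(control_voltage, base_freq=261.63):
    """Map a control voltage to a pitch using the common 1 V/octave convention.

    base_freq is the frequency produced at 0 V (middle C here, an assumption).
    Each additional volt doubles the frequency; each volt less halves it.
    """
    return base_freq * (2 ** control_voltage)

# 0 V -> ~261.63 Hz (C4), 1 V -> ~523.25 Hz (C5), 2 V -> ~1046.5 Hz (C6)
for volts in (0.0, 1.0, 2.0):
    print(f"{volts:.1f} V -> {vco_frequency(volts):.2f} Hz")
```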


Voltage-controlled oscillators have three main parameters: frequency (pitch), amplitude (volume), and waveform (tone). A sine wave is the most basic of all waveforms; additional harmonics are added to alter its shape. The four main waveform shapes are sine, square, triangle, and sawtooth, and each has a major impact on the tonality of the sound produced. Sine waves are usually described as smooth, while a sawtooth is more harsh or buzzy. Another common waveform found on oscillators is noise, which most commonly comes in white and pink varieties. White noise contains equal energy at every frequency, while pink noise contains equal energy per octave, so its level rolls off toward the higher frequencies.
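If you're curious how those two noise colors relate, here's a minimal sketch using numpy; the FFT-based shaping used for the pink noise and the one-second length are just convenient choices for illustration.

```python
import numpy as np

def white_noise(n):
    """White noise: equal average power at every frequency."""
    return np.random.randn(n)

def pink_noise(n):
    """Pink noise via spectral shaping: scale each frequency bin by 1/sqrt(f)
    so power falls about 3 dB per octave (equal energy per octave)."""
    spectrum = np.fft.rfft(np.random.randn(n))
    freqs = np.fft.rfftfreq(n)
    spectrum[1:] /= np.sqrt(freqs[1:])   # leave the DC bin untouched
    out = np.fft.irfft(spectrum, n)
    return out / np.max(np.abs(out))     # normalize to +/-1

noise = pink_noise(44100)  # one second at 44.1 kHz (sample rate is an assumption)
```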

Digital oscillators generate waveforms using digital signal processing, or DSP; the waveforms are modeled and digitally recreated to emulate analog oscillators. Although modeling technology continues to improve and software manufacturers are making better-sounding instruments, many synth enthusiasts still consider the sound of digital oscillators inferior to analog. However, the flexibility of digital gives manufacturers new opportunities to expand sonic possibilities.

Waveforms

Oscillators can produce a wide range of shapes, but the four most commonly found on an analog synthesizer are the sine, square, triangle, and sawtooth. The shape of the waveform is what dictates its harmonic content and, in turn, its tone.

Sine wave

Sine Waveform

A sine wave is arguably the most fundamental building block of sound. It is considered pure because it contains no harmonics at all; it is made up of only the fundamental frequency.

Square wave

Square Waveform

In addition to the fundamental frequency, a square wave contains only odd harmonics, that is, harmonics at odd whole-number multiples of the fundamental. A square wave looks like its namesake: a square.

Triangle wave

Triangle Waveform

Triangle waves look similar to a sine wave, except the curves are replaced with straight edges that connect like a triangle. Like square waves, triangle waves contain only odd harmonics, but they diminish much more quickly the further they get from the fundamental. For example, the fundamental and the 3rd harmonic are the loudest, the 5th and 7th harmonics are lower in level, and the 11th and 13th are lower still.

Sawtooth wave

Sawtooth Waveform

The sawtooth is named after its resemblance to the teeth of a saw blade. A sawtooth wave contains both even and odd harmonics, resulting in a harsh but clear tone.
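The harmonic recipes described above can be sketched with simple additive synthesis. This is only an illustration of those descriptions; real analog oscillators generate their shapes with circuitry, not by summing sines, and the fundamental, sample rate, and harmonic counts below are arbitrary choices.

```python
import numpy as np

FS = 44100                # sample rate (arbitrary choice)
F0 = 110.0                # fundamental frequency in Hz (arbitrary choice)
t = np.arange(FS) / FS    # one second of time

def sine(f=F0):
    return np.sin(2 * np.pi * f * t)

def square(f=F0, n_harmonics=30):
    # odd harmonics only, amplitudes falling as 1/n
    return sum(np.sin(2 * np.pi * n * f * t) / n for n in range(1, 2 * n_harmonics, 2))

def triangle(f=F0, n_harmonics=30):
    # odd harmonics only, amplitudes falling as 1/n^2 with alternating sign
    return sum(((-1) ** ((n - 1) // 2)) * np.sin(2 * np.pi * n * f * t) / n**2
               for n in range(1, 2 * n_harmonics, 2))

def sawtooth(f=F0, n_harmonics=60):
    # every harmonic, amplitudes falling as 1/n
    return sum(np.sin(2 * np.pi * n * f * t) / n for n in range(1, n_harmonics + 1))
```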

History of the oscillator

In the late 1930s, audio oscillators were complex, unstable, and expensive, making them less than ideal for modern applications. This prompted William Hewlett to develop the first product for the Hewlett-Packard company, the Model 200A variable-frequency oscillator. Hewlett had been inspired by a seminar on the use of negative feedback given at Stanford University by his professor, Frederick E. Terman. He was so intrigued that he spent an entire semester studying it for the thesis required to complete his advanced engineering degree.

HP 200A Oscillator

The main difference in Hewlett's oscillator was the incandescent lamp used as a temperature-dependent resistor in the feedback network. The lamp acted as an automatic gain control, keeping the oscillator's loop gain near unity, which is key to achieving the lowest possible distortion. With this design, the output could be regulated without adding distortion, which not only improved performance but also made the unit much more affordable. The Model 200A sold for $54.40, substantially less than most other oscillators on the market, which went for anywhere between $200 and $600. The first big sale, which launched the entire company, was to Walt Disney, whose engineers used the oscillators to test channels, recording equipment, speaker systems, and other gear for Fantasound, the stereo sound reproduction system built for the breakthrough classic Fantasia, the first film commercially released in stereo.
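A loose way to picture the lamp's role: as the output amplitude grows, the lamp heats up, its resistance rises, and the amplifier gain falls back toward the point where the loop gain is exactly one, so the amplitude settles at a stable level. Here's a toy model of that idea; every constant in it is invented purely for illustration and has nothing to do with the actual HP 200A values.

```python
def settle_amplitude(initial_gain=1.2, sensitivity=0.5, steps=40):
    """Toy model of lamp-based automatic gain control in an oscillator.

    The loop gain starts above 1 so oscillation builds up; as the amplitude
    grows, the lamp's rising resistance pulls the gain back toward unity,
    and the amplitude settles at a stable level. All constants are invented.
    """
    amplitude = 0.01
    for _ in range(steps):
        loop_gain = initial_gain / (1.0 + sensitivity * amplitude)
        amplitude *= loop_gain        # amplitude grows while loop gain > 1
    return amplitude, loop_gain

amp, gain = settle_amplitude()
print(f"settled amplitude ~{amp:.3f}, loop gain ~{gain:.3f}")  # gain ends near 1.0
```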

Hewlett's oscillator was the first practical, affordable way to generate stable audio signals for measurement and calibration in communications, science, medicine, audio, and many other industries. Before it, there was no easy and accurate way to produce low-frequency test signals.


Related articles:
What the f*ck is Linear Phase EQ?
What the f*ck is 32 bit floating?
Everything you need to know about reverb
What the f*ck is audio clipping?
The “your mixes sound bad in the car” phenomenon


What the f*ck is a Baxandall EQ?


Disclosure: Audio Hertz is supported by its audience. When you purchase through links on our site, we may earn an affiliate commission.


Dangerous BAX EQ

You’ve probably seen plugins, guitar amps, or stereos that replicate a Baxandall circuit, or maybe you’ve heard of the popular Dangerous BAX mastering equalizer. Peter Baxandall designed the Baxandall tone circuit, or EQ, around 1950, and it has since been implemented in millions of audio systems around the world.

So what is a Baxandall equalizer circuit, and what makes it different from other equalizers?

The Baxandall equalizer is a shelving EQ, but unlike traditional shelving EQs, which have a steep rise or fall above the set frequency, the Baxandall shelving curve has an extremely wide Q, which creates a gentle slope. The broad curve can adjust a large portion of the frequency spectrum, but the gentle slope allows for a more natural sound and minimal phase distortion. That minimal phase distortion lets users make more drastic boosts and cuts without imparting negative artifacts into the signal. The result is a wide, open sound that enhances the character already present in the source rather than imparting its own. These equalizers offer a subtle yet remarkably effective way of adjusting the frequency spectrum, which is why you'll often find them being used on the mix bus and for mastering.
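To make the "gentle slope" idea concrete, here's a minimal sketch of a first-order low shelf, which shares the broad, gradual curve described above. This is an illustration only, not the actual Baxandall circuit topology, and the corner frequency and gain are arbitrary.

```python
import numpy as np

def low_shelf_db(freqs, corner_hz, gain_db):
    """Magnitude response (in dB) of a gentle first-order low shelf.

    H(s) = (A + s/w0) / (1 + s/w0): the gain approaches A below the corner
    and unity above it, with a broad 6 dB/octave transition in between.
    """
    A = 10 ** (gain_db / 20)
    s = 1j * 2 * np.pi * freqs
    w0 = 2 * np.pi * corner_hz
    H = (A + s / w0) / (1 + s / w0)
    return 20 * np.log10(np.abs(H))

freqs = np.logspace(1, np.log10(20000), 200)                     # 10 Hz to 20 kHz
bass_boost = low_shelf_db(freqs, corner_hz=200.0, gain_db=4.0)   # broad +4 dB low shelf
```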

History

During World War II, Baxandall consulted for the Telecommunications Research Establishment in the Circuit Research Division. It was there he spent his time working on many different types of projects, including frequency transformers, powered loudspeakers, oscillators, high-speed tape duplicating equipment, and high-precision microphone calibration methods, among many other things. His hero, Alan Blumlein, whom you might know for his stereo miking technique, also worked for the TRE.

Peter Baxandall

Lucky for us, Baxandall was enormously generous and patient in passing on his knowledge. He was also remarkably good at conveying very complicated topics in a simple, easily understandable form. He published his tone circuit in a 1952 article in Wireless World magazine. Have you ever seen the Bass and Treble knobs on a stereo? That's likely a Baxandall EQ circuit. He never collected a single royalty, even though a minuscule percentage would've made him an extremely wealthy man. This might be the greatest testament to his generosity; he genuinely wanted the world to sound better.

More History

The term equalization likely comes from the various operators of the time (telephone, motion picture, broadcast, etc.) who were attempting to get their audio back to a flat, or "equal," frequency response. Equalization, or filtering as it was also called, has been part of audio equipment since the beginning of the technology. Early radios came equipped with high-frequency or "top cut" filters to remove unwanted noise or artifacts. Early telephone lines used equalizers to put back the high end that was lost in transmission. These equalizers were not fully adjustable like the parametric equalizers you'll find in your DAW today.

Do you want to try a Baxandall EQ?

These plugins are available for free:
Acustica Audio Coral Bax-ter EQ
Kuassa BasiQ
Fuse Audio Labs RS-W2395C Baxandall EQ


Related articles:
What the f*ck is Linear Phase EQ?
What the f*ck is 32 bit floating?
Everything you need to know about reverb
What the f*ck is audio clipping?
The “your mixes sound bad in the car” phenomenon


What the f*ck is oversampling?

Oversampling
Oversampling options for Cytomic's The Glue.

Plugins have been drastically increasing in quality over the last 10 years. We are at the point now where some very innovative developers are creating truly remarkable-sounding plugins: not just digital emulations of classic analog gear, but also new types of processors that wouldn't be possible in the physical world.

Unlike hardware, plugins rely on complex algorithms, and the sound of a plugin depends on the developer's coding. The better coders will be able to achieve better-sounding plugins, much like a better electrical engineer can design a better circuit for a compressor. Trained ears matched with talented developers allow software companies to turn out some very high-quality plugins.

So, what is oversampling or upsampling?

Oversampling is when a plugin internally converts the audio to a higher sample rate for processing. Processing at the higher sample rate usually reduces some of the negative artifacts associated with processing digital audio, most notably aliasing. Aliasing happens when processing generates frequency content above half the sample rate (the Nyquist frequency); that content can't be represented and gets folded back into the audible range as different frequencies that weren't in the original signal.

oeksound Soothe
oeksound’s Soothe, a dynamic resonance suppressor for mid and high frequencies.

Oversampling mitigates issues like aliasing and will usually yield smoother, more pleasant-sounding results at the cost of more CPU power. But not all oversampling algorithms are created equal; some are better than others, and you may even find that you prefer the sound of a plugin with oversampling turned off. It isn't guaranteed that oversampling will make the audio sound "better." If you see a plugin or DAW that offers oversampling and you have the CPU power to spare, try it out and see if you prefer the way it sounds. If you are short on CPU power, you'll probably want to keep oversampling off unless you decide to freeze the tracks.
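To picture why this helps, here's a hedged sketch that hard-clips a 5 kHz sine wave directly at 44.1 kHz and again at 4x oversampling before coming back down; the clipping generates harmonics above the Nyquist frequency, and the oversampled path filters them out before they can fold back as aliases. The rates, the clip amount, and the use of scipy's polyphase resampler are just choices for the demo, not how any particular plugin works.

```python
import numpy as np
from scipy.signal import resample_poly

FS = 44100                                # base sample rate (demo choice)
OS = 4                                    # oversampling factor (demo choice)
t = np.arange(FS) / FS
tone = np.sin(2 * np.pi * 5000 * t)       # 5 kHz test tone

# Naive path: clip directly at the base rate. The harmonics created above
# 22.05 kHz have nowhere to go and fold back into the audible band as aliases.
clipped_naive = np.clip(3 * tone, -1.0, 1.0)

# Oversampled path: upsample, clip, then downsample. resample_poly low-pass
# filters the signal on the way back down, removing the harmonics above the
# original Nyquist frequency before they can alias.
upsampled = resample_poly(tone, OS, 1)
clipped_oversampled = resample_poly(np.clip(3 * upsampled, -1.0, 1.0), 1, OS)
```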


Related articles:
What the f*ck is 32 bit floating?
20 quick and easy tips that will improve your productions
What the f*ck is a power conditioner?
The “your mixes sound bad in the car” phenomenon


What the f*ck is a power conditioner?


Disclosure: Audio Hertz is supported by its audience. When you purchase through links on our site, we may earn an affiliate commission.


We’ve all seen them. Those black units at the top of just about every single rack of gear. Sometimes they even have cool lights that pop out and an outlet on the front so your buddy can easily charge his e-cig. But what do power conditioners really do, and when is it actually worth buying one for your studio?

Rack of gear with power conditioner

The goal of a power conditioner is to filter, clean, and stabilize incoming AC power. This, in theory, should preserve your equipment as well as improve performance. There's an overwhelming number of varying opinions on what exactly a good power conditioner is. A common sentiment on internet forums and message boards is that most cheaper and more commonly used power conditioners are nothing more than an expensive box with a surge protector in it. A surge protector is used to prevent a power surge from damaging your electronic devices, whereas a power conditioner is used to prevent noise and voltage fluctuations from causing issues.

Even an opinion piece on a supposedly reputable website like Computerworld.com, in which the author attests to the benefits of using a power conditioner, still comes with no definitive proof. The author's choice of words exudes uncertainty, like he's not even sure what the truth is.

“I can’t say with certainty that it [power conditioner] has improved the service life of my electronics, but I haven’t suffered a power related failure in the past 15 years”

Not exactly the best commercial for Team Power Conditioner. In fact, if I was making a commercial for a power conditioner and that was one of the customer testimonials, I’d probably leave that one out.

The author then goes on to cite a specific instance when he heard a hum through his guitar amplifier and his power conditioner was able to instantly remove it, claiming this as proof of his power conditioner's magic powers. The only problem is that this kind of hum is usually caused by a ground loop, which a power conditioner doesn't have anything to do with.

So what’s the truth? Are the thousands upon thousands of audio professionals using base-model Furman power conditioners stupid for wasting their money? That seems unlikely, but it was still hard to find a clear, definitive answer because the internet is littered with contradictory information and opinions. There seem to be four different schools of thought on how to properly power professional audio gear. I'll explain each one, and then I recommend you make your own educated decision depending on your situation.

Rack of gear with power conditioner 2

The first school consists of people that believe in using a power conditioner. These people believe a conditioner is an effective and necessary tool that allows you to get the most out of your gear as well as preserve its components by providing the unit with consistent, stable, and clean power. They believe it reduces stress on their gear from things like brownouts and voltage sag.

The second school is made up of people that don't believe anyone in the first school. They believe that any power conditioner within a few hundred dollars is not really conditioning anything and is nothing more than a rack-mountable surge protector. Because of this, they choose to buy a $10 surge protector power strip or a $30 rack-mountable power strip and call it a day.

The third school believes in using a pure sine wave UPS (they almost always include a built-in surge protector). It is important that you look for a UPS that puts out a pure sine wave, as many of the lower-priced units use a simulated sine wave, which can cause some power supplies to buzz and is not recommended for professional use.

The last school believes you really need to use a voltage regulator. Voltage regulators, which are also made by Furman, can run you well over $1,000. It seems that many people believe their power conditioners are regulating voltage when that's not actually the case. The Furman P-1800 AR Advanced Level Voltage Regulator/Power Conditioner, for example, claims that its "True RMS Voltage Regulation delivers a stable 120 volts of AC power to protect equipment from problems caused by AC line voltage irregularities."

There are obviously some other ways of going about powering your audio gear, and you can certainly combine all four schools of thought for the ultimate peace of mind, but these four are definitely the most common.

As for proof of what inexpensive power conditioners are really doing and if they work? Sorry, I can’t help you with that. That will continue to be debated by audio nerds for decades to come, right alongside “Do cables make an audible difference?”


Related articles:
What the f*ck is 32 bit floating?
20 quick and easy tips that will improve your productions
5 mixing mistakes that I used to make… and how to avoid them
The “your mixes sound bad in the car” phenomenon



What the f*ck is dither?

Audio Dither

To dither or not to dither, that is the question…

The real question is, how far down the rabbit hole do you want to go? I'm sure there are plenty of incredibly talented engineers who produce exceptional-sounding music and don't know the first thing about dither. Dither isn't going to make or break you, and even the most technical engineers will tell you that when working in 24-bit fixed point, the majority of people aren't going to hear a difference. But the difference is there, and in our world, we are trying to accumulate small wins that increase the quality of our work. Individually, these changes don't make a huge difference, but as they accumulate with other small victories, they can start to make a dramatic difference and ultimately become what separates a good engineer from a great one.

Unfortunately, there is a lot of misinformation out there on dither. It's much more difficult than it should be to figure out what the hell dither is, whether you should use it, and why. I like articles that get straight to the point and sum up the topic in a practical way: quickly, efficiently, and without getting too complicated. That's how I prefer to look at the technical details of audio.

The truth is, there is nothing I'd rather talk about less than the technicalities of digital audio, file formats, truncating bits, zeros, ones, and the rest of that nonsense. Unfortunately, as audio engineers and music producers, we have a responsibility to learn new technology if there is a chance it can improve the quality of our productions.

What is dither?

Dither is a specific type of low-level noise that is added when converting the bit depth of an audio file in order to reduce quantization distortion.

Why do you use dither?

Dither is required when reducing the number of bits in an audio file to help mask any quantization errors. Dither works by randomizing the quantization errors; the added noise spreads the errors across the audio spectrum, which in turn makes them less noticeable.

If you're converting a file from 24-bit to 16-bit, you need to get rid of, or truncate, the 8 extra bits of information. The truncation of bits causes quantization distortion, which can make the audio sound brittle and gritty, as well as shrink the stereo image. The difference between a correctly dithered music file and one that wasn't is night and day; the correctly dithered version will always sound better, even to untrained ears.
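Here's a minimal sketch of the idea: quantize a quiet sine wave with and without TPDF dither. The exaggerated 6-bit depth, the signal level, and the helper function are all invented for illustration; a real conversion would be going from 24-bit to 16-bit.

```python
import numpy as np

def quantize(signal, bits, dither=False):
    """Reduce a float signal (in the range -1..1) to the given bit depth.

    Without dither, errors correlate with the signal and show up as
    quantization distortion; with TPDF dither (the sum of two uniform random
    values, each one quantization step peak-to-peak), the errors are
    randomized into a low, constant noise floor.
    """
    step = 2.0 / (2 ** bits)                      # size of one quantization step
    if dither:
        signal = signal + (np.random.uniform(-0.5, 0.5, len(signal))
                           + np.random.uniform(-0.5, 0.5, len(signal))) * step
    return np.round(signal / step) * step

fs = 44100
t = np.arange(fs) / fs
quiet_tone = 0.05 * np.sin(2 * np.pi * 440 * t)   # a very quiet 440 Hz tone

truncated = quantize(quiet_tone, bits=6)              # crude staircase: audible distortion
dithered = quantize(quiet_tone, bits=6, dither=True)  # the tone survives inside a gentle noise floor
```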

To hear what truncation distortion sounds like, you can watch Ian Shepherd's video "Dither or distort? Listen and decide for yourself."

Should you be using dither?

A lot of people say to only dither once; dither is noise, after all, and we shouldn't constantly be adding noise to our audio, right? Well, accomplished mastering engineer Ian Shepherd of http://productionadvice.co.uk and The Mastering Show podcast says, "After all, it's just noise – and very quiet noise, at that." Quiet noise never hurt anybody, but hearing the effects of quantization errors can be quite jarring and painful. To minimize the quantization errors introduced when converting fixed-bit-depth audio files, you should use dither every time you process a file in your DAW (more on this later).

All modern DAWs process and calculate in 32-bit floating point regardless of the source bit depth. This means that bit-depth conversions are happening every time you process the audio in any way, such as freezing tracks, bouncing in place, and consolidating regions. To my knowledge, most if not all DAWs do not add dither automatically, which means that bits are being truncated every time these processes happen. You can avoid the conversions, and the need to dither, entirely by always working in 32-bit floating point. I would even recommend sending a 32-bit floating point WAV file to your mastering engineer and leaving the dithering to them. You won't need to dither until you finalize your master into a fixed-point bit depth.

Conclusion

Even though the audible artifacts of quantization errors are negligible when working at 24 bits, working in 32-bit floating point eliminates the need to add dither during mixing and leaves dithering as a one-time decision made during mastering.


Related articles:
20 quick and easy tips that will improve your productions
5 mixing mistakes that I used to make… and how to avoid them
Things I wish I learned sooner about audio engineering
The “your mixes sound bad in the car” phenomenon


What the f*ck is audio clipping?


I’ve been interested in learning more about audio clipping for quite some time, but it wasn’t until recently that I was able to get my hands on a dedicated clipper plugin. I remember years ago hearing about mastering engineers clipping their converters for an extra 1-2 dB of gain. That made complete sense, but I'd never found a reason it would be applicable in any of my productions. "Clipper" has become a common term thrown around on message boards and Facebook groups, so I took more interest and decided to do some research and experiment for myself.

We’ve all heard of clipping. From the very beginning of learning to record, we are taught to avoid clipping at all costs. Many would think it’s synonymous with digital distortion and is, in every way, shape, and form, a negative artifact of digital audio. But they would be wrong. Not all clipping is a bad thing. Clipping that sounds bad is bad; clipping that sounds good and helps us achieve louder volume levels is good.

The highest possible level before audio starts to distort inside your DAW is 0 dBFS. Push a source past this threshold and the top of the waveform gets shaved off, so it looks more like a square wave than a standard rounded sine wave. Your waveform is effectively clipped.

There are two different types: hard clipping and soft clipping.

Hard clipping

If you look at a sine wave on an oscilloscope and raise the level into a clipper, the round sine wave gets squared off at the top, effectively shaving off the rounded edge of the waveform.

Sine Wave
Clipped Sine Wave

Hard clipping distorts the sound while adding additional harmonics to the original source. This can sound cool on its own as just an effect. But it can also be particularly useful on sub-heavy instruments, such as the ever-popular 808 kick drum, because the additional harmonics make the sub frequencies more audible on smaller speakers.

Another common use of hard clipping is to make things louder. Many engineers will put one at the end of their mastering chain after the limiter. Here you can use a clipper to get a few extra dB of gain, similar to the mastering engineers I’d heard about that purposely clip their converters.

Soft clipping

Unlike hard clipping, where waves are completely squared off, with soft clipping the waves are more rounded, creating a smooth transition between the clipped and unclipped sections of the waveform. This makes for a more pleasing distortion that isn't as harsh as hard clipping. Analog gear and magnetic tape do this naturally when transformers and circuits are overdriven. Many compressors and limiters also have soft clipping built in as a standard feature.

Soft Clipped Sine Wave
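To see the difference between the two shapes in code, here's a quick sketch; the drive amount and the use of tanh as the soft-clipping curve are just illustrative choices, since real clippers use many different transfer functions.

```python
import numpy as np

t = np.arange(44100) / 44100
sine = np.sin(2 * np.pi * 100 * t)       # a 100 Hz sine wave
drive = 3.0                              # push the level well past full scale

# Hard clipping: everything beyond the threshold is chopped flat,
# squaring off the top and bottom of the waveform.
hard_clipped = np.clip(drive * sine, -1.0, 1.0)

# Soft clipping: a saturating curve (tanh here) rounds the transition
# between the clipped and unclipped portions for a smoother distortion.
soft_clipped = np.tanh(drive * sine)
```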


Clipper Plugins

Usually, when levels are pushed past full scale, the waveform simply gets squared off. A clipper plugin uses an algorithm that detects the overage and, rather than completely chopping off the waveform, shapes it. A good clipper plugin keeps the transient impact while still adding harmonic content and volume.

You can also use them to "clip" or shave off the top peaks of your waveform so you can tame transients, add harmonic content, and effectively achieve a higher perceived loudness. Seems simple enough; however, there's an art to using one, as too much will leave you with dull, transient-less material.


Related articles:
What the f*ck is Linear Phase EQ?
What the f*ck is 32 bit floating?
[Even more] Things I wish I learned sooner about audio engineering
The “your mixes sound bad in the car” phenomenon


What the f*ck is Linear Phase EQ?

Linear Phase Equalizer
FabFilter's Pro-Q 2 and Waves' Linear Phase EQ are two of the most popular linear phase equalizers.

A minimum phase EQ is just another name for your standard, everyday equalizer: your Neve 1073, API 550, your Pultec EQP-1A. All of these equalizers introduce phase shifts as a byproduct of changing the amplitude of specific frequency bands. This frequency-dependent delay causes what's known as phase smear. Smearing leaves audible artifacts in the signal, which can be undesirable. Many times you can't hear smearing at all; other times, you may like what it's doing; but in some scenarios, you may want an equalizer that keeps the phase consistent (more on those later).

In the analog world, phase smear was just something that product designers tried to minimize or mold into something that sounded pleasing. In the digital world, all bets are off. When plugin coding and processing power became more advanced, developers realized they could finally do what engineers had wanted all along. Linear phase equalizers are impossible in the analog world, but in plugin land, anything is possible. A Linear Phase EQ is equalization that does not alter the phase relationships of the source; the phase response is entirely linear.

The irony of Linear Phase EQs is that they were initially conceived because of an engineer’s desire to eliminate phase smearing, which was thought to be a negative byproduct of using analog hardware equalizers. Once software programmers were able to develop a Linear Phase EQ, they soon realized that there were new problems and artifacts to overcome.

Pre-ringing is a negative artifact commonly associated with Linear Phase EQs, and it affects the initial transient. Instead of starting with a sharp attack, there is a short but often audible swell in the waveform before the transient hits. Because it happens before the transient, it sounds unnatural and unmusical, dulling the impact of transients in a way most people don't find desirable.
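If you want to see where pre-ringing comes from, look at the impulse response of a linear-phase FIR filter: it's symmetric around its center tap, so ripple appears before the main peak as well as after it, and that leading ripple is what smears energy ahead of a transient. A minimal sketch using scipy, with an arbitrary filter length and cutoff:

```python
import numpy as np
from scipy.signal import firwin

# A linear-phase FIR low-pass filter: the tap values are the filter's
# impulse response.
taps = firwin(numtaps=255, cutoff=2000, fs=44100)

# The impulse response is mirror-symmetric around its center tap, so the
# ripple ("ringing") shows up before the main peak as well as after it.
# When a sharp transient passes through, that leading ripple is heard as
# pre-ringing.
center = len(taps) // 2
print(np.allclose(taps, taps[::-1]))     # True: symmetric, hence linear phase
print(taps[center - 5:center + 6])       # ripple on both sides of the peak
```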

The next obvious question… When are you supposed to use one? What are they good for?

Well, that answer depends on the engineer you ask. There are a lot of engineers that might tell you there is never a good reason to use a Linear Phase EQ. There were thousands and thousands of records made before plugins and Linear Phase EQs existed, and a lot of them sound pretty damn good. I can’t fault an engineer who doesn’t even bother with ever using one.

Other than never, there are a few scenarios where you might want to try a Linear Phase EQ. One of those is when boosting or cutting on sources that were recorded with multiple microphones. Since the phase relationships between the mics are so important, a Linear Phase EQ will ensure the phase coherence stays intact even with processing.

Another time you may want to pull out the ol' Linear Phase EQ is when equalizing parallel tracks. When you have two copies of the same track and you insert an equalizer on one, the phase of the processed signal will shift, changing how it combines with the unprocessed channel. That may, in fact, make it sound better, and you may like it, or it may make it sound worse; in the latter case, you could try reaching for a Linear Phase EQ to retain the phase relationship while still boosting and cutting frequencies on the parallel channel.


Related articles:
What the f*ck is 32 bit floating?
5 mixing mistakes that I used to make… and how to avoid them
Things I wish I learned sooner about audio engineering
The “your mixes sound bad in the car” phenomenon


What the f*ck is 32 bit floating?

What is 32 bit floating?

I'm assuming a portion of the people reading this have, like me, heard of 32-bit floating point but are still unsure exactly what it is. What are the advantages? What are the disadvantages? When I asked a friend of mine, who is also an experienced engineer, about 32-bit floating point, he told me he didn't know and had never used it.

After having this discussion and then immediately seeing this tweet from accomplished Brooklyn-based producer Andrew Maury, I knew I had to finally figure out what the hell it was and if I should be using it.

So, what is 32 bit floating?

The Wikipedia article tells us it's:

A computer number format that occupies 4 bytes (32 bits) in computer memory and represents a wide dynamic range of values by using a floating point. In IEEE 754-2008 the 32-bit base-2 format is officially referred to as binary32. It was called single in IEEE 754-1985. In older computers, different floating-point formats of 4 bytes were used, e.g., GW-BASIC’s single-precision data type was the 32-bit MBF floating-point format.

Alright, well, that about wraps it up… That was almost too easy. Ha. Ha.

Let's start with the definition of bit depth, because I know that one, and it's not too difficult to understand. Bit depth is what determines the dynamic range of an audio file.

So 32 bit floating means more dynamic range, right? Not exactly.

So is 32 bit floating better? Higher bitrate means it’s better, right? Sort of.

So it turns out the reason no one knows what 32 bit floating is… is because it’s kind of pointless for most engineers even to bother worrying about it.

A video on the Reaper blog is one of the only sources I found that explains 32-bit floating point in a practical way. The explanation works well for a person who doesn't like spending any more time thinking about digital signal processing than he has to.

So… 32-bit floating is essentially a 24-bit recording with 8 extra bits for volume. Basically, if the audio is rendered within the computer, then 32-bit floating gives you more headroom. "Within the computer" means things like AudioSuite effects in Pro Tools and printing tracks internally. So say you decide to print a compressor, and the output level is peaking badly. If you are using 32-bit floating, you can bring the level down and restore the headroom so the file won't be distorted. If you were recording to a tape machine, this wouldn't be possible; you can't just record a bass that's clipping and restore the headroom afterward. The benefit of 32-bit floating comes when processing internally, BUT the downside is that the files it creates are 50% larger than standard 24-bit audio files.
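Here's a small sketch of that "bring the level down afterward" idea: print a signal 6 dB too hot, store it as 16-bit fixed point versus 32-bit float, then turn both back down. The gain amounts and the use of numpy are just for illustration.

```python
import numpy as np

t = np.arange(44100) / 44100
signal = np.sin(2 * np.pi * 440 * t)           # a full-scale 440 Hz sine

hot = signal * 2.0                             # "printed" 6 dB too hot

# Fixed point: anything past full scale clips the moment it's stored.
fixed = (np.clip(hot, -1.0, 1.0) * 32767).astype(np.int16)

# 32-bit float: values beyond +/-1.0 are stored as-is, so the headroom survives.
floating = hot.astype(np.float32)

restored_fixed = fixed.astype(np.float64) / 32767 / 2.0   # still flattened: peaks are gone for good
restored_float = floating / 2.0                           # matches the original sine

print(np.max(np.abs(restored_fixed)))   # 0.5: the waveform was squared off
print(np.max(np.abs(restored_float)))   # ~1.0: the original peaks are recovered
```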

Most experienced engineers don't need to worry about headroom as they probably already know how to make sure levels are never clipping when they aren't supposed to be. This article from ask.audio says 32 bit floating will also help reduce unnecessary noise introduced by AudioSuite dithering and rounding errors during signal processing in Pro Tools.

Maybe I’ll write an article in the future where I run some tests to see if there is a noticeable difference between AudioSuite effects processed with 24 bit and ones processed with 32 bit floating.

Update: Most DAWs process internally in 32-bit floating point; therefore, if you are processing any audio, it is converted to 32-bit for processing and then converted back to 24-bit. If conditions permit, it is best to work in 32-bit floating point all the way through until mastering to avoid any unnecessary conversion artifacts. Once the project is mastered, you can have the mastering engineer convert the final audio file to whatever sample rate and bit depth you need.


Related articles:
Things I wish I learned sooner about audio engineering
[Even more] Things I wish I learned sooner about audio engineering
EMT 250 and the birth of digital audio
The “your mixes sound bad in the car” phenomenon