
Audio Mixing Tutorial By Dan Kury

Introduction:

It is an honor for me to write this audio mixing tutorial for Gary Garritan's website. Thank you Gary!

I have been a recording engineer for just over thirty years, and I am still learning something new all the time. It is my pleasure to provide you with some insight regarding my approach to making music with computers. Whether you use a Mac, PC or both, you are the conductor and the engineer, and you have to be good at both.

Audio production is a critical process that makes your music realistic and pleasing to the ears. All too often, users make the necessary adjustments to their midi notes, and then disregard the audio production altogether.

Essentials:

 

First things first: I trust you have a decent pair of studio monitor speakers. If you don't, take a few of your favorite audio CDs to a store where you can audition many different monitors. Don't buy the ones with the deepest bass or the brightest highs; instead, buy the pair that produces the most natural-sounding midrange (piano, acoustic guitar, strings), and, most important, listen to a vocal. We need to be able to clearly hear the part we are working on and how it fits, or sings, with the other parts. Stay out of the big electronics stores; go to a music store that sells pro audio. You cannot expect to get a professional-sounding mix if the speakers you are listening to were part of an old stereo. Expect to pay at least $500.00.

Many people are confused about midi and audio. I read about people who are trying to "convert" their midi tracks into audio. That is just not possible. A midi track contains information about when each note starts (note on), its velocity, pitch and other data, and finally when it stops (note off). From a technical point of view, a midi track cannot be converted to an audio track. The midi track has to play into a synthesizer or sample library; the synth or sampler will then produce the appropriate sound. That sound can be recorded to an audio track, and then you can listen to it.
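To make it concrete what a midi track actually stores, here is a minimal sketch in Python using the mido library (my own illustration, not part of the original tutorial; the file name and tick values are assumptions). Notice that nothing in the file is sound, only instructions for a synth or sampler to act on:

```python
import mido  # assumes the mido package is installed (pip install mido)

# Build a tiny midi track: one note, plus the kind of data a midi file actually stores.
mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)

track.append(mido.Message('program_change', program=73, time=0))             # 73 = flute in General MIDI (zero-based)
track.append(mido.Message('note_on', note=69, velocity=90, time=0))          # A above middle C, moderately loud
track.append(mido.Message('control_change', control=1, value=90, time=240))  # mod wheel (CC1) movement
track.append(mido.Message('note_off', note=69, velocity=0, time=240))        # the note ends 240 ticks later

mid.save('flute_phrase.mid')  # no audio anywhere in this file, only performance instructions
```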

 

 

In the creation of a midi production, you are many different people: a conductor while you create the midi tracks, and a recording engineer/producer after the midi editing is completed.

The process of creating a realistic rendition with midi has two parts. The first is the editing of notes in the midi track; the second is how the instrument will sound through the mixing board. A notation program also allows notes to be entered, and for some, notation is a more convenient way of entering and editing midi notes. You really need a DAW (digital audio workstation) to accurately manipulate your sound. Even if you can't play piano at all, get a keyboard with a decent mod wheel and some faders; you won't regret it. If you do buy a controller keyboard, make sure that the mod wheel is a separate, non-spring-loaded controller; worst of all is a combination pitch bend/mod wheel.

When I start to mix, I pretend that one midi track is the sound of one microphone in a recording. This is how we want to hear our instrument while we edit the notes. Listen to just one midi track. Some of the transitions from note to note may be unrealistic, and a few notes might be too loud or too soft. You should fix these problems in the midi track. Every single midi track needs to be listened to under a microscope so to speak. Be picky.

One of the things that sets sample libraries apart is the number of articulations you can choose from for each instrument. The more you have, the closer you can get to a very impressive and realistic sound; however, using more articulations means more editing, and that is time consuming. Since we are putting together a jigsaw puzzle of notes, it is imperative that ALL the pieces fit perfectly together.

There are a lot of creative things that we can do with the audio features in a DAW; we can even create a whole new instrument. I recommend recording every instrument to an audio track. If your computer has enough memory and a powerful processor, you could play the entire piece from beginning to end without recording these instruments. Once you have recorded the instruments to audio tracks, you can then turn off the midi tracks. Sometimes our computer is just not capable of playing all the sounds at the same time, so recording them is a great way to relieve the stress on your computer.

Here is an example of using two different flute sounds: Flute Solo V (vibrato) and Flute Solo NV (no vibrato). I used the same flute midi track and recorded these two different sounds.

Using fader automation, we can cross fade from the Flute NV track to the Flute V track and have total control over when the vibrato comes in. Take your time, use the mouse or trackball to record volume automation for the faders of these two tracks, and don't give up until it sounds the way you want it to. In the screenshot below, the black data is the volume fader automation. The first MP3 is the two flute tracks soloed without any reverb; the second is how the flute ended up sounding in the piece.
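If you want to see the cross fade idea in plain math, here is a small sketch in Python with NumPy and soundfile (my own illustration, not from the tutorial; the file names and fade times are hypothetical). It applies an equal-power cross fade so the combined level stays steady while the vibrato take takes over from the non-vibrato take:

```python
import numpy as np
import soundfile as sf  # assumed available (pip install soundfile)

def equal_power_crossfade(track_a, track_b, sr, fade_start_s, fade_end_s):
    """Fade from track_a (e.g. Flute NV) to track_b (Flute V) over the given time window."""
    n = min(len(track_a), len(track_b))
    ramp = (np.arange(n) - fade_start_s * sr) / ((fade_end_s - fade_start_s) * sr)
    ramp = np.clip(ramp, 0.0, 1.0)
    if track_a.ndim > 1:                 # stereo files: add a channel axis so the gains broadcast
        ramp = ramp[:, None]
    gain_a = np.cos(0.5 * np.pi * ramp)  # "fader" of the non-vibrato track falls
    gain_b = np.sin(0.5 * np.pi * ramp)  # "fader" of the vibrato track rises
    return gain_a * track_a[:n] + gain_b * track_b[:n]

flute_nv, sr = sf.read('flute_nv.wav')   # hypothetical renders of the same midi track
flute_v, _ = sf.read('flute_v.wav')
mix = equal_power_crossfade(flute_nv, flute_v, sr, fade_start_s=2.0, fade_end_s=3.0)
sf.write('flute_crossfade.wav', mix, sr)
```

In a DAW you would draw exactly this shape with the two volume faders rather than computing it, but the idea is the same.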

 

 

 


 

MP3 Example: http://www.garritan.com/tutorial/AudioMixing_files/flute_dry.mp3

 

MP3 Example: http://www.garritan.com/tutorial/AudioMixing_files/flute_mix.mp3

Many sample libraries don't provide you with the convenience of using the mod wheel to create feeling and expression. GPO does, and I recommend that you start your midi track with one straight line, or level, of mod wheel data, perhaps 70 percent. After you have completed the editing of the velocity (attack), duration, pitch, etc., you should erase all the mod wheel data, then use the overdub feature and your mod wheel to create new data for this track. This overdub feature will allow you to record new mod wheel data and keep all your existing notes and other data in the midi track. Like the fader automation we just created with the flute, keep playing the mod wheel a phrase at a time until you get it right. If you mess up, make sure to erase the bad data for that phrase/section. If your DAW does not have an overdub feature, you could always record any additional data into an empty midi track. Set its midi channel to the same channel as the track that your notes reside in.
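As a data-level picture of that last option, here is a short Python sketch with the mido library (my own illustration; the file name, channel and tick values are assumptions, not anything from GPO or the tutorial). It writes a flat roughly 70 percent mod wheel level and then a swell into a separate track that shares the note track's channel:

```python
import mido  # assumed available (pip install mido)

mid = mido.MidiFile('flute_phrase.mid')      # hypothetical existing file that already holds the notes
cc_track = mido.MidiTrack()                  # the "empty midi track" used only for new mod wheel data
mid.tracks.append(cc_track)

# Start from a flat ~70 percent level (89 of 127), then swell over one bar.
# Assumes 480 ticks per quarter note, so 120 ticks is a sixteenth note.
cc_track.append(mido.Message('control_change', channel=0, control=1, value=89, time=0))
for step in range(1, 17):
    value = 89 + int(step * (127 - 89) / 16)                     # ramp from ~70% up to full
    cc_track.append(mido.Message('control_change', channel=0, control=1,
                                 value=min(value, 127), time=120))

mid.save('flute_phrase_modwheel.mid')        # same notes as before, new expression data on the same channel
```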

This next screenshot shows all the data that is in the first violins track. The mod wheel data is orange, blue is pitch and red is sustain pedal. Notice how certain notes slide into each other. You don't have to do this to every note, but using the mouse to draw these pitch bends makes a world of difference. Sometimes I find myself using too much. Be careful not to make this violin section sound like beginners. Notice the intricate amount of detail with the mod wheel. Listen to the MP3 and follow the notes; you will see how the mod wheel really gives the part a human feeling. This mod wheel data is very hard to draw in with a mouse, so I use the mod wheel.

 

[Screenshot: midi data in the first violins track, with mod wheel (orange), pitch (blue) and sustain pedal (red)]

MP3 Example: http://www.garritan.com/tutorial/AudioMixing_files/midi_data.mp3

Reverb

 

GPO has a nice-sounding reverb. Select a preset that makes your piece sound pleasant. If your piece is a string quartet, I would not advise using a concert hall or church setting. Typically, those are very large rooms, and they will make your little string quartet sound too small; instead, you should use a chamber setting. You have to be the judge, and taking the time to audition the different presets will make or break the professional presentation of your work.

 

Lots of reverb questions:

What is a plugin? How do I set reverb volume? What is an aux send? What does pre and post mean? Should all the instruments have the same amount of reverb? Should I use different types of reverb for different instruments?

 

Your mixing board has knobs and sliders, or faders, to control everything. Usually there is a row of volume knobs called AUX (auxiliary) send 1, 2, 3, etc. These auxiliary volume controls can send the signal of that instrument to just about anywhere. We want to use one of these aux "send" volume controls to send the signal of our flute to the reverb. We will want to use the same send on every audio track channel in the mixer, because we want to be able to send all the instruments to the reverb. This screenshot shows the aux sends, and also indicates that they have been assigned to bus 1. Notice also the pan knob settings for the different sections; this is discussed in more detail a little later.

First, you will need to create, or open, a stereo aux track, or "return" as some workstations call it. This is where the reverb will reside. The term plugin is a common reference to a device that can be used inside the DAW. In the days before computer recording and mixing, a reverb or equalizer was a piece of equipment in a rack; now it is software that is loaded into the computer, and the computer is responsible for processing these plugin devices. Now that we have an aux return showing in the mixing board, we need to select the reverb plugin of our choice and insert it into this aux return. When the term insert is used, it means that the effect or plugin device is simply "inside" this aux channel. Don't let the term track confuse you. It is not a recordable audio track, but rather a stereo channel where we can control volume. Consult the manual for your DAW to learn how to create and set up the aux sends, aux returns, inserts and a master fader.

The aux send on each channel can be set up in two different ways: pre or post. This means pre- or post-fader. Here's the deal. If we have the aux send for our flute set to "post", it means that the signal of the flute travels from the volume slide fader to this aux send. In other words, the slide fader feeds sound to the aux send; this way, the more you increase the volume of the flute, the more signal reaches the aux send knob, which in turn feeds more volume to the reverb. This is the most common way of controlling reverb. If the aux send were set up as "pre", the slide fader would not affect the volume of the reverb, because the aux send gets its signal (the flute) before the fader.

Just to make sure this is clear: if the aux send is pre, your flute would sound like it was miles away from everything else if the fader was too low, because lowering the flute slide fader would not reduce the aux send, only the dry sound of the flute. You will almost always want the slide fader to control the signal going to the aux send, which is why you will almost always want to use "post". The term "wet" is used to describe a lot of reverb; the term "dry" means little or no reverb.
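As a rough sketch of the signal math (my own illustration, not a real mixer API), here is the pre/post difference in a few lines of Python. With a post-fader send the reverb hears the channel fader; with a pre-fader send it does not:

```python
import numpy as np

def channel_outputs(dry, fader_gain, send_level, pre_fader=False):
    """Toy model of one mixer channel: returns (signal to the mix bus, signal to the reverb aux)."""
    to_mix_bus = fader_gain * dry                       # the slide fader always feeds the main mix
    source = dry if pre_fader else fader_gain * dry     # pre: tap before the fader; post: tap after it
    to_reverb_send = send_level * source
    return to_mix_bus, to_reverb_send

flute = np.random.randn(1000) * 0.1                     # stand-in for the flute audio
# Post-fader: pulling the fader down also pulls the reverb send down with it.
dry_post, wet_post = channel_outputs(flute, fader_gain=0.2, send_level=0.5, pre_fader=False)
# Pre-fader: the send stays at full strength, so a quiet flute can sound "miles away" in reverb.
dry_pre, wet_pre = channel_outputs(flute, fader_gain=0.2, send_level=0.5, pre_fader=True)
```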


In a real listening environment like a concert hall, the listener is not as close to the percussion instruments as they are to the strings. Way back when, someone decided that the loud instruments like brass and percussion should be put in the back so they don't drown out the softer instruments like strings and woodwinds. That sounds like a good idea to me.

These days, we have different types of reverbs: halls, churches, meeting rooms and a tremendous selection of "real spaces" that we can use to simulate a real environment; even a trash can is available in some IR (impulse response) libraries.

In orchestral music, I recommend using some reverb on all the instruments, but not necessarily the same amount. I recommend a reasonable amount on strings, slightly more on woodwinds, and considerably more on brass and percussion. This will create that concert hall effect and provide a nice real sounding environment for your music.

I am not against the idea of using different reverb settings for different instruments. With the ability to send individual volume from each instrument track, I don't feel multiple reverbs are necessary in order to simulate a realistic effect. Your computer will also run more efficiently with fewer reverbs running. Here's a great tip. The louder a clarinet plays in a concert hall, the louder the reverb of that instrument. While that clarinet swells up to the big crescendo, you can create some additional energy and excitement by increasing the clarinet's aux send for more reverb. This kind of extra reverb boost is not possible in a real environment, but it works beautifully. I have used this technique for years.

Sophisticated reverb units have a myriad of settings: reverb time (RT), otherwise known as decay; pre-delay; and room size. Reverb time is how long the sound continues to reverberate until it is no longer heard; the final part of the reverberation is considered the reverb tail. Pre-delay is the amount of time that passes before the first reflection of sound is heard. Reflections are caused by the sound bouncing off a reflective surface like a smooth wall. Rooms that have parallel walls tend to be more reverberant because the sound bounces off one wall and then bounces right back, which increases reverberation time (RT). Room size is a setting that allows the user to choose the size of the room he or she is trying to simulate. The reverb processor calculates these adjustments and uses an algorithm to simulate the reverberating room of your choice.

If you have a newer convolution reverb, it more than likely allows you to place each instrument in a specific location on the stage. With this impulse response (IR) technology, you are able to use several different distance settings, and the results can be quite stunning, though this is not strictly necessary.
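To tie the pre-delay, decay time and convolution ideas together, here is a tiny Python sketch (my own addition; the numbers are arbitrary and this is nowhere near a real reverb plugin). It builds a synthetic impulse response, a stretch of silence for the pre-delay followed by exponentially decaying noise for the tail, and convolves a dry signal with it:

```python
import numpy as np
from scipy.signal import fftconvolve  # assumed available (pip install scipy)

def toy_reverb(dry, sr, pre_delay_s=0.02, rt_s=1.8, wet=0.3):
    """Crude reverb: silence for the pre-delay, then exponentially decaying noise as the 'room'."""
    tail_len = int(rt_s * sr)
    t = np.arange(tail_len) / sr
    envelope = 10.0 ** (-3.0 * t / rt_s)                        # roughly 60 dB down by rt_s (the RT60 idea)
    ir = np.concatenate([np.zeros(int(pre_delay_s * sr)),       # pre-delay: time before the first reflection
                         np.random.randn(tail_len) * envelope]) # dense reflections dying away
    wet_signal = fftconvolve(dry, ir)[:len(dry)]
    wet_signal /= (np.max(np.abs(wet_signal)) + 1e-12)          # keep levels sane
    return (1.0 - wet) * dry + wet * wet_signal

sr = 44100
dry = np.zeros(sr * 2); dry[0] = 1.0          # a single click makes the reverb tail easy to hear
processed = toy_reverb(dry, sr, pre_delay_s=0.02, rt_s=1.8, wet=0.4)
```

A real convolution reverb works the same way, except the impulse response is measured in an actual hall (or trash can) instead of being faked with noise.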

Hop on the bus, Gus


 

What is a bus anyway?

 

Since audio travels in a specific direction, it is important to understand the routing of the signal chain, or signal flow as some call it. Sound enters the channel strip and is distributed to various places in that "module" or strip, such as the EQ, the slide fader, and the aux send knobs. So let's talk about a bus.

If we want all the brass tracks to go to a separate group, we need to set up an aux channel. Now, this aux channel has an input and an output. Assign stereo bus (7-8) to its input, and the master fader to its output. These assignments instruct the mixer to connect certain things to each other, so a bus is a connection between one or more audio signals and a common destination. All aboard!
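Here is a minimal Python sketch of that routing idea (my own illustration, not a real mixer API; the track names and gain values are made up): the brass stems are summed onto a bus, the bus feeds one group fader, and the group feeds the master.

```python
import numpy as np

# Hypothetical brass stems, all the same length, stereo (samples x 2).
trumpets  = np.random.randn(44100, 2) * 0.05
trombones = np.random.randn(44100, 2) * 0.05
horns     = np.random.randn(44100, 2) * 0.05

def bus(*signals):
    """A bus is just a common destination: every signal assigned to it gets summed."""
    return np.sum(signals, axis=0)

brass_bus = bus(trumpets, trombones, horns)   # e.g. "bus 7-8" feeding the brass aux channel
brass_group_fader = 0.8                       # one fader now controls the whole brass section
master_fader = 0.9
master_out = master_fader * (brass_group_fader * brass_bus)  # group output routed to the master fader
```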


 

 

EQ

 

To equalize or not, that is the question.

 

The frequency response of the samples found in Gary Garritan's libraries is very good. As long as the user has created a nice balance of instruments for color, and the arrangement of chords is pleasant, there is usually no need for any EQ. That doesn't mean I never use EQ with Garritan's products; it just means that great sound is possible without having to mess with EQ.

 


Equalizer (EQ): a tool to keep all frequencies equal. All too often, equalizers do more harm than good. If you are a good engineer, that's another story. Using EQ on an instrument or voice is very effective for removing offensive frequencies. An EQ can also be used to make an instrument stand out from the others. With automation, we can use an EQ for as little as one phrase, or even one note; it's totally up to the engineer, and that's you, by the way.

 

This is a five-band equalizer that is a standard plugin in Digital Performer. It allows the user to adjust volume (gain), frequency and bandwidth. It features five bands and two filters: LF (low frequency), LMF (low mid frequency), MF (mid frequency), HMF (high mid frequency) and HF (high frequency).

Volume is measured in decibels (dB), not D flat. Frequencies are measured in hertz (Hz), sometimes referred to as cycles per second (cps). Bandwidth, referred to as "Q", indicates how wide a range of frequencies, measured in octaves, the equalizer will affect when it is cutting or boosting volume.


Octave, now there's a term we can relate to. It is interesting that music is all about math, just like many things in this universe. When a bass player plucks the lowest "open" string on an acoustic upright bass, it sounds an E, which is approximately 41 Hz. The highest note on a piano is 4,186 Hz, or 4.186 kHz. When the bass string wiggles back and forth, it is vibrating about 41 times per second. Our eardrum vibrates back and forth about 41 times per second as well, so we are able to recognize this low E bass note and how it relates to the chords. If this were a bass guitar connected to a speaker, the speaker cone would travel outward and inward about 41 times per second, creating sound waves that would reach our ears in the same manner as the note resonating from the upright acoustic bass. I still find this very fascinating, don't you? So what does all that have to do with mixing music? Everything! If you understand how sound originates, you can easily figure out, with a little thought, what to do to make it sound better. So, when the sound is not good, you will be able to set your EQ very specifically, instead of just twisting knobs till they do something. Won't that be great!

You have heard of tuning to A-440, haven't you? The A just above middle C on a piano produces a pitch of 440 Hz; an octave below that is 220 Hz, and one octave below that is 110 Hz. It's just simple math. I mention this because it directly relates to the frequency settings that you will see on equalizers.
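If you want to play with that math, here is a short Python sketch (my own addition) using the standard equal-temperament formula, where MIDI note 69 is A-440:

```python
# Equal-temperament pitch math: each octave doubles the frequency,
# and each of the 12 semitones multiplies it by 2**(1/12).
def midi_note_to_hz(note, a4_hz=440.0):
    return a4_hz * 2.0 ** ((note - 69) / 12.0)   # MIDI note 69 = A above middle C

for note, name in [(69, 'A4'), (57, 'A3'), (45, 'A2'), (33, 'A1'), (108, 'C8 (top of the piano)')]:
    print(f'{name}: {midi_note_to_hz(note):.1f} Hz')
# A4: 440.0 Hz, A3: 220.0 Hz, A2: 110.0 Hz, A1: 55.0 Hz, C8: 4186.0 Hz
```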

Select a flute from GPO, or from your keyboard if you don't have GPO yet. Record it into an audio track, then insert an EQ on that channel and play with it. Notice that when you boost gain at low frequencies, the sound of the flute does not change at all. The closer you sweep the frequency knob into the range of the flute, the more effect the controls will have. Experiment, but be very careful when you use EQ; a little knob twisting goes a long way, and can have a very negative or positive effect on your sound. Be especially careful if you are using an equalizer on an entire stereo mix.

You may hear a frequency that you don't like. If you don't know what the problem frequency is, you could set the gain on one of the bands really high, and then use the frequency knob to sweep. When you get close to the offending frequency, it will start getting really loud. Once you have homed in on the frequency, lower the gain below 0 dB so that you are now cutting the gain of that offending frequency region.
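Here is a sketch of that sweep-then-cut trick in Python (my own illustration; the frequencies, Q and gain amounts are made up, and the filter is the well-known "cookbook" peaking EQ rather than any particular plugin):

```python
import numpy as np
from scipy.signal import lfilter  # assumed available (pip install scipy)

def peaking_eq(x, fs, f0, gain_db, q=1.0):
    """One band of a parametric EQ (RBJ cookbook peaking filter): boost or cut around f0."""
    a_lin = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return lfilter(b / a[0], a / a[0], x)

fs = 44100
track = np.random.randn(fs)                              # stand-in for the offending audio

# Step 1: "sweep" a big narrow boost around until the ugly frequency jumps out at you.
for f0 in (250, 500, 1000, 2000, 4000):
    boosted = peaking_eq(track, fs, f0, gain_db=+12.0, q=4.0)
    print(f'{f0} Hz boost, peak level: {np.max(np.abs(boosted)):.2f}')

# Step 2: once you have found it (say 2 kHz), cut it instead of boosting it.
fixed = peaking_eq(track, fs, 2000, gain_db=-6.0, q=4.0)
```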

Panning

 

Pan, short for panorama. Do we want this harp left, center or right?

 


This is the knob that determines where an instrument's sound will appear from a panoramic perspective. There are no rules here. There are, however, some very traditional ways of setting the pan for certain instruments. It is totally up to you, the engineer, to decide where you want the harp.

I have never personally liked the fact that most orchestras set up with both first and second violins on the left side of the stage. I prefer to place the second violins opposite the first violins; this creates a nice, full frequency balance across the panoramic field of sound. I also have never liked the string basses and cellos on the same side, because that places all the bass response in one speaker. I always leave the cellos to the right, where they normally are in a concert, and put the string basses somewhat closer to center. This creates a much fuller sound to me. Bass response is enhanced because the string basses are now playing through both speakers. This pan setting also helps the basses to sound better on a car stereo; many automotive sound systems have serious phase problems.

I like to go crazy with woodwinds; they are so fun. Spread them out, and don't be afraid to use some hard left and right pan settings. Keep this in mind, though: if your oboe is panned hard right, for instance, and nothing else is panned that far, the oboe will actually stand out, even with its volume set low in the mix. This is a great way to make any instrument stand out without having to make it louder. GPO's Kontakt player already has the instruments' pan set to typical positions. Experiment with changing the pan in the player.
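For the curious, here is what a pan knob is doing under the hood, as a small Python sketch (my own illustration; real mixers may use slightly different pan laws). It uses a constant-power law so an instrument keeps roughly the same loudness as it moves across the field:

```python
import numpy as np

def pan(mono, position):
    """Constant-power pan: position -1.0 = hard left, 0.0 = center, +1.0 = hard right."""
    angle = (position + 1.0) * np.pi / 4.0          # map [-1, 1] onto [0, pi/2]
    left = np.cos(angle) * mono
    right = np.sin(angle) * mono
    return np.stack([left, right], axis=-1)         # stereo output: (samples, 2)

oboe = np.random.randn(44100) * 0.1                     # stand-in for an oboe track
oboe_stereo = pan(oboe, +0.9)                           # nearly hard right, so it stands out
harp_stereo = pan(np.random.randn(44100) * 0.1, -0.3)   # a bit left of center
```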

Compression

 

Compression: it's too loud, now it's too soft.

Isn't there a happy medium somewhere with all this stuff going on?

That is up to you, the engineer.

Compression is a tool that is capable of detecting voltage (volume), and it can be set to home in on certain frequency ranges, depending on the sophistication of the compressor.

This is a demonstration of one of my projects without any compression or EQ. Take a close listen, then listen to the second MP3, and then I will explain why I think compressors are not the magic tool for orchestral recordings. Don't get me wrong, I love compression, but not for use with orchestral music.

 


MP3 Example: http://www.garritan.com/tutorial/AudioMixing_files/pre_comp.mp3

 

MP3 Example: http://www.garritan.com/tutorial/AudioMixing_files/compression.mp3

Now that you have heard the two different versions, go back and listen again to the second MP3 and see if you hear more reverb this time. You will notice that the nice dynamics have been chopped to death because of the excessive compression. Depending on the settings, a compressor normally responds very quickly to attacks; this allows any sudden peaks to be caught by the compressor so it can lower the volume. A few adjustments like threshold, ratio, attack and release allow the user to set the compressor's behavior. Many users will insert a compressor into their mix and have these settings all wrong. When a loud passage crosses the threshold, the compressor responds by pulling the volume back; then, once the loud passage is abruptly over, the compressor quickly releases and allows the volume to come back up. The reverb level in the recording is often "pulled up" and the resulting sound is horrible. Again, depending on the settings, especially release, the compressor will actually sound like it is pulling up the soft passages relative to the louder sections. For a vocal in a contemporary mix, this is a very nice trick, but not on an orchestra.
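As a rough illustration of what threshold, ratio, attack and release actually do, here is a toy feed-forward compressor in Python with NumPy (my own sketch, not from the tutorial or any real plugin; all the numbers are arbitrary). Note how a fast attack grabs the loud passage and the release determines how quickly the quiet material, including the reverb, swells back up:

```python
import numpy as np

def simple_compressor(x, fs, threshold_db=-18.0, ratio=4.0, attack_ms=5.0, release_ms=150.0):
    """Toy feed-forward compressor: fast attack catches peaks, the release lets level swell back up."""
    attack = np.exp(-1.0 / (attack_ms * 0.001 * fs))
    release = np.exp(-1.0 / (release_ms * 0.001 * fs))
    env_db = -120.0
    gain = np.ones_like(x)
    for i, sample in enumerate(np.abs(x) + 1e-12):
        level_db = 20.0 * np.log10(sample)
        coeff = attack if level_db > env_db else release           # track rises quickly, falls slowly
        env_db = coeff * env_db + (1.0 - coeff) * level_db
        over = env_db - threshold_db
        gain_db = -over * (1.0 - 1.0 / ratio) if over > 0.0 else 0.0  # only act above the threshold
        gain[i] = 10.0 ** (gain_db / 20.0)
    return x * gain

fs = 44100
orchestra = np.concatenate([0.05 * np.random.randn(fs), 0.8 * np.random.randn(fs)])  # soft, then loud
squashed = simple_compressor(orchestra, fs)  # the loud passage is pulled down; softer detail rides up relative to it
```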

So, should you use a compressor on your project if the dynamics are excessive?

 

I say no. Go back and create some automation on your master fader. Reduce the volume smoothly and gently with the same tender loving care that you used while you entered your midi notes.

 

Groups

What are groups?

 

Everything eventually ends up going through a master fader. Instead of routing all of our audio tracks/instruments directly to the master fader, we could separate the instruments into sections, just like the orchestra: strings, woodwinds, brass and percussion. All the strings would go into a stereo (bus) group master, woodwinds into another, and so on. While we are attempting to make our last pass, we still have overall level control of the four main sections. Ah! We can also place another aux send for reverb on these group masters, and have the option of applying additional reverb to these sections if needed. Yes, automate those too. Read your DAW manual to properly configure groups, and learn how to use the automation.

When I start mixing, I only listen to the 1st violins. I create fader automation from beginning to end. Once I am happy with what I believe is correct, I let the 1st violins play while I automate the 2nd violins. I use the 1st violins as a reference to balance the two. After the violins are mixed, I move on down the line adding each string part, just like a conductor. Once all the strings are mixed, I go down the list through woodwinds, brass and finally percussion. Most of the time when I mix woodwinds, I will automate without the strings, balancing each added instrument along the way. Eventually I get done. There is always some last minute tweaking to do, but that is just part of fluff and buff.

Many forum members that I have talked with use libraries like Garritan's GOS and GPO. Some of these folks use a computer that is not capable of playing a full orchestra. If this is a problem that you encounter often, use the freeze track feature found in many digital audio workstations. This feature will allow you to record the sound to an audio track, including any inserted plugins. If you discover a few wrong notes, you can always go back and unfreeze the track, make your fix, and refreeze. Freezing is basically recording the sound in real time.

There is certainly much more about engineering to talk about, but we will have to leave some of that for next time.

Listen carefully, trust your ears. If something sounds even slightly out of time or tune, it is, so check it out.

Happy mixing! Dan Kury