that famous "The Snare: mastering the art of noise" article in the January issue). What you may not know is that Alex is a busy engineer in the Boston area, that he teaches music production and engineering at Berklee College of Music, and that he's just the right person to be starting our new beginners series. Before turning the mighty pen over to Alex for this extended-mix opening column, let me just reassure fellow fans of Mike Rivers' Oops, Wrong Button that the series will continue in its more advanced state. It's simply time to start over from Square 1 so recording novices can be brought up to speed. And now, without further ado, I give you... Alex. -NB
The roads of Boston are famous for their random wandering. Few streets intersect at right angles. Individual streets change names, become one way, or dead end without warning. The natives, not just the rent-a-car equipped tourists, admit that it's easy to get lost. A running joke says that somewhere in this city you'll reach an intersection while driving down a one way street and innocently encounter all three signs of doom at once: no left turn, no right turn, and do not enter. Maybe there is a flashing red light to warn that you are approaching this dreaded intersection. And there is probably also a sign admonishing you to yield to pedestrians, as if your ability to make progress weren't already limited enough. Naturally there are no signs telling you what street you are on or what street you have reached. The cars, most of them taxis, just line up. Your blood pressure rises. Your appointments expire. You scream to yourself, "Why am I driving in this town anyway?" Without some fundamental understanding of how a studio is connected, you'll eventually find yourself at the audio equivalent of this intersection: feedback loops scream through the monitors, no fader seems capable of turning down the vocal, drums rattle away in the headphones but aren't getting to tape... I could go on. Believe me. I could go on. At the center of this mess is the mixing console (a.k.a. mixer, board, or desk). In the hands of a qualified engineer, it manages the flow of all audio signals, getting them to their appropriate destination safely and smoothly. The untrained user can expect to get lost, encounter fender benders, and eventually be paralyzed by gridlock.

The role of the mixer

The ultimate function of the console is to control, manipulate, and route all the various audio signals racing in and out of the different pieces of equipment in the studio or synth rack; it provides the appropriate signal path for the recording task at hand. Consider mixdown. The signal flow goal of mixing is to combine several tracks of music that have been oh-so-
carefully recorded on a multitrack into two tracks of music that our friends, the radio stations, and the record buying public can enjoy. They all have stereos, so we convert the multitrack recording into stereo: 24 tracks in, two tracks out. The mixer is the device that does this. Naturally, there's a lot more to mixing than just combining the 24 tracks into a nice sounding 2-track mix. For example, we might also add reverb. And equalization. And compression. And a little turbo-auto-panning-flange-wah-distortion (patent pending; it's just a little patch I'm working on in the ol' digital multi-effects box). It is the mixing console's job to provide the signal flow structure that enables all these devices to be hooked up correctly. It ensures that all the appropriate signals get to their destinations without running into anything. A primary function of the console is revealed: the mixer must be able to hook up any audio output to any audio input. See Figure 1 for an example of the many possible hookups you might expect your mixer to provide. In connecting any of these outputs to any of these inputs, the console is asked to make a nearly infinite number of options possible.
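If it helps to see that "anything to anything" job written down, here is a toy sketch in Python (the signal names and gain values are invented for illustration; no real console works in code, of course): every destination bus is just a weighted sum of whichever sources are routed to it.

```python
# Toy model of a console's routing job: any input to any output, with a gain.
# Signals are lists of samples; a bus output is a weighted sum of its sources.

def mix_bus(sources, gains):
    """Sum several signals into one bus, each scaled by its own gain."""
    n = len(sources[0])
    return [sum(g * s[i] for s, g in zip(sources, gains)) for i in range(n)]

# Three hypothetical inputs: kick mic, snare mic, vocal mic (4 samples each).
kick  = [0.9, 0.0, -0.9, 0.0]
snare = [0.0, 0.7,  0.0, -0.7]
vocal = [0.3, 0.3,  0.3,  0.3]

# The same inputs can feed different destinations at different levels:
to_tape_track_1 = mix_bus([kick],               [1.0])            # straight to tape
to_monitors     = mix_bus([kick, snare, vocal], [0.8, 0.6, 1.0])  # rough mix
to_headphones   = mix_bus([kick, snare, vocal], [0.5, 0.5, 1.2])  # "more me" cue mix

print(to_monitors)
```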
We mentioned mixdown as an example above, but we do more than mix. Our signal routing device has to be able to configure the gear for recording a bunch of signals to the multitrack recorder simultaneously, like when we record a big band. It should also be able to make the necessary signal flow adjustments required to permit an overdub on the multitrack. Additionally, we might need to record or broadcast live in stereo. Fortunately, all sessions fall into one of the following categories.

1. Basics

A multitrack recording project begins with the basics session. When doing the basics session, nothing is on tape yet, lots of musicians are in the room playing, and the engineer is charged with the task of getting the first tracks onto tape. You know how it goes. The band all plays together, and you record them onto separate tracks. Of course the singer will want to redo her part as an overdub later. Ditto for the guitarist. You still record everything, as sometimes the keeper take is the one that happens during basics. No pressure, just sing/play along so the band can keep track of which verse they are on, and we'll record a more careful track in a few weeks.
Such freedom often leads to creativity and chance-taking, key components of a great take. So you may one day be glad you recorded the singer that day. Ditto for the guitarist. With the intent to do so many tracks as overdubs later anyway, the audio mission of the basics session is reduced to getting the killer drum and bass performance onto the multitrack. And sometimes even the bass part gets deferred into an overdub. So for basics we record the entire band playing all at once to get the drummer's part on tape. Check out the set-up sheet for a very simple basics session. It's just a trio (drums, bass, guitar, and vocals) and yet we've got at least 15 microphones going to at least ten tracks. I say at least because it is easy to throw more mics on these same instruments (e.g. create a more interesting guitar tone through the combination of several different kinds of mics in different locations around the guitar amp). And if you have enough tracks, it is tempting to use even more tracks (e.g. record the bass DI direct to the mixer as a separate track from the miked bass cabinet). The console is in the center of all this, as shown in Figure 3. It routes all those mic signals to the multitrack so you can record them. It routes them to the monitors so you can hear them. It routes those same signals to the headphones so the band members can hear each other, the producer, and the engineer. And it sends and receives audio to and from any number of signal processors (more is better): compressors, equalizers, reverbs, etc.
2. Overdubbing

For the overdubs there are often fewer musicians playing, fewer microphones in action, and possibly fewer band members around. It is often a much calmer experience. During basics there is the unspoken but strongly implied pressure that no one can mess up or the whole take will have to be stopped. The crowd in the studio is overwhelming. The crowd in the control room is watching. The lights, meters, mics and cables all over the place complete that in-the-lab, under-a-microscope feeling. Performance anxiety fills the studio of a basics session. Overdubs, on the other hand, are as uncomplicated as a singer, a microphone, a producer, and an engineer. Dim the lights. Relax. Do a few practice runs. Any musical mistakes tonight are just between us. No one else will hear them. We'll erase them. If you don't like it, just stop and we'll try again. Meantime, the console routes the mics to the multitrack tape. The console creates the rough mix of the mics and the tracks already on tape and sends them to the monitors. Simultaneously, it creates a separate mix for the headphones. And we never miss an opportunity to patch in a compressor and some effects. Figure 4 lays out the console in overdub mode.

3. Mixdown

For mixdown, the engineer and producer use their musical and technical abilities to the max, coaxing the most satisfying loudspeaker performance out of everything the band recorded. There is no limit to what might be attempted. There is no limit to the amount of gear that might be needed. In case you've never seen what goes on in a big budget pop mix, let me reveal an important fact: nearly every track (and there are at least 24, probably many more) gets equalized and compressed, and probably gets a dose of reverb and/or some additional effects as well. A few hundred patch cables are used. Perhaps several tens, probably hundreds of thousands of dollars worth of outboard signal processing is used. Automation is required. And an enormous console is desired. During earlier recording and overdubbing sessions you might have thought, "This is sounding like a hit." It's not until mixdown that you'll really feel it. It's not until the gear-intense, track by track assembly of the tune that you'll think, "This sounds like a record!" As Figure 5 illustrates, the signal flow associated with mixdown is actually quite straightforward. Gone is the need to handle microphone signals. Gone is the need to create a headphone mix. Nothing needs to be sent to the multitrack. The mission is to route multitrack music plus effects to the monitors. The only addition is the master 2-track machine. The point, after all, is to create a DAT, cassette, or CD master of the mix.

4. Live to 2

For many gigs we bypass the multitrack entirely, recording a live performance of any number of musicians straight to the 2-track master machine or sending it live to a stereo broadcast or the house monitors. A Live to 2 session is the rather intimidating combination of all elements of a Basics and a Mixdown session. Performance anxiety fills the performers, the producer, and the engineer.
But for the console itself, the gig is actually quite straightforward: microphones in, stereo mix out. Of course we want to patch in any number of signal processors. Then the resulting stereo feed goes to the studio monitors, the house monitors, the headphones, the 2-track master recorder, and/or the transmitter.

Board of confusion

These four types of sessions define the full range of signal flow requirements of the most capable mixer. Yet despite having distilled the possibilities into these key categories, the console demands to be approached with some organization. Broadly, we can expect to be frustrated by two inherent features of the device:
complexity of flow (where is the signal supposed to be going?) and quantity of controls (look at all these pots!). Complexity is built into the console because it can provide the signal flow structure for any kind of recording session one might encounter. The push of any button on the console might radically change the signal flow configuration of the device. In this studio full of equipment, that little button changes what's hooked up to what. A fader that used to control the snare microphone going to track 16 might instantly be switched into controlling the baritone sax level in the mix. It gets messy fast. The sheer quantity of controls on the work surface of the mixer is an
inevitable headache because the console is capable of routing so many different kinds of outputs to so many different kinds of inputs. 24 tracks is the norm for multitrack projects. Most of us exceed this. Number of microphones and signal processors? Well, let's just say that more is better. The result is consoles that fill the room (or a pair of 17" computer monitors) with knobs, faders, and switches. The control room starts to look like the cockpit of the space shuttle, with a mind-numbing collection of controls, lights, and meters. These two factors, complexity and quantity, conspire to make the console a confusing and intimidating device to use. It needn't be.

Flexibility: friend or foe?

In the end, a mixer is not doing anything especially tricky. The mixer just creates the signal flow necessary to get the outputs associated with today's session to the appropriate inputs. The console becomes confusing and intimidating when the signal routing flexibility of the console takes over and the engineer loses control over what the console is doing. It's frustrating to do an overdub when the console is in a Live to 2 configuration. The thing won't permit you to monitor what's on the multitrack tape. Or if the console is expecting a mixdown, but the session wants to record basic tracks, you experience that helpless feeling of not being able to hear a single microphone that's been set up. The band keeps playing, but the control room remains silent. It doesn't take too many of these experiences before console-phobia sets in. A loss of confidence maturing into an outright fear of using certain consoles is a natural reaction. Through total knowledge of signal flow, this can be overcome. The key to understanding the signal flow of all consoles is to break the multitrack recording process, whether mixing, overdubbing, or anything else, into two distinct signal flow stages. First is the Channel path. Also called the Record path, it is the part of the console used to get a microphone signal (or synth output)
to the multitrack tape machine and, you know, record it. It usually has a microphone preamp at its input, and some numbered tape busses at its output. In between you find a fader and maybe some equalization, compression, effects sends, cue sends, and other handy features
associated with getting a great sound to tape. The second distinct audio path is the Monitor path. It is the part of the console you use to actually hear the sounds you are recording. It typically begins with the multitrack tape returns and ends at the mix bus.
Along the way, the Monitor path has a fader and possibly another collection of signal processing circuitry like equalization, compression, and more. Keeping these two signal flow paths separate in your mind will enable you to make sense of the plethora of controls sitting in front of you on the console. Try to hang on to these two distinct signal paths conceptually, as this will help you understand how the signal flow structure changes when going from basics to mixdown. Try to break up the console real estate into channel sections and monitor sections so that you know which fader is a channel fader and which is a monitor fader.

Split consoles

Console manufacturers offer us two channel/monitor layouts. One way to arrange the Channel paths and Monitor paths is to separate them physically from each other. Put all the Channel paths on, say, the left side of the mixer and the Monitor
paths on the right as in Figure 8A. This is a split configuration. Working on this type of console is fairly straightforward. See the snare overload on the multitrack? This is a recording problem. Head to the left side of the board and grab the Channel fader governing the snare mic. Levels to tape look good, but the guitar is drowning out the vocal? This is a monitoring problem. Reach over to the right side of the console and fix it with the Monitor faders. Sitting in front of 48 faders is less confusing if you know the 24 on the left are controlling microphone levels to tape (channel faders) and the 24 on the right are controlling mix levels to the loudspeakers (monitor faders). So it's not too confusing that there are two faders labeled "Lead vocal." The one on the left is the mic you're recording; the one on the right is the track you're listening to.

In-line consoles

A clever but often confusing enhancement to the console is the in-line configuration. Here the channel and monitor paths are no longer separated into separate modules on separate sides of the mixer. In fact, they are combined into a single module set; see Figure 8B. Experience tells us that our focus, and therefore our signal processing, tends to be oriented toward either the channel path or the monitor path, but not both. During tracking the engineer is dedicating ears, brains, heart, and equipment to the record path, trying to get the best possible sounds on tape. Sure, the monitoring part of the console is being used; the music being recorded couldn't be heard otherwise. But the monitor section is just creating a rough mix, giving the engineer, producer, and musicians an honest aural image of what is being recorded. The real work is happening on the channel side of things, and the monitor path should only report the results of that work accurately. Adding elaborate signal processing on the monitor path only adds confusion at best, and misleading lies at worst. For example, adding a smiley face equalization curve (boosting the lows and the highs so that a graphic eq would seem to smile) on the monitor path of the vocal could
hide the fact that a boxy, thin, and muffled signal is what's actually being recorded onto the multitrack. It turns out that for tracking, overdubbing, mixing, and live to 2 sessions, we only really need signal processing once, in the channel or the monitor path. We've just seen the channel path focus of tracking. Mixing and Live to 2 sessions are almost entirely focused on the final stereo mix that we hear, so the
engineer and the equipment become more monitor path oriented. Herein lies an opportunity to improve the console. If the normal course of a session rarely requires signal processing on both the monitor path and the channel path, then why not cut out half the signal processors? If half the equalizers, filters, compressors, aux sends, etc. are removed, the manufacturer can offer the console at a lower price,
or spend the freed resources on a higher quality version of the signal processors that remain, or a little bit of both. And as an added bonus the console gets a little smaller and a lot of those knobs and switches disappear, reducing costs and confusion further still. This motivates the creation of the in-line console. On an in-line console, the channel path and the monitor path are combined into a single module so they can share some equipment. Switches lie next to most pieces of the console, letting the engineer decide, piece by piece, whether a given feature is needed in the channel path or the monitor path. The equalizer, for example, can be switched into the record path during an overdub and then into the monitor path during mixdown. Ditto for any other signal processing. Of course, some equipment is required for both the channel path and the monitor path, like faders. So there is always a channel fader and a separate monitor fader (less expensive mixers often use monitor pots). The in-line console is a clever collection of only the equipment needed, when it's needed, where it's needed.
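Here is a minimal sketch of that sharing idea, assuming a deliberately simplified module: one shared equalizer that a switch assigns to either path, plus the always-present channel and monitor faders. The class and names are hypothetical, just a way to see the switching logic.

```python
# Minimal sketch of one in-line console module: a channel path (mic to tape)
# and a monitor path (tape return to mix bus) sharing a single equalizer.

def simple_eq(sample, boost=1.2):
    # Stand-in for a real EQ; here it just scales the sample.
    return sample * boost

class InlineModule:
    def __init__(self):
        self.channel_fader = 1.0   # level going to the multitrack
        self.monitor_fader = 0.8   # level going to the control room mix
        self.eq_in_channel = True  # the switch: EQ in channel or monitor path

    def channel_path(self, mic_sample):
        """Mic/line in -> (maybe EQ) -> channel fader -> tape bus."""
        if self.eq_in_channel:
            mic_sample = simple_eq(mic_sample)
        return mic_sample * self.channel_fader

    def monitor_path(self, tape_return_sample):
        """Tape return -> (maybe EQ) -> monitor fader -> mix bus."""
        if not self.eq_in_channel:
            tape_return_sample = simple_eq(tape_return_sample)
        return tape_return_sample * self.monitor_fader

module = InlineModule()
module.eq_in_channel = True           # overdub: EQ the signal going to tape
to_tape = module.channel_path(0.5)
module.eq_in_channel = False          # mixdown: EQ the track you're hearing
to_mix = module.monitor_path(0.5)
```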
Channel surfing

An unavoidable result of streamlining the console into an in-line configuration is the following kind of confusion. A single module, which now consists of two distinct signal paths, might have two very different audio sounds within it. Consider a simple vocal overdub. A given module might easily have a vocal microphone on its channel fader but some other signal, like a previously recorded guitar track, on its monitor fader. The live vocal track is actually being monitored on some other module and there is no
channel for the guitar, as it was overdubbed yesterday. Levels to tape look good, but the guitar is drowning out the vocal? This is a monitoring problem. The solution is to turn down the monitor fader for the guitar. But where is it? Unlike the split design, an in-line console presents us with the ability to both record and monitor signals on every module across the entire console. Each module has a monitor path. Therefore each module might have a previously recorded track under the control of one of its faders. Each module also has a channel path. Therefore, each module might have a live microphone signal running through it. To use an in-line console, you must be able to answer the following question in a split second: which of the perhaps 100 faders in front of me controls the guitar track? Know where the guitar's monitor path is at all times, and don't be bothered if the channel fader sharing that module has nothing to do with the guitar track. The monitor strip may say "Guitar." But you know that the channel contains the vocal being recorded. It is essential to know how to turn down the guitar's monitor fader without fear of accidentally pulling down the level of the vocal going to the multitrack tape.
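One way to picture that split-second answer is as a lookup table, which is really all a track sheet is. A hypothetical sketch:

```python
# A "track sheet in code": what each in-line module is carrying right now.
# Module 7's channel path records the vocal mic, while its monitor fader
# happens to carry yesterday's guitar track.

console = {
    7: {"channel": "vocal mic (recording)", "monitor": "guitar (track 12)"},
    8: {"channel": None,                    "monitor": "vocal (track 14)"},
}

def find_monitor(console, name):
    """Which module's monitor fader controls this track?"""
    for module, paths in console.items():
        if paths["monitor"] and name in paths["monitor"]:
            return module
    return None

print(find_monitor(console, "guitar"))  # -> 7: grab module 7's MONITOR fader
```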
One must maintain track sheets, set-up sheets, and other session documentation. These pieces of paper can be as important as the tape/hard disk that stores the music. However, rather than just relying on these notes, it helps to maintain a mental inventory of where every microphone, track, and effects unit is patched into the mixer. Much to the frustration of the assistant engineer who needs to watch and document what's going on and the producer who would like to figure out what's going on, many engineers don't even bother labeling the strip or any equipment for an overdub session or even a mix session. The entire session set-up and track sheet is in their heads. If you have enough mental RAM for this, try to do it. It helps you get into the project. You are forced to be as focused on the song as the musicians are. They've got lines and changes and solos and lyrics to keep track of. The engineer can be expected to keep up with the microphones and reverbs and tracks on tape. This comes with practice. And when you know the layout of the console this intimately, the overlapping of microphones and tracks that appears on an in-line console is not so confusing.
Sure, the split console offers some geographic separation of mic signals from tape signals, which makes it a little easier to remember what's where. But through practice you are going to keep up with all the details in a session anyway. The in-line console becomes a perfectly comfortable place to work.

Getting your ducks in a row

If you've dialed in the perfect equalization and compression for the snare drum during a basics session, but fail to notice that you are processing its monitor path instead of its channel path, you are in for a surprise. When you play back the track next week for overdubs, you'll find
that that powerful snare was a monitoring creation only and didn't go to tape. It evaporated on the last playback last week. Hopefully you remember and/or document the settings of all signal processing equipment anyway, but more helpful would be to have had the signal processing chain in front of the multitrack tape machine, not after. No worries. Through experience, you'll learn the best place for signal processing for any given session. Equalization, compression, reverb, the headphones: each has a logical choice for its source, the channel path or monitor path. And it varies by type of session. Once you've lived through a variety of sessions it becomes instinctive. Your mission is to know how to piece together channel paths, monitor paths, and any desired signal processing for any type of session. Then the signal flow flexibility of any mixer, split or in-line, is no longer intimidating. By staying oriented to the channel portion of the signal and the monitor portion of the signal, you can use either console to accomplish the work of any session. You can focus instead on music making.
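As a rough summary of where processing usually lives (my shorthand reading of the above, habits rather than laws):

```python
# Rough defaults for where processing usually lives, per the discussion above.
# These are habits, not rules; every session earns exceptions.

PROCESSING_PATH = {
    "basics":    "channel",  # shape the sound on its way to tape
    "overdub":   "channel",  # same: commit the sound you want to keep
    "mixdown":   "monitor",  # tape is done; sculpt what you hear
    "live to 2": "monitor",  # everything rides the stereo mix
}

def where_to_patch(session_type):
    return PROCESSING_PATH.get(session_type, "decide case by case")

print(where_to_patch("overdub"))  # -> channel
```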
What's that switch do?

I will admit that there is such a thing as too much. You may be an excellent engineer capable of recording sweet tracks, but when Peter Gabriel invites you to his studio and you sit in front of his 72-channel G Series SSL, you will have trouble doing what you know how to do (recording the sweet tracks) while dealing with what you don't know how to do (using this enormous mixer with, gulp, more than 8,000 knobs and switches). Good news: that vast control surface is primarily just one smaller control group (the module) repeated over, and over, and over again. Know how to use a single module and you know how to use the whole collection of 72 modules. Impress your clients. Impress your friends. Heck, impress yourself. Master the many subtle aspects of juggling monitor and channel paths through different types of sessions, and learn to sit calmly in front of consoles that have grown well beyond 100 modules, and you'll have developed 90% of the ability to use any console anywhere.
Glossary
We'll include a list of terms introduced in each installment of the column, and collect them on our Web site as an ongoing reference. Here's a starting list of terms mentioned in this article.
basics (or tracking): The early stages of a recording project; recording the individual tracks on the multitrack recorder. This is done before adding overdubs, mixing to stereo, or mastering for final duplication/distribution.

bus (sometimes spelled buss): A signal path that can accept and mix signals from various sources.

channel, channel path (or input path or record path): The signal coming from your source (mic, instrument, or returning from an already-recorded track on your multitracker) into one of the mixer's channels, passing through that channel's electronics, then usually getting split to go to several destinations (monitor section for listening, multitracker to be recorded, effect sends for delay/reverb etc., master section for stereo mix).

compression: Dynamic treatment of a signal so that the difference between the loudest and softest moments is reduced.

delay: Electronically created repetitions of a sound (echoes). Shorter delays are perceived as flanging, chorusing, or doubling. We'll study these effects another time.

DI: Direct Inject or Direct Input; bypassing an instrument amp by taking the signal (usually from guitars and bass guitars) straight to a channel input of the board. Usually this is done via a small device called a direct box, which matches levels so the instrument's weak signal is matched to the board's input.

equalization or eq: Tonal treatment of a signal by attenuating (reducing) or boosting selected ranges of the total spectrum (bass and treble controls are the simplest examples). There are many types of eq, which we'll learn about later.

filter: An electronic device that reduces certain ranges of the total spectrum. For example, a low-pass filter attenuates (reduces) high frequencies and passes (leaves alone) low frequencies. Equalization is generally done with arrays of filters.

live to 2: Bypassing a multitrack recorder, mixing any number of input sources all at once into stereo.

microphone preamp: An electronic device that increases the typically very weak signals produced by microphones so that these signals can join others at line level in a mixer.

mix bus: See bus.

mixdown: Usually stage three in a recording project after basics and overdubs; this is when all previously recorded tracks on the multitracker are routed through (returned to) the board, their levels and panning and effects adjusted, resulting in a final stereo mix.

mixer (or console, board, desk): An apparatus with many electronic circuits, all designed to accept audio signals, split (duplicate) them, re-route them, combine them, and adjust their levels, tonal characteristics, and placement in the final stereo mix.

module: A group of electronic circuits that combine to achieve a specific task, as in a mixer's channel module.

monitor path: A mixer signal path that accepts and mixes signals to be monitored (listened to).

outboard signal processing: Treatment (reverb, delays, others) of signals outside of the board (reached via effects or auxiliary send busses and send outputs, returned to the board via return inputs and return busses).

overdub: Adding one track or several tracks to previously recorded tracks (e.g. a singer adds vocals after the instrumental tracks have been recorded).

patch cable: A cord connecting two points to carry a signal from A to B.

pot: Short for potentiometer, a device that increases or decreases the signal strength (a kind of volume control) or tweaks eq settings, etc. Basically a techie term for a knob.

return (tape or aux or effects): A type of input into the board bringing back signals other than the original sources (mics or instruments), either previously recorded multitracks or signals returning from outboard processors. See send.

reverb: An electronically created illusion of room acoustics.

send (aux or cue): Circuits (busses) that lead to an output connector from where signals can be sent to outboard processors or to monitoring (listening) setups. See return.

stereo bus: The final two circuits in a board that accept and mix signals to become the Left and Right channels of a stereo mix.

tape bus: A circuit that accepts and mixes signals to or from tape recorders.

two mix: See stereo bus.

Alex Case always has a convincingly innocent look on his face when he sees a traffic cop or a console. You can write to Alex with questions or suggestions on what you'd like to see in Nuts & Bolts at case@recordingmag.com.
Excerpted from the July edition of RECORDING magazine. ©1999 Music Maker Publications, Inc. Reprinted with permission. 5412 Idylwild Trail, Suite 100, Boulder, CO 80301. Tel: (303) 516-9118. Fax: (303) 516-9119. For Subscription Information, call: 1-800-582-8326.
Part 2
in our beginners series
A touch of reverb can enable the vocal to soar into pop music heaven, creating a convincing emotional presence for a voice fighting its way out of a pair of loudspeakers. The distinguishing characteristic of this type of parallel signal processing is that it is added to the signal; it doesn't replace the signal. The structure is illustrated in Figure 1A. The dry (i.e. without reverb; more generally, without any kind of effect) signal continues on its merry way through the console as if the reverb were never added. The reverb itself is a parallel signal path, beginning with some amount of the dry vocal, going through the reverb, and returning elsewhere on the console to be combined with the vocal and the rest of the mix. (Note that in these examples, signals are being routed to a L/R bus for monitoring on the speakers shown, as discussed last issue.) Conversely, consider equalization. A typical application of equalization is to make a mediocre sounding track beautiful. A dull acoustic guitar is made to shimmer and sparkle, courtesy of some boost around 10 or 12 kHz. A shrill vocal gets a carefully placed dip somewhere between 3 and 7 kHz to become more listenable. The idea is that this fixes the sound; you don't want to hear the unprocessed version anymore, just the good one. Adding shimmer to a guitar isn't so useful if the murky guitar sound is still in the mix too. And the point of eq-ing the vocal track was to make the painful edginess of the sound go away. Here the signal processing is placed in series with the signal flow, as shown in Figure 1B. Equalization, compression (and other dynamics processing), de-essing, wah-wah, distortion, and such are all typically done serially so that you only hear the effected signal and none of the unaffected signal. We'll explain what all those processes are, of course; stick with this series.
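In code terms, the distinction is whether the processed signal replaces the dry signal or is summed with it. A toy sketch (the "eq" and "reverb" below are trivial stand-ins, not real signal processing):

```python
# Serial vs. parallel processing, in miniature.

def toy_eq(samples):
    # Stand-in for an equalizer: the processed signal REPLACES the dry one.
    return [s * 1.5 for s in samples]

def toy_reverb(samples):
    # Stand-in for a reverb tail: normally ADDED alongside the dry signal.
    return [s * 0.4 for s in samples]

vocal = [0.5, 0.2, -0.3, 0.1]

# Serial (insert-style): only the processed version reaches the mix.
eq_vocal = toy_eq(vocal)

# Parallel (send/return-style): the dry signal continues; the effect is summed in.
send_level = 0.6
wet = toy_reverb([s * send_level for s in vocal])
vocal_with_reverb = [dry + w for dry, w in zip(vocal, wet)]
```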
The effects send

Not surprisingly, parallel and serial flow structures require different approaches on the console. For parallel processing, some amount of a given track is sent off to an effects unit for processing. That's what the effects send is for. Also known as an echo send or aux send (short for auxiliary), it is a simple way to tap into a signal within the console and send some amount of
that signal to some destination. If that destination is, say, a reverb or delay, the effected signals come back to the console's effect returns or aux returns, another set of your console's inputs that usually feed straight into the master L/R bus. Probably available on every channel module of the console, the effects send is really just another fader or knob. Not a channel fader or monitor fader, the effects send fader or knob determines the level of the signal being sent to the signal processor. Reverb, delay, and such are typically done as parallel effects and therefore rely on effects sends. Check out Figure 2A to see how it works. There's more to the effects send than meets the eye, however. It's not just an effects fader. An important benefit of having an effects send level knob on every channel on the console is that a single effects processor can be shared by all those channels. Unless you have lots of very high quality (i.e. very expensive) reverbs, for instance, it isn't practical to use one on just the snare, or just the piano, or just the vocal. Turn up the effects send level on the piano track a little to add a small amount of reverb to the piano. Turn up the effects send level on the vocal a lot to add a generous amount of reverb to the vocal. In fact, the effects send levels across the entire console can be used to create a separate mix of all the music being sent to an outboard device. It's a mix the engineer doesn't usually listen to; it's the mix the reverb listens to when generating its sound.

Fading fast

So in case you thought there wasn't enough for the engineer to do during a session, let's review the faders that are in action: the channel faders are controlling the levels of the signals going to the multitrack, the monitor faders are controlling the levels of all the different tracks being listened to in the control room, and the effects sends are controlling the levels of all the different components of music going to the reverb. Three different sets of faders have to be carefully adjusted to make musical sense for their own specific purposes. There are two more subtleties to be explored. First, as we are rarely satisfied with just one kind of effect in a multitrack project, we would probably like to employ a number of different signal processors all at once on a single project.
Each one of them might expect to use its own effects send. That is, we might have one box with a sweet and long reverb dialed in, another adding a rich, thick chorus, and perhaps a third box generating an eighth note delay with two or three fading repetitions. The lead vocal might be sent in varying amounts to the reverb, chorus, and delay; the piano gets just a touch of reverb; and the background vocals get a heavy dose of chorus, echo, and a hint of reverb. We need more than an effects send to do this; we need three effects sends. The solution, functionally, is that simple: more effects sends. It's an
important feature to look for on consoles, as the number of sends determines the number of different parallel effects devices you can use at once during a typical session. Beyond this ability to build up several different effects sub-mixes, effects sends offer us a second, very important advancement in our session work: cue mixes. (On some consoles, there are two sets of sends, one set labelled effects and one labelled aux. In that case, it's usually the aux sends that fulfill the functions we're about to discuss.) Generally sent to headphones in the studio or, in the case of live sound, fold-back monitors on the
stage, an aux send is used to create the cue mix, a mix the musicians use to hear themselves and each other. As the parameters are the same as an effects send (the ability to create a mix from all the channels and monitors on the console), the cue mix can rely on the same feature. With one exception: the cue mix, unlike the returns from the effects devices, is not returned to the console, but sent to monitors or headphones. Now let's do a fader check: channel faders control any tracks being recorded, monitor faders build up the control room mix, aux or effects send number one might control the mix feeding the headphones, aux or effects send number two might control the levels going to a long hall reverb program, aux or effects send number three might be the signals going to a thick chorus patch, and aux or effects send number four feeds a delay unit: six different mixes carefully created and maintained throughout the overdub. That's a lot to do at once. Oh, and by the way, it's not enough for the right signals to get to the right places; they also have to make musical sense. The levels to tape need to be just right for the medium on which you are recording. The monitor mix needs to sound thrilling. The headphone mix needs to sound inspiring. And the effects need to be appropriately balanced; too much or not enough of any signal going to any effects unit and the mix loses impact. This is some high-resolution multitasking. And it is much more manageable if the console is a comfortable place to work. Experience through session work in combination with studying magazines like this one make this not just doable, but fun.
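All of those mixes are built the same way, from the same sources, just with independent level settings per destination. A compact sketch with made-up numbers:

```python
# One set of sources, many independent mixes: each destination has its own
# set of levels for the same channels. (Values are arbitrary examples.)

levels = {
    "control room": {"drums": 0.8, "piano": 0.6, "vocal": 1.0},
    "headphones":   {"drums": 0.5, "piano": 0.7, "vocal": 1.3},  # "more me"
    "reverb send":  {"drums": 0.1, "piano": 0.2, "vocal": 0.9},
    "chorus send":  {"drums": 0.0, "piano": 0.0, "vocal": 0.4},
}

def build_mix(sources, mix_levels):
    """Weighted sum of every source signal, using this destination's levels."""
    n = len(next(iter(sources.values())))
    return [sum(mix_levels[name] * sig[i] for name, sig in sources.items())
            for i in range(n)]

sources = {"drums": [0.9, -0.9], "piano": [0.2, 0.2], "vocal": [0.5, 0.4]}
cue_mix = build_mix(sources, levels["headphones"])
```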
Pre or post

With all these faders performing different functions on the console, it is important to revisit the regular monitor fader to see how it fits into the signal flow. Compare the monitor mix in the control room to the cue mix in the headphones. The singer might want a vocal-heavy mix (also known as "more of me") to sing to, with extra vocal reverb for inspiration and no distracting guitar fills. No problem. Use the send dedicated to the headphones to create the mix she wants. But you have different priorities: you want to hear the vocal in an appropriate musical context with the other tracks. Moreover, extra reverb on the vocal would make it difficult to evaluate the vocal performance going to tape, as it would perhaps mask any pitch, timing, or diction problems. So clearly the cue mix and the control room mix need to be two independent mixes. They're created using aux sends and monitor faders. But other things go on in the control room during a simple vocal take. For example, you might want to turn up the piano and pull down the guitar to experiment with some alternative approaches to the arrangement. Or perhaps the vocal pitch sounds iffy. The problem may be the 12-string guitar, not the singer. So the 12-string is temporarily attenuated (its level is lowered) in the control room so you can evaluate the singer's pitch relative to the piano, which is in tune. All these fader moves in the control room need to happen in a way that doesn't affect the mix in the headphones, an obvious distraction for the performer. That's what the pre/post switch shown in Figure 2A is for. A useful feature of many aux or effects sends is that they can grab the signal before (i.e. pre) or after (i.e. post) the channel or monitor fader. Clearly, it's desirable for the headphone mix to be sourced pre-fader so that it will play along independently, unchanged by any of these control room activities.
What is the usefulness of a post-fader send, you might then ask? The answer lives in the aux send's other primary function: effects sends. Let's observe a very simple two-track folk music mixdown: fader one is the vocal track and fader two is the guitar track (required by the folk standards bureau to be an acoustic guitar). The well-recorded tracks are made to
sound even better by the oh-so-careful addition of some room ambience to support and enhance the vocal, while a touch of plate reverb adds fullness and width to the guitar. After a few hours, er, I mean five minutes of tweaking the mix, the record label representatives arrive and remind you that "it's the vocal, stupid." Oops, the engineer is so in love with the rich and sparkly acoustic guitar sound that the vocal track was a little neglected. It must be fixed. Not too tricky; just turn up the vocal. Here's the rub. While pushing up the vocal fader will change the relative loudness of the vocal over the guitar and therefore make it easier to follow the lyric, it also changes the relative balance of the vocal versus its own reverb. If the vocal's reverb send is pre-fader, turning up the vocal leaves its reverb behind; the vocal becomes too dry, the singer is left too exposed, and the larger than life magic combination of dry vocal plus super-sweet reverb is lost. The solution is the post-fader effects send. If the source of the signal going to the reverb is after the fader, then fader rides will also change send levels to the reverb. The all-important relative balance between dry and processed sound will be maintained. Effects are generally sent post-fader for this reason.
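The arithmetic behind that rule is worth seeing once. In the sketch below (arbitrary numbers), a post-fader send keeps the wet/dry ratio constant through a fader ride, while a pre-fader send does not:

```python
# Pre-fader vs. post-fader sends, reduced to arithmetic.

def send_levels(dry_in, fader, send, post_fader):
    to_mix = dry_in * fader                    # what the monitors hear, dry
    tapped = to_mix if post_fader else dry_in  # where the send taps the path
    to_reverb = tapped * send                  # what the reverb receives
    return to_mix, to_reverb

dry_in, send = 1.0, 0.5

for fader in (0.8, 1.2):  # the label asks for "more vocal"
    mix, rvb = send_levels(dry_in, fader, send, post_fader=True)
    print(f"post-fader: fader={fader}  wet/dry = {rvb / mix:.2f}")  # constant
    mix, rvb = send_levels(dry_in, fader, send, post_fader=False)
    print(f"pre-fader:  fader={fader}  wet/dry = {rvb / mix:.2f}")  # drifts
```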
An insert jack typically carries two signals on one connector: one on the tip and one on the ring, with the sleeve acting as ground reference for both. For headphones, one signal is the left channel and the other is the right, but for an insert, one signal is the send and one is the return. Signals travel to and from the mixer on one cable, which often splits into two separate cables to plug into the effect in/out (a Y cable). Usually the tip is the send and the ring is the return, but check your mixer's manual to be sure.

Wrench turnings

An organized approach to the console and also the outboard processing equipment will help make it easy and fun to work in a room full of gear. An intuitive understanding of when to use an aux or effects send and when to use an insert will free your mind to be creative with the effects. And knowing that cue mixes generally use pre-fader sends while most parallel effects need post-fader sends will keep you out of trouble. Alex Case welcomes suggestions for Nuts & Bolts. Write to him at case@recordingmag.com.

Excerpted from the August edition of RECORDING magazine. ©1999 Music Maker Publications, Inc. Reprinted with permission. 5412 Idylwild Trail, Suite 100, Boulder, CO 80301. Tel: (303) 516-9118. Fax: (303) 516-9119. For Subscription Information, call: 1-800-582-8326.
Part 3
in our beginners series
Multitrack Recorders
Have you ever been picked on by some bully in school? I have. After the event I replayed it over and over in my mind until I came up with the perfect comeback, the one I wished I'd delivered instead of giving him my Spam and potato chip sandwich. The desire to improve history by rewriting it is pretty instinctive. And in music-making it isn't just a wish, it's a modus operandi. The tool that lets us relive a situation as long as it takes to perfect it is the multitrack recorder. Rather than living with the live-to-2 recording, the multitrack gives us that much-appreciated second (or third, or fifteenth) chance to get a better take.

The ins and outs

Here's the idea: as music makes its way from the various microphones to the final 2-track master, we store it temporarily on a multitrack. And that's how it's hooked up: microphone out, through the console record path (channel path), to the multitrack recorder. On playback, we send signals from the multitrack recorder out through the console's monitor path to the mix bus. (Figure 1 sketches it out in a general way; check out our previous two installments to refresh your understanding of the console's busses and signal paths.) Falling in between the channel path and the monitor path, the multitrack recording device (whether a computer hard disk recorder, digital tape deck, or analog
tape recorder) receives whatever tracks you are currently recording at its input; it plays back whatever tracks are already recorded. The multitrack is nothing more than an audio storage device. It stores the drums while you add bass. It stores the rhythm section while you add vocals and solos. What are good devices for audio storage? There are just a couple of valid answers (so far): tape and disk. For analog storage, tape is the only practical multitrack medium. For digital, there's tape media like ADAT and TASCAM, but disks in all their formats are also possible: magneto-optical disks, internal or external hard disks, removable disks... Naturally the recording device must possess high sound quality, reliability, and affordability. Three other features are perhaps less obvious. First, it must be able to be erased and then re-recorded over, on the off chance someone makes a mistake. Second, the recording must be available for immediate playback right after being recorded. So while 35mm film might be a great release format for sound, it is impractical in the studio because it requires processing in a film laboratory before it can be played back. A final functional requirement of the multitrack recorder is that it must be able to record and play back simultaneously. Its power as a creative tool in the recording studio depends on its ability to overdub: record a new track while simultaneously playing back previously recorded tracks.
How can a recorder play back one thing while recording another? Simple enough. Each track of the multitrack recorder assumes one of two states: playback or record. When a track is in playback mode, its audio is sent to the multitrack machine's output. In record mode, new audio for that track is written/stored on the tape or disk.

Overdubs

The cool thing about a multitrack is that it can enter record mode selectively, track by track, so that it records only on the tracks desired. The other tracks aren't being recorded onto, so they instead stay in playback mode. This accommodates the overdub. Let's check out a power trio session consisting of drums on track one, bass on track two, guitar on track three. Vocals are to be done as an overdub onto track four. During the overdub, it is pretty clear what multitrack outputs one, two, and three are. What signal appears at output number four? Seems logical that it should be the vocal being overdubbed. But one wonders how a recorder can play back the same track it's recording. In fact, it can't! So it doesn't. On the track actually being recorded, the tape machine can't play back what it's laying onto tape or disk. There is an inevitable delay between when the signal is recorded and when it is played back. That delay is long enough to cause the musical equivalent of a train wreck. The solution is that the machine doesn't even attempt to play back the track it is recording. Instead the output for the track being recorded is its own input. Honest. No typo there. The vocal signal being recorded is sent to multitrack input number four, and it is split within the multitrack machine before being recorded. The divided vocal signal goes both to the recorder and simultaneously to the multitrack output. This is standard operating procedure, and is shown in Figure 2. The mode that routes the input of the track actively being recorded to its own output is called input mode. If a track isn't in input mode, its output signal is the audio already recorded on that particular track. Playback mode (or repro mode, from reproduce) describes this configuration, and it is the standard signal flow for tracks not currently being recorded. So there are two choices for what signal appears at each output of a multitrack. It can play back what's already on the tape or disk (that's
repro mode), or it can play back what is currently being sent to the tape or disk (that's input mode). You've got two options here: reread this paragraph half a dozen times, or sit in front of a tape machine for a couple minutes. It's much more confusing to say than it is to do.
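Or, a third option: here is the whole idea as a few lines of Python, a toy recorder rather than any real machine's logic.

```python
# Toy model of one multitrack track's output switching, per the text above.

class Track:
    def __init__(self, recorded):
        self.recorded = recorded     # what's already on tape/disk
        self.record_enabled = False  # is this track being recorded right now?

    def output(self, live_input):
        # Input mode: a record-enabled track passes its own input straight
        # through, because it cannot play back what it is still writing.
        if self.record_enabled:
            return live_input        # input mode
        return self.recorded         # playback (repro) mode

drums = Track(recorded="drum take 3")
vocal = Track(recorded=None)
vocal.record_enabled = True              # overdubbing the vocal onto track 4

print(drums.output(live_input=None))     # -> "drum take 3" (repro mode)
print(vocal.output(live_input="mic"))    # -> "mic" (input mode)
```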
The tricks and treats

Okay, so a multitrack is used to record the rather elaborate audio arrangement of a pop tune a few tracks at a time, an arrangement that might use more than 24 tracks of recorded music. Naturally, we do more than just print tracks with the multitrack. Let's explore some of the more subtle production capabilities offered by the humble multitrack recorder. One handy feature is the ability to record from one track to another, a process called bouncing. There are a few reasons to bounce tracks. The first reason is convenience. As a project progresses, the multitrack can get a little messy, with alternate vocal tracks, solo out-takes, background vocal harmony ideas, and that experimental (but ultimately rejected) contrabassoon solo all spread out to various locations among the keeper tracks. It's often helpful to reorganize the tracks into a more logical order: all the
drums on the first few tracks, all the vocals on the last few tracks, with the rhythm section laid out in an order that is logical and comfortable for you. To move a signal from one track to another, simply hook up the output of one track to the input of another, set the source track to playback, the target track to record mode, and record. On analog machines, this costs you a generation of quality, which is more than tolerable on some of the better machines. On digital machines, this bouncing ability often exists digitally within the machine. No patching, and no generation loss. Needless to say, we bounce tracks more often on digital multitracks.
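As signal flow, a bounce really is that simple. A sketch (the "generation down" tag is a stand-in for the quality cost of an analog copy):

```python
# Bouncing: play back one track, record it onto another.

tracks = {1: "drums", 2: "bass", 9: "contrabassoon solo (rejected)"}

def bounce(tracks, src, dst, analog=False):
    audio = tracks[src]
    if analog:
        audio = audio + " (one generation down)"  # each analog copy degrades
    tracks[dst] = audio                           # record onto the new track
    # the source can now be erased, or kept as a safety

bounce(tracks, src=9, dst=24, analog=True)
print(tracks[24])
```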
Another variation on the bouncing theme is submixing. Instead of doing a direct transfer from one track to another, it is often handy to create and record a submix of a component of the tune. If drums were recorded across 12 tracks of the multitrack, it can be a good idea to mix just the drums down to two new tracks, leaving the non-drum tracks unmixed for now. This is helpful in two ways. First, it frees up tracks for other purposes. Digital recording software can sometimes offer infinite numbers of
tracks, but that can be limited by the size of the storage medium and the power of the machine doing the recording. And analog recorders, and digital tape and MiniDisc machines, have fixed numbers of tracks available. So bouncing down lets you take
a large number of tracks and pre-mix them to make room for more of your orchestration. Second, if some elements of the tune, like the drums in this case, are already carefully pre-mixed, then creative energy, effects devices, patch cables, and fader fingers are free to focus on the remaining elements of the mix. Of course, there is a downside to submixing. In order to free a track by submixing, you must perform the rather scary act of erasing the original. Submixing twelve drum tracks to a stereo pair of tracks will indeed free up ten tracks for vocals, solos, and other musical ideas, but only if you are willing to erase the original snare track, the original kick drum track, etc. And the submixed drums are only useful in the final mix if the submix itself is, as they say, totally killer. Without knowing exactly what the entire mix will sound like, you've got to create an appropriate, complementary, compelling submix of the drums. Clearly, submixing some number of tracks down to fewer tracks is its own skill, requiring not only basic mixing chops but also a little bit of extra-sensory perception to predict the appropriate mix goal of a given element of the overall mix. Expect to make a few mistakes. Plan to remix these submixes a few times, and try to have a backup of any tracks you erase. This is easier to do with digital systems than with analog recorders, since you don't lose a generation of sound quality when you dub off a backup. Sometimes submixes are printed to the multitrack not so much to free other tracks but to store a mix move. Printed mix moves are a good way to have manual fader rides and crazy pan pot moves in a mix without automation. Just do the mix move manually, recording the audio result to spare tracks of the multitrack. Yet another variation on the bouncing theme is called comping. A comp is hip-speak for a composite. It refers to creating a single track that is in fact a collection of pieces of any number of different tracks: the best chorus happened on take three while the best bridge happened on take seven, and the best intro was yesterday's scratch vocal. Aack! Mixdown with vocals all over the multitrack coming up on faders all over the console is very distracting. They are comped by recording all the appropriate pieces to a separate track. The comped track then appears in one place, on one fader, and is a lot less distracting during mixdown. In fact, comping is nothing more than bouncing from many different source tracks, one at a time, to the same destination track.
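In code, a comp is just a copy plan: which take supplies which section of the destination track. A hypothetical sketch:

```python
# Comping: assemble one keeper track from the best pieces of several takes.

takes = {
    "take 3":  {"chorus": "killer chorus"},
    "take 7":  {"bridge": "best bridge"},
    "scratch": {"intro":  "yesterday's magic intro"},
}

# The plan: bounce each section, one at a time, onto the same comp track.
comp_plan = [("intro", "scratch"), ("chorus", "take 3"), ("bridge", "take 7")]

comp_track = {}
for section, take in comp_plan:
    comp_track[section] = takes[take][section]

print(comp_track)  # one track, one fader, the best of every take
```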
The multitrack can do more than just record instruments, other tracks, and submixes. Why not record some effects to the multitrack? If you stumble upon a truly magic effect that you think may be difficult to reproduce, record it to its own tracks. Sometimes it's necessary to record the effect because you're borrowing the $3500 Spastron Digital Nirvana Box and it has to be returned tonight. More likely it's because the total effect uses an elaborate signal path through 14 different effects units, and though you actually own them, the exact settings may be difficult to recreate. Printed effects are a good habit when you have spare tracks and have created a rather dramatic effect. Finally, there is no reason the entire mix itself can't be recorded on the multitrack. Naturally, the mix is recorded to the 2-track master machine, be it DAT, disk, or analog tape. But if you have spare tracks, print a safety version of the mix on the multitrack. Beyond the comfort of having a backup copy, you create the basis for a fast, and therefore cheap, recall. You've no doubt experienced the temptation to recall a mix. You, the artist, or the label decide a week or two after mixdown that everything in the mix was gorgeous... but it just needs a little extra reverb on the snare, and some slap echo on the slide guitar would be nice. These changes generally require the entire mix to be recalled. That is, the same studio full of the same gear has to be restored to the exact same configuration it was in the day you mixed, knob by knob, switch by switch. And all manual moves must be re-performed exactly as before. This ain't trivial. It's difficult to get a recall to truly match the original mix. Often the best you can do is get close enough for rock and roll and move on. If the entire mix is on two tracks of the multitrack, the recall is pretty trivial. Push up the two faders with the original mix on them, and use the snare and slide guitar tracks as sources for the additional effects. Mix
the new effects with the old stereo mix, and for just a few minutes' work you've got a new-and-improved mix that will please everyone for at least another week or two.

Pros and cons

We can see where a multitrack is a core part of your studio and its operations, and the right machine will let you do everything from storing and assembling passes to recalling entire mixes. So how do you choose the right machine? When evaluating which multitrack to buy, rent, or borrow, the normal priorities apply. You'll be looking for the balance of sound quality versus price that fits your budget. When evaluating the cost of a multitrack, do keep in mind the cost of the media as well as the cost of the machine. You've got the one-time cost of buying the tape machine to take care of first, but the per-project cost of the tape (or disks) to justify to yourself or the clients later. Beyond this value calculation, some other features should be given due consideration. The multitrack recorder, whether tape- or disk-based, is a mechanically complicated device. Unlike, say, a digital multieffects unit, a multitrack has moving parts, and lots of them. It needs to be well designed and well maintained.
For a tape machine, look for a manufacturer you or a colleague knows to be trustworthy. If new, make sure it has a warranty to get you through any manufacturing faults. If used, try to assess the amount of loving care or sloppy abuse the machine endured. If the machine hiccups during day one of an album you might lose the gig. If the machine crashes on day 231 you might lose all the audio for the entire project. It's tough to put a price on reliability, so give it some thought before you transact. Related to quality is the feel of the machine. How quickly does it fast forward and rewind? Is it cooperative or cantankerous as you record a chorus, rewind to the beginning of the chorus, re-record the chorus, rewind to the second repeat of the words "...baby, yeah...", re-record the words "...baby, yeah...", rewind, repeat, etc.? The process of recording a song is fairly active and very non-linear, and the multitrack needs a transport that can keep up with the creative needs of the session. Some feel like Italian sports cars, anticipating your every recording desire; some drive like an old school bus with a flat front left tire. The decision to use a digital or an analog multitrack should really be governed by how it meets the above criteria. Choose the machine you feel gives you appropriate sound quality for the dollar while offering acceptable reliability and a comfortable transport. If you go analog, you get the ability to edit tape. All you need is a razor blade for cutting, some special tape for taping the tape to another, er, piece of tape, and an editing block to help you cut consistently. With appropriate equipment, clean and steady hands, and a willingness
The multitrack gives us that much-appreciated second or third, or fifteenth chance to get a better take.
For a tape machine, look for a manufacturer you or a colleague knows to be trustworthy. If new, make sure it has a warranty to get you through any manufacturing faults. If used, try to assess the amount of loving care or sloppy abuse the machine endured. If the machine hiccups during day one of an album you might lose the gig. If the machine crashes on day 231 you might lose all the audio for the entire project. Its tough to put a price on reliability, so give it some thought before you transact. Related to quality is the feel of the machine. How quickly does it fast forward and rewind? Is it cooperative or cantankerous as you record a chorus, rewind to the beginning of the chorus, re-record the chorus, rewind to the second repeat of the words ...baby, yeah..., rerecord the words ...baby, yeah..., rewind, repeat, etc. The process of recording a song is fairly active and very non-linear, and the multitrack needs a transport that can keep up with the creative needs of the session. Some feel like Italian sports cars, anticipating your every recording desire; some drive like an old school bus with a flat front left tire. The decision to use a digital or an analog multitrack should really be governed by how it meets the above criteria. Choose the machine you feel gives you appropriate sound quality for the dollar while offering acceptable reliability and a comfortable transport. If you go analog, you get the ability to edit tape. All you need is a razor blade for cutting, some special tape for taping the tape to another, er, piece of tape, and an editing block to help you cut consistently. With appropriate equipment, clean and steady hands, and a willingness
RECORDING SEPTEMBER 1999
Tape-based multitrack machines have the same problemonly theres the added benefit of a room full of people waiting for you to find the right place on tape. Tape counters, memory locations, and good notes on a take sheet can make this less of a headache, but disk-based recorders offer true random access. Want to hear track six? Click. Here it comes. Another type of multitrack recorder we havent mentioned is the DAW (digital audio workstation). These systems are in the hard disk category, but theyre really a whole subject unto themselves. They combine all the advantages of stand-alone hard disk recorders with excellent editing. And track bouncing in a DAW is different, because you can do it without deleting the original tracks. If this sort of power appeals to you, you should follow the DAW Diaries in this magazine for tips and tricks that have worked for my fellow writers. In my recording life I use both tape and hard disk machines. I guess Im showing my age, but I prefer the vibe that comes from using tape-based multitrack machines. Rewinding before re-taking a solo gives the session a sort of pace that I find natural. Instant access to the beginning of the solo makes it all to easy to work way too fast and lose the chance to take a breath and be creative. On the other hand, random access locating and its associated nondestructive editing are clearly a powerful production tool. The choice is yours, and a bit of practice and forethought will help you pick the multitrack thats right for you. Alex Case welcomes suggestions for Nuts&Bolts . You can contact Alex at case@r ecordingmag.com.
Part 5
in our beginners series
Microphones 2
Much of a microphone's behavior is determined by the following simple distinction: is the diaphragm open to air on one side or both? Figure 1 demonstrates this distinction. The upper capsule shows a diaphragm that is open on one side but blocked on the other. The lower capsule is open to the acoustic pressure on both sides. The figure also shows a particularly illustrative snapshot of an ongoing acoustic wave moving across the entire figure from left to right. The capsules are oriented so that they are both open on the left side; they are looking toward the oncoming wave. The top capsule has a compression wave immediately in front of it. This instant of high pressure pushes the diaphragm of the capsule inward, to the right. Similarly, the lower capsule sees a higher pressure to the left than it does to the right, so it too is pushed to the right.
Figure 1: The upper capsule is open on one side only, measuring pressure. The lower capsule is open on both sides, measuring velocity.
So far the two types of capsule seem to behave identically. Consider Figure 2, which shows the two microphones rotated 90° so that they are oriented upward. As the acoustic wave rolls by in this instance, the upper capsule is again pushed inward, as the pressure on the open outside of the diaphragm is greater than the enclosed inside. The lower capsule, on the other hand, sees the same high pressure on both sides of the diaphragm. The interesting result is that this diaphragm doesn't move at all; it only moves when there is a pressure difference between the two sides. The upper capsule measures pressure. The lower capsule measures a pressure difference, or to be more mathematically precise, a pressure gradient. Naturally, the lower capsule is not typically called a pressure difference or pressure gradient mic, at least not in rock and roll. Instead, it goes by the slightly cooler name: a velocity transducer.
Figure 2: For sound from the side, the diaphragm of the upper capsule is displaced. The lower capsule rests, completely unaffected.
The diaphragm of either capsule flaps in the wind at audio frequencies, perhaps as slowly as 20 times per second and as quickly as 20,000 times per second. These two types of transducers, pressure and velocity, are both perfectly capable of converting music into voltages. Both types are common in any studio's mic closet. But there are differences between them.

Directionality
The physical orientation of the capsule itself is fundamental to determining its directionality. A key difference between the pressure microphone and the velocity microphone has already been demonstrated in Figures 1 and 2. The pressure mic (the upper capsule in each figure) reacts to sound coming from in front or from the side. In fact, it responds to pressure waves no matter what their angle of arrival. Being equally sensitive to sounds from all directions, it earns the moniker omnidirectional.
The velocity mic (the lower capsule in each figure) demonstrates an ability to hear sound arriving from the front, yet it ignores sound coming from the side. The arrangement in which the capsule is open on both sides is most sensitive to sound coming straight at the diaphragm, from the front or the rear, and least sensitive to sound coming from the sides. The mic's sensitivity decreases gradually as sound sources move off-axis from front to side. To understand this better we need to graph it on polar coordinates. If we plot the sensitivity of the microphone as a function of the angle of arrival of the sound from the source, we can make visual the directional discrimination properties of the mic.
You can achieve a total understanding of how mics work by understanding how the capsule interfaces with the air.
Figure 3 shows the three polar patterns we most often see in the studio; parts A (omnidirectional) and B (bidirectional) we've just discussed. The omnidirectional pickup pattern shown in 3A is equally sensitive at all angles, and is a natural result of being a pressure transducer. The bidirectional pattern (also called the figure-eight pattern) shows two points of maximum sensitivity directly in front of and behind the capsule, diminishing sensitivity as the angle of arrival goes toward the side, and finally total rejection for sounds fully at the side. The bidirectionality of the mic is a byproduct of being a velocity transducer. It only measures the movement of particles against it, ignoring particle velocity that moves alongside, parallel to the diaphragm itself. But there is a little more to the bidirectional pattern. The front and the back lobes of the figure-eight pattern are not exactly the same.
Figure 4 shows a velocity microphone's reaction to a given sound wave as it propagates left to right. Figure 4A orients the mic facing left, into the sound. The higher pressure, left versus right, suggests the air particles and the diaphragm will move toward the right. This motion will create an output voltage of, say, one volt. Figure 4B reverses the orientation of the mic. The air still moves to the right, but that motion now pushes on what the mic considers its back side, creating an output voltage of negative one volt. The velocity microphone can be equally sensitive to sounds in front of or behind the mic, but it picks up sound from behind with reverse polarity. In front of the mic, a positive pressure creates a positive voltage, while behind the mic, a positive pressure creates a negative voltage.

Figure 4: Reversing the orientation of a velocity microphone reverses the polarity of the output signal.
Look that way
It is this reverse polarity of front versus back that enables us to create a unidirectional microphone. Figure 3C shows this type of pickup pattern, which is most sensitive in only one direction. This is quite helpful in the studio when you wish to record several instruments at once, but each to its own track. When physical isolation isn't available in the form of isolation booths, engineers achieve a sort of acoustic isolation by using unidirectional
microphones aimed directly at their intended instruments, rejecting or at least minimizing the unwanted neighboring instruments. If you add up the response of an omni to that of a figure-eight, you end up with the cardioid response shown in Figure 3C. (It's called a cardioid because, to someone who knew Greek, it looked heart shaped. But it's a pretty funny looking heart, not the sort of heart that would sell a valentine greeting card. I guess there wasn't a Greek-based way to say "Looks kind of like a pizza with one slice missing.") Want to build a cardioid response? Grab a pressure transducer (or any omnidirectional microphone) and a velocity transducer (or any bidirectional microphone). Place them as near each other as possible, facing the same way, and mix them together onto one track. If you monitor with the two microphones at equal amplitude, you'll have created a cardioid pickup pattern using a two-mic combination.
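To put numbers on that trick, here is a minimal Python sketch, assuming the standard textbook idealization (implied, but not spelled out, above) in which an omni capsule has a sensitivity of 1 at every angle and a figure-eight follows the cosine of the angle of arrival:

import math

def omni(theta):
    # Ideal pressure capsule: sensitivity 1.0 at every angle of arrival.
    return 1.0

def figure8(theta):
    # Ideal velocity capsule: full sensitivity on axis, zero at the sides,
    # and a polarity-reversed (negative) rear lobe: the cos(theta) model.
    return math.cos(theta)

def cardioid(theta):
    # Electrical sum of the two capsules at equal level, scaled to peak at 1.0.
    return 0.5 * (omni(theta) + figure8(theta))

for deg in (0, 90, 180):
    t = math.radians(deg)
    print(f"{deg:3d} deg  omni {omni(t):+.2f}  fig-8 {figure8(t):+.2f}  sum {cardioid(t):+.2f}")
# 0 deg: both add (front). 90 deg: omni only (side). 180 deg: cancellation (rear).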
To see how a cardioid pattern is born, look closely at some landmark points in the response of the two component patterns of Figures 3A and 3B. Directly in front of the microphone you get a contribution from both capsules. Off to each side, only the omnidirectional pressure transducer picks up sound. Behind the microphone you have the contribution of the omnidirectional piece being undone, literally cancelled by the polarity-reversed rear lobe of the bidirectional mic. Placing a pressure capsule and a velocity capsule in the same place and combining them gives you double the sensitivity in front of the pair and total rejection to the rear. It's a good trick. The downside is that you get a single mic for the price (and noise floor) of two. There's another way to do it that requires only a single capsule.

It is possible to delay the components of sound reaching each side of the diaphragm so that for sources behind the mic, they arrive at exactly the same time. Ports into the microphone are configured so that there is no direct path from the rear of the microphone to the rear side of the diaphragm. The arriving sound must navigate the short detour of an acoustic labyrinth on its way to the back side while simultaneously wrapping around to the front. If the time it takes the wave to diffract around to the front of the mic is equal to the time it takes the same wave to reach the back of the diaphragm via the longer path, the diaphragm will not move. When the diaphragm is pushed from the front by the positive portion of a cycle, it is simultaneously pushed from the rear by the positive portion of the cycle. This push/push phenomenon emulates the situation of Figure 2, in which sound arriving from the side presents the same pressure on both sides of the velocity diaphragm. Mission accomplished: acoustic manipulation of the signal achieves rejection from behind. Pretty darn clever. But there's a little more to it. For this modified microphone to be of any use, sound arriving from sources that are in front of the microphone must still be effective at moving the diaphragm.
This is achieved by making sure that the front versus back time-of-arrival difference at the diaphragm for sound arriving from the front of the capsule is exactly (or nearly) equal to 180° of phase difference. In this way the waveform is presented to both sides of the diaphragm in a complementary way. When the diaphragm is pushed from the front by the positive portion of a cycle, it is simultaneously pulled from the rear by the negative portion of the cycle. Not only does sound arriving from the front of the microphone still move the diaphragm, but it does so in
this push/pull fashion: it simultaneously pushes on one side and pulls on the other. That translates into increased sensitivity. This clever manipulation of the waveform as it reaches both sides of the velocity transducer leads to a cardioid pattern: enhanced sensitivity in the front, total rejection at the rear. In fact, all this acoustic signal processing tries to make a single capsule that is half sensitive to pressure and half sensitive to velocity. It is the acoustic combination of the two microphones we combined electrically above. By using a single capsule, though, it accomplishes this at a much more appealing price. Many single diaphragm cardioid microphones (the famous Shure SM57 and Electro-Voice RE20, among the many good examples) offer a good visual example: it is easy to see the ports on the body of the microphone that are the entry points for the sound into the back of the diaphragm.

Figure 5: Cardioid pick-up is achieved through acoustic manipulation of sound reaching each side of the diaphragm.
Look this way
By mixing differing amounts of pressure and velocity transduction, other polar patterns can be created. We've seen how equal parts pressure and velocity produces a cardioid. More pressure than velocity leads to a directivity that is, not surprisingly, more omni than cardioid. Called a subcardioid, it is slightly more sensitive front versus back; it partially but not completely rejects sounds behind it (Figure 3D). Conversely, having less pressure than velocity tilts the balance toward the bidirectional pattern. This more directional pattern is usually called a hypercardioid (Figure 3E). It is more sharply focused forward. Because it is less pressure than velocity, however, there is no longer perfect cancellation at the rear. The hypercardioid develops a small rear lobe of sensitivity that is the residual rear lobe of the bidirectional component. Enhanced forward sensitivity comes at the expense of diminished rearward rejection. You'll no doubt find specific session situations where these other patterns are just what you need.

As before, all these intermediate patterns can be created by mixing variable amounts of two types of transducers, using two mics and a mixer; the relative levels of the two mics determine the degree of omni versus bidirectional in the net polar pattern. Alternatively, sub- and hypercardioid patterns can be created on a single capsule by acoustically mixing the two types of transduction through the clever design of ports reaching the rear of the diaphragm. A particularly good visual case study comes via the venerable microphone manufacturer Neumann. They recently released small diaphragm omnidirectional and hypercardioid mics to complement their well known cardioid, the KM184. Check out the photo of the complete series that shows the mics side by side. The only visible difference among them is the rear ports.
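The blend can be sketched in one line of Python. The weightings below are illustrative, not the exact pressure/velocity proportions of any real subcardioid or hypercardioid design:

import math

def pattern(theta, p):
    """Blend p parts pressure (omni) with (1 - p) parts velocity (figure-8).
    p = 1.0 -> omni, p = 0.5 -> cardioid, p < 0.5 -> hypercardioid family."""
    return p + (1.0 - p) * math.cos(theta)

rear = math.pi  # 180 degrees, directly behind the mic
for name, p in [("omni", 1.0), ("subcardioid", 0.7),
                ("cardioid", 0.5), ("hypercardioid", 0.3)]:
    # A negative rear value is the polarity-reversed rear lobe described above.
    print(f"{name:13s} rear sensitivity = {pattern(rear, p):+.2f}")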
Know it all
All studio microphones are either pressure transducers, velocity transducers, or some combination thereof. In addition, almost all studio mics employ one of the following design types: moving coil, ribbon, or condenser. We've spent two months digging into these concepts and found that within all of these types of microphones lives a knowable, straightforward process. Armed with this knowledge of the physics behind the technology, next month we'll discuss the basic specifications, features, and switches you might find on a microphone. You'll find they all stem from these microphone fundamentals. Deciding which microphone to buy, or which microphone to use on a specific instrument in a specific situation, will depend on your knowledge of this basic process of transduction from acoustic to electric energy, in combination with your feeling about what sounds best.

Alex Case wants to know what you want to know. Request Nuts & Bolts topics via case@recordingmag.com.
Part 6
in our beginners series
Microphones 3
In the relatively straightforward case of recording a power trio, you might have to select maybe a dozen microphones. The trio: drums, bass, guitar, and vocals. The mics: kick, snare, hat, three toms, two overheads, two out in the room, bass, guitar, and vocal. Selecting the right mic for the job is an ever-present part of the recording gig. The creation of an album involves making this small decision maybe hundreds of times. In the last two episodes of Nuts & Bolts we explored the inner workings of microphones. This month we tackle the meaning of the various specifications, and the function of the various controls that might appear on a microphone. The goal is to convert microphone selection from a random, luck-of-the-draw process into an organized system built on total knowledge of all microphone technologies and parameters.

Frequency response
Selecting a mic really begins with its frequency response; we want to know how it sounds. A frequency response plot is the first view into this. This description of the microphone's output at different frequencies reveals any biases for or against particular frequency ranges. Figure 1 offers a few samples. The oft-cited color of a microphone is very much determined by its frequency response. Try to have in mind a rough sense of the frequency response of every mic you use. You can store the data (in your brain, that is) visually, literally picturing frequency response plots in your mind. Alternatively, you can store the data in words: warm, boxy, present, edgy, airy. As your experience grows, these words develop a very precise meaning. As time goes by and naturally you acquire more mics, you'll need to add new words to your lexicon to be more precise. It's not just warm; it's thick, tubby, big, phat, punchy, heavy, or some such. It's not airy; rather it's breathy, it shimmers, it soars, it sparkles.... So the frequency response plot is a good starting point for learning the sound of the device. But your professional development will always, for the rest of your life, include refining your own internal sense of the sound of each make and model of microphone.
The idea of a unidirectional pattern is that the microphone is focused most on sounds directly in front of the mic. Sounds arriving from the side are attenuated. And sounds arriving from the rear are rejected. That's the theory. The fact is, off-axis sounds aren't just attenuated; the off-axis frequency response of a microphone is often different from the on-axis frequency response. The result is that sounds arriving from anywhere but in front of the mic are spectrally altered. Specifically, most cardioid patterns are better at rejecting high frequencies off to the side and behind than low frequencies. Said another way, the cardioid microphone is more of a true cardioid at high frequencies and more omnidirectional at low frequencies.

Consider the drum kit, a crowd of directional mics packed in tight, each aimed at its own piece of the kit: kick, snare, hat, toms, and so on. Moreover, these instruments tend to be quite loud, forcing themselves on every microphone in the zip code. Leakage abounds. But it's not just drums that require us to consider this acoustic leakage issue. When we work with loud instruments (e.g. horns, percussion, and the obligatory electric guitar) or in tight quarters (small booths, and most home or project studio recording situations), we get significant off-axis sound into our directional microphones. In all cases, if that off-axis sound is dull or murky it will drag down the sound of the mix. And another thing (this one seems more obvious but is all too often neglected): we often record off-axis sound on purpose. Drums, electric guitars, sections (of horns, strings, voices, etc.) and many other tracks welcome the placement of some distant mics for recording the ambient sound in the room. In these situations, choose a mic that welcomes off-axis sound and doesn't impose an unappealing coloration onto the sound. These ambient room mics are supposed to be far from the source and are generally supposed to be picking up room reflections coming from all directions. Choosing an omnidirectional mic is one solution. But it is perfectly acceptable to want a directional mic to achieve some rejection. Keep in mind the off-axis coloration the microphone might add. Choose one whose off-axis response enhances the ambient sound you are trying to capture. A subtle part of microphone choice then has to do with the degree of off-axis coloration the mic imparts. As if you didn't have enough to memorize about a microphone, now you've got to learn its frequency response as a function of angle... Don't sweat it, though; it will come over time.

Proximity effect
Okay, so you've got the frequency response of a microphone thoroughly internalized, both on- and off-axis. What next? Proximity effect: the low frequency accentuation that occurs when a sound source is very close to a directional (i.e. non-omnidirectional) microphone. Proximity effect represents another alteration to the frequency response of a microphone. Fortunately, it is easy to hear and, used deliberately, easy to like: it makes a close source sound larger than life. This is helpful when advertising monster truck shows, announcing sports, or just plain ol' talking. DJs sound more impressive (and taller) when they are in close on their directional mics. Lead vocals in rock and roll and pop rely on this as well. Hit songs need to sound better than the original instruments, better than reality. Modern studio production techniques leverage proximity effect selectively for many tracks.

Roll-off
But bumping up the low end isn't always a good thing. Getting in close to a snare, a piano, or an acoustic guitar can lead to an overly boomy, annoyingly thumpy sound. When you hear this sort of problem, you are hearing an unwanted proximity effect. Know that backing the mic away from the instrument might be all it takes to solve the problem.
When the close mic location is just right except for some unwanted low end due to proximity effect, it is helpful to kick in a highpass filter. Allowing the highs through, the highpass filter attenuates only the lows. Studio speak calls this roll-off, and many microphones have a built-in switch that does exactly this. Engaging the roll-off circuit removes or diminishes a problematic proximity effect. Additionally, it may be used simply to get rid of unwanted low frequency sounds that sneak into a studio. Air conditioning and traffic noise from highways or train tracks are typical low frequency headaches. The roll-off switch is a good solution to these problems. And when the mic doesn't have a roll-off filter built in, you can often find one on the console or in an outboard mic preamp or equalizer designed for the same purpose.
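For the curious, here is a minimal Python sketch of what a roll-off does, assuming a hypothetical 80 Hz corner and a gentle second-order slope (real roll-off switches vary in both):

import numpy as np
from scipy.signal import butter, lfilter

fs = 48000                            # sample rate in Hz
t = np.arange(fs) / fs                # one second of time
hum = np.sin(2 * np.pi * 50 * t)      # low frequency trouble: AC or traffic rumble
voice = np.sin(2 * np.pi * 1000 * t)  # stand-in for the wanted signal
x = hum + voice

b, a = butter(2, 80, btype="highpass", fs=fs)  # 80 Hz low-cut
y = lfilter(b, a, x)

rms = lambda s: np.sqrt(np.mean(s ** 2))
print(f"RMS before roll-off: {rms(x):.3f}")   # hum + voice
print(f"RMS after  roll-off: {rms(y):.3f}")   # hum reduced, voice kept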
Listen carefully when you engage a filter, because poorly designed filters can affect the higher frequencies audibly, even though they pass through the filter. If you've got the time, the gear, and the ear training, compare the highpass filter on your microphone to the highpass filter on your console to any other highpass filters your studio may have. You may find that all too common trend here: the more expensive filters sound better.

Sensitivity
It's not just a New Age, politically correct term; microphone sensitivity describes how much output the microphone creates electrically for a given acoustic input. That is, if the assistant engineer screams at exactly 90 dB SPL into a microphone, what voltage will come out? When the assistant screams at exactly 90 dB SPL into another microphone, what voltage comes out? The more sensitive microphone generates a higher amplitude voltage for the same sound pressure level input. The hotter output requires less amplification at the mic preamp, which can mean a lower noise floor will be recorded. This is a good thing. On the other hand, placing a very sensitive microphone near a very loud sound source can overload the electronics, causing distortion. This is (usually) a bad thing. Think of sensitivity as a specification that really only needs to be worried about at its extremes. That is, if you know you must record a very quiet instrument (have you ever gathered sound effects like footsteps in sand or water dripping?), seek out a sensitive microphone. If you know the instrument is ragingly loud (trombone comes painfully to mind), perhaps look for a less sensitive transducer. Otherwise, mic selection is more a function of polar pattern, frequency response, off-axis coloration, etc.
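A worked example of that comparison, using the standard 94 dB SPL = 1 pascal reference; the 2 and 20 mV/Pa ratings are illustrative, not the specs of any particular mic:

def pascals(spl_db):
    # 94 dB SPL corresponds to 1 pascal; every 20 dB is a factor of 10.
    return 10 ** ((spl_db - 94.0) / 20.0)

for name, mv_per_pa in [("dynamic (low sensitivity)", 2.0),
                        ("condenser (high sensitivity)", 20.0)]:
    out_mv = mv_per_pa * pascals(90.0)   # the assistant's 90 dB SPL scream
    print(f"{name}: {out_mv:.2f} mV out at 90 dB SPL")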
Pad
Sometimes the pairing of a loud sound with a sensitive microphone leads to distortion. Should the acoustic energy hitting a microphone overload the mic's internal electronics or the microphone preamplifier, a pad can be engaged. The pad offers a fixed amount of attenuation, say 10 or 15 dB, to the signal just leaving the transducer. The lower voltage coming out of the transducer after the pad will (hopefully) no longer overload the electronics, enabling the microphone to be used even on a louder sound. Many microphones have the ability to sound gorgeous on, for example, both acoustic guitar and snare drum. A mic close to a snare drum might encounter sounds well above 130 dB SPL. A subtle nylon string acoustic guitar might be a mere 40 dB SPL or less at the desired mic position. The pad enables the same mic to be used on instruments of such radically different loudnesses: turned on when recording the snare and turned off when recording the guitar. Sometimes a pad isn't enough; acoustic signals can become too loud for the microphone itself. If the acoustic stimulation of the transducer forces the diaphragm into the extreme limits of its physical motion, it may become non-linear. That is, when the sound is too loud for the capsule, the motion of the diaphragm no longer follows the sound smoothly; rather it slams into the limits of its freedom to move. It distorts mechanically.
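The arithmetic behind the pad is simple dB-to-voltage-ratio math. A minimal sketch, with an illustrative transducer output level:

def db_to_ratio(db):
    # Attenuation in dB becomes a voltage ratio: 10 ** (dB / 20).
    return 10 ** (db / 20.0)

signal_mv = 500.0                       # hypothetical hot output on a loud source
for pad_db in (10, 15):
    padded = signal_mv / db_to_ratio(pad_db)
    print(f"{pad_db} dB pad: {signal_mv:.0f} mV -> {padded:.0f} mV")
# A 10 dB pad divides the voltage by about 3.16; 15 dB divides it by about 5.62.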
Which microphone shall we try? This question will sometimes fill you with dread and panic. (I've never recorded a contrabassoon...) At other times, this question will fill you with anticipation. (This new mic sounded great on Amy's guitar. I can't wait to try it on yours.) It is possible to break the mic selection process down into smaller decisions. You've got to get the best sound possible. Experience and ear training will help you match sound sources to complementary microphones. Session requirements might narrow your options, forcing you into a given polar pattern. For example, you might need to use a mic with a cardioid pattern if you have to put the sax player next to the piano. Given placements and pairings may or may not require you to switch in a roll-off or a pad, but your knowledge of what these switches do, in combination with your reaction to what you're hearing, makes these decisions pretty straightforward. The simplicity of the microphone technology requires that we master just a few concepts; the subtlety of recording acoustic sounds demands that we then proceed carefully, with our ears wide open.

Alex Case wants to know what you want to know. Request Nuts & Bolts topics via case@recordingmag.com. Thanks.
Part 4
in our beginners series
Microphones 1
Selecting the right microphone is a constant part of the job. Band: The Has Beens. Song #5: "The Hair I Used to Have." Overdub #16: ukulele. Which microphone should be used? Depending on the studio, the engineer has to choose among maybe a dozen or maybe even a hundred microphones. They come from countless manufacturers, each offering several model numbers. What will a given microphone sound like on a particular instrument in a specific style of music in this unique recording space? Aaargh! There is no end to the possibilities. One develops insight and intuition about which mic to try for a given situation through experience. But we can help our experience along by learning how microphones work. It's helpful to break down the vast range of microphone possibilities into some subgroups. In the recording studio, the recording engineer typically chooses among three types of microphone designs: moving coil, ribbon, or condenser.
Transducer Designs
How do they work?
Unusual in our world of complicated gear (ever open up a digital 8-track?), the microphone is an elegantly simple, completely knowable technology. And knowing how the thing works gives us some insight into how to use it. A fascinating parallel between electricity and magnetism exists and seems tailor-made for audio. Whenever an electrical conductor, like a wire, moves through a magnetic field, an electrical current is induced onto it.

Leveraging this principle, called electromagnetic induction, you can generate your own electricity if you want. Just persuade someone to hop onto a bicycle modified so that the rear tire is a coil of wire. Set it up so that the wire rotates through the gap of a magnet when he or she pedals. If he or she pedals hard enough, and if the coil and magnet are big enough, you could power all your favorite equipment free (assuming you don't pay this person). We don't know people willing to do that, so instead we have power companies. Power companies use giant steam-powered turbines to spin generators that rely on this same fundamental physical property. And not only does a magnetic field induce a current on a wire that moves through it, but also a changing current on a wire creates a magnetic field around it. That is, electromagnetic induction also works in reverse. Using electricity to create a magnetic field is a basic necessity when
recording music on magnetic tape or playing music back through a loudspeaker. More on all that in future episodes of Nuts & Bolts; for now let's apply electromagnetic induction to microphones. Microphones that rely on electromagnetic properties to convert an acoustic event into an electrical signal are called electrodynamic (more commonly, dynamic) mics. There are two types of dynamic microphones used in the studio: moving coil and ribbon. And they both are appealingly straightforward devices. The moving coil dynamic microphone converts sound into electricity with essentially three components: a diaphragm that moves with the air, a coil that is moved by the diaphragm, and a magnet that induces electrical current onto the coil when it moves. This type of mic takes advantage of the motion of air particles during an acoustic sound to move a coil of wire through the magnetic field of a permanent magnet. The coil movement creates an electrical signal whose voltage changes as a direct result of the acoustic event. It's a satisfyingly simple process. The ribbon microphone takes advantage of the same electrodynamic principle we've discussed. As a machine that converts acoustic energy into electrical energy, it is even simpler than the moving coil system. The ribbon microphone cleverly combines the diaphragm and the coil above into a single device: a ribbon. That is, the thing that moves in the air is also the conductor of electricity. The ribbon is a piece of metal suspended between the poles of a magnet.
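The induction principle itself can be put into numbers before we follow the ribbon through a complete cycle. A minimal Python sketch, where the field strength, conductor length, and velocities are illustrative guesses rather than any real microphone's specs (the physics is EMF = B x L x v):

B = 1.0      # magnetic field strength in teslas (illustrative)
L = 0.05     # effective conductor length in meters (illustrative)
for v in (0.001, 0.01, 0.1):   # peak conductor velocity, meters per second
    emf = B * L * v            # induced voltage, per Faraday's law
    print(f"v = {v:5.3f} m/s  ->  EMF = {emf * 1000:.2f} mV")
# Louder sound -> faster motion through the field -> proportionally higher voltage.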
When a musical instrument plays, air molecules move. The air molecules near the ribbon force it to move; the motion of the ribbon through the magnetic field induces electrical current onto the ribbon itself. Voltage changes that are a perfect analogy to the acoustic event are created. A third microphone transducer technology employed in the studio doesn't rely on electromagnetic induction at all. The condenser microphone relies on the electrical property of capacitance instead. We know that if we hook up a voltage source (e.g. a battery) across a wire, electrical current will flow. If we cut that wire, the current stops.
It turns out that there is something in between a closed circuit (the wire) and an open circuit (the severed wire). Imagine that after cutting the wire we bring the two ends of the wire really close to each other without touching. It's easy to imagine that, without current actually flowing across the gap we've made in the wire, the two ends would influence each other electrically. A capacitor is a component that does this on purpose. Where the wire was broken, plates of metal are attached. And these two plates are brought up very close to each other, again without touching. The result is that an electrical charge builds up on the plates, pulled by the influence of the voltage source across the gap in between the plates. The ability to store a charge, or capacitance (hence the name capacitor), is a function of the voltage across the plates, the size of the plates, and the distance between the two plates. As the plates separate, they become more like a fully broken circuit and the charge dissipates. As the plates converge, they have a stronger and stronger influence on each other and try to approach the behavior of a completed circuit; the charge on the plates then increases.

Once upon a time this type of electrical component was called a condenser. While the component is today generally called a capacitor instead, the microphone built around this technology hangs on to the name condenser. A condenser microphone is nothing more than a variable capacitor driven by acoustic sound waves. One plate of the capacitor is the diaphragm, whose motion is a result of the changing sound pressure around it. As the diaphragm moves, the capacitance changes. The electrical output of the microphone is a pattern of voltage changes derived from this change in capacitance.
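As a toy model, the parallel-plate relationship C = epsilon0 * A / d shows how a moving diaphragm becomes a changing voltage: with a fixed charge Q held on the plates, V = Q / C. All the dimensions below are made up for illustration, not taken from any real capsule:

EPS0 = 8.854e-12          # permittivity of free space, farads per meter
A = 1e-4                  # plate area: a 1 cm x 1 cm diaphragm (illustrative)
Q = 1e-9                  # stored charge in coulombs (illustrative)

for d_um in (24.0, 25.0, 26.0):   # plate gap in micrometers as the diaphragm moves
    d = d_um * 1e-6
    C = EPS0 * A / d              # capacitance shrinks as the gap widens
    V = Q / C                     # so the voltage across it rises
    print(f"gap {d_um:.0f} um: C = {C * 1e12:.1f} pF, V = {V:.2f} V")
# The sound wave modulates the gap, and the output voltage tracks that motion.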
Mission accomplished: acoustic music in, electrical signal out.

Which one do I use?
Knowing the type of transducer technology a microphone employs gives the engineer some insight into how it might sound and what applications it is best suited for. But let me preface this discussion with some very important, really good news: we're lucky to be in the audio biz in 1999. The quality of the design, materials, and manufacturing techniques used today is enabling all microphone technologies to converge toward a consistent, high-quality, high-durability product. Below I discuss some general properties of microphones based on the type of transducer used. This is a good starting point for deciding which mic to use in a given situation. And it's certainly helpful when using the ever-popular older microphones. Take note, however, that some new microphones have addressed many of the historic design weaknesses cleverly, creating mics that are often appropriate in a broad range of recording situations. So with the caveat that these generalities don't apply to all mics, consider the following.
Durability
Moving coil microphones are often considered to be the heartiest of the bunch. As a result they are often the transducer of choice for live sound applications, which are very tough on delicate equipment. At the other end of the durability chain is the ribbon mic. The ribbon itself is pretty fragile, especially on the vintage (i.e. expensive) ribbon microphones available at some studios. Remember, the job of the ribbon is to react instantly to any change in the air pressure around it. And if there is, say, a 10 kHz component to the music you are recording, then the ribbon has to be able to move back and forth ten thousand times a second. Physics asks it, therefore, to have as little weight as possible. Unfortunately, as the ribbon loses mass it necessarily loses strength. Some ribbon microphones are still manufactured today, and the ribbon within those mics is certainly tougher than the ribbons in Granny's microphones. But nobody dares stick a ribbon microphone in the high amplitude world of a rock and roll kick drum. Some new ribbons are designed to be tough enough for screaming vocals and thunderous electric guitar. But they all want a chance at the horns (not too close, thank you), the piano, and the acoustic guitar, among others.
In the durability category, condensers generally fall somewhere in between moving coil and ribbon designs. As a result, you'll certainly find some of them performing on stage or placed near very loud instruments such as kick drum, trumpet, trombone, and so forth.

Sound quality
Though microphones of all types seem to be improving in capability, it is worth making some generalizations about how a microphone of a given transducer technology might sound. Moving coil dynamic microphones are the largest mechanism used for converting acoustic waves into electrical ones. Not surprisingly, then, they generally have a natural high frequency roll-off as the ability of the device to transduce diminishes at higher frequencies. Consider the following hypothetical expedition. Before the session begins, you go food shopping for the band you are working with, and the drummer helps. The shopping list consists solely of potato chips and beer, but in enough quantity to get the band through a two-week session. You and the drummer go to the neighborhood Chomp 'n' Gulp, get two shopping carts, and you fill one cart with chips while the drummer fills the other with beer. "This has nothing to do with microphones," you, and my editor, think to yourselves. But consider these questions: Which cart is easier to drive? Which cart is easier to stop and start? The chip cart and the beer cart can go pretty much the same speed, but the beer cart needs a stronger shove to get going. Clever drummers start emptying (i.e. drinking) some cans for this very reason. For our microphones, the moving coil is more like the beer cart. Quite simply, the diaphragm/coil assembly is too big to react quickly, as required by very high frequencies.
The ribbon microphone is more like the chip cart. Consisting of a single moving part (the ribbon), it is a lighter mechanism. As a result the ribbon transducer is typically more agile than a moving coil, achieving more sensitivity at the high frequencies. The condenser microphone is generally lightest of all, behaving more like an empty shopping cart in the analogy above. The only moving part, the diaphragm, can be an extremely thin plastic membrane with just the lightest coating of a metal to make it conduct electricity. As a result the condenser offers the best opportunity to capture the detail of a transient, or the very high frequency portion of a high hat.
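The shopping cart intuition can be made quantitative. For a diaphragm swinging sinusoidally with a fixed tiny excursion, the peak acceleration required grows with the square of the frequency; the excursion below is an illustrative value:

import math

X = 1e-6   # peak excursion: one micrometer (illustrative)
for f in (100, 1_000, 10_000):
    a_peak = (2 * math.pi * f) ** 2 * X   # peak acceleration in m/s^2
    print(f"{f:6d} Hz -> peak acceleration {a_peak:12.1f} m/s^2")
# 10 kHz demands 10,000 times the acceleration of 100 Hz; by F = m * a,
# every extra gram of coil makes that force harder to supply.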
The apparent transient response weakness in the moving coil design can in fact be quite a handy engineering tool. By reacting slowly to a sudden increase in amplitude, it acts mechanically as a compressor might act electrically. It reduces the amplitude of the peaks of a transient sound. This is helpful for two major reasons. First, this reduction of peaks can help prevent the sort of distortion that comes from overloading your electronics. The true spike of amplitude that comes off a conga might easily distort the microphone preamplifier or overload the tape you are recording onto. The use of a dynamic mic might be just the right solution to capture the sound without distortion. Beyond this issue of audio fidelity and the prevention of distortion, dynamic microphones, with their natural lethargy, are often used for creative reasons. The sound of a clave, snare, kick, dumbek, and many other instruments is often much more compelling after the subtle reshaping of the transient that a moving coil microphone introduces.

As soon as a session permits, try using a moving coil and a condenser on the same instrument, placed as near to the same location as possible. Then listen critically to the different coloration of each mic. Most apparent will be the frequency response differences, with the condenser sounding a little brighter at the high end while the moving coil offers perhaps a presence peak in the upper midrange. But listen beyond this, to the character of the attack of the instrument. Depending on the application, you will often find that the moving coil dynamic microphone squashes the transients into a more exciting, more intense sound. By the time the track you are recording gets combined with all the other tracks in the multitrack project, and after the signal makes its way from mic to tape, through the console and various effects processors to the loudspeakers, the recorded sound from a moving coil microphone often just seems to work better.

Meantime the ribbon microphone, offering more high frequency content than the typical moving coil microphone but less high frequency reach than most condensers, still finds its place in the recording studio. Many instruments have a rather painful amount of high end. Close-miking them, as we so often must do in the studio, only makes this worse. The natural high frequency attenuation of a ribbon is often just the right touch to make a trumpet, a cymbal, a tambourine, a triangle, and others become beautiful, airy, and sparkling, without being shrill, piercing, thin, or edgy.

In our next Nuts & Bolts episode we'll look at other microphone properties like directionality and proximity effect so that we can make more sense out of the vast range of options microphones offer. We can look forward to a career-long exploration of the beauty of applying different microphones in different musical situations. Let the other engineers work their way through the microphone closet, randomly trying different microphones in different applications. We can organize our experiences based on what we know about how the microphone works. Try to rationalize what you actually hear with how you think it should sound, and you'll bring some order to an otherwise chaotic part of the recording gig.

Alex Case (case@recordingmag.com) is the director of Fermata, where he records and produces music he loves. He hopes you have a similar job.
PART 7
Equalization, Part 1
BY ALEX CASE
The latest installment in our series for the novice is all about tone shaping and how to deal with it.
The Equalizer. You may well wonder: what sounds in a studio need to be made equal? Equal to what? A more descriptive term for equalizer would be spectral modifier, or frequency-specific amplitude adjuster. Then again, sometimes a very simple term will do, even something as mundane as tone controls, like Bass and Treble or Low and High. The audio job of an equalizer is to change the frequency content of an audio signal. If the audio signal is dull, lacking high frequency sparkle, the equalizer is the tool used to fix this, provided that there is some high frequency content in the signal in the first place that the equalizer can bring out. If the sound is painfully bright, harshly assaulting our ears with too much high frequency sizzle, the equalizer again offers the solution, this time by reducing the offending portion of the sound's high frequency content. You'll see that common sense and your ears have at least as much to do with good use of equalizers as the theory behind them. Yet it pays to know that theory; you'll do better work knowing the theory where common sense would only get you so far. Engineers use equalizers to adjust the amplitude of a signal within specific and controllable frequency ranges. The master fader on your console adjusts the amplitude of the entire audio signal. Think of an equalizer as a frequency-specific fader; it increases or decreases the amplitude of a signal at certain frequencies only.
Looking high, looking low
The frequency response of a device describes its ability to create output signals that are consistent across the entire audio frequency range. Figure 1 shows a typical example, in this case a device with an output that emphasizes the low frequencies and de-emphasizes the high frequencies. It crosses 0 dB amplitude change (unity gain, neither more nor less amplitude) at about 1 kHz. Musically speaking 1 kHz is a rather high pitch, almost two octaves above middle C. But keep in mind that many instruments playing lower notes will have some harmonic content at this frequency, which we may need to alter. Imagine routing a couple of sine waves into this device. One is set to a 1 kHz frequency, and the other we'll move up and down to various frequencies in comparison to our 1 kHz reference. We'll use meters to assure that both sine waves are kept at the same amplitude (to our ears, higher pitched sine waves sound much louder than equal-amplitude waves at lower frequencies). When you measure the output of the device that is set up to boost low frequencies and reduce high frequencies (as per Figure 1), a 100 Hz tone (a bit more than an octave below middle C) will measure louder than a 1 kHz tone input at equal amplitude. And a very high frequency sine wave (say 10 kHz) will measure softer. If you've ever had to listen to sine waves for very long, as in the experiment above or when aligning analog magnetic tape recorders, you've learned that they can create a rather unpleasant, distinctly non-musical listening experience, becoming more and more annoying the higher their frequencies get. That's because sine waves have no overtones, which makes them useful for testing and calibrating purposes but not usually for making music.

So let's take another look at the meaning of the frequency response plot in Figure 1. Consider an input that is not just a simple sine wave, but is instead an entire mix, a killer mix. The mix is a careful blend of instruments and effects that fills the audio spectrum exactly to your liking, with a gorgeous, present midrange, an airy, detailed high end, and a rich, warm low end. Sent through the device in Figure 1, that spectral balance is altered. The mix you found oh-so-perfect becomes too heavy in the low frequencies and loses detail up high. The frequency response plot quantifies exactly the sort of changes in frequency content you can expect when a signal is run through the device. You've probably already absorbed the idea that a flat frequency response is often desirable, at least during audio production. We'd like devices like microphone cables and mixing consoles to treat the amplitude of all signals the same way at all frequencies. We hope these sorts of devices don't change the frequency character of the mix behind our back unless we choose to make such changes. And when we want to make such changes away from a flat frequency response we resort to using the equalizer. If you feel your vocal track or your entire mix needs a little more low end and a little less high end, you might run it through an equalizer with a frequency response like that in Figure 1.

To understand equalization you need only understand this: you are changing the frequency content of a signal by running it through a device whose frequency response is distinctly non-flat, on purpose. The trick, and we'll discuss this in more detail later, is to alter the frequency response in ways that are tasteful, musical, and appropriate to the sound. It's easy to get it wrong. Dialing up just the right eq curve for a given situation will require experience, good ears, a good monitoring environment, and good judgment.

How many knobs?
If you consider the frequency response like that in Figure 1 to be adjustable from flat to the specific contour shown, you discover that configuring a device that actually controls these sorts of changes isn't obvious. To see how this is done we'll take a tour of the equalizers you are likely to find in a studio (leaving out equalizers that exist in software for now). We begin with the most flexible type of all: the parametric equalizer. No one got a Nobel Prize for naming this thing. It is a parametric eq because it offers you the most parameters for changing the spectral shaping. That's it. In fact it's got all of three parameters for your knob tweaking pleasure. Understanding the three parameters here makes understanding all types of equalizers a breeze. All other equalizers will have one or two of these three parameters available for adjusting on the front of the box. When you learn how to use a parametric equalizer, you are learning how to use all types of equalizers. Perhaps the most obvious parameter needed is the one that selects the frequency you wish to alter. The center frequency of the spectral region you are altering is dialed up on a knob labeled Frequency. In our search for bass, we might have decided that our signal needs additional low frequency content in the area around 100 Hz. Or is it closer to 80 Hz? These decisions are made at the frequency select control. Naturally, we then decide how much to alter the frequency we've selected. The addition (or subtraction) of bass happens via adjustment of the second parameter: Cut/Boost. It indicates the amount of decrease or increase in amplitude at the center frequency you dialed in on parameter number one above. To take the shrill edge off of a horn, select a high frequency (around 8 kHz maybe) and cut a small amount, maybe about 3 dB. To add a lot of bass, boost 9 to 12 decibels at the low frequency that sounds best, somewhere between 40 and 120 Hz perhaps. As you can see, these two parameters alone, frequency select and cut/boost, give you a terrific amount of spectral flexibility.
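If you want to see what those cut/boost figures mean in raw amplitude terms, the conversion is the standard 10 ** (dB / 20). A quick Python sketch using the settings just mentioned:

for label, db in [("horn de-edge", -3), ("bass boost (modest)", 9),
                  ("bass boost (heavy)", 12)]:
    ratio = 10 ** (db / 20)   # dB of cut or boost as an amplitude ratio
    print(f"{label:20s} {db:+3d} dB -> x{ratio:.2f} amplitude at the center frequency")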
Bandwidth and Q
Consider a boost of 8 dB at 100 Hz. This could be just the trick to make a guitar sound powerful in the lower and fatter notes. You can almost taste the Grammy Award after deciding on this eq move. You can hear the result. But before you know what you really did to alter the frequency response, you need to consider a third parameter. It's a bit more subtle than the first two, and many less expensive equalizers (which we'll cover later) do without it. We know where we boost (at 100 Hz in the above example) and how high we boost (by adding 8 dB), but we don't just boost the narrow and exclusive frequency of 100 Hz, even though that's the one we dialed up. Instead we affect a range of frequencies both below and above that 100 Hz frequency. Remember, that 100 Hz is called the center frequency. Just how wide is the boosted region to either side of that center going to be?
Figure 2 demonstrates two possible results from the same center frequency and boost settings. Check them out and you'll see what we meant by saying that selecting a center frequency to boost affects not just that single frequency but the neighboring frequencies as well. The degree to which we also boost other frequencies nearby is defined by the third parameter, Q. The Q describes the width of the cut or boost region. Let's define first the bandwidth of an equalization change. Bandwidth is closely related to, but not the same as, Q. Bandwidth is considered to be the frequency region on either side of the center frequency that is within three decibels of the center frequency's cut or boost. Starting at the center frequency and working our way out, both above it and below it in frequency, we can find the points on the curves in Figure 2 where the signal is three decibels down from the amplitude at the center frequency. The bandwidth of a cut or boost at a specific frequency describes the frequency range bounded by these 3 dB down points. In our example of an 8 dB boost at 100 Hz, the bandwidth is based on the frequencies that are boosted by 5 dB (8 - 3 = 5) or more. Figure 2 shows two such possible boosts. The wide boost has 3 dB down points at 75 Hz and 125 Hz. The bandwidth then is 50 Hz (the spectral distance from 75 Hz to 125 Hz). The narrow boost is 3 dB down at 95 Hz and 105 Hz, giving a smaller bandwidth of just 10 Hz. Now expressing values in actual Hertz is rarely very useful in the studio. We humans don't process music that way. When you are writing a horn chart, you don't decide to add a flute part 440 Hertz above the tenor
sax. Instead you describe it musically, saying that the flute should be perhaps one octave above the tenor sax. For music we think in terms of musical ratios or intervals, the most famous of which is the octave. The octave represents nothing more than a mathematical doubling of frequency, whatever the frequency may be: 440 Hertz (tuning A above middle C) is one octave above 220 Hz, 1000 Hz is one octave above 500 Hz, etc. Because this is how we hear, we stick to this way of describing spectral properties on the equalizer. Using a ratio, we compare the bandwidth to the center frequency and express them in relative terms rather than in Hz. For example, a 50 Hz bandwidth around a 100 Hz center frequency represents a bandwidth that is half an octave wide; the bandwidth is half the value of the center frequency. With a fixed bandwidth of exactly half an octave, sweeping the center frequency down from 100 Hz to 50 Hz would be accompanied by a bandwidth that decreases automatically from 50 Hz to 25 Hz. This narrowing of bandwidth as measured in Hertz ensures that the equalization character you hear doesn't change as you zero in on the desired center frequency. Bandwidth expressed in octaves is more musically useful to our ears than bandwidth expressed in Hertz. If the bandwidth during the previous move (from center frequency 100 Hz down to 50 Hz) had remained at a bandwidth of exactly 50 Hz, it would have sounded like a wider, less precise equalization adjustment at lower frequencies. That's because a bandwidth of 50 Hz around a center frequency of 50 Hz is, you guessed it, a full octave. That's the idea of bandwidth. And that's almost the end of the math in this article. But there is one more idea to take in here before we're done. Bring on Q.

And Q makes three
While expressing the bandwidth of an equalizer boost or cut in octaves makes good sense, the tradition is to flip the ratio over mathematically (the fancy term for this is to take the reciprocal; impress your clients!). We consider center frequency divided by bandwidth instead of bandwidth divided by center frequency. The spectral width described this way is the Q parameter. The wide boost discussed above and shown in Figure 2 is 50 Hz wide at a center frequency of 100 Hz. The Q therefore is 2 (center frequency of 100 Hz divided by the bandwidth of 50 Hz). The narrow boost has a Q of 10 (100 Hz divided by the narrow 10 Hz bandwidth). Studio-speak includes phrases like low Q and high Q to describe wide (low Q) and narrow (high Q) boosts and cuts. It then follows that the higher the Q, the more surgical your intervention. If you have a particular note or tone or hum or buzz that you need to pull out, of course you go for the narrowest bandwidth around the offending center frequency, with the steepest cut your equalizer can provide. Such a move is called notching or notch filtering.
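The bandwidth and Q arithmetic just described fits in a few lines of Python. This sketch reproduces the wide and narrow boosts of Figure 2:

def eq_width(f_center, f_low, f_high):
    """f_low and f_high are the -3 dB points around center frequency f_center."""
    bandwidth = f_high - f_low       # width in Hz
    q = f_center / bandwidth         # Q: center frequency divided by bandwidth
    return bandwidth, q

for f_lo, f_hi in [(75.0, 125.0), (95.0, 105.0)]:
    bw, q = eq_width(100.0, f_lo, f_hi)
    print(f"-3 dB points {f_lo:.0f}-{f_hi:.0f} Hz: bandwidth {bw:.0f} Hz, Q = {q:.0f}")
# Wide boost: bandwidth 50 Hz, Q = 2. Narrow boost: bandwidth 10 Hz, Q = 10.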
So for a full complement of equalization parameters you have Frequency Select, Cut/Boost, and Q as the three controls needed to achieve any kind of alteration to a frequency response, from broad and subtle enhancements to aggressive and surgical notches. Parametric equalizers give you these three controls for every band of equalization. Band of equalization? That's right. These three controls often appear in sets. A 4-band parametric eq has 12 controls on it (3 controls x 4 bands = 12 controls in all)! It offers the three parameters four different times so that you can select four different spectral targets and shape each of them with its own amount of boost or cut, each with a unique bandwidth. The result, if your ears can follow it all, is the ability to effect a tremendous amount of change on the spectral content of a signal. Figure 3 shows a possible result of 4-band parametric equalization. The terrific amount of sonic shaping power that four bands of parametric equalization offer makes it a popular piece of gear in any studio. But other options exist.
Take away the Q
Some equalizers fix the bandwidth internally, providing access only to the Frequency Select and Cut/Boost parameters. Because of the downgrade from three parameters to two, this type of eq is sometimes called a semi-parametric (or demi-parametric or even quasi-parametric) equalizer. These devices suffer from having an even less imaginative name than parametric equalizers. It's probably best to call them sweepable eq to emphasize that you can adjust the frequency that you are cutting or boosting. When you see such a term in a product's specs, it's implied that you cannot adjust the bandwidth. Believe me, if the bandwidth were adjustable the brochure would brag that the device is fully parametric! This configuration in which only two parameters (Frequency and Cut/Boost) are adjustable is common; it is easy for the recordist to use, easier for the manufacturer to design than a fully parametric eq, and still very useful in music production.
Take away the frequency
Down one more step, sometimes we only have control over the amount of cut or boost and can adjust neither the frequency nor the Q of the equalization shape. Generally called program eq, this is the sort of equalizer found on most home stereo equipment (those Treble and Bass knobs, remember?). You also see this type of eq on many consoles, vintage and new. It appears most often in a 2- or 3-band form: three knobs labeled High, Mid, and Low that are fixed in frequency and Q and offer you only the choice of how much cutting or boosting you're going to apply. In the case of consoles, remember that the same equalizer may be repeated on every channel. If it costs an extra 20 bucks to make the equalizer sweepable, that translates into a bump in price of more than $600 on a 32-channel mixer. If it costs 50 bucks to make them fully parametric, and it's a 64-channel console... well, you do the math. The good news is that even well designed program equalization can sound absolutely gorgeous. And often the preset center frequencies are close enough to the ideal spectral location to get the job done on many tracks. Sometimes you don't even miss the frequency select parameter.
Take away the knobs
A variation on the equalizer described so far is the graphic equalizer. Like program eq, this device has fixed Q and center frequencies, offering the engineer only the cut/boost decision. On a graphic eq the various frequency bands are presented not as knobs but as sliders, like faders on a console. The result of such a hardware design is that the faders provide a decent visual description of the frequency response modification being applied, hence the name graphic. (To be exact, the actual eq curve looks more like a series of sharp bumps above and below a straight line than the smooth continuous curve one might expect.) Handy also is the fact that the faders can be made quite compact. It is not unusual to have dual 31-band graphic equalizers that fit into one or two rack spaces. Graphic eq is an extremely intuitive and comfortable way to work. Being able to see an outline of what you hear will make it easier and quicker to set up the sound you are looking for. Turning knobs on a 4-band parametric equalizer is more of an acquired taste than moving sliders. There are times in the course of a project when one must reshape the harmonic content with great care using a parametric eq. In other instances there is no time for such careful tweaking and a graphic eq is the perfect, efficient solution. Plan to master both.
Some knobs are switches
Early in my audio career, while attending the AES show in New York City, I admired a rather impressive British eq. It was a super high quality equalizer intended for mastering houses. That didn't stop me from thinking it could be useful for tracking a vocal, radically reshaping a guitar tone, and other silliness. I accidentally let slip my disappointment that despite a 5-figure price tag, the frequency select knobs clicked. Frequency select wasn't continuously sweepable from, say, 125 Hz to 250 Hz; the knob clicked from 125 Hz to 250 Hz. If you wanted your equalization contour to be centered on a frequency between clicks, you were out of luck. How could this be? I was politely informed that for this particular device, selecting a different frequency by clicking a knob on the faceplate selected different electronic components inside the device. The equalizer was physically using different parts for different frequency selections! It wasn't just adjusting some variable piece of the circuit; it was physically changing the circuit. That made frequency selection surgically precise and absolutely repeatable, a must for mastering applications. I was instantly humbled, and learned a lesson. This company has such high standards for sound quality that they took away a little bit of user flexibility to get a better and more repeatable sound. Conversely, if you find an equalizer that is fully parametric and sweepable across four bands yet costs $39.99, you would be wise to wonder how they made the eq so infinitely adjustable and how much sound quality was sacrificed in the name of this flexibility. In choosing which type of equalizer to use, you have to trade off sound quality versus price and processing flexibility versus ease of use.
Don't value an equalizer based on the number of controls it has. A simple program eq that allows you only to adjust the amount of cut or boost might contain extremely high quality components inside.
Knobs, switches, filters
So far the most complicated equalizer we can build, the one with the most fancy knobs on the faceplate, is a parametric equalizer. If we allow four bands of eq we are up to 12 knobs. Naturally, it would look cooler if we added some switches. Here's how. We've talked about equalization changes that offer a region of emphasis when we boost or de-emphasis when we cut. This shape is called a peak/dip because of the visual change it makes in the frequency response. Roughly shaped like a bell curve, it offers a bump up or down in the frequency response.
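If you want to hear this shape for yourself in software, the peak/dip curve can be generated with a standard biquad filter. The coefficient recipe below follows the widely circulated Audio EQ Cookbook peaking filter; the sample rate, frequency, gain, and Q are just the Figure 2 example values. Treat it as a sketch, not a definitive studio tool.

```python
import math

def peaking_eq_coeffs(fs, f0, gain_db, q):
    # Biquad coefficients for a peak/dip (bell) shape, per the
    # Audio EQ Cookbook. Positive gain_db boosts, negative cuts.
    a_lin = 10 ** (gain_db / 40)          # square root of the linear gain
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def biquad(x, b, a):
    # Direct-form I filter: y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2]
    #                              - a1*y[n-1] - a2*y[n-2]
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for s in x:
        out = b[0] * s + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x1, x2, y1, y2 = s, x1, out, y1
        y.append(out)
    return y

# The wide boost from Figure 2: +8 dB at 100 Hz with a Q of 2.
b, a = peaking_eq_coeffs(fs=48000, f0=100.0, gain_db=8.0, q=2.0)
x = [1.0] + [0.0] * 47999   # a one-sample click to run through the filter
y = biquad(x, b, a)
```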
Two other alternatives exist. The shelving equalizer offers the peak/dip response on one side of the selected center frequency and a flat cut or boost region on the other. Figure 4 demonstrates. A broad equalization desire might be to brighten up the sound in general. A high frequency shelving eq bumped up 6 dB at 8 kHz will raise the output at 8 kHz and above. It isn't limited to a center frequency and its associated bandwidth. The resulting alteration in the frequency response is flat (like a shelf) beyond the selected frequency.
As Figure 4 shows, the concept of a shelving eq applies to low frequencies as well as high, and cuts as well as boosts. In all cases there is a flat region beyond (above or below) the selected center frequency that is boosted or attenuated. A helpful image comes by way of beer: the shelving eq shape provides a good flat region to set a beer on without risk of spilling. There is nowhere within the eq move to set a beer when using a peak/dip eq contour. An important final option exists for reshaping the frequency response of a signal: the filter. Engineers speak generally about filtering a signal whenever they change its frequency
response in any way. Under this loose definition, all of the equalizers we've discussed so far are made up of audio filters. But to be more precise, a stand-alone filter must have one of the two shapes shown in Figure 5. A highpass filter (Figure 5A) allows high frequencies through but attenuates lows. A lowpass filter (Figure 5B) does the opposite, allowing low frequencies to pass through the device without a change in amplitude, but attenuating high frequencies. Because the sonic result can be rather similar to shelving equalizers cutting out extreme high or low frequencies, there is some confusion between them. Filters distinguish themselves from shelving equalizers in two key ways. First, filters are cut-only devices; they never boost at any frequency (except in the case of resonant filters on synthesizers, which we won't go into now). Shelf eq can cut or boost. Second, and this is important, filters offer an ever-increasing amount of attenuation beyond the selected frequency. They do not flatten out like the shelf; there is nowhere to set the beer. They just keep cutting, and cutting, all the way down to silence.
If there is some unwanted low frequency air conditioner rumble on a track that you never, ever want to hear, a filter can essentially remove it entirely. A shelf equalizer will have a limit to the amount of attenuation it can achieve, perhaps only 12 or 16 dB down. The weakness of using a shelving equalizer in this case is easily revealed on every quiet passage whenever that track is being played, as you'll still hear the air conditioner rumbling faintly in the background.
Now the faceplate of our equalizer is pretty complicated. The 4-band parametric (12 knobs) gets a lowpass and highpass filter at each end, as well as switches that toggle each band between a peak/dip or shelf shape. But such an equalizer contains a rich amount of capability with which you can freely alter the spectral content of any signal in your studio. These knobs and switches enable you to bend and shape the frequency response of the equalizer into almost any contour imaginable. Your strong creative drive to push the limits of a sound must be balanced by your musical and technical knowledge of your sound and equipment. Listen closely, and have fun.
Alex Case encourages you to insert the words cup of water wherever the word beer appears above. Request Nuts & Bolts topics via [email protected]. Thanks.
PART 8
EQUALIZATION, PART 2
BY ALEX CASE
But before giving in to despair, realize that all engineers have a lot to learn about eq. Apprentices, hobbyists, veterans, and Grammy winners... all are still exploring the sonic variety and musical capability of equalization. Eq offers a huge range of possibilities and options. Critical listening skills are developed over a lifetime and require careful concentration, good equipment, and a good monitoring environment. No one learned the difference between 1 kHz and 1.2 kHz overnight. Interfering with this challenging learning process is the temptation to imitate others or repeat equalization moves that worked for us on the last song. Magic settings that make every mix sound great simply don't exist. If you got the chance to write down the equalizer settings used on, say, Jimi Hendrix's guitar track on The Wind Cries Mary, it might be tempting to apply them to some other guitar track, thinking that the equalizer goes a long way toward improving the tone.
But the fact is, the tone of Jimi's guitar is a result of countless factors: the playing, the tuning, the type of strings, the kind of guitar, the amp, the amp settings, the placement of the amp within the room, the room, the microphones used, the microphone placement chosen, et cetera, et cetera. The equalizer alone doesn't create the tone. In fact, it plays a relatively minor role in the development of the tone in the scheme of things.
The way to get ahead of this infinitely variable, difficult-to-hear thing called eq is to develop a process that helps you strategize on when and how to equalize a sound. Armed with this organized approach, you can pursue a more complete understanding of eq. The audio needs and desires that motivate an engineer to reach for some equalization fall into four categories: The Fix, The Feature, The Fit, and The Special Effect.
The Fix
A big motivation for engaging an equalizer is to clean things up and get rid of problems that lie within specific frequency ranges. For example, outboard equalizers, consoles, microphone preamplifiers, and even microphones themselves often have low frequency roll-off filters. Why is this kind of eq on all these devices and what is it used for? These devices remove low frequency energy less for creative this'll-sound-awesome reasons and more to fix the common problems of rumble, hum, buzz, pops, and excessive proximity effect. In many recording situations, we find the microphone picks up a very low frequency (40 Hz and below) rumble. This low-end energy comes from such culprits as the building's temperature control system or the vibration of the traffic on nearby highways and train tracks (note to self: don't build the studio next door to Amtrak and Interstate 10). This is really low stuff that singers and most musical instruments are incapable of creating. Since very little music happens at such low frequencies, it is often appropriate to insert a highpass (i.e. low cut) filter that removes all the super low lows entirely.
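As a rough sketch of such a low cut in code, here is one way to do it with SciPy's stock Butterworth filter design. The 40 Hz corner and fourth-order slope are illustrative choices, not a universal recommendation.

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000  # sample rate in Hz (illustrative)

# Fourth-order Butterworth highpass with its corner at 40 Hz:
# below the corner the attenuation keeps increasing, which is the
# "remove the super low lows entirely" behavior described above.
sos = butter(4, 40.0, btype="highpass", fs=fs, output="sos")

x = np.random.randn(fs)   # stand-in for one second of a recorded track
y = sosfilt(sos, x)       # the rumble-filtered result
```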
That's rumble. A slightly different problem is hum. Hum is the interference from our power lines and power supplies that is based on 60 Hertz AC power (50 Hertz for many of our friends in other countries). The alternating current in the power provided by the utility company often leaks into our audio through damaged, poorly designed, or failing power supplies. It can also be induced into our audio through proximity to the electromagnetic radiation of other power lines, transformers, electric motors, light dimmers, and such. As more harmonics appear (120 Hz, 180 Hz, and 240 Hz) the hum blossoms into a full-grown buzz. Buzz finds its way into almost every old guitar amp, helped out a fair amount by fluorescent lighting and single coil guitar pickups. Again, a highpass (low cut) filter helps. To remove hum, we need to roll off at a frequency just above 60 Hertz, or perhaps an octave above, 120 Hertz. This is high enough in frequency that it can audibly affect the musical quality of the sound. Exercise care and listen carefully when filtering out hum. Many instruments (e.g. some vocals, most saxophones, a lot of percussion, to name a few) aren't changed much sonically by such a filter. But low frequency-based instruments (e.g. kick drum, bass guitar) aren't gonna tolerate this kind of equalization. Fortunately, the hum might be less noticeable on these instruments anyway, as their music can mask a low level hum. Buzz is more challenging. The additional harmonics of buzz make removing it only more musically destructive. Drive carefully. Other low frequency problems fixed by a highpass filter are the woofer-straining pops of a breath of air hitting the mic whenever the singer hits a P or a B in a word. Or if you are working outside (doing live sound or collecting natural sounds in the field), you've no doubt discovered that any breeze across the mic leads to low-end garbage. If you can't keep the wind off the microphone, then filter the low frequencies out. When the instrument you are recording is very close to a directional microphone, proximity effect appears. Sometimes this bassy effect that increases with proximity to a directional mic is good. Radio DJs love it; it makes them sound larger than life. Sometimes proximity effect is bad. Poorly miked acoustic guitars have a pulsing low frequency sound that masks the rest of the tone of the instrument with each strum of the guitar. Roll off the low end to lose it. Equalizers are employed to fix other sounds. Ever had a snare with an annoying ring? Find the frequency range (boost, search...) most responsible for the ring and try attenuating it at a narrow bandwidth. Often, turning down that ring reveals an exciting snare sound underneath. Ever track a singer with a cold? It's difficult to get a great sounding performance out of a congested crooner, but such a problem might be fixable. Find the dominant muddying frequency (probably somewhere between 200 and 500 Hertz) and cut it a bit. Compensate with some helpful midrange boost and you might find a vocal sound that you and the singer didn't think was there.
Ever track a guitar with old strings? Dull and lifeless. This is unlikely to be fixable (because eq can't generate missing frequencies), but don't rule it out until you've tried a bit of a boost somewhere up between 6 kHz and 12 kHz. Sometimes a gorgeous spectral element of a sound is hidden by another, much less appealing frequency component. A good example of this can be found in drums. Does it never sound right when you go searching for the right frequency to boost for that punchy big budget drum sound? The low frequency stuff that makes a drum sound punchy often lives just a few Hertz lower than some rather muddy junk. And boosting the lows invariably boosts some of the mud. Search at narrow bandwidth instead for the ugliest, muddiest component of the drum sound (between about 180 and maybe 400 Hertz) and cut it. As you cut this problematic frequency, listen to the low end. Often this approach reveals plenty of low end punchiness that just wasn't audible before the well-placed cut was applied.
The Feature
A natural application of equalization is to enhance a particular part of a sound, to bring out components of the sound you like. Here are a few ideas and starting points.
The voice: It might be fair to think of voice as sustained vowels and transient consonants. The vowels happen at lower mid frequencies (200 to 1000 Hz) and the consonants happen at the upper mids (2 kHz on up). Want a richer tone to the voice? Manipulate the vowel range. Having trouble understanding the words? Manipulate the consonant range. Watch out for overly sizzling S sounds, but don't be afraid to emphasize some of the human expressiveness of the singer taking a big breath right before a screaming chorus.
The snare: It's a burst of noise. This one is tough to eq, as it reacts to almost any spectral change. One approach is to divide the sound into two parts. One is the low frequency energy coming from the drum itself. Second is the mid-to-high frequency energy, up to 10 kHz and beyond, due to the rattling snares underneath. Narrow the possibilities; look for power in the drum-based lows, and exciting raucous emotion in the noisy snares.
The kick drum: Like the snare, consider reducing this instrument to two components. There is the click of the beater hitting the drum followed by the low frequency pulse of the ringing drum. The attack lives up in the 3 kHz range and beyond. The tone is down around 50 Hertz and below. These are two good targets for tailoring a kick sound.
The acoustic guitar: Try separating it into its musical tone and its mechanical sounds. Listen carefully to the tone as you seek frequencies to highlight. Frustratingly, this covers quite a range from lows (100 Hertz) to highs (10 kHz). In parallel, consider the guitar's more peculiar noises that may need emphasis or suppression: finger squeaks, fret buzz, pick noise, and the percussive sound of the box of the instrument itself, which resonates with every aggressive strum. Look for these frequency landmarks in every acoustic guitar you record and mix. Eq is a powerful way to gain control of the various elements of this challenging instrument.
For the instruments you play and often record, you owe it to yourself to spend some time examining their sounds with an equalizer. Look for defining characteristics of the instrument and their frequency range. Also look for the less desirable noises some instruments make and file those away on a watch-out list. These mental summaries of the spectral qualities of some key instruments will save you time in the heat of a session when you want more punch in the snare (aim low) and more breathiness in the vocal (aim high).
The Fit
A key reason to equalize tracks in multitrack production is to help us fit all these different tracks together. One of the simplest ways to bring clarity to a component of a crowded mix is to get everything else out of the way, spectrally. That is, if you want to hear the acoustic guitar while the string pad is sustaining, find a satisfyingly present midrange boost for the guitar and perform a complementary cut in the mids of the pad. This eq cut on the string pad keeps the sound from competing with or drowning out the acoustic guitar. The trick is to find a spectral range that highlights the good qualities of the guitar without doing significant damage to the tone of the synth patch. It'll take some trial and error to get it just right, but you'll find this approach allows you to layer several details into a mix. Expect to apply this thinking in a few critical areas of the mix. Around the bass guitar, we encounter low frequency competition that needs addressing. If you play guitar or piano and do solo gigs as well as band sessions, you've perhaps discovered this already. Solo, you've got low frequency responsibilities as you cover the bass line and pin down the harmony. In the band setting, on the other hand, you are free to pursue other chord voicings. You don't want to compete with the bass player musically, and the same is true spectrally. As an engineer, this means that you might be able to pull out a fair amount of low end from an acoustic guitar sound. Alone, it might sound too thin, but with the bass guitar playing all is well. There is spectral room for the low frequencies of the bass because the acoustic guitar no longer competes here. But the acoustic guitar still has the illusion of being a full and rich sound because the bass guitar is playing along, providing uncluttered, full bass for the song and for the mix. In the highs, competition appears among the obvious high frequency culprits like the cymbals and hand percussion as well as the not-so-obvious: distorted sounds. It is always tempting in rock music to add distortion to guitars, vocals, and anything that moves. Spectrally speaking, this kind of distortion occurs through the addition of some upper harmonic energy. And this distortion will overlap with the cymbals and any other distorted tracks. Make them fit with the same complementary eq moves. Maybe the cymbals get the highs above 10 kHz, the lead guitar has emphasized distortion around 8 kHz, and the rhythm guitar hangs out at 6 kHz. Mirror image cuts on the other tracks will help ensure all these high frequency instruments are clearly audible in the mix. The mid frequencies are definitely the most difficult region to equalize. It is a very competitive space spectrally, as almost all instruments have something to say in the mids. And it is the most difficult place to hear accurately. We tend to gravitate toward the more obvious low and high frequency areas when we reach for the equalizer. On the road to earning golden ears, plan to focus on the middle frequencies as a key challenge and learn to hear the subtle differences that live between 500 and 6,000 Hz.
The Special Effect
If you have the sense from the discussion above that there are technical considerations behind equalization decisions, that's true. But music wouldn't be music if we didn't selectively abandon those approaches. A final reason to eq is to create special effects. This is where we are least analytical and most creative. Your imagination is the limit, but here are some starting points. Wah-wah is nothing more than variable eq. If you've a parametric equalizer handy, patch it in to a guitar track already recorded. Dial in a pretty sharp midrange boost (high Q, 1 kHz, +12 dB). As the track plays, sweep the frequency knob for fun and profit. On automated equalizers you can program this sort of eq craziness. Without automation, you just print the wah-wah version to a spare track. Your creative challenge: explore not just middle frequencies, but low and high frequency versions; try cuts as well as boosts; and apply it to any track (acoustic guitar, piano, tambourine, anything). Another special effect is actually used to improve realism. As sound waves travel through space, the first thing to go is the high frequencies. The farther a sound has traveled, the less high frequency content it has. Consider the addition of a repeating echo on a vocal line. For example, the lead singer sings, My baby's gonna get some Gouda Cheese. And the background singers sing, Gouda! Naturally the mix engineer feeds the background line into a digital delay that repeats at the rate of a quarter note triplet: Gouda... Gouda... Gouda. For maximum effect, it is traditional to equalize the signal as it is fed back to the delay for each repetition. The first GOUDA! is simply a delay. It then goes through a lowpass filter for some removal of high frequency energy and is fed back through the delay. It is delayed again: Gouda! Once more through the same lowpass filter for still more high frequency attenuation and back through the same delay: gouda. The result is (with a triplet feel): GOUDA!... Gouda!... gouda. The echoes seem to grow more distant, creating a more engaging effect.
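Here is a minimal sketch of that darkening echo in code: a feedback delay line with a simple one-pole lowpass inside the loop, so each repeat loses a little more top end than the one before. The 120 bpm tempo, feedback amount, and filter coefficient are all illustrative assumptions.

```python
import numpy as np

fs = 48000
tempo_bpm = 120.0                  # assumed tempo
quarter_s = 60.0 / tempo_bpm       # one beat, in seconds
triplet_s = quarter_s * 2.0 / 3.0  # quarter note triplet: three per half note
delay = int(triplet_s * fs)        # delay length in samples

def darkening_echo(x, delay, feedback=0.6, lp_coeff=0.3):
    # The first repeat is just a delay; every trip around the loop
    # passes through the lowpass again: GOUDA!... Gouda!... gouda.
    out = np.zeros(len(x))
    buf = np.zeros(delay)          # circular delay line
    lp = 0.0                       # one-pole lowpass state
    idx = 0
    for n in range(len(x)):
        echoed = buf[idx]                             # signal from `delay` samples ago
        lp = lp_coeff * echoed + (1 - lp_coeff) * lp  # darken it
        out[n] = x[n] + echoed                        # dry signal plus echoes
        buf[idx] = x[n] + feedback * lp               # feed the darkened echo back
        idx = (idx + 1) % delay
    return out

x = np.zeros(2 * fs); x[0] = 1.0   # a single click to audition the effect
y = darkening_echo(x, delay)
```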
Obviously, this eq approach applies to signals other than echoes, and it even works on non-dairy products. In composing the stereo or surround image of your mix, you not only pan things into their horizontal position, but you push them back, away from the listener, by adding a touch more reverb (obvious) and removing a bit of high end (not so obvious). This eq move is the sort of subtle detail that helps make the stereo/surround image that much more compelling. Speaking of stereo, a boring old monophonic track can be made more interesting and more stereo-like through the use of equalization. What is a stereo signal after all? It is difficult to answer such an interesting question without writing a book, or at least an entire article dedicated to the topic. But the one sentence answer is: a stereo sound is the result of sending different but related signals to each loudspeaker. Placing two microphones on a piano and sending one mic left and the other right is a clear example of stereo. The sounds coming out of the loudspeakers are similar in that they are each recordings of the same performance on the same piano happening at the same time. But there are subtle (and sometimes radical) differences between the sounds at each mic due to their particular location, orientation, and type of microphone. The result is an audio image of a piano that is more interesting, and hopefully more musical, than the monophonic single microphone approach would have been. If you begin with a single mic recording of a piano and wish to create a wider, more realistic, or just plain weird piano sound in your mix, one tool at your disposal is equalization. Send the single track to two different channels on your mixer and eq them differently. If the signal on the left is made brighter than the same signal sent right, then the image will seem to come from the left, brighter side (remember, distance removes high frequencies). Consider eq differences between left and right that are more elaborate and involve several different sets of cuts and boosts, so that neither side is exactly brighter than the other, just different. Then the image will widen without shifting one way or the other. The piano becomes more unusual (remember, this section of the article is called Special Effects, so anything goes...); its image is more liquid, less precise.
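A bare-bones sketch of the brighter-left version of this trick follows. The filter coefficient and the amount of tilt are arbitrary; a real session would likely use several complementary cuts and boosts per side rather than one broad tilt.

```python
import numpy as np

def widen_mono(x, lp_coeff=0.15, tilt=0.3):
    # Split the signal into lows and highs with a one-pole lowpass,
    # then give the left channel extra highs and the right channel
    # the complement: two different but related signals from one track.
    lows = np.zeros(len(x))
    state = 0.0
    for n, s in enumerate(x):
        state = lp_coeff * s + (1 - lp_coeff) * state
        lows[n] = state
    highs = x - lows
    left = x + tilt * highs    # the brighter side
    right = x - tilt * highs   # the darker side
    return left, right
```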
Add some delays, reverbs, and other processing (topics of future Nuts & Bolts pieces) and a one-mic monophonic image takes on a rich, stereophonic life.
The End
The challenging and subtle art of equalization needn't be surrounded in mystery. Whenever you have a track with a problem to be removed or a feature to be emphasized, try to grab it with eq. If a mix is getting crowded with too many instruments fighting for too little space, carve out different spectral regions for the competing instruments using eq. And sometimes we just want to take a sound out and make it more interesting. Again, just boost, search, and set the equalizer so that you like what you hear.
Alex Case has cornered the market on murky, dull sounding mixes. What's your eq specialty gonna be? Suggest Nuts & Bolts topics via [email protected].
PART 9
COMPRESSORS
BY ALEX CASE
What happens when you ignore what they were originally designed to do?
Music signals are rarely consistent in level. Every crack of the snare, syllable of the vocal, and strum of the guitar causes the signal to surge up and recede down in amplitude. The top of Figure 1 shows the amplitude of about a bar of music. Signals like this one must fit through our entire audio chain without distortion: the microphone, the microphone preamp, the console, the outboard gear, the multitrack recorder, the 2-track master recorder, the power amp, and the loudspeakers. The highest peak must get through these devices without clipping, while the detail of the lowest, nearly silent bits of music must pass through without being swamped by noise. When we aim for 0 VU on the meters, all we're doing is trying to avoid distortion at the high end of things and noise down on the bottom. To help us fit extremely dynamic signals within the amplitude limits imposed by our studio, we reach for a compressor. Its task? Quite simply, when a signal gets too loud, the compressor turns it down. What counts as too loud? The Threshold setting on the compressor sets the level at which compression is to begin. When the amplitude of the signal is below this threshold, the device passes the audio through unchanged. When the signal exceeds the threshold, the compressor begins to turn the signal down.
Taking control
How does it turn it down? This question breaks in two. How much? And how fast?
The amount of compression is determined by the Ratio setting. Mathematically, the ratio compares the amount of the input signal above threshold to the amount of the attenuated output above threshold. For example, a 4:1 (four to one) ratio describes a situation in which the input was four times higher than the output, above the threshold: 4 dB above threshold in becomes 1 dB above threshold out, 8 dB above threshold in becomes 2 dB above threshold out. A ratio of X:1 sets the compressor so that the input must exceed the threshold by X dB for the output to go just one dB above threshold. How fast the signal is attenuated is controlled by the Attack setting. Attack describes how quickly the compressor can fully kick in after the threshold has been exceeded. Fast attack times will enable the compressor to react very quickly, while slow attack times are more lethargic. Sometimes compressors change the gain so quickly that it becomes audible and unmusical (although it can be useful as an effect). It then becomes desirable to slow the attack time down and let the compression sneak into action. It's a trade-off, though, because if the purpose of compression is to control the dynamic range of a signal to prevent distortion, then it must act quickly. Threshold, ratio, attack... then what? When the amplitude of the music returns to a level below threshold, the compressor must stop compressing. The amount of time it takes the compressor to return to zero gain change after the signal falls below threshold is set by adjusting the compressor's Release. Setting this control properly helps avoid introducing artifacts to your sound.
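The ratio arithmetic reduces to a few lines of code. Here is a minimal sketch of the static curve, working entirely in dB; the threshold and ratio values are just examples.

```python
def compressed_level(in_db, threshold_db=-20.0, ratio=4.0):
    # Below threshold the signal passes unchanged; above it, every
    # `ratio` dB of input over threshold yields 1 dB of output over.
    if in_db <= threshold_db:
        return in_db
    return threshold_db + (in_db - threshold_db) / ratio

# 4:1 with the threshold at -20 dB:
print(compressed_level(-16.0))  # -19.0: 4 dB over in, 1 dB over out
print(compressed_level(-12.0))  # -18.0: 8 dB over in, 2 dB over out
```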
Welcome to the world of compression. Sometimes it's too fast; other times it's too slow. Sometimes we know when it's just right. Other times we seek to set it so that we can't even hear it working. Tweaking a device until it sounds so good that you can't even hear it isn't easy. This brings us to an important issue with compression: it is often hard to hear. We discuss many applications for compression here in this month's episode of Nuts & Bolts. Each application sounds different. And most of them, until you've had some experience and audio ear training, are frustrating to hear accurately. Compression, like so much of what we do as engineers, leads to:
- A few mistakes. Overcompressing is a common problem. Sometimes you can't tell that it's overcompressed until the next day. The effect of compression is at times quite subtle and at other times quite obvious. Spending all day mixing one song with your ears wide open can make it hard to remain objective.
- Audio hype and attitude. People might rave about how great the compression sounds, and you don't hear what on earth they're talking about. Again, some compression is hard to hear and requires experience. Perhaps they've had the chance to hear this kind of compression before. All you need is time between the speakers immersed in compression of all kinds and you'll pick it up. On the other hand, sometimes people are just full of bull pucky.
Beyond the controls
These four parameters (threshold, ratio, attack, and release) enable the compressor to carefully monitor and make fine adjustments to the amplitude of a signal automatically. The engineer is then freed to concentrate on other things (Is the guitar in tune? Is the coffee strong enough?). If the compressor were invented today, it would have some hyped-up, one word with two capital letters sort of name like PowerFader, and it would have a Website. The humble compressor offers a handy way to precisely control and manipulate the dynamics of the signals we record. While these four parameters are always at work, they are not always on the faceplate of the device. That is, they are not always user-adjustable. There are compressors at all price points that leave off some of these controls; it's part of their sound. Other compressors offer full control over all the parameters yet also offer presets. The presets reflect someone else's careful tweaking to get the sound in the right place. Sometimes the presets simulate the attack and release characteristics of other, vintage, collectible, famous sorts of compressors. It's a good idea for beginners to spend some time with the fully adjustable type for exploration and ear training. But I don't hesitate to reach for those compressors with only a few knobs on the box during a session. They can often get the job done more quickly and with better sonic results.
Easily compressed
When the singer really gets confident and excited, he or she sings the choruses really loud, louder than during all the other takes in rehearsal. Great performance. Unusable track. Without some amplitude protection a killer take is lost to distortion. Be ready for this with some gentle (around 4:1 or less) compression across the vocal. Then your audio path can withstand the adrenaline-induced increase in amplitude that comes from musicians when they are in the zone. When the guitarist gets nervous, he or she starts moving around on the stool, leaving you to mike a moving target. Compelling performer. Nervous in the studio. Without the constant gain-riding of a compressor, you can hear the guitarist moving on- and off-mic. Again, a little gentle compression might just coax a usable recording out of an inexperienced studio performer. When the bass player pulls out that wonderful old, collectible, valuable, sweet sounding, could sure use a little cleaning up, aren't those the original strings, couldn't stay in tune for eight bars if you paid it, gorgeous beast of an instrument, you can be sure that, even in the
hands of a master, the A string is consistently a little quieter than the E string. Of course the solution is compression. Without the careful, precision adjustments made to the amplitude of the signal, the very foundation of the song (according to the bass player, anyway) becomes shaky. All too often you need the careful level adjustments of gentle compression (as shown in Figure 1). There's more to it than fixing a problematic track. We also patch gentle compression across perfectly fine tracks to make them, er, better. Well, louder anyway. A handy side effect of compressing (reducing the overall dynamic range of the signal) is that the track can now be turned up. While this may seem counterintuitive, there's room to make the track louder as a whole when the points of highest amplitude have been lowered by the compressor. Figure 2 demonstrates this sort of gentle compression.
This is often taken to radical extremes, where mixes are absolutely crushed (i.e. really compressed; see also squashed, smushed, et al.) by compression so that the apparent loudness of the song exceeds the loudness of all the other songs on the radio dial. Selling records is a competitive business. Loudness does seem to help sell records. And so it goes. Often the music suffers in this commitment to loudness and hope for sales. Artist, producer, and engineer must make this trade-off carefully. But even in small measures, a little bit of gentle compression buys you a little bit of loudness if you want it.
Take it to the limit
Another use of the compressor is to attenuate the sharp amplitude spikes within the audio that would overload a device and cause (unwanted) distortion. During the course of a song, some snare hits are harder than others. The slamming that goes on during the chorus might be substantially louder than the delicate, ghost-note-filled snare work of the bridge. A limiter will attenuate the extreme peaks and prevent nasty distortion. And a limiter is nothing more than a compressor taken out to rather extreme settings. Threshold is high so that it only affects the peaks, leaving the rest of the music untouched. Ratio is high, greater than 10:1, so that any signal that breaks above threshold is severely attenuated. Attack is very fast so that nothing gets through without limiting. Called peak limiting, this sort of processing is used to prevent distortion and protect equipment. Figure 3 gives an example. Fitting a signal on tape without overloading, or broadcasting a signal without overmodulating (getting too loud, simply put), requires that the signal never exceed a certain amplitude. Limiters are inserted to ensure these amplitude limits are honored. In live sound applications, exceeding the amplitude capability of the sound reinforcement system can lead to feedback, damage loudspeakers, and turn happy crowds into hostile ones. Limiters offer the solution again. They guard the equipment and listeners downstream by stopping the signal from getting too loud.
Ulterior motives
When the answering machine was invented, its intended purpose was to answer the phone and take messages when you were away. But the day after the first one was sold, the answering machine took on a new, more important role: call screening. The most common message on these devices is something like, It's me. Pick up. Pick up! The use of a device in ways not originally intended occurs all too often, and the compressor offers a case in point. While dynamic range reduction and peak limiting are the effective, intended uses for the device, we use compressors for other, less obvious, more creative reasons as well.
The envelope please
The envelope describes the shape of the sound: how gradually or abruptly the sound begins and ends, and what happens in between. Drums, for example, have a sharp attack and nearly instant decay. That is, the envelope resembles a spike or impulse. Synth pads might ooze in and out of the mix, a gentle envelope on both the attack and decay side. Piano offers a combination of the two. Its unique envelope begins with a distinct, sharp attack and rings through a gently changing, slowly decaying sustain. All instruments offer their own unique envelope. Consider the sonic differences among several instruments playing the same pitch: piano, trumpet, voice, guitar, violin, and didgeridoo. There are obvious differences in the spectral content of these instruments; they have a different tone. But at least as important, each of these instruments begins and ends the note with its own characteristic envelope: its signature.
The compressor is the tool we use to modify the envelope of a sound. A low threshold, medium attack, high ratio setting can be used to sharpen the attack. The sound begins, at an amplitude above threshold (set low). An instant later (medium attack), the compressor leaps into action and yanks the amplitude of the signal down (high ratio). Such compression audibly alters the shape of the beginning of the sound, giving it a more pronounced attack. This approach can of course be applied to most any track. A good starting point for this sort of work is a snare drum sound. It's demonstrated in Figure 4.
Find a track or sample to process. Patch in a compressor and sharpen the attack. Be sure your attack isn't too fast or you might remove the sharpness of the snare entirely. Set the ratio to at least 4:1, and gradually pull the threshold down. This type of compression has the effect of morphing a spike onto the front of the snare sound. Musical judgement is required to make sure the click of the sharper attack fits with the remaining ring of the snare. Trading off a low threshold against a high ratio offers the engineer precise control over the shape of the more aggressive attack.
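For those who think in code, here is a rough sketch of how threshold, ratio, attack, and release interact sample by sample. It is a simple feed-forward design with illustrative time constants; real compressors differ widely in their detectors and curves. With a medium attack, the front of the hit sneaks through before the gain is yanked down, which is exactly the attack-sharpening trick described above.

```python
import numpy as np

def compress(x, fs, threshold_db=-30.0, ratio=4.0,
             attack_ms=5.0, release_ms=80.0):
    # Smoothing coefficients: smaller means faster response.
    atk = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env_db = -120.0               # smoothed level detector state
    out = np.zeros(len(x))
    for n, s in enumerate(x):
        level_db = 20 * np.log10(max(abs(s), 1e-6))
        # Attack speed when the level rises, release speed when it falls.
        coeff = atk if level_db > env_db else rel
        env_db = coeff * env_db + (1 - coeff) * level_db
        over = max(env_db - threshold_db, 0.0)
        gain_db = -over * (1 - 1 / ratio)   # the static curve from earlier
        out[n] = s * 10 ** (gain_db / 20)
    return out
```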
And this isn't just for snares. Anything goes, but do try similar processing on piano and acoustic guitar. Done well, you'll create a more exciting sound that finds its place in a crowded mix more easily. Another unusual effect can be created using the release of a compressor. A fast release pulls up the amplitude of the sound even as it decays. This is also shown in the snare example of Figure 4. Notice the raised amplitude and increased length in the decay portion of the waveform. Dial in a fast enough release time, and the compressor can raise the volume of the sound almost as quickly as it decays; it's almost uncompressing it. Applied to piano, guitar, and cymbals, this setting develops a nearly infinite sustain, making these instruments bell- or chime-like in character, while still retaining the unmistakable sound of the original instrument. File this under Special Effects, but don't forget about it. An unnatural effect like this can be just what a pop tune needs to get noticed. Another interesting thing happens when you apply some extreme compression with a fast release time. If the compressor has pulled down the peaks of the waveform and then quickly releases the signal after it
has fallen below threshold, you start to hear parts of the sound that were previously inaudible. Fast release compression enables you to turn up the sound and hear more of the decay of a snare, the expressive breaths between the words of a vocal, the ambience of the room in between drum hits, the delicate detail at the end of a sax note, and so on. Once again, here is a use of compression to make certain parts of the signal louder. The flip side is that you might not want, say, the pick noise to become overly accentuated.
That hurtSSS
Pop music standards push us to have bright, airy, in your face, exciting vocal tracks. And this convincing vocal sound must rise above a wall of distorted guitars, tortured cymbals, shimmering reverb, and sizzling synth patches. Needless to say, we push vocals with a high dose of high frequency hype (available on your trusty equalizer). Add some fast release compression to this bright equalization contour, and you really start to hear the breathing, rasping, sweating, and drooling of the singer; that's where a good deal of the emotion lives. We can get away with this aggressive equalization move everywhere except where the vocal was already bright to begin with: hard consonants like S and F (and even Z, X, T, D, K). These sounds are naturally rich in high frequency content. Run them through the equalizer that adds still more high end, and you've got the sort of vocal that zings the ears with pain on every S. You can't miss it: everyone in the room blinks every time the singer hits an S. Clever compression will solve this problem. In our discussion of compression so far we have been applying our settings of threshold, ratio, attack, and release to the signal being compressed. But what if we compressed one signal while looking at another? Specifically, let's compress the lead vocal. But instead of compressing it based on the vocal track itself, let's use a different signal to govern the compression. We feed a modified vocal signal into this alternative input (called a sidechain). The vocal itself is what gets compressed, but the behavior of the compressor (when, how much, how fast, and how long to compress) is governed by the sidechain signal. To get rid of esses, we feed a signal into the sidechain that has enhanced esses. That is, the sidechain input is the vocal track equalized so as to bring out the esses and de-emphasize the rest. We never hear this track; only the compressor does. But when the singer sings an S, it goes into the compressor loud and clear, breaking threshold and sending the compressor into action. The sidechain signal is the vocal with a high frequency boost (maybe 12 dB somewhere around 4 kHz to 8 kHz, wherever the particularly painful consonant lives for that singer); you can filter out the rest of the sidechain vocal. The compressor is set with a mid to high ratio, fast attack, and fast release. The threshold is adjusted so that the compressor operates during the loud esses only. In between esses the
compressor doesn't touch the vocal. This vocal can be made edgy and bright without fear.
More is better
Sometimes a strong dose of compression is applied (to an individual track or the entire mix) just for the effect of, well, compression. That is, there is something about the sound of extreme compression that makes the music more exciting. The distortion typically dialed in on most electric guitar amps adds an unmistakable, instinctively stimulating effect. By modifying the amplitude of the waveform, compression is also a kind of distortion. And it seems to communicate an intense, on the edge, pushing the limits sort of feeling.
A profoundly effective example of this is Tom Lord-Alge's mix of One Headlight by the Wallflowers. At each chorus there is a compelling amount of energy. It feels right. But if you listen analytically, not emotionally, you hear that there is no big change in the arrangement: the drummer doesn't just start banging every cymbal in sight, a wall of extra distorted guitars doesn't come flying in. Jakob Dylan's voice is certainly raised, but it's well short of a scream. Mostly the whole mix just gets squashed big time. I almost think, analytically, that the song gets a little quieter at each chorus, with the 2-mix compression pushing hard. But musically, the chorus soars. That's the sort of compression that sells records. Mercedes makes a car with the word Kompressor on it. Alex Case wants one. Request Nuts & Bolts topics via [email protected].
PART 10
BY ALEX CASE
As this month's issue of Recording focuses on mixdown, Alex talks us through a mix
Global effects
We don't yet know all the effects we may want for this mix, but some standards do exist. We'll probably want a long reverb (a hall-type program with a reverb time over two seconds), a short reverb (plate or small to medium room with a reverb time around one second), a spreader (see the sidebar), and some delays (eighth note, quarter note, or quarter note triplet in time). Launch the appropriate plug-ins or patch in the appropriate hardware. These are effects we'd like to have at our fingertips so that we can instantly send a bit of vocal, snare, and lead guitar to the same effect. The way to have all these effects handy is to use aux sends (see Nuts & Bolts #2, 8/99).
Kick-starting
With the console laid out, we can start mixing. Where do we start? Well, the vocal is almost always the most important single piece of every pop song. So most engineers start with... the drums. Starting with the vocal makes good sense, because every track should support it. But easily 99% of all pop mixes start with the drums. Why? Because the drums are often the most difficult thing to get under control. The drum part involves at least eight separate instruments playing all at once in close proximity to each other (kick, snare, hi-hat, two or three rack toms, a floor tom, a crash cymbal, a ride cymbal, and all the other various add-ons the drummer has managed). It's hard to hear the problems and tweak the sounds of the drums without listening to them in isolation. So we tend to start with the drums so that they are out there all alone. Once the vocals and the rest of the rhythm section are going, it's hard to dial in just the right amount of compression on the rack toms. What do we do with the drums? The kick and snare are the source of punch, power, and tempo for the entire tune. They've got to sound awesome, so it's natural to start with these tracks.
Step one: keep them dead center in the mix. The kick, snare, bass, and vocal are all so important to the mix that they almost always take center stage. The kick needs both a clear, crisp attack and a solid low frequency punch. Eq and compression are your best tools for making the most of what was recorded. The obvious: eq boost at around 3 kHz for more attack and eq boost at about 60 Hz for more punch. Not so obvious: eq cut with a narrow bandwidth around 200 Hz to get rid of some muddiness and reveal the low frequencies beneath (see Parts 7 and 8 of this series, 1&2/00). Compression does two things for the kick. First, it controls the relative loudness of the kicks, making the weaker kicks sound almost as strong as the powerful ones. The second goal of compression is to manipulate the attack of the kick so that it sounds punchy and cuts through the rest of the mix. See last month's column for a description of the sort of low threshold, medium attack, high ratio compression that sharpens the amplitude envelope of the sound. Placing the compressor after the equalizer lets you tweak in some clever ways. The notch around 200 Hz, for example, removes the mud before it ever reaches the compressor, so the compressor reacts to the punch instead.
Getting en-snared
The snare is next. It likely gets a similar treatment: eq and compression. The buzz of the snares is broadband, from 2 kHz on up. Pick a range you like: 8 kHz might sound too edgy or splashy, but 12 kHz starts to sound too delicate and hi-fi. You make the call. A low frequency boost for punchiness is also cool for snare. Look higher in frequency than you did on the kick, maybe 100 Hz or so. Also look for some unpleasant sound to cut. Somewhere between 500 and 1000 Hz lives a cluttered, boxy sound that doesn't help the snare tone and is only going to fight with the vocal and guitars anyway. Try to find a narrow band to cut and the rest of the mix will go more smoothly. The snare definitely benefits from the addition of a little ambience. Plan to send it to the short reverb and/or hope to find some natural ambience in the other drum tracks. The overhead microphones are a good source of extra snare sound. And any recorded ambience or room tracks should be listened to now. With the kick and snare punchy and nicely equalized, it's time to raise the overheads and hear the kit fall into a single, powerful whole. The overheads have the best view of the kit, and the snare often sounds phenomenal there. Combine them with the kick and snare tracks to make the song really move. It's tempting to add a gentle high frequency boost across the overheads to keep the kit crisp. If the tracks are already bright as recorded, don't feel obligated to add more high end. In fact a gentle and wide presence boost between 1 and 5 kHz can often be the magic dust that makes the drummer happy. If you've got the toms on separate tracks, reach for your tried and true eq and compression. Eq in a little bottom, and maybe some crisp attack around 6 kHz. Try to eq out some 200 Hz muddiness, as with the kick. Compress for attack and punch, and you've completed your drum mix, for now.
Get down
Moving on to bass, we find similar issues. We need to compress to balance the bass line. Some notes are louder than others, and some strings on the bass are quieter than others. Gentle compression (4:1 ratio or less) can even out these problems. A slow attack time adds punch to the bass in exactly the same way we did it on the drums. Release is tricky on bass guitar. Many compressors can release so fast that they follow the sound as it cycles through its low frequency oscillations. That is, a low note at, say, 40 Hz cycles so slowly (once every 25 milliseconds) that the compressor can actually release during each individual cycle. Slow the release down so that it doesn't distort the note this way. Then listen for problems in the frequency response (either too much or too little in a single low frequency area) and equalize in a correction. Glance back at your kick drum too. If your kick sound is defined in the low end, say at about 65 Hz, then make room for it in the bass guitar with a complementary but gentle cut. Find eq settings on both the kick and the bass so that the kick's punch and power don't disappear when the bass fader is brought up. We often add a touch of chorus to the bass. This is most effective if the chorus effect doesn't touch the low frequencies. The bass provides important sonic and harmonic stability in the low frequencies; a chorus, with its associated motion and pitch bending, would undermine this. Simple solution: place a filter on the send to the chorus and remove everything below about 250 Hz. The chorus effect works on the overtones of the bass sound, adding that desirable richness without weakening the song's foundation at the low end.
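The arithmetic behind that release caution is simple enough to sketch; the few-cycles rule of thumb below is illustrative, not gospel.

```python
# Period of a low bass note versus compressor release time.
note_hz = 40.0
period_ms = 1000.0 / note_hz     # 25 ms per cycle at 40 Hz

# A release shorter than one cycle lets the compressor breathe within
# each oscillation, distorting the note. Keep the release a few cycles
# long at least (the factor of 4 here is an arbitrary safety margin).
min_release_ms = 4 * period_ms   # 100 ms
print(period_ms, min_release_ms)
```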
Chugging on
It's a rock and roll cliché to track the same rhythm guitar twice. The two tracks might be identical in every way except that the performance is oh so slightly, humanly different. This results in a rich, wide, ear-tingling wall of sound. The effect is better still as the subtle differences between the two tracks are stretched slightly. Perhaps the second track is recorded with a different guitar, a different amp, different mics, different microphone placement, or some other slightly different sonic approach. In mixdown you make the most of this doubling by panning them to opposite extremes: one goes hard left, the other hard right. Balance their levels so that the net result stays centered between the two speakers. A touch of compression might be necessary to control the loudness of the performance, but often electric guitars are recorded with the amp cranked to its physical limits, giving it amplitude compression effects already. Complementary equalization contours (boost one where the other is cut and vice versa) can add to the effect of the doubled, spread sound.
Key in
The clavinet completes our rhythm section. It probably wants compression to enhance its attack in much the same way the kick, snare, and bass guitar were treated. Giving it a unique sound through eq and effects will ensure that it gets noticed. Consider adding some flange or distortion (using a guitar foot pedal or an amp simulation plug-in, or re-recording it through an actual amp) to make it a buzzy source of musical energy. Panning it midway off to one side is a good use of the stereo soundstage. Pan it opposite the toms and solo guitar to keep the spatial counterpoint most exciting. Add a short delay panned to the opposite side for a more lively feeling. With drums, bass, guitar, and clav going in the mix, we've completed the rhythm section. Time to add the fun parts: vocal and lead guitar.
Speak up
The vocal gets a good deal of our attention now. The voice must be present, intelligible, strong, and exciting. Presence and intelligibility live in the upper middle frequencies. Use equalization to make sure the consonants of every word cut through that rich wall of rhythm guitars you've created. Search carefully from 1 to maybe 5 kHz for a region to boost the vocal that raises it out of the guitars and cymbals. You might have to go back and modify the drum and guitar eq settings to get this just right. Mixing requires this sort of iterative approach. The vocal highlights a problem in the guitars, so you go back and fix it. Trading off among the competing tracks, you'll find a balance of crystal clear lyrics and perfectly crunchy guitars. Strength in the vocal will come from panning it to the center, adding compression, and maybe boosting the upper lows (around 250 Hz). Compress to control the dynamics of the vocal performance so that it fits in the crowded, hyped-up mix you've got screaming out of the loudspeakers. This compression and equalization track by track has so maximized the energy of the song that it won't forgive a weak vocal. Natural singing dynamics and expression are often too extreme to work: either the quiet bits are too quiet or the loud screams are too loud, or both. Compress the dynamic range of the track so that it can all be turned up loud enough to be clear and audible. The soft words become more audible. But the loud words are pulled back by the compressor so that they don't overdo it. The vocal, a tiny point in the center, risks seeming a little small relative to the drums and guitars. The spreader to the rescue (again, see the sidebar). Send some vocal to the spreader so that the vocal starts to take on that much desired larger-than-life sound. As with a lot of mix moves, you may find it helpful to turn the effect up until you know it's too much and then back off until it's just audible. Too much spreader is a common mistake, weakening the vocal with a chorused-like sound. The goal is to make the vocal more convincing, adding a bit of width and support in a way that the untrained listener wouldn't notice as an effect. Additional strength and excitement comes from maybe a high frequency eq boost (10 or 12 kHz or higher!) and some slick reverb. The high frequency emphasis will highlight the breaths the singer takes, revealing more of the emotion in the performance. It is not unusual to add short reverb to the vocal to enhance the stereo-ness of the voice still further and to add a long reverb to give the vocal added depth and richness.
Sending the vocal to an additional delay or two is another common mix move. The delay should be tuned to the song by setting it to a musically relevant delay time (maybe a quarter note). It is mixed in so as to be subtly supportive but not exactly audible. Add some feedback on the delay so that it gracefully repeats and fades. Send the delay return to the long reverb too, and now every word sung is followed by a wash of sweet reverberant energy that pulses in time with the music. Eq, compression, delays, pitch shifting, and two kinds of reverb represent, believe it or not, a normal amount of vocal processing. It's going to require some experimentation, going back and forth among every piece of the long processing chain. And that's just a basic patch. Why not add a bit of distortion to the vocal? Or flange the reverb? Or distort the flanged reverb? Anything goes. Travel safe. The background vocals might get a similar treatment, but the various parts are typically panned out away from center and the various effects can be pushed a little more. Hit the spreader and the long reverb a little harder with background vocals to help give them more of that magic pop sound.
Going solo
The lead guitar can be thought of as replacing the lead vocal during the solo. It doesn't have to compete with the lead vocal for attention, so your mix challenge is to get it to soar above the rhythm section. An eq contour like that of the lead vocal is a good strategy: presence and low end strength. Compression should be used with restraint if at all; electric guitars are naturally compressed already. Additional reverb is also unusual for guitars. The tone of the guitar is really set by the guitarist, and that includes the reverb built into the amp. Solo guitar might get sent to the spreader, and it might feed a short slapback delay. The slap delay might be somewhere between about 100 to 200 milliseconds long. It adds excitement to the sound, adding a just perceptible echo reminiscent of live concerts and the sound of the music bouncing back off the rear wall. It's good to pan the solo about halfway off to one side and the slap a little to the other. If the singer is the guitarist, it might make more sense to keep the solo panned to center. Of course, you can add a touch of phaser or flanger, something in your digital multieffects unit that you've been dying to try, and you can even add additional distortion.
Overall
The entire stereo mix might get a touch of eq and compression. As this can be done in mastering, I recommend resisting this at first. But as your mixing chops are developed, you should feel free to put a restrained amount of stereo effects across the entire mix. You are trying to make it sound the best it possibly can, after all. For equalization, usually a little push at the lows around 80 Hz and the highs around or above 10 kHz is the right sort of polish. Soft compression with a ratio of 2:1 or less, slow attack and slow release can help make the mix sound even more professional. As the entire mix is going through this equipment, make sure you are using good sounding, low noise, low distortion effects devices. And don't forget to check your final mix in mono to make sure it'll survive radio airplay.
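That final mono check is easy to make routine if your mixes live in a workstation that can export a stereo file. Here is a small numpy sketch, with names and thresholds of my own invention rather than anything from this column: it folds left and right to mono and reports the level change and the channel correlation. A large level drop or a strongly negative correlation warns that phase cancellation will gut the mix on a mono radio.

import numpy as np

def mono_fold_report(left, right):
    # Fold a stereo mix to mono and report what happens to it.
    mono = 0.5 * (left + right)
    rms = lambda a: np.sqrt(np.mean(a ** 2))
    stereo = rms(np.concatenate([left, right]))
    loss_db = 20.0 * np.log10(max(rms(mono), 1e-12) / max(stereo, 1e-12))
    corr = float(np.corrcoef(left, right)[0, 1])
    return loss_db, corr

# Perfectly correlated channels lose nothing in mono...
t = np.arange(44100) / 44100.0
sig = np.sin(2 * np.pi * 440.0 * t)
print(mono_fold_report(sig, sig))    # ~0 dB loss, correlation 1.0
# ...while out-of-phase channels cancel completely.
print(mono_fold_report(sig, -sig))   # enormous loss, correlation -1.0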
That sums up the components of one approach to one mix. It is meant to demonstrate a way of thinking about the mix, not the step by step rules for mixing. I hope it inspires you to form your own variation on this approach. Alex's mixes often feature didgeridoo panned dead center and doubled kazoos panned hard left and right. Complain about this to [email protected].
What's a spreader?
It's often desirable to take a mono signal and make it a little more stereo-like. A standard effect in pop music is to spread a single track out by sending it through two short delays. Each is set to a different value somewhere between about 15 and 50 milliseconds. Not too short or it starts to flange/comb filter; not too long or it pokes out as an audible echo. One delay return is panned left and the other panned right. The idea is that these quick delays add a kick of supportive energy to the mono track being processed, sort of like the early sound reflections that we hear from the left and right when we play in a real room.
The extra trick is to pitch shift them ever so slightly, if you have the gear that can do it. That is, take each delay and detune it by a nearly imperceptible amount, maybe 5 to 15 cents. Again, we want a stereo sort of effect, so it is nice if the spreader has slightly different processing on the left and right sides. Just as we dialed in a slightly different delay time for each side, dial in a slightly different pitch shift as well: maybe the left side goes up 9 cents while the right side goes down 9 cents. Now we are taking advantage of our signal processing equipment to create a widened sound that only exists in loudspeaker music; it isn't possible in the physical world. This sort of thinking is a real source of creative power in pop music mixing: consider a physical effect and then manipulate it into something that is better than reality (good luck, and listen carefully).
We are going to add this effect to the lead vocal, among others. And the lead vocal is going to be panned straight up the middle. In order for the spreading effect to keep the vocal centered, it helps to do the following. Consider the delay portion of the spreader only. If you listen to the two panned short delays (and I definitely recommend trying this) you find the stereo image pulls toward the shorter delay. Now listen to just the pitch side of the spreading equation. The higher pitch tends to dominate the image. Arrange it so that the two components balance each other out (e.g., delay pulls right while pitch pulls left). This way the main track stays centered. Experiment with different amounts of delay and pitch change. Each offers a unique signature to your mix. Overused, the vocal will sound too digital, too processed. Conservatively applied, the voice becomes bigger and more compelling.
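For the curious, here is a bare-bones Python/numpy sketch of the spreader recipe above. It is an illustration under assumptions, not a production effect: the delay times and 9 cent detunes are just the example numbers from this sidebar, and the resampling trick used for the pitch shift is crude (it slightly changes the clip length, where real gear would not). Note the balancing act described above: the shorter delay sits on the side with the lowered pitch, so the delay pulls the image one way while the pitch pulls it back.

import numpy as np

def detune(x, cents):
    # Crude pitch shift by resampling; fine for a sketch this size.
    ratio = 2.0 ** (cents / 1200.0)
    positions = np.arange(0.0, len(x) - 1, ratio)
    return np.interp(positions, np.arange(len(x)), x)

def spreader(mono, sr, l_ms=17.0, r_ms=23.0, l_cents=-9, r_cents=+9, wet=0.3):
    def voice(ms, cents):
        # One detuned copy, held back by a short delay.
        shifted = detune(mono, cents)
        return np.concatenate([np.zeros(int(sr * ms / 1000.0)), shifted])
    l_wet, r_wet = voice(l_ms, l_cents), voice(r_ms, r_cents)
    n = max(len(mono), len(l_wet), len(r_wet))
    fit = lambda a: np.pad(a, (0, n - len(a)))
    dry = fit(mono)
    # Dry track up the middle; one detuned short delay per side.
    return np.stack([dry + wet * fit(l_wet), dry + wet * fit(r_wet)])

# Usage: stereo = spreader(vocal_mono, 44100); stereo[0] is left, stereo[1] is right.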
PART 11
Surely part of the pleasure of music recording is that it is such a free, liquid, no-rules sort of endeavor. In this episode of Nuts & Bolts we look at the actual process of recording. (Yes, I know we probably should've done this before looking at a mix the way we did last month, but it was Mixing's Art And Science Month in April and those Editor guys asked so nicely....) Armed with the specific knowledge of components of the recording chain discussed so far in this series, let's discuss the actual session and our creative and technical options along the way. Throughout this article I'll be dispensing advice and then making the case for it; while this is all based on experience, bear in mind that different producers and artists have different ways of working. So don't get mad if you disagree.
Talented, passionate new artists often create a band that is simply thrilling live. Then the album fails to capture this. Quite possibly all that has gone wrong is that the band hasn't had a chance to listen to themselves the same way they listen to all the bands they love, the same way their future fans will listen to them: on loudspeakers. Give the artists a chance to react to themselves as they appear in loudspeaker playback and they'll often make the appropriate adjustments necessary to sound great on a recording. The same band that really works the crowd live can often work the loudspeakers through their recordings; they just need a chance. Preproduction requires just a few mics and a cassette deck. Working with more mics and a DAT or 8-track recorder is sometimes even better. The mission of preproduction is to capture the performances on tape for study and evaluation later. Many bands have never actually heard themselves until the first take in the studio on the first song of the first session for their first album. There is already a lot of pressure built in to that first studio situation. It's a lot of money. There are a lot of mics all over the place. There is a lot of gear in the control room with lights and meters evaluating every thought the musicians have. For the first-time recording artist, an understandable paranoia sets in. An overwhelming fear of making mistakes that will be captured, amplified and mocked by every mic, meter and loudspeaker in the studio leads to a performance that is more conservative, less exciting. That's not the sort of vibe that will lead to a Grammy-winning performance.
If the band has never heard themselves before, get ready for some challenges. Think back to the first time you recorded yourself. When you aren't playing, and you are just listening, you start to hear things that have perhaps gone unnoticed for years. I drift flat when I sing loud, I rush during the solo, and I do this funny thing at the end of the bridge that just sounds awful; I always thought it sounded awesome. The band deserves a chance to work these things out ahead of the album sessions. The fact is, musicians will fix many technical issues on their own if you just give them a tape of some rehearsals. The drummer will stop rushing during the chorus, the singer will plan out some of those oohs and ahs at the end, etc. Make a rough recording of the preproduction session for every member of the band. The songwriter also benefits from preproduction. Most pop music songs are studied on paper: meter, rhyme, word choice, and structure are evaluated with the same care given a poem. Songs differ from poetry in that they are set to music. The songwriter should therefore get the chance to study his or her work as it lives on loudspeakers. Make a rough recording for the songwriter. The project engineer also benefits from doing the recording during preproduction. The audio quality of the final product will improve markedly if everyone gets to hear what they and their instruments sound like coming back off tape. The drummer may not notice the squeaky kick pedal during performances, but during playback everyone will. The guitarist may not seem to know that the strings on her guitar are replaceable, but during playback the sad, lifeless tone might motivate the effort. Record the instrument and you'll find its every weakness, guaranteed. If the squeaky pedal and dull old strings are discovered before the big session, then the problem can be addressed. If it happens in the heat of the actual album-making session, you'll find yourself trying to smooth over and hide a problem or wasting precious studio time and creative energy waiting for someone to run to the music store for the $5 solution. Expect the band to come back from a preproduction listen with changes to the arrangement, song structure, guitar tone, lyrics, and so on that will blow you away. Give them a recording of how they sound and let them do what they are really good at: making their own music sound great. Preproduction also gives the producer and engineer a chance to contribute meaningfully to the creative music making process. The jobs of production and engineering happen in the studio. Producers and engineers have a familiarity with the gear of the studio like musicians have with their instruments. The studio experience of the production and engineering team enables them to make musical suggestions that are unique to recorded music. Double the vocals in the choruses, add slap-back to the guitar during the solo, use some gated room mics on the drums, run the piano track through a Leslie cabinet, etc.; there is a vast sonic palette to choose from. These are creations that rely on the studio and its equipment to be created. They rely on loudspeaker playback to be realized. It is imperative that the producer and engineer look out for these audio concoctions that will contribute to the music and translate it into an action that the band understands and appreciates. The band is expected to have an opinion on how appropriate such sounds are to their music, but it is the job of the studio cats to be able to create them. Preproduction gives the producer and engineer their first chance to start making these studio decisions. It also informs the choice of session type, because some approaches will serve a given project better than others. You've got to decide among the live to 2-track, live to multitrack, basics, or overdub sessions. This month we discuss the live sessions. Next month's Nuts & Bolts takes on basics and overdubs.
Live to two
It isn't always necessary to record to a multitrack. If you are recording a single, simple instrument, you can record it straight to your 2-track master machine, probably a DAT. Solo piano, voice, or guitar are obvious examples. Without the distraction of other instruments and performers, the engineer can really focus. Mixdown won't be necessary, as there is nothing to mix the solo instrument with. Capturing the tone and adding just the right effects is the sole priority of a live to 2-track session. Your decision to go live to two shouldn't be based on engineering convenience or desires alone. The recording strategy must also factor in the musical advantages and disadvantages as well. In a live to two the performer is as focused as the engineer, chasing that elusive goal: their best performance. An important musical benefit of the single player live to two session is that there are no other musicians around. Other players often add pressure, stop takes, or require compromise:
Singer: Let's use take 17! Listen to how I phrased the opening line.
Drummer: But I fumbled that fill in the first chorus. I'm really digging take 12.
In many live to 2-track sessions it is just an engineer looking for a sweet sound and a musician searching for his or her personal best. Of course, there is still opportunity to modify and enhance the live recording. Post recording processing consists of two options: editing and mastering. You can cut and paste together (literally or digitally) the best parts of all the takes into a single best take. And you can master the 2-track tape just recorded. That is, you can still modify the sound of the recording with a final dose of any effects you desire, typically equalization and compression, but there is no reason not to add reverb or more elaborate effects as well. Do whatever you think sounds best. To achieve simplicity and intimacy, we plan on a live to two recording session. But live to two isn't just for solo instruments. We can certainly record more complex arrangements and bigger bands live to two tracks. Let's put it in context by skipping ahead for a moment to that common multitrack session, the overdub. Say drums, bass and guitar have been recorded. Time for a saxophone overdub. Consider the vibe at the overdub. The saxophone player is all alone in the studio, playing into perhaps a single microphone, living in a musical world that exists within the headphones. It isn't easy to find the killer solo that will take over the world when you're playing all alone in headphones. Certain components of music feed off the live interaction of other musicians. This sax solo might benefit from being recorded at the same time that the rest of the band plays. Record it live. And there are other instances where the live to two is tempting. Drummers and bass players are often so musically interactive that they prefer tracking together (don't miss our discussion of the Basics session next month). If you can record the solo during the inspired groove of the live session, you'll find more expressiveness, more power, more emotion. Certain styles of music are built on a foundation of interaction: jazz, blues, and power trios often like to be recorded all at once. Highly improvisational music is difficult to pull off musically through an assembly of overdubs.
Live to two becomes a much more intense session now. Two tracks of recorded music can easily come from more than a dozen microphones aimed at any number of instruments playing live, at once. And elaborate signal processing might be required. Skip the coffee. You'll have plenty of adrenaline as you adjust the levels on all those microphones; dial in equalization that is just right for each of them; set up compression on half, probably more, of them; send the snare to a plate reverb, the Rhodes to a quarter note delay, and the vocal to both the reverb and delay; and so on. You've got to hear every little thing going on microphone by microphone, instrument by instrument, and effects unit by effects unit in the live to two session. In addition, you must somehow hear the big thing: the overall 2-track mix itself. Back in the day, entire orchestras were recorded live with a single well-placed microphone. It can be done, but it's always something of a thrill ride. Consider these ideas to help out.
Safety net? What safety net?
First, know the tune. Try to get a chart, attend a rehearsal, get a tape from the preproduction session (see above), and/or just plain learn the tune in detail during the first couple of takes. You've got to know what the song is about and memorize the arrangement: know who is playing when, when the loud parts are, when the soft parts are, and ride the faders accordingly. Second, take a live approach to the recording technique. We know that in a live to two track session there will be no mixdown later. The good news is that in a live to two-track session we don't perform overdubs. Musical issue: it's hard on the performers. They've got to get the performance just right, as there can be no fixing of mistakes, just repeated attempts at the tune: Okay. Here we go again. Take 94...rolling... But it's good news for the engineer. It's fine if the vocal leaks into the guitar mic and the drums leak into the organ mic. We'll never rerecord one without the other, so such co-mingling of sounds (we call it leakage) often isn't a problem. Live recording liberates the engineer of all those headaches associated with trying to separate the players and get clean tracks. Bye bye booths. Goodbye gobos. No need to hide the guitar amp in the closet and the bass amp in the basement (isn't that why it's called a basement?). We constantly go to such trouble to achieve isolation in multitrack sessions. And those habits die hard. You've gotta try it live and loose. Stick all the players in one room and live it up. They can arrange themselves in the way that is most comfortable for them, probably the way they rehearse, the way they write the songs, or the way they play live. Arranged this way, they are so comfortable they might forget they are being recorded. This is a good way to capture something special on tape. Arranged this way they can see each other. Moreover, they can hear each other acoustically. So you can get rid of the headphones. Headphones are a distracting part of any session for the engineer. Musicians don't like 'em much either. They don't make for a very exciting or comfortable environment to jam. Headphones are a necessary evil in multitrack production. But live recording often permits you to dispense with them altogether. What should we watch out for when we put the band all in one room? First the good news. When an instrument is picked up by microphones other than its own, a magic thing starts to happen. This leakage into other mics starts to capture a different view of the instrument than a closely placed mic can manage alone. When it is working it starts to make the instruments come together into a more compelling single sonic ensemble. The band will sound tighter, the song will gel. The live recording might lack the precision that can come from well-isolated tracks, but it gains a more integrated, more organic total sound that is often well-aligned with the aesthetic of the music being recorded. The music we tend to record live to two, this highly improvised, highly interactive sort of music, benefits from being recorded in this sonically integrated way. Now the bad news: leakage complicates signal processing and panning, because a loud source like the snare finds its way into every open microphone that is in the same time zone, and any eq or effects you add to those tracks process the leaked snare too. If the snare leaks loudly into the guitar and you pan the guitar to the left, then you'll hear the snare image drift left. If the snare sound also leaks into the piano that gets panned right, the overall snare sound can stay more centered. In fact, the sound of the snare in the more distant microphones often sounds fantastic. You might want to plan your panning strategies so that leakage like that of the snare can be kept under control. You may have to back off on the extreme pan pot settings, pulling things in closer to center to keep the stereophonic image of the band tighter. Alternatively, you might use leakage on purpose. Knowing that the snare will leak into the acoustic guitar track that you want to pan left, you might use an omnidirectional mic on the piano panned right to pick up extra snare leakage on purpose. With a little attention to these strategies on processing and panning, you'll find recording the band all at once in a single room is a liberating way to work.
Live to multi
Sometimes it just isn't possible to meet the audio demands of the project in a live to two. Wild and complicated arrangements and large bands make getting the mix right while recording nearly impossible. If you've got drums, bass, guitars, keys, a horn section, a chorus section, lead vocals, and miscellaneous hand percussion, the session is probably too complicated for a live to two track approach. The live feeling and sonic benefits of a live to two session can also be captured in a multitrack environment. Just because the music needs to be recorded all at once doesn't mean the engineering has to happen all at once. That is, we can record the band with all the live and intimate approaches described above and still mix it down later. Record the live session to multitrack. Old timers like me call this live to 24, but as my digital audio workstation goes to (a not yet utilized) 64 tracks, it seems safer to call it live to multi. All or most elements of the tune are recorded simultaneously so that the musical benefits of the live session are captured. Record with similar strategies. Arrange the musicians to maximize their comfort and encourage their creativity. Seek advantageous blending of the instruments in the room through strategic mic placement that captures the tone of the instruments and a good dose of acoustic leakage. But do avoid too much leakage on those tracks destined for a good dose of signal processing or aggressive panning. The live to multitrack session takes some of the pressure off the engineer as the priority is all about session vibe, musician comfort, and awesome raw tracks. The mixing of the tracks will get to happen in a separate, less crowded, lower stress session. Next month we explore the more typical production process: recording to multitrack and then overdubbing any number of additional tracks so that they can be mixed into a powerful, polished, and professional stereo master release. Alex Case thinks they should call that TV show Saturday Night Live to Two. Request Nuts & Bolts topics via [email protected].
Omnidirectional patterns are also a good choice for overhead microphones, though they are a little more difficult to place. Capturing acoustic energy from all directions, they'll grab more ambience than a cardioid or figure eight. As a result, to get the same balance of room sound versus kit sound, omnis will need to be closer to the kit than more directional mics. Less obvious is the fact that the omni mic is a simpler, arguably purer device than the cardioid mic. Directional mics require a little signal processing to achieve rejection in certain directions. It's usually very careful, clever, and excellent sounding, but even acoustic signal processing of the highest quality pays a price. To over-generalize grossly, omnis often have a sweeter low frequency character than a lot of directional microphones, but of course this sort of thing varies from one mic model to the next. There are cardioids with fantastic low end and omnis that are low frequency deficient. What I'm really trying to say is that choosing between omni, bidirectional, and cardioid isn't just about pick-up pattern, it's also about frequency response. Compare not just the blend of cymbals versus toms and drums versus room sound, but also the sound quality of the drums coming through the mics. Listen to the spectral and timbral effect of choosing a different pick-up pattern.
More than two drum mics?
It is possible to capture the entire kit with just a pair of overheads. In fact a single microphone can work, placed either overhead or down in the kit tucked between the snare and the rack tom opposite the hi-hat. Using so few microphones on so broad an instrument requires that you have time to really tweak the mic placement and that you have a nice sounding room to help balance the sound. For this sort of work, first listen to the kit in the live room, then position the mics, and finally listen in the control room. You've got to listen to the whole kit as well as all its individual pieces. If you don't like what you hear, return to the room with a specific objective in mind (e.g. too much crash cymbal, or snare pulls left) and move the mics (or change the mics, change the pick-up pattern, move the kit, etc.) in a way that you think will help. Return to the control room, listen, and repeat...and repeat...until you love the overall balance of the kit. This sort of judgement also requires experience. Just using one or two overhead mics on the drums requires finesse. More typically, we support the overheads with a couple of close mics, even in a live to two-track session. First, the kick drum welcomes a dedicated mic. To extract a decent amount of low end thump without too much messy room ambience, you've got to get a microphone in close. The kick is loud, so you'll need a mic with the ability to handle high sound pressure levels (up to and above 120 or even 130 dB SPL). Many condenser mics these days can take it, but most of the time the kick demands the robustness of a dynamic. The snare, so important musically, also gets the special attention of a close mic. In the heat of a session you may not be able to count on the drum balance that you can pull out of the overhead microphones alone. Sticking a mic in close to the snare lets you ride a fader to change the amount of snare in the live mix, a handy thing. A dynamic cardioid mic is up to the job, and especially for rock it grabs a present tone that will sound exciting. Condensers and ribbons are also desirable for the high frequency detail.
If you go for a condenser, it probably needs a pad to prevent nasty distortion of the microphone's electronics. If you use a ribbon mic (especially an old one) on the snare, take out some insurance or book the studio under a false name: one hit, one shredded ribbon.
Effects
For multitrack sessions it is common to eq and compress every single drum track on its way to the tape machine, in pop and rock, anyway. For a live to two, you've got to back off on this approach; it's suicidal to dial up a stack of effects in a live to two session. If you've just got overheads up, gentle compression is probably welcome. Three goals:
1. Safety: use compression to prevent the distortion that comes with levels to tape/disk that are too hot.
2. Punch: use compression to tighten up each hit of the drums and add a bit more attack.
3. Care: not too much compression or you'll hear the decay of the cymbals become unnatural, pumping softer then louder as the compressor rides the gain too aggressively.
These conflicting goals force us to back off the compression on the overheads significantly. If you've added close mics to the kick and snare, go ahead and compress them hard so that they add punchiness, clarity, and attack to the overall sound in the overheads. The idea is to get maybe 80% of the drum sound from the overheads. Sneak in the close microphones to add that extra little power and detail. This two to four microphone approach should enable you to get the kit under control in pretty short order, freeing you to focus on the bass, and the guitar, and the vocal, and... Isn't live to two a blast?
PART 12
This ought to be a piece of cake after the challenges of last month's live-to-two-track session. Right?
This month we continue our discussion of the recording session vocabulary, moving beyond preproduction and live recording and into the multitrack recording process. We track instruments one at a time when live to two-track and live to multitrack aren't appropriate or possible. The making of the recording now becomes a 4-step process: basics, overdubs, mixdown, and mastering. Mastering is the sole topic of a future Nuts & Bolts column. Mixdown was the focus in the April issue. This month we take on basics and overdubs. The basics session is simply the first track-laying session. As all overdubs will be performed around these tracks, they are called the basic tracks. In 99.9% of pop and rock sessions, basics refers to drums and bass. When the band is to be recorded a piece at a time through overdubs, it is usually easiest to lay down the fundamental groove of the tune first. Drummers and bass players usually play off each other and therefore like to be recorded simultaneously. That's easy for the recording engineer to accommodate. Drums are placed in the biggest room available in the studio/loft/basement. The bass is recorded through a direct box and/or through a bass cabinet isolated in any way possible: stuck in another room or booth, tucked in a closet, or in the worst case surrounded by gobos and heavy blankets to at least minimize leakage into the drum mics. (Gobos are movable absorbent isolation barriers.) The bass player stands in the same room as the drummer and the jamming commences. Recorded on the multitrack, these drum and bass tracks form the basic tracks.
BY ALEX CASE
Guided by voices
With the exception of drum and bass music, playing a tune that consists solely of drums and bass is musically not very inspiring. It's easy to get lost during a take. So the drummer and bassist don't get lost, we always keep them on a leash. But I digress. So the drummer and bassist don't get lost within the tune, we also record scratch tracks. The singer, guitarist, keyboardist, and other members of the rhythm section are also recorded during the basics session. These additional rhythm section tracks are just meant as a guide to the drums and bass and will be re-recorded later. The top engineering priority is the quality of the drum and bass tracks. Scratch tracks are compromised sonically in pursuit of better basic tracks when necessary. For example, to keep the guitar from leaking into the drums it might be run through a small practice amp instead of the louder-than-loud, World Trade Center twin towers of guitar rig. The vocalist might be squeezed into a small booth for isolation, singing into a second-choice microphone because all the good mics are on the drum kit. The point of the scratch tracks is to feed the drums and bass information and inspiration; the point of the basics session is to get the most compelling drum and bass performance ever captured on tape. Period. As always, it is a session priority of the engineer (and everyone else involved) to help ensure that the band is comfortable. They need to hear each other easily. Ideally they should also be able to see each other perfectly. Stuck in headphones, they'll be grumpy at first. Carefully dial up a great sounding mix in the headphones, adding some basic effects (reverb on the vocals at least), and try to make the basics session a satisfying place to perform. You'll know you got a keeper take when the drummer likes it. Drummers know every single hit they're layering into the tune; they know the feel they are going for; they always know when they missed a fill; they'll certainly know when they nailed it. When they're happy, bring everyone into the control room for a loudspeaker listen.
The engineer has to make sure the audio quality is top shelf. Sometimes the band nails it on the first or second take. That's good news musically. But it can be bad news sonically. Perhaps you haven't had a chance to tweak: solo the snare to be sure it is crisp or eq the floor tom so it sounds as full as you'd like.
With experience you can set up microphones and get levels onto tape that are perfectly usable as of Take One. That is, while it might have been nice to tweak the eq and add a dose of compression during basics, you've got a track recorded well enough that such processing can be dialed in during the mix.
The other thing you've gotta watch is levels. When the drummer and bass player fall into the zone they tend to play harder (i.e. louder) than in all the previous takes. Make sure when the band loves the take that the levels didn't head too far into the red zone and distort. If you record distorted tracks on the multitrack, you can't un-distort them later. Meantime, the band and the producer have to make sure the playback through loudspeakers is as inspiring as the live take was. It isn't easy to keep the loudspeaker playback exciting. While the take was being recorded we had the benefit of watching the players. The true test is the control room playback. Do you still feel that excitement when you can't see the performers? The basics session is complete when the performance passes the loudspeaker playback test. Almost perfect? What if the take feels right except for a few minor mistakes? Many such mistakes are fixable (see below), but first decide if they should be fixed at all. It is tempting, very tempting, to fix every single flub so that the tracks on tape represent some ideal, best possible version of the song. That's a fine approach, and many bands are famous for their perfect recordings. They are also famous for spending a lot of time (sometimes years!) in the studio, not a luxury we can all afford. And sometimes such perfect tracks are accused of being too perfect, lacking life, warmth, or emotion. Pick your spots carefully so that your project falls at the appropriate spot between the high audio craft of Steely Dan and the low-fidelity on purpose of Tom Waits.
Spike the punch
Typically some fixes are called for at the basics session. The first thing repaired is the bass track. Mistakes in the drum track usually demand that the entire take be redone. There are exceptions, but try to avoid patching and piecing together a drum performance. Track until you get a single, consistently strong drum take. Then evaluate the bass part. Most likely, when the bass player loves the take as much as the drummer you've still got some minor repairs to do. The bassist probably needs to punch-in a few spots to fix funny notes: notes that were early or late, notes that were sharp or flat, wrong notes, loud notes, soft notes, or notes that just don't seem to work.
Punching-in is the process of going into record during a track already recorded. You cue and play the tape/hard disk a few bars in front of the mistake to be replaced. The bass player plays along. You go into record while playing (typically you hold down the play button with one finger and feather touch the record button) at just the right spot. The red lights go on, and the bass player is live, laying down a new part while you erase the old part. Don't go out for pizza now, because you've got to punch out. You've got to get the multitrack out of record (typically by pressing the play button alone, without touching record) so that only the mistake is replaced and the rest of the previously recorded track is preserved. Punching in and out on a digital audio workstation is usually simple. Punching in and out on a multitrack recorder is tricky business. You'll acquire the skill only through practice. Sometimes you punch entire verses, but other times you might try to punch in and out on an individual eighth note of music or a single syllable of vocal.
At the basics session you have the unique pleasure of fixing the bass track. Bass is probably the most difficult instrument of all to punch. When we loop a sample, we know to reach for zero crossing points on the waveform to avoid glitches. We aim for the same target for punch points. Ideally we go into and out of record at something very close to a zero crossing point so that we don't get an audible click where the wave abruptly transitions from old to new. The low frequency signal of bass presents a challenge. It has long, slow moving waves that spend a lot of time, compared to high frequencies, away from zero amplitude. Simply put, high frequencies cross the zero amplitude axis more often than low frequencies. As a result, our odds of getting a click-less punch point are worse for lower frequency sounds like bass. Many digital machines let you select a crossfade time for punches. This adjusts exactly how abrupt the transition from the track to the punch will be. As in editing and looping, a crossfade gives you a brief window in time during which both old and new track are mixed together. The intent is to help dovetail one track into another. Done correctly, this helps make the punch point less audible. But setting a crossfade beyond 20 milliseconds often leads to other audible artifacts. The click you try to avoid is replaced by the flanging sound that comes from briefly hearing the two waveforms (old and new) simultaneously during a slow crossfade. The key to successful punches is in selecting your punch points carefully. Because we are looking for a high frequency moment in the music so that we can punch during a zero crossing, try to perform your punch during the high frequency transients of the part.
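On a workstation you can act on this advice directly. The sketch below is mine, not a feature of any particular machine, and the names and window sizes are arbitrary: it hunts near the intended punch time for the sample closest to zero, then splices the new take in with a short linear crossfade, comfortably under the 20 millisecond ceiling mentioned above.

import numpy as np

def nearest_zero_crossing(audio, sr, punch_sec, window_ms=40.0):
    # Return the sample index near punch_sec where |amplitude| is smallest.
    center = int(punch_sec * sr)
    half = int(sr * window_ms / 2000.0)
    lo, hi = max(center - half, 0), min(center + half, len(audio))
    return lo + int(np.argmin(np.abs(audio[lo:hi])))

def punch_in(old, new, punch_at, fade_ms, sr):
    # Splice `new` (assumed the same length as `old`) into `old` at punch_at.
    fade = int(sr * fade_ms / 1000.0)
    ramp = np.linspace(0.0, 1.0, fade)
    out = old.copy()
    # Brief window where old and new takes are mixed, dovetailing the edit.
    out[punch_at:punch_at + fade] = (1.0 - ramp) * old[punch_at:punch_at + fade] \
                                  + ramp * new[punch_at:punch_at + fade]
    out[punch_at + fade:] = new[punch_at + fade:]
    return out

# Usage: spot = nearest_zero_crossing(bass_old, 44100, punch_sec=32.5)
#        fixed = punch_in(bass_old, bass_new, spot, fade_ms=8.0, sr=44100)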
On bass, that means we hide our punch points in fret noise and pick noise. Punch in and out at the instant the bass player is articulating a new note. The buzz and grit when the bass player digs in on a down beat gives you an instant of high frequency activity where your punch can hide. Your punching technique is enhanced by a second element to your strategy: punch when no one is looking, er, I mean listening. That is, select punch points on bass that are going to be masked by some other loud and distracting event. A slamming snare hit, for example, will temporarily fill the mix with so much noise that a small error in punching in or out over on the bass track is covered up. Conveniently, the drummer is required by the Rock-N-Roll Drummers' Union to hit the snare on beats 2 and 4, and the bass player is going to articulate a new note on most of those snare hits, so that many punch points become available.
Overdubs
After recording the killer drum take of the century and performing maybe a handful of bass punches, you've completed the basics session for the tune. Time to move on to phase two of the multitrack session: overdubs. During overdubs you record single instruments or small sections onto separate tracks of the multitrack. In this way you build up the pop music arrangement around all the other tracks already recorded. Overdubs are typically less stressful and less crazy than basics or live session work. You are given the mental relief that comes from focusing your energies on perhaps a single musician with a single instrument, using maybe a single microphone. You get the chance to hear out the many subtleties of the recording discipline.
It is during the calm, late night overdub that you get to hear the difference moving a mic one inch makes, the change in sonic character that comes when you change from one mic to another, the audible effect of recording on a wood floor versus carpet, and so on.
As engineer you've got a fair amount of freedom now. I encourage you to use overdubs as a chance to experiment with recording ideas and refine your ever-developing recording technique. For example, a straightforward electric guitar overdub can be as simple as sticking a single, brave dynamic cardioid up close to the amp and hitting record. If the tone at the amp is good, this recording technique will never fail.
But as the producer and the guitarist and the other band members experiment with alternative musical ideas, you can stick a few alternative microphones up next to the trusty dynamic already there to see how the sound changes. You can experiment with alternative mic placements. Find out what a ribbon microphone in the corner sounds like. Try an omnidirectional large diaphragm condenser over by the brick wall to see if the sound works. Or whatever; check out Bill Stunt's Another Article About Recording Electric Guitars?! 2/00 or Bob Ross's Eclectic Electric Guitar 7/97 for some other hints and craziness. During overdubs, don't be afraid to throw experimental signals over on extra tracks. If you like the sounds, keep the alternative track and be a hero. If it sounds weak, erase it and explore other ideas on tomorrow's overdubs. So many instruments reward distant miking, stereo miking, and experimental miking techniques. There is no obvious way to learn all the options and know which ones work. And session budgets can rarely afford to let the engineer experiment. Do this sort of work on the side, while the session is distracted by something else. Overdubs onto spare tracks represent a terrific opportunity. What does an overdub session really look like? If you are picturing a single microphone in front of a single instrument in an otherwise empty room, you're missing out on a lot of the fun. The typical day of overdubs fills the studio with as many microphones as a basics session. Here's how it goes. Maybe the day begins with an electric guitar overdub. The engineer sets up the tried and true approach plus an experimental set of mics, should there be time or motivation during the overdub to reach for another kind of tone. When the guitar track is done and the session moves on to the next dub (tambourine for example), leave the guitar setup as is. Bring out another mic for the tambourine. Of course there is room to experiment. Compare a dynamic to a condenser, or an omni to a cardioid. Even the humble tambourine track welcomes some engineering exploration. Then move on to the next overdub, maybe didgeridoo. As the various overdubs are done, the room fills with microphones. This is handy for a couple of reasons. First, with the electric guitar amp previously set up and ready to go the band and producer are free to experiment. They can reach for the guitar with a second's notice to try out a new musical idea.
A few hours into an overdub session you might have emptied the mic closet and used up all the mic stands. The recording room is ready for action. Mandolin? Good idea. Have a seat where we tracked the acoustic guitar a few hours ago and we'll start from there. It'll only take a second. Rain stick? Cool idea. Just step up to the tambourine mic. The overdub session becomes a comfortable place to explore multitrack recording ideas, liberating the musicians, producer, and engineer to work fast and freely. Accumulating the various overdub arrangements within a single room not only makes getting the different overdubs done more quickly and easily, it also leads to some fun exploring of engineering ideas. While the band plays with different parts on the didgeridoo, you can open up different mics in the room to see how it sounds. That is, while they record into the intended set of microphones, you can raise the faders over on the electric guitar mics in the corner, the tambourine mic in the center of the room, and the acoustic guitar mics by the stone wall, and so on. With each different overdub you'll learn a bit more about the recording craft, because you'll get to hear half a dozen different kinds of microphones and mic placements all from the comfort of your chair behind the console/DAW. You are occasionally rewarded. But separate from those welcome accidental discoveries, you are giving yourself a chance to learn ever more about the never ending process of tracking. Leaving the microphones set up after each overdub forces you to explore new recording techniques. Perhaps you always record tambourine with a condenser. Good call. Since tambourines are a percussion instrument full of transients and high frequency energy, it makes perfect sense to use the type of microphone best suited to the task (see Nuts & Bolts Part 4, 10/99, and maybe Parts 5 and 6 as well, 11/99 and 12/99). During an intense day of overdubs you may find yourself faced with a tambourine overdub, and all your condenser mics are spoken for. Try a moving coil mic. You might be pleasantly surprised by how accurate that newfangled dynamic you just bought is. Or possibly the more colored sound of El Cheapo Dynamic mic might give the tambourine the edge the tune needs. If the tune is full of high frequency tracks already (cymbals, acoustic guitars, shaker, bright pads, etc.), the tambourine may sound better via a dynamic than a condenser. There's one other reason to follow this approach to overdubs. Some people think it's cooler if they are hanging out in a room full of microphones. They feel more like a power session player, they like the vibe that comes from filling a room with equipment, and they feel like they are getting their money's worth from the studio if most of the mics get used. Whatever floats their boat. But I am certain that this approach to overdubs gives the engineer a lot more pleasure. In developing your familiarity with microphone makes, models, and placement strategies, there is no substitute for experience. The more time you spend in the studio the better you'll get at it. But it is the overdub session most of all that lets you make progress here. Explore multiple recording techniques at once through this don't take it down until the end of the session approach to overdubs. Have fun. Alex Case wonders: in New Zealand and Australia, do they call them underdubs? Request Nuts & Bolts topics via [email protected].
PART 13
BY ALEX CASE
Figure 1 explains what's going on at the console. How do you set up the delay unit itself? Most delays have available the controls shown in Figure 2: input and/or output level, delay time, and regeneration control. Input/output levels are self-explanatory. Delay time can be fixed or variable, using the three modulation controls (rate, depth and shape), as we'll see in later examples. The Regeneration control, sometimes called the feedback control, sends the output of the delay back into itself. That is, a delayed signal can be further delayed by running it back through the delay again. This is how the delay is made to repeat more than once, as happens to the word "hello." As we'll see throughout this and next month's article, the simple controls of Figure 2 empower the delay to become a fantastically diverse signal processor.
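Those controls are easy to demonstrate with a toy delay line. The Python/numpy sketch below is my own illustration of the block diagram, not code from any real unit; watch the regeneration term feed the delayed output back to the input, so each repeat returns at a fraction of the previous one's level and the echo gracefully dies away.

import numpy as np

def delay_effect(x, sr, delay_ms=375.0, regen=0.45, mix=0.35, tail_repeats=8):
    n = max(1, int(sr * delay_ms / 1000.0))   # delay time in samples
    total = len(x) + n * tail_repeats          # leave room for the fading tail
    dry = np.zeros(total)
    dry[: len(x)] = x
    line = np.zeros(total)                     # what enters the delay line
    for i in range(total):
        feedback = line[i - n] if i >= n else 0.0
        line[i] = dry[i] + regen * feedback    # regeneration: output fed back in
    wet = np.zeros(total)
    wet[n:] = line[:-n]                        # the delayed signal
    return (1.0 - mix) * dry + mix * wet

# A single click repeats every delay_ms, each repeat regen times quieter.
click = np.zeros(4410)
click[0] = 1.0
out = delay_effect(click, 44100)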
In the case of the long delay used by Pink Floyd, we need to set the delay time to the appropriate length of time and add enough regeneration to make the echo repeat a few times. The other controls on the delay aren't necessary here. How long is long? 99.9% of the time these echoes should be set to a time that makes musical sense. That is, don't just pick a random delay time, dial in a musical delay time instead. Should it repeat with a quarter note rhythm, an 8th note, a triplet...? One way to do this is simply by listening. Typically, we use the snare to tune a delay, to set a musical delay time. Even if you plan to add delay to the vocal, the piano, or the guitar, it is usually easiest to use the snare for setting the delay time, both because it is a rhythm instrument and because it hits so often. So much of pop music has a back beat, the snare falling regularly on beat two and beat four. Send the snare to the delay and listen to the echo. Starting with a long delay time of about 250 milliseconds, adjust the delay time until it falls onto a musically relevant beat. This can be mighty confusing. It may help at first to pan the snare off to one side and the delay return to the other. It's pretty jarring to hear a delay fall at a non-musical time interval. But when you adjust it into the time of the music, you'll instantly feel it. It is easiest to find a quarter note delay, but with practice and concentration, you can dial in triplet and dotted rhythms too. Sometimes we calculate a delay time instead. How is this calculated? Bear with me here, as some equations are about to appear. If you know the tempo of the song (we'll call it T) in beats per minute (BPM) and you want to calculate the length of a quarter note delay in milliseconds (Q), do the following:
- First convert beats per minute into minutes per beat by taking the reciprocal: T beats per minute becomes 1/T minutes per beat.
- Then convert from minutes to milliseconds: 1/T minutes per beat x 60 seconds per minute x 1,000 milliseconds per second.
- The length of time of a quarter note in milliseconds per beat is: Q = (60 x 1,000)/T = 60,000/T
For example, we know a song with 60 beats per minute ticks like a watch, with a quarter note occurring exactly once per second. Let's try using the equation. T = 60: Q = 60,000/T = 60,000/60 = 1,000 msec (one second) per quarter note. Double the tempo to 120 bpm. T = 120: Q = 60,000/T = 60,000/120 = 500 msec (half a second) per quarter note. I use milliseconds because that is the measurement most delay units expect. Knowing the quarter note delay makes it easy to then calculate the time value of an 8th note, a 16th note, dotted or triplet values, etc. Some newer delay devices, like the TC D-Two reviewed elsewhere in this issue, let you display delay times in either milliseconds or bpm directly, but remember that you need to know before you look at a bpm value if the delay is calculating a quarter note or some other length. In the Comfortably Numb example above, they cleverly use a dotted 8th note delay. It is worth transcribing it for some production insight. The tune is dreamy and lazy in tempo, moving at about 64 bpm. The two syllables of hello are sung as 16th notes. To appreciate the perfection in Pink Floyd's dotted 8th note delay time, let's consider two other, more obvious choices: a quarter note delay or an 8th note delay (consult Table 1).
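Wrapped in a few lines of Python, the same arithmetic extends to the other note values. The note names and the helper function are mine; the math is exactly the Q = 60,000/T above.

def delay_times_ms(bpm):
    q = 60000.0 / bpm                      # quarter note, in milliseconds
    return {
        "quarter": q,
        "eighth": q / 2.0,
        "sixteenth": q / 4.0,
        "dotted eighth": 0.75 * q,
        "eighth triplet": q / 3.0,
    }

print(delay_times_ms(60)["quarter"])        # 1000.0 ms: once per second
print(delay_times_ms(120)["quarter"])       # 500.0 ms
print(delay_times_ms(64)["dotted eighth"])  # ~703 ms, the Comfortably Numb ballpark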
Figure 1a) Constant Send - Use an Aux Send (literally an Echo Send) or Spare Track Bus to send the Vocal to the Delay
The quarter note delay strongly emphasizes the time of the song; it's orderly and persistent. Sing it to yourself as a quarter note delay: Hello x x hello x x hello.... This would make it seem like Pink is being nagged or pushed around. Sing the 8th note delay and you find the repeats fall one after the other, with no rest in between the words: hello hello hello hello hello. This is just plain annoying. The delay time they chose has the effect of inserting a 16th note rest in between each repeat of the word. Hello is sung on the downbeat. The echo never again falls on a down beat. First it anticipates beat two by a 16th note, then it falls on the and of beat two. It then falls a 16th after beat three. Finally, it disappears as the next line is sung. This timing scheme determines that hello won't fall on a beat again until beat four, by which time the next line has begun and hello is no longer audibly repeating. It's really a pattern of three in a song built on four. This guarantees it a dreamy, disorienting feeling. It remains true to the overall numb feeling of our hero Pink, keeping an uncertain, disconnected feel to the story told. The result is a pre-calculated creation of the desired emotional effect. And it's a catchy hook, a real Pink Floyd signature. (They've used this trick before, to devastating effect: Us And Them from Dark Side Of The Moon, Dogs from Animals...)
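You can verify that pattern-of-three feel by simply counting 16th note slots around a bar of four. This little sketch (my own illustration, echoing Table 1 below) starts the word on the downbeat and steps forward by the size of each candidate delay:

SLOTS = ["1", "e", "&", "a", "2", "e", "&", "a",
         "3", "e", "&", "a", "4", "e", "&", "a"]

def repeat_slots(step_16ths, repeats=4):
    # Where each echo of a word sung on the downbeat lands in the bar.
    return [SLOTS[(k * step_16ths) % 16] for k in range(1, repeats + 1)]

print(repeat_slots(4))  # quarter note: ['2', '3', '4', '1'], squarely on the beats
print(repeat_slots(2))  # 8th note: ['&', '2', '&', '3'], relentless, no gaps
print(repeat_slots(3))  # dotted 8th: ['a', '&', 'e', '4'], the a of 1, the and of 2,
                        # a 16th after 3, and only then a beat (4)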
[Table 1: the opening of "Comfortably Numb" on a 16th note grid (1 e & a), comparing where the repeats of "hello" land. The word is sung on beat one (syllables on 1 and e). A quarter note delay repeats it squarely on beats two, three, and four; an 8th note delay repeats it back to back with no rest; the dotted 8th note delay lands on the a of one, the and of two, and a 16th after beat three, never on a downbeat.]
Long delay
It's a funny idea, adding an echo to a singer, piano, guitar, or whatever. It doesn't seem to have any motivation based in reality. The only way to hear an echo on the vocal of a song is to go to a terrible venue (like an ice hockey rink or the Grand Canyon) and listen to music there. The sound of an echo across the entire mix is in fact not a pleasant experience. It is messy and distinctly non-musical. The echoes we find in pop music tend to be used with more restraint. In some cases the echo is added to a single track, not the whole mix. And it's mixed in faintly so as to be almost inaudible. In other cases the delay is added only to key words, phrases, or licks.

Support
If a constant echo is to be added to an entire track, the echo needs to be mixed in almost subliminally, nearly hidden by the other sounds in the mix. A soft echo underneath the lead vocal can give it added richness and support. This approach can strengthen your singer, especially when the melody heads into falsetto territory. Pulsing, subliminal echoes feeding a long reverb can create a soft and delicate sonic foundation under the vocal of a ballad. Then there's the vulnerable rock and roll singer in front of his mate's Marshall stack. After the last chorus the singer naturally wishes to scream "Yeaaaaaaaaah!" and hold it for a couple of bars. It isn't easy to overcome the guitarist's wall of sound. Help the singer out by pumping some in-tempo delays into the scream. The best "Yeaaaaaaaaah!" ever recorded in the history of rock and roll (and I have this from a reliable source) is Roger Daltrey's in "Won't Get Fooled Again" by The Who. The scream occurs right after the reintroduction when those cool keyboards come back in, and right before the line "Meet the new boss. The same as the old boss." This scream is a real rock and roll classic. Listen carefully (especially at the end of the scream) and you'll hear a set of delayed screams underneath. It's Roger Daltrey, only more
Figure 1b) Automated Send - Uses two different Aux Sends or Track Buses. The Send Channel is a Console Strip dedicated to controlling exactly what is sent to the Delay.
so: it's half a dozen Roger Daltreys. This makes quite a statement. You can do this too. All you need is Roger and a long delay with some regeneration.

Slap
A staple of '50s rock is sometimes part of a contemporary mix: slapback echo. You never heard Elvis without it. Solo John Lennon therefore often had it. And guitarists playing the blues tend to like it. Start with a single audible echo somewhere between 90 ms and 200 ms. On a vocal you'll instantly add a distinct retro feeling to the sound. On guitar it starts to feel more live, like you are in the smoky bar yourself. Before the days of digital audio, a common approach to creating this sort of effect was to use a spare analog tape machine as a generator of delay. During mixdown the machine remains in record. Signal is sent from the console to the input of the tape machine in exactly the same way you'd send signal to any other effects unit: using an echo send or spare track bus. That signal is recorded at the tape machine, and milliseconds later it is played back. That is, though the tape machine is recording, it remains in repro mode, so that the output of the tape machine is what it sees at the playback head. As Figure 3 shows, the signal goes in, gets printed onto tape, the tape makes its way from the record head to the playback head (taking time to do so), and finally the signal plays back, delayed. The delay time was fixed by the head spacing and the tape speed, leaving engineers with only the delay choices available at the time. To help out, manufacturers made tape delays, which were tape machines with a loop of tape inside. The spacing between the record and playback heads was adjustable to give you more flexibility in timing the delay. Here in the year 2000 we have more options. Life is good. We can buy a digital delay that is easily adjustable, wonderfully flexible, probably cheaper than a tape machine, and it either fits in one or two rack spaces or lives conveniently in a pull-down menu on our DAW. But if you have a spare tape machine that has perhaps been sitting unused ever since you made the investment in a DAT machine, you've got the opportunity to create tape slap. This can even be a cassette deck, if it has a tape/monitor switch to let you monitor the playback head while you record. Why bother? Some people are simply turned on by anything retro. Tape delay is more trouble, more expensive, and we know some great old records used it. That is reason enough for some engineers. I personally am not into retro for retro's sake; I take the trouble to use a tape delay when I want that sound. An analog tape machine introduces its own subtle color to the sound. Mainly, it tends to add a low frequency hump to the signal, depending on the tape machine, the tape speed, and how it is calibrated. If you push the level to the tape delay into the red, you introduce that signature analog tape compression, and at hotter levels still, analog tape saturation distortion. Tape delay becomes a more complex, very rich effect now. It isn't just a delay; it is a delay plus equalizer plus compressor plus distortion device. This can be darn difficult to simulate digitally. It's sometimes the perfect bit of nuance to make a track special within the mix.

Emphasis
Adding a long delay to a key word, as in the Pink Floyd example, is a way to emphasize a particular word. It can be obvious, like the "hello" that begins the song. Simulating a call and response type of lyric, the delay is often a hook that people sing along with. Alternatively, it can be more subtle. A set of emphasizing delays hits key words throughout "Synchronicity II" on The Police's final album Synchronicity. The first line of every chorus ends with the word "away," which gets a little delay-based boost. Listen also to key end words in the verses: "face," "race," and, um, "crotch." These get a quick dose of several echoes, courtesy of the regeneration control. The Wallflowers' "One Headlight" on Bringing Down the Horse offers a great example of really hiding the delays. Listen carefully to the third verse: the words "turn" and "burn" each get a single subliminal dotted quarter note delay. It's not unusual to low pass filter these sorts of delays. Removing the high frequency content from each repeat makes it sink deeper into the mix. Nice delay units provide you with this filter as an option. Moreover, outboard digital delays often offer the ability to double the delay time by pressing a button labeled X2, meaning "times two." This cuts the sampling rate in half. With half as many samples to keep track of, the amount of time stored in a fixed amount of memory effectively doubles, hence the "times two" label.

Figure 3) Tape Delay - The Tape Machine is always rolling, In Record. The distance between the Record Head and the Playback Head as well as the selected Tape Speed determine the Delay Time.
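As an aside, the X2 arithmetic is easy to verify: a delay's maximum time is its memory in samples divided by the sampling rate. A minimal sketch, with a hypothetical memory size:

MEMORY_SAMPLES = 44_100                  # hypothetical: 1 second of RAM at 44.1 kHz

for fs in (44_100, 22_050):
    max_delay = MEMORY_SAMPLES / fs      # seconds of delay the memory can hold
    bandwidth = fs / 2                   # Nyquist limit at this sampling rate
    print(f"fs = {fs} Hz: max delay {max_delay:.1f} s, bandwidth {bandwidth / 1000:.2f} kHz")

# fs = 44100 Hz: max delay 1.0 s, bandwidth 22.05 kHz
# fs = 22050 Hz: max delay 2.0 s, bandwidth 11.03 kHz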
Halving the sample rate also lowers the upper frequency capability of the digital device. You know this if you are following the sampling rate wars: 44.1 kHz, 48 kHz, 96 kHz and more. The key benefit of an increased sampling rate is improved high frequency resolution (a Nuts & Bolts column dedicated to digital audio is forthcoming). While sampling rates are creeping up on all our digital toys (especially DAWs and multitrack recorders), we sometimes lower the sampling rate on our digital delays. Low pass filtering the delay is often a desirable mix move.

Groove
Beyond support, slap, and emphasis, we sometimes reach for delay to fill in part of the rhythm track of a song. Reggae is famous for its cliché echo. Drum programmers have been known to put an 8th or quarter note delay across the entire groove. Guitarists use delay too. U2's The Edge has made delay a permanent part of his guitar rig. A classic example is apparent from the very
introduction of U2's "Wide Awake" on The Unforgettable Fire. The quarter note triplet delay isn't just an effect, it's part of the riff. The Edge has composed the delay element into the song. Ditto for "In the Name of Love" from the same album. An echo isn't just an echo any more; it's a part of the tune.

Make it short
The delays discussed above are all audible as echoes, repeats of an earlier musical phrase. Delays are sometimes so short that they aren't perceived as echoes. That is, as the delay time falls below about 50 milliseconds, the sound of the delay is no longer an echo. We still hear the delay, but it takes on a new persona as the delay time gets this short. Next month we explore these shorter time effects.

Alex Case wonders: why are flight delays always long delays? Request Nuts & Bolts topics via [email protected].
The humble delay is a powerful production tool. You see, not all delays sound alike. Long delays sound very different from short delays. "No duh, Case," you think to yourself. Let me explain. The sonic difference between a long delay and a short delay isn't just the apparent length of the delay. Long delays are pretty intuitive; they sound like an echo, perhaps repeating a few times. Short delays, on the other hand, aren't heard as echoes. Very short delays have an important spectral effect on the sound. Then there are the delay times in between long and short. They have a more complex, textured effect. So we classify delays into three broad categories, cleverly called long (greater than about 50 ms), medium (between about 20 ms and 50 ms), and short (less than about 20 ms). We covered long delays in last month's column; medium and short delays are so darn cool that we'll dedicate this and next month's columns to them.
Make it short
As delay times fall below about 50 milliseconds, they take on a new persona. If you are actually reading this in your studio, try the following experiment. (Those of you reading this on an airplane or tour bus are out of luck. That'll teach you: never leave your studio, ever.) Patch up a sampler loaded with a variety of sounds, or find a multitrack tape with a good variety of tracks. On your mixer/DAW, set up a delay fed by an aux send that returns to your monitor mix at about the same volume as the synth or original tracks. Pan both the source audio and the return from the delay dead center. Listen carefully to the mix of each source sound when combined with the output of the delay. Start with a bass line. Check out the combination of the bass with a long delay, maybe 200 milliseconds. Yuk. It's a blurry, chaotic mess.
Now start shortening the delay: 100 ms, 80 ms, 60 ms, 20 ms, 10 ms, 5 ms, down to 3 ms and below. Listen carefully as you do this. What the heck is going on? The long delay is just an echo. The very short delays (15 ms and lower) sound strange, sometimes hollow, sometimes boomy. At one short delay setting there's extra low end; then at a slightly different delay time, a lack of low end. This mix of a bass sound with a very short delay sounds like it's been equalized. Gradually lengthen the delay time and listen for the point at which it starts to sound like a distinct echo again. Depending on the bass sound, you may hear the delay separate from the bass into an echo somewhere between about 60 and 80 milliseconds. In between the very long and the very short delay times, well, it's hard to describe. Next try a snare sound. Again start with a long delay and gradually pull it down to a short delay. Again we find it is a distinct echo at long settings. The delay introduces a strange timbral change at short delays and something tough to describe as it transitions between the two. While we're here, do the same experiment with an acoustic or electric guitar track, or a string patch on the sampler. Welcome to the real world of delays. They aren't just for echoes anymore. When delays become shorter than about 50 or 60 milliseconds (depending on the type of sound you are listening to, as demonstrated above) they are no longer repeats or echoes of the sound. The same device that delays a signal starts to change the color, the spectral content, of the signal. Let's check out how it works.

Sine of the times
Consider first a pure tone (no fun to listen to, but helpful to study). Mixing together, at the same volume and pan position, the original signal with a delayed version of itself might have results like the two special cases
shown in Figure 1. (Just look at the solid lines for now; we'll come back to the dashed lines and what they mean in a minute.) If the delay time happens to be exactly the same as the period of the sine wave, we have the constructive interference shown in Figure 1a. That is, if the delay time we set up on our delay processor is exactly equal to the time it takes the sinusoid to go through one cycle, then the two combine cooperatively, and the result is a signal of the same frequency but with twice the amplitude.
The situation in Figure 1b represents another special case. If the delay time happens to be equal to half a period (half the time it takes the sine wave to complete exactly one cycle), then the original sound and the delayed sound move in opposition to each other; they are 180 degrees out of phase. The combination results in zero amplitude: pure silence. If you have access to a sine wave oscillator (either as test equipment or within your synthesizers or computer), give it a try. I recommend 500 Hz as a starting point; it isn't quite as piercing as the standard test tone
of 1000 Hz, and the math is easy. The time it takes a pure 500 Hz tone to complete one cycle is 2 milliseconds (Period = 1/Frequency = 1/500 = 0.002 seconds = 2 milliseconds). So mixing together equal amounts of the original sine wave and a 2 millisecond delayed version will create the case shown in Figure 1a. Set the delay to 1 millisecond, creating the situation of Figure 1b, and you'll find that the sine wave is essentially cancelled. Now look at the dashed-line waveforms in Figure 1. They show that these doublings and cancellations happen at certain other, higher frequencies as well. For any given delay time, certain frequencies line up just right for perfect constructive or destructive interference. The math works out as follows. For a given delay time (t, expressed in seconds, not milliseconds) the frequencies that double are described by an infinite series: 1/t, 2/t, 3/t, .... The frequencies that cancel are: 1/2t, 3/2t, 5/2t, .... Using these equations we confirm that a 1 millisecond delay (t = 0.001 seconds) has peaks at 1000 Hz, 2000 Hz, 3000 Hz, ... and nulls at 500 Hz, 1500 Hz, 2500 Hz, .... This is consistent with our observation in Figure 1b of how a 1 millisecond delay cancels a 500 Hz sine wave. In Figure 1a, the dashed line is the 2/t (constructive) case, and in 1b, the dashed line is the 3/2t (destructive) case. Again, you can see how the peaks and dips in the waves either add up or cancel out. A 2 millisecond delay has amplitude peaks at 500 Hz, 1000 Hz, 1500 Hz, ... and nulls at 250 Hz, 750 Hz, 1250 Hz, .... We looked at the results of this 2 ms delay for the single frequency of 500 Hz in Figure 1a. The math reveals that the peaks and dips happen at several frequencies, not just one. Of course, the only relevant peaks and valleys are those that fall within the audible spectrum, from about 20 Hz to 20,000 Hz. To explore this further, return to your mixer setup combining a sine wave with a delayed version of itself set to the same amplitude. Sweep the sine wave frequency higher and lower, watch your meters, and listen carefully. With the delay fixed at 1 millisecond, for example, sweep the frequency of the sine wave up slowly beginning at about 250 Hz. You'll hear the combination of the delayed and undelayed waves disappear at 500 Hz, reach a peak at 1000 Hz, disappear again at 1500 Hz, reach a peak again at 2000 Hz, and
so on. We've got a delay (not an equalizer) changing the frequency content of our signals. We've got a delay (not a fader or a compressor) changing the loudness of our mix. Let's ride the faders in the following experiment. On your mixer, one fader has the original sine wave at 500 Hz panned to center. The sine wave is also sent to a delay unit set to a delay time of 1 millisecond. Another fader controls the return from this delay, also panned to center. Start with both faders down. Raise the fader of the source signal to a reasonable level. Now raise the second fader. As you make the delayed signal louder, your mix of the two waves gets quieter. As you add more of the delayed sine wave, you get more attenuation of the original sine wave. This is the phenomenon shown in Figure 1b. And the mix reaches its minimum level when the two signals are at equal amplitude.

Time for music
Stupid parlor trick or valuable music production tool? To answer this question we have to get rid of the pure tone (which pretty much never happens in pop music) and hook up an electric guitar (which pretty much always happens in pop music). Run a guitar signal, live, from your sampler, or off tape, through the same setup above. With the delayed and undelayed signals set to the same amplitude, listen to what happens. Can you find a delay time setting that will enable you to completely cancel the guitar sound? Nope. The guitar isn't a pure tone (thank God). It is a complex signal, rich with sound energy at a range of frequencies. No single delay time can cancel out all the frequencies at once. But mixing together the guitar sound with a 1 millisecond delayed version of itself definitely does do something, and what happens is definitely cool. It would be nice to understand what is going on. We already saw a 1 millisecond delay remove the 500 Hz sine wave entirely. In fact, it will do the same thing with guitar (or piano, or didgeridoo, or anything). Musical instruments containing a 500 Hz component within their overall sound will be affected by the short 1 millisecond delay; the 500 Hz portion of their sound can in fact be cancelled. What remains is the tone of the instrument without any sound at 500 Hz. But wait, there's more. Try the 2 millisecond delay. In the case of the 500 Hz sine wave, we saw that the signal got louder when we added this delay. In the case of the guitar, the 500 Hz portion of the signal gets louder. Taking a complex sound like guitar, which has sound energy at a vast range of different frequencies, and mixing in a delayed version of itself at the same amplitude will cut certain frequencies and boost others. This is called comb filtering (see Figure 2), because the alteration in the frequency content of the signal looks like the teeth of a comb.
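Those two infinite series are easy to tabulate. A minimal Python sketch (the function is mine, not any particular device's):

def comb_peaks_and_nulls(delay_ms, f_max=20_000):
    """Peak and null frequencies (Hz) for a given delay, up to f_max."""
    t = delay_ms / 1000.0                     # delay in seconds
    peaks, nulls, n = [], [], 1
    while n / t <= f_max:
        peaks.append(n / t)                   # 1/t, 2/t, 3/t, ...
        nulls.append((2 * n - 1) / (2 * t))   # 1/2t, 3/2t, 5/2t, ...
        n += 1
    return peaks, nulls

peaks, nulls = comb_peaks_and_nulls(1.0)      # the 1 ms delay from the text
print(peaks[:3])    # [1000.0, 2000.0, 3000.0]
print(nulls[:3])    # [500.0, 1500.0, 2500.0]
print(len(peaks), len(nulls))    # 20 and 20: about 40 features below 20 kHz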
Combining a musical waveform with a short delayed version of itself radically modifies the frequency content of the signal. Some frequencies are cancelled, others are doubled. The intermediate frequencies experience something in between outright cancellation and full-on doubling. So short delays are less like echoes and more like equalizers; they are too short to be perceived as echoes. In fact they are so short that they start to interact with discrete
[Figures 2a and 2b: the comb filter frequency response, plotted with a logarithmic frequency axis (2a) and a linear frequency axis (2b).]
components of the overall sound, adding some degree of constructive (i.e. additive) or destructive (i.e. subtractive) interference to different frequencies within the overall sound. Figure 1 demonstrates this for a sine wave. Figure 2 summarizes what happens in the case of a complex wave like guitar, piano, saxophone, vocal, etc. The spectral result is that combining a signal with a delayed version of itself acts like a radical equalization move: a boost here, a cut there, another boost here, another cut there, and so on. In theory you could simulate comb filtering with an equalizer, carefully dialing in the appropriate boosts and cuts. That's the theory. In fact, one rarely could. To fully imitate the comb filter effect that a 1 millisecond delay creates, you'd need an equalizer with about 40 bands of eq (20 cuts and 20 boosts within the audible spectrum). I've never seen so crazy an equalizer (other than in software). In fact, part of the point of using short delays in your mix is to create sounds that you can't create with an equalizer. It's pretty impressive. A single short delay creates a wildly complex eq contour. Short delays offer a very interesting extra detail: they create mathematical, not necessarily musical, changes to the sound. Study Figure 2, comparing part 2a to part 2b. They show the same information. But Figure 2a presents the information with a logarithmic frequency axis. This is the typical way of viewing music, because it's how our ears hear: double the frequency, go up an octave. Double it again, go up another octave, and so on. This relationship is why, for example, you go up a half step with each fret on a guitar but the frets get closer together as you go up the neck. But if you look at comb filtering with a linear (and non-musical) frequency axis, you see that the peaks and dips in the filter are spaced perfectly evenly. It isn't until you view the implications of the short delay in this linear way (Figure 2b) that you see why it is in fact called a comb filter. You'll get a better hairdo using the comb in Figure 2b instead of 2a.
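A few lines of Python make the even-versus-uneven spacing plain, using the nulls of that 1 millisecond delay:

import math

nulls = [500 + 1000 * k for k in range(5)]     # 500, 1500, 2500, 3500, 4500 Hz

for lo, hi in zip(nulls, nulls[1:]):
    semitones = 12 * math.log2(hi / lo)        # musical distance between notches
    print(f"{lo} -> {hi} Hz: {hi - lo} Hz apart, {semitones:.1f} semitones apart")

Every gap is exactly 1000 Hz wide, yet musically the gaps shrink from about 19 semitones down to about 4: even on a linear axis, lopsided on a musical one.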
This highlights another unique feature of using short delays to shape the harmonic content of a sound. The distribution of the cuts and boosts is a mathematical peculiarity, all equally spaced in terms of the linear number of Hz. It is distinctly non-musical. Patch up the comb filter with a special, radical effect in mind. If you want more careful tailoring of the sound, use an equalizer with its logarithmic, more musical controls.

Time for reflection
It's still fair to ask: why all this talk about short delays and their effect on a signal? After all, how often do we use delays in this way? It is essential to understand the sonic implications of these short delays because all too often they simply can't be avoided. Consider a single microphone on a guitar amp: it picks up not just the direct sound but also a reflection off the floor, which arrives a little later, a short acoustic delay riding along with every note.
Fortunately the sound reflected off the floor will also be a little quieter, reducing the comb filter effect. If the floor is carpeted, the comb filtering is a little less pronounced. Place absorption at the point of the reflection, and the comb filtering is even less audible. An important part of the recording craft is learning to minimize the audible magnitude of these reflections by taking advantage of room acoustics in placing musical instruments in the studio and strategically placing absorptive materials around the musical source. This is one approach to capturing a nice sound at the microphone. Better yet, learn to use these reflections and the comb filtering they introduce on purpose. For example, raising the microphone will make the difference in length between the reflected path and the direct path even greater. Raising the microphone therefore lengthens the acoustic delay between the direct sound and the reflected sound, thereby changing the spectral locations of the peaks and valleys of the comb filter effect.
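To put rough numbers on this, here is a sketch with hypothetical distances; only the path-difference arithmetic matters:

SPEED_OF_SOUND = 343.0        # meters per second, at room temperature

direct_path = 1.0             # hypothetical: mic 1.0 m from the source
reflected_path = 1.4          # hypothetical: the floor bounce travels 1.4 m

delay = (reflected_path - direct_path) / SPEED_OF_SOUND    # seconds
first_null = 1 / (2 * delay)  # the lowest cancelled frequency, from 1/2t

print(f"delay = {delay * 1000:.2f} ms, first null near {first_null:.0f} Hz")
# delay = 1.17 ms, first null near 429 Hz

Move the microphone and the path difference changes, so the delay changes, so every peak and null slides to a new frequency.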
Consider the session shown in Figure 3b: two microphones, one track. Here we have a close microphone (probably a dynamic) getting the in-yer-face gritty tone of the amp and a distant microphone capturing some of the liveness and ambience of the room. You might label the fader controlling the close microphone something like "close" and the fader governing the more distant mic something like "room." You adjust the two faders to get the right mix of close and room sounds and print that to a single track of the multitrack. That's only half the story. As you adjust the faders controlling these two microphones, you not only change the close/ambient mix, you also control the amount of comb filtering introduced into the guitar tone. These two mics pick up very similar signals, but at different times. In other words, they act very much like the signal-plus-delay scenario we've been discussing. Moving the distant microphone to a slightly different location is just like changing the time setting on the delay.
A session with several open microphones multiplies the opportunities, each mic capturing the sound radiating out of the amp. The direct sounds into multiple microphones arrive at different times, leading to some amount of comb filtering. The reflections from the various room boundaries into each microphone arrive later than the direct sound, adding still more comb filtering. There is an infinite number of variables in recording. In theory, we recording engineers like this complexity. (For certainty, become a tax accountant. For endless opportunities of exploration, become a recording musician.) Understanding comb filtering is part of how we master the vast recording process.

The myth of the sweet spot
Perhaps you want a tough, heavy, larger than large guitar tone. Maybe a comb filter derived hump at 80 Hz is the ticket. Or should it be 60 Hz? You decide. Explore this issue by moving the microphones around. Place two microphones on the amp as shown in Figure 3b. Keep the close mic fixed and move the distant one slowly. Your goal is to introduce a frequency peak at some powerful low frequency.
Microphone placement draws on physics, musical acoustics, and psychoacoustics. To achieve predictably good sounding results you need recording experience, an understanding of microphone technologies, knowledge of microphone sound qualities, exposure to the various stereo miking techniques, and many other topics. In other words, you need a subscription to Recording. And an essential tool in mic placement is the use of comb filtering for fun and profit. Avoid it as necessary. Or use it on purpose when you can. Electric guitar, which my mom and some scientists would classify as broadband noise, responds well to comb filtering. With energy across a range of frequencies, the peaks and dips of comb filtering offer a distinct, audible sound property to be manipulated. Other instruments reward this kind of experimenting too. Try placing a second (or third, or fourth) microphone on acoustic guitar, piano, anything. Experiment with the comb filter-derived signal processing to get a sound that is natural, or wacky. One day you may find yourself in a predicament: the amp sounds phat out in the live room, but thin in the control room. Perhaps the problem is that, courtesy of the short delay between two microphones, you've got a big dip in frequency right at a key low frequency region. Undo the problem by changing the spectral location of the frequency notch: move a microphone, which changes the delay, which changes the frequencies being cancelled. Every time you record with more than a single microphone, make it part of your routine to listen for the comb filter effect. Check out each mic alone. Then combine them, listening for critical changes in the timbre. What frequency ranges disappear? What frequency ranges get louder? The hope is to find a way to get rid of unwanted or less interesting parts of the sound while emphasizing the more unique and more appealing components of the sound. And make short delays part of your mixing bag of tricks. For subtle tone shaping or a radical special effect, the short delay is a powerful signal processor. Mastering it will lead directly to better sounding recordings.

Alex Case knows the difference between a comb filter and an oil filter. Request Nuts & Bolts topics via [email protected].
BY ALEX CASE
Figure 1 reiterates the controls on a standard digital delay device (we looked at this some in our two earlier columns on delay). This month we focus on the modulation section of this delay unit. These controls let us change the delay time in controlled, clever ways. Figure 2a describes a fixed delay time of 100 milliseconds. It's a slap echo as discussed last month. The delay unit takes whatever signal you send it, holds it for the delay time you set (100 ms), then sends it out. That's it. Throughout the song, all session long, the delay time remains exactly 100 milliseconds; all signals sent to it, guitar, vocal, or didgeridoo, experience the exact same amount of delay. That's a delay without modulation. Some great effects begin when you start using the delay modulation controls. Usually three basic controls are found: Rate, Depth, and Shape. Rate controls how quickly the delay time is changed. Figure 2b gives a graphic representation of what happens when this control is changed. You'll find cases when you'll want to sweep the delay time imperceptibly slowly (the dashed line), and other times when you'll dial in a fast, very audible rate (the solid line). Depth controls how much the delay is modulated. Figure 2c graphically contrasts two different settings. That fixed delay time might be increased and decreased by 5 milliseconds (the dashed line), 50 milliseconds (the solid line), or more. The third control, Shape, describes how the device moves from one delay setting to the next.
As Figure 2d shows, it can sweep in a perfect sinusoidal shape (the dashed line) back and forth between the upper limit and the lower limit you set (those upper and lower limits you set with the Depth control). Or you might want a square wave sort of trajectory between delay times, in which the delay time snaps, instead of sweeps, from one setting to the other. Figure 2d highlights a common feature of the Shape control: it lets you use a shape that is some mixture of the two, part sinusoid, part square. Your delay may also have a random setting in which the delay time moves less orderly between the two delay extremes. Finally, some delay units let you use a combination of all of the above, for example varying the delay time in a slightly random, mostly sinusoidal general pattern. The Shape control lets you mix these options and set a contour for how the delay moves between its highest and lowest settings. These three controls let you take control of the delay and play it like a musical instrument. You set how fast the delay moves (Rate). You set the limits on the range of delay times allowed (Depth). And you determine how the delay moves from its shortest to its longest time (Shape). Flanging, chorusing, doubling, and related effects are now yours for the taking.

Flange
Dialing in a very short delay time and modulating it via these three controls lets you create flanging. The only rule is that the delay time needs to be less than about 20 milliseconds; in fact I recommend starting with a delay time of 10 milliseconds or less. This ensures that audible comb filtering will occur. Set the delay modulation controls to taste.
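As a sketch of how the three controls might interact (the parameter names and the sine/square blend here are illustrative, not any particular unit's implementation):

import numpy as np

def modulated_delay_ms(t, base_ms=10.0, rate_hz=0.5, depth_ms=5.0, shape=0.0):
    """Delay time at time t (seconds); shape=0 is pure sine, 1 is pure square."""
    sine = np.sin(2 * np.pi * rate_hz * t)    # Rate sets the sweep speed
    square = np.sign(sine)                    # snaps between the extremes
    lfo = (1 - shape) * sine + shape * square # Shape blends the trajectories
    return base_ms + depth_ms * lfo           # Depth sets the sweep limits

t = np.linspace(0, 4, 9)                      # a few seconds of LFO motion
print(np.round(modulated_delay_ms(t), 2))     # wobbles between 5 and 15 ms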
The flanger's ringing, whooshing, ear-tickling sound comes from the comb filter effect we discussed last month, in combination with the modulation controls we just went over. Recall from last month what happens when you combine a signal with a delayed version of itself. When the delay time is below about 20 milliseconds, certain frequencies are constructively reinforced. Other frequencies oppose each other, attenuating or even canceling each other out. That's good old comb filtering. Those peaks and valleys in the frequency spectrum introduced by a short delay offer a distinct sound. The specific frequencies where spectral boosts and cuts occur depend on the specific short delay time we use. One delay setting causes the peaks and valleys to occur at one set of frequencies. A different delay setting results in a different set of peaks and valleys (see Figure 3).
The way cool effect comes from modulating the delay time. As your modulated delay sweeps from one delay time to another, the comb filter bumps and notches sweep also. Figure 3 shows the result: flanging. Sometimes you don't hold back: the entire mix gets flanged. Other times you might apply the effect to a single instrument, like the drum kit. You might flange just a single track. Or you might limit the effect to just one section of that track (e.g. only on the bridge). Flange to taste. Pop music is full of examples of flanging. "Then She Appeared" from XTC's last effort as a band, the album Nonsuch, offers a good case study of a gently sweeping flange. Each time the words "then she appeared" occur, a bit of your traditional flange begins, courtesy of a short vocal delay being slowly modulated. In this example the flange comes and goes throughout the song, offering us a good chance to hear the vocals with and without the delay treatment. Feel free to take a more subtle approach, as on Michael Penn's "Cover Up" from the album Resigned. A wacky flange appears on the vocal for the single word "guests" near the end of the second verse. That's it. No more flange on the vocal for the rest of the tune. It's just a pop mix detail to make the arrangement that much more interesting. The flange effect actually softens the rather hard sounding, sibilant, and difficult to sing "sts" at the end of the word "guests." And it makes me smile every time I hear it. The simple effect that comes from mixing in a short, modulated delay offers a broad range of audio effects. Flanging invites your creative exploration.
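For the curious, here is a bare-bones flanger sketch along the lines described: a copy of the signal swept through short delay times and mixed back in equally. All parameter choices are illustrative:

import numpy as np

fs = 44_100
t = np.arange(fs * 2) / fs                    # two seconds of audio
x = 0.1 * np.random.randn(len(t))             # broadband noise flanges vividly

delay_s = 0.005 + 0.004 * np.sin(2 * np.pi * 0.3 * t)    # sweep 1 to 9 ms

# Read each output sample from a fractionally delayed position; linear
# interpolation keeps the sweep smooth.
read_pos = np.clip(np.arange(len(x)) - delay_s * fs, 0, len(x) - 1)
i = read_pos.astype(int)
frac = read_pos - i
delayed = x[i] * (1 - frac) + x[np.minimum(i + 1, len(x) - 1)] * frac

flanged = 0.5 * (x + delayed)    # the equal mix gives the deepest notches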
Double
The first Nuts & Bolts of Delay column (July 2000) focused on long delays, those greater than about 50 milliseconds. We saw how long delays are used for a broad range of echo-based effects.
Short delays of about 20 milliseconds or less create the radical comb filtered effect that, especially when modulated, we call flanging. What goes on in between 20 and 50 milliseconds? Naturally, the best way to answer that question is to listen to the effect of combining a signal with a medium delay somewhere between about 20 and 50 milliseconds. The result is neither an echo nor a flange. The delay is too short to be perceived as an echo; it happens too fast. But the delay is too long to lead to audible comb filtering. What do these medium delays do? Try a 30 millisecond delay on a vocal
track for a good clue. This kind of medium delay sounds a little bit like a double track, like two tracks of the same singer singing the same part. It is a common multitrack production technique to have the singer double a track. You record a killer take, then have the singer record the part again on a separate track along with him or herself. The resulting sound is stronger and richer. It even shimmers a little.
If you are unfamiliar with the sound of doubled vocal tracks, a clean example can be found at the beginning of "You Never Give Me Your Money" on the Beatles' Abbey Road. Verse one begins with solo vocal. On the words "funny paper" the doubling begins. The vocal remains doubled for the next line, and then the harmonies commence. Speaking of harmonies, among other things Roy Thomas Baker is famous for taking doubling to the hilt. Check out the harmonies, doubled and tripled (probably much more), throughout The Cars' first album. For example, listen to the first harmonies on the first song, "Good Times Roll," when they sing the hook "good times roll." It sounds deep and immense; the vocals take on a slick, hyped sound. This layering of tracks borrows from the tradition of forming instrumental sections in orchestras and choirs. The value of having multiple instruments play the same musical part is indescribably magic. Adding more players doesn't just create more volume; the combined sound is rich and ethereal. It transports the listener. A contemporary application of doubling can be found on Macy Gray's "I Try" from the album On How Life Is. Typically, double tracks support the vocal, adding their inexplicable extra bit of polish. They are generally mixed in a little lower in level than the lead vocal, reinforcing the principal track from the center or panned off to each side. The Macy Gray tune turns this on its head. At the chorus, where you need a good strong vocal, the vocal track panned dead center does something quite brave: it all but disappears. The chorus is sung by double tracks panned hard left and right. It's brilliantly done. Rather than supporting the vocal, they become the vocal. The chorus doesn't lose strength, and the tune doesn't sag or lose energy one bit. The doubled tracks, panned hard but mixed aggressively forward, offer a contagious hook that invites the listeners to sing along. Pop vocals, especially background vocals, are the instrument most often doubled, followed fairly closely by the rhythm electric guitar. The same part is recorded on two different tracks. On mixdown, they appear panned to opposite sides of the stereo field. The two parts are nearly identical. Sometimes you switch to a different guitar, a different amp, a different microphone, or slightly change the tone
of the doubling track in some other way. Maybe the only difference between the tracks is the performance. Even rock and roll guitar legends are human (mostly), leading to a pair of guitar tracks that vary slightly in timing. The chugga chugga of the left guitar track is slightly early in one bar, slightly late in the next. Through the interaction of the two guitar tracks, our ears seem to pick up on these subtle delay changes. At times the two tracks are so similar they fuse into one meta instrument. Then one track pulls ahead and we notice it. Then the other track pulls ahead in time and temporarily draws our attention. Doubled guitars are part audio illusion, part audio roller coaster. They are an audio treat, plain and simple. Layering and doubling tracks can be simulated through the use of a medium delay. If it isn't convenient, affordable, or physically possible to have the singer double the track, just run it through a medium delay. Modulate the delay so that the doubled track moves a little. This helps it sound more organic, not like a clone copy of the original track. The result is the beginning of a slick multitrack effect. Add a bit of regeneration (the lower control in Figure 1), and you'll create a few layers of the track underneath the primary one. Some delay units can offer several delay times simultaneously (this is called a multitap delay). Dial in several slightly different delay times in the 20 to 50 millisecond area and you are synthesizing the richness of many layered vocals. Spread them out to different pan positions for a wide wall of vocal sound. Fun stuff. Just make sure the sound is appropriate to the song. The solo folk singer doesn't usually benefit from this treatment. Neither does the jazz trumpet solo. But many pop tunes welcome this as a special effect on lead vocals, backing vocals, keys, strings, pads, bass, and so on.

Chorus
An alternative name for the doubling effect is chorus. The idea is that you could add this delay effect to a single vocal track to simulate
the sound of an entire choir, a chorus, through the use of medium delays. Naturally, stacking up 40 medium delays of a single vocalist will not sound convincingly like a choir of 40 people. Think of it instead as a special electronic effect, not an acoustic simulation. And it isn't just for vocals. John Scofield's trademark tone includes a strong dose of chorus (and distortion, and a sweet guitar, and brilliant playing, among other things). You'll often hear a bit of chorus on the electric bass. This medium delay concoction is a powerful tool in the creation of musical textures.
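Here is a minimal sketch of the simulated doubling described above: a roughly 30 millisecond delay, gently modulated so the "second singer" drifts in time the way a real double would. The numbers are illustrative:

import numpy as np

fs = 44_100
t = np.arange(fs * 2) / fs
vocal = np.sin(2 * np.pi * 220 * t)           # stand-in for a vocal track

delay_s = 0.030 + 0.003 * np.sin(2 * np.pi * 0.4 * t)    # drifts 27 to 33 ms
read_pos = np.clip(np.arange(len(vocal)) - delay_s * fs, 0, len(vocal) - 1)
i = read_pos.astype(int)
frac = read_pos - i
double = vocal[i] * (1 - frac) + vocal[np.minimum(i + 1, len(vocal) - 1)] * frac

# Pan the original left and the synthetic double right, a touch quieter.
left, right = vocal, 0.8 * double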
To see how out-there the effect can be made, spin Throwing Copper by Live and listen to the beginning of the tune "Lightning Crashes." I've no idea what kind of craziness is going on. The guitar sound includes short and medium delays, among the panning, distortion, and phase shifting effects going on. To my ear the delay is being modulated between a short, flanging sort of sound (around 10 milliseconds) up to a longer, chorus sort of delay time (around maybe 40 milliseconds). Note especially the sound of the guitar in the second verse, when the effect and the relatively clean sound of the guitar are mixed together at similar volumes. You get a good taste of chorus. And you get great inspiration to do more with it. A simple delay unit offers a broad range of audio opportunities, representing a nearly infinite number of patches. Short delays create that family of effects called flanging. Medium delays lead to doubling and chorusing. We can take a quick tour of all of the above with a single album: Kick by INXS. To hear a terrific use of flange, listen to "Mediate." They really went for it. True doubling? Listen to "Sweat" and those hard panned questions: "How do you feel? What do you think? Whatcha gonna do?" Finally, the same album gives us a classic application of chorus to an electric guitar. Check out the rhythm guitar in "New Sensation" and the steely cool tone the chorusing adds. Of course, when the delay is long enough it separates from the original
signal and becomes its own perceptible event: an echo. Flange, chorus, and echo: three very different kinds of effects that come from a single kind of effects device, the delay. In fact, there's more. This Nuts & Bolts series will soon discuss reverb and pitch shifting, two more classes of effects that, at their heart, are based on the delay.

Alex Case used to sing in a chorus. He hopes to one day sing in a flange. Request Nuts & Bolts topics via [email protected].
Pitch Shifting
We all know what happens when you play an audio tape at a faster speed than it was recorded: the pitch of the recording goes up. Play it slower than the recorded speed and the pitch goes down. Somewhere in this simple principle lies an opportunity for audio exploration and entertainment. Can it be done digitally? Of course. How is it done? Using a delay. Yes, I said delay. Loyal readers of Nuts & Bolts have just spent the last three issues reading about delays: echo, chorus, flange, and their many cousins. Yet we aren't finished discussing delay, because even pitch shifters are built on this effect. To see how, you've got to put up with a bit of math (which I seem to sneak into every article) and follow along with Figure 1. Figure 1i shows a simple sine wave with a chosen frequency of 250 Hertz. This sine wave completes a cycle every 4 milliseconds, confirmed courtesy of the following familiar equation: Period = the time to complete one cycle = 1/Frequency = 1/250 Hz = 0.004 seconds, or 4 milliseconds. Figure 1i labels some key landmarks during the course of a single cycle. We start the clock at point A. Here, at time equals zero, the sine wave is at an amplitude of exactly zero and is increasing. It reaches its positive peak amplitude at B, taking exactly 1 millisecond to do so. It has an amplitude of zero again at the halfway point (time equals 2 milliseconds), labeled C. This makes it look a lot like point A, but while the amplitude is the same, notably zero, it is decreasing this time.
BY ALEX CASE
D is the point of maximum negative amplitude and occurs 3 milliseconds after the beginning of the cycle. And E has our sine wave returning, 4 milliseconds after point A, to what looks exactly like the starting point: the amplitude is zero, and increasing. Okay so far? We are going to follow these points through some signal processing and move them around a bit.

Fixed delay
Run this sine wave through a fixed delay of, say, 1 millisecond, and you get the situation described in Figure 1ii. Visually, you might say the sine wave slips along the horizontal time axis by 1 millisecond. Looking point by point, Table 1 shows us what happens. Point A originally occurred at a time of zero. Introduce a fixed delay of one millisecond and point A now occurs at time equals one millisecond. The other points follow. Point D, for example, occurred undelayed at a time of 3 milliseconds. After it has been run through a fixed, unchanging delay of one millisecond, point D is forced to occur at a time of 4 milliseconds.

Accelerating delay
Here's the mind bender. What happens if the delay isn't fixed? What if the delay sweeps from a starting time of 1 millisecond and then increases, and increases, and increases...? Table 2 summarizes. Here the delay changes at a rate of one millisecond per millisecond. Say what? For every millisecond that ticks by during the course of this experiment, the delay gets longer by one millisecond. For
example, if at one point the delay is 10 milliseconds, then five milliseconds later the delay unit is operating at 15 milliseconds. If you haven't had the chance to study physics, you might be puzzled by the idea of changing the delay time at a rate of 1 millisecond per millisecond. I find it helpful here to get in my car. Say you are driving at a speed of 85 (edited for your safety) 55 miles per hour and accelerating at a rate of 1 mph per hour (our Canadian neighbors should use kilometers per hour for similar results). That means that for each hour that passes by, your speed increases by one mile per hour. Driving 55 mph now becomes 56 mph an hour from now. Whoa! If you subscribe to the idea that cops won't pull you over for speeding until you are at least 10 miles per hour over the speed limit, then, starting at the speed limit, you can drive most of the way across Texas (it's about 600 miles from Dallas to El Paso) without getting a ticket. Back to the music. Here we are increasing the delay by 1 millisecond each millisecond. And the surprising result is a change in the pitch of the track. Table 2 shows the location of our sine wave landmarks both before and after the introduction of this steadily increasing delay. Point A initially occurs at a time of zero. At this time the delay is also zero. Point A therefore remains unchanged and occurs at time zero. Skip to point C. It originally occurs at a time of two milliseconds. By this time the delay has increased from zero to two milliseconds. This delay of two milliseconds leads point C to finally occur at a time of four milliseconds after the beginning of this experiment. Do the math point by point and you get a sine wave that looks like Figure 1iii. The key landmarks are identified. The result is clearly still a sine wave. But since it takes longer to complete the cycle, we know the pitch has changed. Back to our trusty equations. Looking at the new sine wave at the bottom of Figure 1, let's calculate its frequency. The sine wave in Figure 1iii takes a full 8 milliseconds to complete its cycle. Period = the time to complete one cycle = 8 milliseconds, or 0.008 seconds. Frequency = 1/Period = 1/0.008 = 125 Hertz. That's right: the constantly increasing delay caused the pitch of the signal to change. We sum it up as follows. A 250 Hertz sine wave run through a delay that increases constantly at the rate of 1 millisecond per millisecond will be lowered in pitch to 125 Hertz. Strange, but true. Skipping the details, though you are encouraged to prove these on your own, we find a number of finer points on pitch shifting. Our example demonstrated that
an increasing delay lowers the pitch. It is also true that a decreasing delay raises the pitch. In addition, our example found that changing the delay at a rate of 1 millisecond per millisecond moved the pitch by an octave. It is also possible to change the pitch by two octaves, or a minor third, or a perfect fifth, whatever you desire; one need only change the delay at the correct rate. So the underlying methodology of pitch shifters is revealed. A pitch shifter is a device that changes a delay in specific, controlled ways so as to allow the user to affect the pitch of the audio. Naturally, it ain't easy. We've got a key problem. Return to our example in which we lowered the 250 Hz sine wave by an octave through a steadily increasing delay. If we imagine applying this effect to an entire three and a half minute tune, not just a single cycle of a sine wave, we find we are increasing the delay from a starting point of one millisecond to a final delay time of 210,000 milliseconds (3 1/2 minutes equals 210,000 milliseconds). That is, at the start of the tune we add an increasing delay: 1 ms, then 2 ms, and so on. By the end of the tune, we are adding a delay of 210,000 milliseconds. This highlights two problems.
First, we need a very long delay. Most delays are capable of a one second delay (1,000 milliseconds) at the most. Super cool mega turbo delays might go up to maybe 10 seconds of delay. But a delay of hundreds of thousands of milliseconds (hundreds of seconds) is a lot of signal processing horsepower that is rarely available; RAM isn't that cheap. Second, our song, which used to be 3 1/2 minutes long, doubles in length to seven minutes as we lower the pitch by one octave. Consider the last sound at the very end of the song. Before pitch shifting, it occurred 3 1/2 minutes (210,000 milliseconds) after the beginning of the song. By this time our pitch shifting delay has increased from 1 millisecond to 210,000 milliseconds. Therefore the final sound of the pitch shifted song occurs at 210,000 milliseconds (original time) plus 210,000 milliseconds (the length of the delay). That is, the song now ends 420,000 milliseconds (that's seven minutes!) after it began. The 3 1/2 minute song is lowered an octave but doubled in length. Simply increasing the delay forever as above is exactly like playing a tape back at half the speed it was
recorded. The pitch goes down and the song gets longer. Where pitch shifting signal processors differentiate themselves from tape speed tricks is in their cleverness here. Digital delays can be manipulated to always increase, but also to reset themselves. In our sine wave example, what happens if our digital delay increases at a rate of exactly one millisecond per millisecond but never goes over 50 milliseconds in total delay? That is, every time the delay reaches 50 milliseconds it resets itself to a delay of zero and continues increasing from this new delay time at the same rate of one millisecond per millisecond. The result is pitch shifting that never uses too much delay and never makes the song more than 50 milliseconds longer than the unshifted version. After all, our analysis showed it was the rate of change of the delay that led to pitch shifting, not the absolute delay time itself. Any delay time that increases at a rate of 1 millisecond per millisecond will lower the audio by one octave. Any delay time. So why not keep it a small delay time? The devil is in the details. Getting the pitch shifter to reset itself in this way without being noticeable to the listener isn't easy. It is a problem solved by clever software engineers who find ways to make this inaudible.
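The core claim is easy to verify numerically. This sketch ramps a delay so the read point crawls through a 250 Hz sine at half speed. (The column counts the ramp against the input clock, 1 ms per ms; measured against the output clock that same ramp is half a sample per sample.)

import numpy as np

fs = 44_100
n = np.arange(fs)                        # one second of output samples
x = np.sin(2 * np.pi * 250 * n / fs)     # the 250 Hz sine of Figure 1

delay = 0.5 * n                          # ramping delay, in samples
read_pos = n - delay                     # equals n/2: half-speed playback
i = read_pos.astype(int)
frac = read_pos - i
y = x[i] * (1 - frac) + x[np.minimum(i + 1, len(x) - 1)] * frac

# Estimate the output frequency by counting zero crossings: about 125 Hz.
crossings = np.count_nonzero(np.signbit(y[1:]) != np.signbit(y[:-1]))
print(crossings / 2)                     # roughly 125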
Older pitch shifters glitched as they tried to return to the original delay time. These days (we are lucky to be alive in audio in the year 2000) those glitches are mostly gone. Today we simply reach for a device labeled "pitch shifter," dial in the desired settings (gimme a shift up of a major third mixed in with a shift down of a perfect fourth), and get to work. Life is good.

Side effects
Before we get into the special effects we create with pitch shifting devices, it is worth noticing that pitch shifting is a natural part of some effects we've already investigated here in the Nuts & Bolts series. Recall the chorus effect that comes from adding a slowly modulated delay of about 30 milliseconds. As you listen to the richness that the chorus effect adds to a vocal or guitar, listen for a subtle amount of pitch shifting. That's right: pitch shifting is a component of that effect we call chorus. Since a chorus pedal relies on a modulating delay, it introduces a small amount of pitch shifting. As the delay time sweeps up, the pitch is slightly lowered. As the delay time is then swept down, the pitch is raised, ever so slightly. Repeat until thickened.

Special effects
The Nuts & Bolts review of a basic pop mix, Mixing by Numbers in the 4/00 issue, introduced a common effect built in part on pitch shifting: the spreader. A quick review of this effect: the spreader is a patch that enables you to take a mono signal and make it a little more stereo-like. You spread a single track out by sending it through two delays and two pitch shifters. The delays are kept short, each set to a different value somewhere between about 15 and 50 milliseconds. Too short and the effect becomes a flange/comb filter (as we discussed last month). Too long and the delays stick out as distinct echoes. So our window for acceptable delay times in this effect is between about 15 and 50 milliseconds.
In using a spreader, the return of one delay output is panned left while the other is panned right. The idea is that these quick delays add a dose of support to the original monophonic track. In effect, these two short delays simulate some early sound reflections that we would hear if we played the sound in a real room. The spreader takes a single mono sound and sends it to two slightly different short delays to simulate reflections coming from the left and right. That's only half the story. The effect is taken to the next level courtesy of some pitch shifting. Shift each of the delayed signals ever so slightly, and the formerly boring mono signal becomes a much more immersive, interesting loudspeaker creation. Detune each delay by a nearly imperceptible amount, maybe 5 to 15 cents. The goal of the spreader is to create a stereo sort of effect. As a result, we try to keep the signal processing on the left and right sides ever so slightly different from each other. Just as we dialed in unique delay times for each side of this effect, we dial in different pitch shift amounts as well; maybe the left side goes up 8 cents while the right side goes down 8 cents. Like so much of what we do in recording and mixing pop music, the effect has no basis in reality. By adding delay and pitch shifting, we aren't just simulating early reflections from room surfaces anymore. The spreader makes use of our signal processing equipment (delay and pitch shifting) to create a big stereo sound that only exists in loudspeaker music. This sort of thing doesn't happen in symphony halls, opera houses, stadiums, or pubs. It's a studio creation, plain and simple. Take this effect further and you end up with what I think of as a thickener. Why limit the patch to two delays and two pitch changes? What if you have the signal processing horsepower in your DAW or in your racks of gear to chain together eight or more delays and pitch shifts? Try it. While it'll sound unnatural when used on vocals, many keyboard parts respond well to the thickening treatment. Modulate those delays like a chorus and, guess what? More pitch shifting is introduced. Added in small, careful doses, this densely packed signal of supportive, slightly out-of-tune delays will strengthen and widen the loudspeaker illusion of the track.
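Here is a sketch of a basic spreader along those lines. The resampling detune below is a crude stand-in for a real pitch shifter (it slightly changes the track length, which a proper unit avoids), and every number is illustrative:

import numpy as np

fs = 44_100
mono = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)   # stand-in mono track

def detune(x, cents):
    """Crude fixed pitch shift by resampling; fine for a sketch."""
    ratio = 2 ** (cents / 1200.0)        # 100 cents = one semitone
    pos = np.arange(0, len(x) - 1, ratio)
    i = pos.astype(int)
    frac = pos - i
    return x[i] * (1 - frac) + x[i + 1] * frac

def delay_by_ms(x, ms):
    return np.concatenate([np.zeros(int(fs * ms / 1000)), x])

left = delay_by_ms(detune(mono, +8), 23)    # up 8 cents, 23 ms late
right = delay_by_ms(detune(mono, -8), 31)   # down 8 cents, 31 ms late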
Big time
Enough with these subtle pitch changes. Let's add a serious amount of pitch shifting. Hammond B3 organs and many blues guitars are often sent through a rather wacky device: the Leslie cabinet. The Leslie is a hybrid effect that is built on pitch shifting, volume fluctuation, and often a good dose of tube overdrive distortion. It is essentially a tricked out guitar or keyboard amp in which the speakers rotate. Honest. The high frequency driver of a Leslie is horn loaded, and the horn spins around within the amp. Crazy as it sounds, the engineers who came up with this were really thinking. It would be very difficult to spin the large woofer to continue the effect at low frequencies. Instead they enclosed the woofer inside a drum. The drum has holes in it and rotates. The result is a low frequency simulation of what the Leslie is doing with the horns at higher frequencies, er, well, sort of. The Leslie is too funky a device to cover in detail now, but we mention it because it is part of our pitch shifting toolkit. So what's it sound like? With the drum and horn rotating, the loudness of the music increases and decreases at any given listener position: amplitude modulation. And with the high frequency horn spinning by, a Doppler effect is created: the pitch increases as the horn comes toward the listener/microphone and then decreases as the horn travels away. The typical example used in the study of the Doppler effect is a train going by, horns ablaze. That classic sound of the pitch dropping as the train passes is based on this principle. Sound sources approaching with any appreciable velocity will increase the perceived pitch of the sound. As the sound source departs, the pitch similarly decreases. The net result of the Leslie system, then, is a unique fluttery and wobbly sound. The Leslie effect is used wherever B3s and their ilk are used. Typically offering two speeds of rotation, it lets you hear a fast Leslie and a slow Leslie effect, as well as the acceleration or deceleration in between. Listen to the single note organ line at the introduction to "Time and Time Again" on the Counting Crows' first record, August and Everything After. The high note enters with a fast rotating Leslie. As the line descends, the speed is reduced. Listen carefully throughout this song, this album, and other
B3-centric tunes and you'll hear the Leslie pitch shifting vocabulary that keyboardists love. Of course, you can apply it to any track you like (guitar, vocal, oboe) if you have the device or one of its many imitators or simulators.

The hazard with an obvious pitch shift is that it can be hard to get away with musically. You've heard special effects in the movies and on some records, where a vocal is shifted up or down by an octave or more. Too low and it conjures up images of death robots invading the mix to eat Shreveport. Too high and your singer becomes a gremlin-on-helium sort of disaster.
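And here is that promised back-of-the-envelope Doppler calculation for the Leslie. The horn tip radius and rotor speed below are invented but plausible figures, just to show the scale of the wobble:

```python
import math

c = 343.0      # speed of sound in air, m/s
rpm = 400.0    # fast Leslie rotor speed (assumed)
radius = 0.15  # horn tip radius in meters (assumed)
v = 2 * math.pi * radius * rpm / 60.0   # tip speed, about 6.3 m/s

# Doppler ratios for a source swinging toward and away from the listener:
toward = c / (c - v)
away = c / (c + v)

cents = lambda ratio: 1200 * math.log2(ratio)
print(f"pitch swing: {cents(toward):+.0f} to {cents(away):+.0f} cents")
# -> roughly +32 to -31 cents, several times per second: a modest shift,
#    but plainly audible as that fluttery, wobbly Leslie sound.
```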
In the hands of talented musicians, aggressive pitch shifting really works. TANKAPA (The Artist Now Known As Prince Again) lowers the pitch of the lead vocal track and takes on an entirely new persona in the song "Bob George" from The Black Album. The effect is obvious. The result is fantastic. No effort was made to hide the effect in the bass line of "Sledgehammer" on Peter Gabriel's classic So. The entire bass track seems to include the bass plus the bass dropped an entire octave. And the octave-down bass line is mixed right up there with the original bass. Nothing subtle about it.

You can even use a pitch shifter to add two-, three-, or four-part harmony if you are so inclined. But get out your arranging book, because the pitch shifter makes it easy to add a constant, never ending, never changing, just plain annoying harmony line. Dial in a pitch change that is a major third up and add it to the lead vocal. If the song is entirely diatonic within a major key and is a very happy song, I mean very happy, this might work. Otherwise it is going to sound cloyingly sweet, like adding maple syrup to the ice cream you put on top of your shoo-fly pie. The trick to creating harmonies using pitch shifting is to compose musical harmonies. And a static pitch shift will rarely cut it.

Fortunately, devices and software plug-ins to facilitate this abound (probably the most famous is the DigiTech Vocalist series). The pitch shifting can essentially be tied to MIDI note commands, enabling you to dictate the harmonies from your MIDI controller. The pitch shifter is processing the vocal line on tape or disk according to the notes you play on the keyboard. The result is a harmony or countermelody line with all the harmony and dissonance you desire. It's built on a single vocal track, and relies almost entirely on good sounding pitch shifting.

Go beyond harmonies. Use pitch shifting to turn a single note into an entire chord. String patches can sometimes be made to sound more orchestral with the judicious addition of some perfect octave and perfect fifth pitch shifting (above and/or below) to the patch.
And don't stop with simple intervals. Chords loaded with tensions are okay too, used well. Yes put it front and center in "Owner of a Lonely Heart" on the album 90125. Single-note guitar lines are transformed into something more magic and less guitar-like using pitch shifters to create the other notes.

A final obvious pitch shifting effect worth mentioning is the stop tape effect. As analog tape risks extinction, this effect may soon be lost on the next generation of recording musicians. When an analog tape is stopped, it doesn't stop instantly; it takes an instant to decelerate. Large reels of tape, like two inch 24-track, are pretty darn heavy. It takes time to stop these large reels from spinning. If you monitor the tape while it tries to stop (and many fancy machines resist this, automatically muting to
avoid the distraction this causes during a session), you hear the tape slow to a stop. Schlump. The pitch dives down as the tape stops. This is sometimes a musical effect. And it's not just for analog tape, as Garbage demonstrates via a Pro Tools effect between the bridge and the third chorus of "I Think I'm Paranoid" on their second album.

Surgical effects
Pitch shifting is also used to zoom in and fix a problematic note. We've all been there. In the old days of multitrack production (about a year ago), we used to sample the bad note. Then we tuned it up using a pitch shifter. It was raised or lowered to taste. Finally, the sampled and pitch shifted note was re-recorded back onto the multitrack. With the problematic note shifted to pitch perfection, no one was the wiser.

That was then. Now, clever software taking advantage of powerful hardware permits pitch shifting to be done automatically (Antares Auto-Tune [hardware and software versions], Wave Mechanics Pitch Doctor, TC Electronic Intonator, etc.). That is, the effects device can monitor the pitch of a vocal, violin, or didgeridoo. When it detects a sharp or flat note, it shifts the pitch automatically by the amount necessary to restore tuning. Wow. And it really works.

But please be careful with these devices. First, don't over polish your product. Pitch shifting everything into perfect tune is rarely desirable. Vibrato is an obvious example of the de-tuning of an instrument on purpose. And if Bob Dylan had been pitch shifted into perfect pitch, where would folk music be now? There is a lot to be said for a musical amount of out-of-tuneness. Remove all the warts, and you risk removing a lot of emotion from the performance.
Second, don't expect to create an opera singer out of a lounge crooner. There is no replacement for actual musical ability. If the bass player can't play a fretless, give her one with those pitch-certain things called frets. If the violin player can't control his intonation, hire one who can. Don't expect to rescue poor musicianship with automatic pitch correction. People want to hear your music, not your effects rack.

Out of time
This month represents our fourth month of discussion on delay. Are we done yet? Naturally, no. We continue our tour of the delay in a future Nuts & Bolts installment when we take a detailed look at reverb.

Alex Case strapped his pitch shifter to his gear shifter and drives by chanting. Request Nuts & Bolts topics via [email protected].
BY ALEX CASE
Since the reflected sounds travel along a longer path than the direct sound, they reach the receiver after the direct sound. Bouncing off walls, floor, and ceiling (and furniture, music stands, and other musicians), they also generally arrive at a different angle than the direct sound.
Figure 1: Room reflections. The heavy line is the direct sound, solid lines are single bounces, dashed lines double bounces, and the dotted line shows one of millions of multiple-bounce paths that make up the reverberant sound.
Finally, due to the energy the sound loses as it travels through the air and bounces off various room boundary surfaces, the amplitude of the signal at different frequencies changes. Air and fuzzy surfaces (like carpet, fiberglass, and foam) tend to absorb high frequencies. Flexible surfaces (like very large windows or panels of wood) tend to absorb a good amount of low frequencies. All said, the room introduces delay, changes the angle of arrival, and manipulates the loudness and spectral content of a signal.

Time
How delayed are the reflections? It depends on the room size and geometry. The reflections in larger rooms take longer to reach the listener than the reflections in smaller rooms. If the source or receiver is particularly close to a room surface, that changes the pattern of reflections. Listening to a sound followed immediately by its reflections calls for a quick review of the Nuts & Bolts Delay Trilogy just completed (July-October 2000). We discussed how a delay of about 5 milliseconds introduces comb filtering when combined, in approximately equal parts, with the undelayed signal. Because sound travels at roughly one foot per millisecond, that means that a signal whose reflected path is about five feet longer than the direct path will create comb filtering. Right?

Not necessarily. Try taking a harmonically rich sound like a piano patch or track. Send it to a short delay of about 5 milliseconds. Monitor both at about the same volume. With both signals panned to the same location in the stereo landscape, hard left for example, the comb filter alteration to the frequency content of the signal is unmistakable. Now pan the delay to hard right. Presto: the comb filtering seems to disappear. Instead we get a localization cue: the delay seems to shift the image of the piano toward the undelayed signal. Follow that thought and slowly decrease the delay time. As the delay time approaches zero, the placement of the stereo image heads toward the center. All the while, the comb filtering effect is gone. This points out an enigmatic property of short delays: the angle of arrival matters! Short delays directly combined with their undelayed brethren will create comb filtering. Short delays reaching the listener from a very different direction do no such thing.
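The arithmetic behind that comb is compact enough to check for yourself. A small Python sketch, with test frequencies chosen only for illustration:

```python
import numpy as np

delay_s = 0.005                              # the 5 ms delay from the example
freqs = np.array([100, 200, 300, 400, 500])  # test tones in Hz

# Summing a signal with a copy delayed by t has the magnitude response
# |1 + exp(-j*2*pi*f*t)| = 2*|cos(pi*f*t)|: +6 dB peaks and deep nulls.
mag = 2 * np.abs(np.cos(np.pi * freqs * delay_s))
gain_db = 20 * np.log10(np.maximum(mag, 1e-12))  # clip the minus-infinity nulls

for f, db in zip(freqs, gain_db):
    print(f"{f:4d} Hz: {db:+8.1f} dB")
# Nulls land at odd multiples of 1/(2 * 0.005) = 100 Hz (100, 300, 500...),
# peaks at the even multiples. That regular spacing is the "comb."
```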
In our look at reverberation, this leads us to ask: where do the room reflections come from? This too depends on the physical geometry of the room. You need only patience and a ruler to figure out which reflections reach the receiver. Figure 1 shows the first handful or so of reflections. Some reflections we hear after a single bounce off a surface. Other reflections strike two, three, or more surfaces before finally reaching our ears. The direction from which they come seems to be a lot about luck, statistics, and/or the physical geometry of the room.

Yet our personal audio analysis systems (ears and brains) can make sense of this. Though it isn't intuitive, it is important to know that we aren't distracted or confused by these reflections. Let's make the source a singer, singing your recently penned tune, "Insulate the Attic." We zoom in on the first word of the catchy chorus for this hit-waiting-to-be-discovered. She sings "Fiberglass..." and for the sake of analysis we slow time down like a Hollywood movie. The receiver hears the word first direct from the source: "Fiberglass." Then a reflected version of the word arrives from one side, then the other, then from behind: fiberglass... fiberglass... fiberglass.

This ought to be confusing, but it isn't. As you know from listening to music and conversations in real spaces, the reflections coming from all around do not stop us from knowing, at all times, where the singer is and what she's singing. Researchers have teased this out of various experiments. We localize the source based on the angle of arrival of the first waveform and the pattern of reflections that immediately follow; we synthesize an opinion about the room in which the sound event happens based on the amplitude, quality, and angle of arrival patterns of these supporting reflections.

It's tempting perhaps to think that the reflections from all around are ignored so as not to confuse our personal audio analysis system. Quite the opposite. Sounds without the support of reflections are difficult to listen to, difficult to localize, and sound just plain strange. We don't hear sounds in anechoic chambers very often, after all, so our hearing mechanism isn't tailored to that experience. If you've heard sound in an anechoic environment, you know it's unnerving and a little confusing. In fact, research has shown our localization abilities suffer without some additional reflections, even though they come from directions different than the direct sound. Using amplitude, time of arrival, and spectral content, we make use of the clues these reflected sound waves offer. Our personal audio analysis system has developed the ability to absorb a complex sound field, extract the direct sound, incorporate the reflected sound field, and add it all up into a complete perception of a sound in a space. Pretty darn cool. The room acts like a signal processor: music in, music plus effects out.

Synthesized space
To create the sound of a room without the use of an actual room, one need only assemble the set of reflections a room would add to a direct sound. A grotesque oversimplification. But even simplified, the illusion works. Each reflected sound suffers a bit of delay and attenuation having traveled farther than the direct sound, and a bit of equalization due to air and boundary energy absorption. The only processes at work are changes of amplitude, eq, delay, and angle of arrival. Good news, because effects racks and pull-down menus are full of that sort of capability; traditional studio signal processing can, cleverly employed, simulate reverberation. Rackmount units display the word "hall" and do a fun job of sounding like one.

Digital reverberators are, to summarize, very shrewd volume adjusters, spectrum manipulators, changeable panners, and variable delays. An audio waveform goes in and triggers a nearly infinite set of faded, equalized, panned, and delayed versions of itself. Naturally, some equalizers sound better than others, and some delay units sound better than others. And the whole algorithm used to simulate the complex pattern of sound energy is going to have an audible effect on the sound of the reverb. Not surprisingly then, some reverb devices sound better than others. At the very least, most reverb devices sound different from most others on the market. Each manufacturer offers its own approach, creating its own sound; our studios benefit from having many different reverbs. There is no single best, just a broad palette of reverbs awaiting our creative use.

Reverbus ex machina
Living as we do at the edge of a new millennium in a thriving digital economy full of dot com mirages, we may forget about life before audio was digitized. But somewhere between the time all those critters boarded ship with Noah and the present day, we had a period of non-digital audio. While it is fairly trivial today for a computer to do a decent job simulating the sonic character of a space, it is very difficult to do so with analog electronics. Resourceful equipment designers looked for physical systems that could sustain a sound like a decaying acoustic space would. They found some success using two devices: the spring and the plate.

The spring reverb offers an intuitive approach. Initiate subtle vibration in a spring using your audio waveform, and boing, let it go. The spring continues to vibrate for a time, a bit like a hall sustains a single violin note. Well, sorta. The fact is, springs don't exactly behave like rooms. They are elastic and can respond to music, but the simulation ends there. However, the musical value doesn't! Just because a spring doesn't sound like the Musikvereinssaal in Vienna doesn't mean it isn't good enough for Jimi or Stevie or You. Leo Fender put
spring reverbs in electric guitar amps, and there's been no turning back. Spring reverb rings with its own distinct character. Subtly used, it fills in underneath a track, adding support and shimmer. Overdriven, it crashes and wobbles (ever move a guitar amp while it was cranked, and... crwuwawuwawoing... the spring gets jostled?).

Taking the spring idea and making it two-dimensional leads us to the plate reverb. This device is essentially a sheet of metal with a driver attached to it to initiate vibration and a sensor, or two, or more to pick up the decay that ensues. (Will surround sound lead to multichannel plates? I fear the answer is yes.) The plate is another mechanical simulation of an acoustic space. Bang on a sheet of metal and it rings for a while, again somewhat like the solo violin in a symphony hall. And like the spring, as a simulation of an actual space the plate falls short. But as a pop music effect it is a sweet success.

Sweet sound, funky smell
When an actual large hall isn't feasible and a spring or plate reverb isn't available, there's always the bathroom. Large spaces are reverberant in part because they are large spaces (I get paid to say this sort of thing?). That is, the reverberance of a space is directly proportional to the size of the room. Make the room wider, longer, and/or higher, and the reverb time increases (because the reflections have farther to travel). The other key driver of reverberation in a physical space is the absorptivity of the room surfaces. Absorptive materials on the floor, walls, or ceiling will lower the reverb time. Hard, reflective surfaces increase the reverb time.

The trouble with using reverb from a hall during a studio production is that there isn't usually a hall around. So lacking a large space with its associated long reverberation time, we go to the only room around with really hard, shiny surfaces: the tiled bathroom. Because the tiles reflect sound energy more than your typical room finish treatments like gypsum wall board or carpeting, the bathroom has a little reverberant kick. Kitchens are sometimes a close second place. Rarely carpeted, they have a decent amount of hard surfaces: countertops, appliances, wood cabinets, and such. Elevator shafts and high-rise fire stairs have contributed a big reverb to the studio that could get away with it.
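To put rough numbers on both drivers (size and absorptivity), acousticians reach for Sabine's classic approximation, RT60 = 0.161 V / A. The bathroom dimensions and absorption coefficients below are invented, and Sabine's formula gets optimistic in very live rooms, but the trend is the point:

```python
# Sabine: RT60 = 0.161 * V / A, with V in cubic meters and A the total
# absorption (surface area times average absorption coefficient).
length, width, height = 2.5, 2.0, 2.4   # a small bathroom (assumed dimensions)
V = length * width * height             # 12 cubic meters
S = 2 * (length * width + length * height + width * height)  # 31.6 square meters

for finish, alpha in [("tile", 0.02), ("carpet and curtains", 0.40)]:
    A = S * alpha                       # more absorption, shorter reverb
    print(f"{finish:>20}: RT60 = {0.161 * V / A:.2f} s")
# Tile comes out seconds long; soften the surfaces and it collapses to a
# fraction of a second. Same room, wildly different reverb.
```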
Naturally, some studios built reverberant bathrooms on purpose. Lose the plumbing fixtures and make the room a little bigger and you've got a reverb chamber. Put in loudspeakers (inputs) and microphones (outputs) and you've got a physical space reverberator. What it lacks in physical volume (it's nowhere near the size of an opera house) it makes up for in highly reflective surfaces of stone, tile, cement, beer bottles, and such. The result, of course, isn't an opera house simulation on the cheap, but a wholly different kind of reverberation. Chambers offer their own unique signature to the audio sent to them. The art of building and maintaining them has distinguished a select few studios that get bookings partly for the sound of their chambers.
Breaking it on down
Reverb in all its flavors (physical performance spaces, digital effects devices, mechanical resonating systems, and acoustic chambers) can be broken down into a few parameters. But I must preface it with this: all reverbs offer unique and subtle sonic contributions to your audio that defy measurement. Take two different reverbs and set them to the same patch, dialing in the same values for all their adjustable parameters, and they'll still sound different. No symphony hall sounds exactly the same as any other. No plate sounds exactly like any other. Always listen for what you like; it's just the sound of the added, synthesized ambience that matters, not the reverb time, not the algorithm, and certainly not the reverb make and model number.

Reverb time
Easily the most cited descriptor of reverberation is reverb time. Sometimes called RT60, reverb time measures the number of seconds necessary for the sound in a room to decay by 60 dB. Practically and historically speaking, RT60 measures how long a sound lingers in a room after an impulse (e.g. a sharp clap, gun shot, balloon pop, or electronically synthesized click) until you can't hear it anymore (roughly 60 dB quieter). Some of the most famous symphonic halls have reverb times averaging just under two seconds; opera houses extract better speech intelligibility by shortening reverberation to just over one second. Digital reverbs, springs, and plates empower you to dial in any reverb time you like. Have fun.

Spectrum
Listen, in your mind, to the sound of a room decaying. Cut that sound up into different frequency ranges and create a reverb time measurement for each spectral region of interest. RT60 typically refers to the decay of the octave band centered on 1 kHz. But there is nothing stopping us from measuring the RT60 at the octave bands below and above 1 kHz. In fact, architectural acousticians measure and calculate the reverb time at all audible frequency bands. Like using a tone control, acousticians design spaces with different reverb times at different frequencies to satisfy musical taste, not scientific purity. Actually, halls are distinctly not flat in the spectral content of their reverb. Halls for classical and romantic music repertoire typically have low frequency reverb times that are a bit longer than the mid frequency reverb times. This gives the halls a degree of warmth that seems to support the type of music that will be played there. You'll see this expressed in acoustics literature and reverb signal processor manuals as Bass Ratio. Bass Ratio mathematically compares two octaves of low frequency reverb (125 Hz and 250 Hz) to two octaves of mid frequency reverb (500 Hz and 1000 Hz). The resulting ratio quantifies a hall's warmth, what we might call its Phatness. Hall designers are finding what works for a Gorecki symphony and a Puccini opera.
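The Bass Ratio arithmetic itself is trivial once you have per-band reverb times; the RT60 values here are invented for illustration:

```python
# Bass Ratio = (RT60@125 + RT60@250) / (RT60@500 + RT60@1000)
rt60 = {125: 2.3, 250: 2.2, 500: 1.9, 1000: 1.8}  # seconds, per octave band

bass_ratio = (rt60[125] + rt60[250]) / (rt60[500] + rt60[1000])
print(f"Bass Ratio = {bass_ratio:.2f}")  # -> 1.22: the low end lingers, a warm hall
```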
But only you know the color of reverberation that works for tonight's track, "Insulate the Attic." Experiment with the tone color of your reverb by adjusting its Bass Ratio if it offers one. A Bass Ratio of 1.2 will warm up the reverberant wash of ambience by telling the reverb to create a low frequency reverb time that is 1.2 times as long as the mid frequency reverb time. Some reverbs don't offer bass ratio control. Shape the color of your reverb by using eq on the reverb returns on your mixer or on the send to the reverb. Control the low end to add warmth, not muddiness. Or if you are going for a brighter reverb (why not?), find some magic shimmer and airiness but avoid painful sizzle and sharpness.

Predelay and Early Reflections
Beyond the length and color of the reverb, two other fundamental properties of reverberation are the time it begins (predelay) and the timing of the first few bounces, single bounces from the source to a wall to the listener (early reflections). If you use a spring or plate reverb, the wash of decay commences the instant your sound starts. In a large hall (or gymnasium, or canyon, or domed stadium) it takes an instant before the reverb begins, and there are one or more distinct bounces before the wash of reverb sets in.

By adjusting the parameter identified on most devices as Predelay, we can adjust the time gap between sound start and reverb start. Predelay simply inserts a delay between the direct sound and the reverberation algorithm. In the world of digital audio, adding a delay is fairly trivial, so predelay controls are found on almost any digital reverb device. In the realm of analog audio, delay isn't so easy. Plates and springs therefore rarely give you this feature. When using a plate or spring reverb (or a bathroom), have the best of both worlds by inserting a digital delay on your reverb send so that you can add a controllable amount of delay before the reverb begins. Tape delay is a common feature in this role as well.
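In software, predelay really is that trivial: silence inserted ahead of whatever makes the wash. A minimal sketch, with the reverb itself left abstract (the function names are illustrative, not any particular plug-in's API):

```python
import numpy as np

def with_predelay(dry, reverb_fn, sr, predelay_ms=40.0, wet_gain=0.3):
    """Patch a pure delay ahead of any reverb process, the way you'd
    insert a digital delay on the send feeding a plate or spring.
    reverb_fn: any array-in, array-out reverb process."""
    gap = int(sr * predelay_ms / 1000.0)
    send = np.concatenate([np.zeros(gap), dry])  # the reverb hears it late
    wash = reverb_fn(send)[: len(dry)]
    return dry + wet_gain * wash                 # dry untouched; only the wash is postponed
```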
Early reflection control is common even on the most inexpensive digital reverbs, and has been for a long time. In general, the simplest units let you control the proportion of the early reflections by setting their relative volume. Their pattern depends on the shape of the room you've selected.

What good do these parameters do? The answer is built in two worlds: physical acoustics and psychoacoustics. First, real spaces always have some amount of physical predelay, because it takes time for the sound to travel out to all the room boundaries and bounce back at the listener, first in distinct early reflections and then in an enveloping wash of reverb. If real spaces have it, why shouldn't reverb patches? Second, predelay is very valuable to our personal auditory analysis system. Listen carefully to the sound of a
BY ALEX CASE
What does reverb sound like? There are so many kinds; Figure 1 breaks it down into some logical categories. So far so good. Once we learn what a hall sounds like, and what a plate sounds like, we'll start to master the topic of reverberation. We've got our work cut out for us, though, precisely because there are so many kinds. And we're all dying to know what sort of reverb they used on the new Tattooed Waif album, Pierce Me Here. Let's break it down.

Reverb devices in general might be broken down into four broad categories: spring, plate, digital reverberator, and special effects. We discuss the first three here, saving special effects for next month. As we discussed in last month's Nuts & Bolts thrill ride, reverbs that rely on a mechanical device like a spring or a plate to generate ambience define their own class of reverb. They each have such a unique sound that they deserve a category to themselves. Learn what they sound like and reach for them whenever the creative urge hits you. Stevie Ray Vaughan offers a case study on both spring and plate throughout his debut album, Texas Flood. In general, his guitar has classic spring reverb and his vocal has plate reverb, with predelay that sounds likely to be tape-based. From the opening guitar notes and vocal line on the first tune, "Love Struck Baby," these two classic reverb sounds make themselves known. And there's no reason not to send the guitar to the plate, the snare to the plate, and so on.

But that right-most category in Figure 1, digital reverb, is a little vague. When the reverb comes in a digital box, as small as half a rack space, it becomes trickier to classify.
Just modifying a single reverb patch opens up a nearly infinite set of possibilities. Reverb times can range from maybe a couple hundred milliseconds up to 20 or 30 seconds. Predelay is adjustable from 0 to maybe a second or two. Part 1 of this series on reverb introduced a number of reverb parameters: bass ratio, predelay, equalization, filtering. Where do we begin?

Time & space
Digital reverbs can be defined based on the size of the architectural space they simulate: large hall or small room. In between, well, there's medium room, big brother's room (which is larger than my room), the laundry room, the basement, and the gym. Aack. It goes on: stadium, canyon, locker room, live room, etc. So we draw a line in the sand separating large from small. Reverb times (RT) greater than about 1.5 seconds (and they can go as high as a positively insane 30 seconds or more) make up the large reverbs. Naturally, reverb times of about 1.5 seconds and less are small.
Figure 1
Large takes many names: hall, warm hall, bright hall, cathedral, Taj Mahal, and such. Small includes things like chamber, medium room, tight booth, and such. As each has its purpose, it isn't a bad idea to start a session with one large and one small reverb set up and ready to go.

The names of the reverb presets might seem nearly meaningless; you know they can all be adjusted to almost any reverb time. Medium room, RT = 1.3 seconds. It's no big deal to change it to 2.2 seconds and convert it into a hall, right? Not exactly. There's a bit more to it than reverb time. A hall sounds different than a room. Reverb designers have gone to the trouble to capture those differences: the time delay between the direct sound and the onset of reverberation is greater in a hall than in a room because the walls are farther apart. And as the distance between room boundaries is greater on average for a hall than it is for a room, the general pattern and density of early reflections is different for a hall than a room. There are countless, however subtle, differences between a large hall and a small room. Our ears (and brain) are excellent at catching those subtleties. As a result, reverb designers go to great trouble to capture and/or simulate those magic little differences that define a space as a hall, an opera house, a medium sized room, and so on. So when you dial in a preset reverb that says hall, not room, be assured that someone has taken the time to try to capture those differences.

Gorgeous (i.e. expensive) hall programs will sometimes sound flat out bad if you shrink their size down to room-like dimensions. Likewise, lengthening a great sounding room patch to hall-like reverberation will often lead to an unnatural, unconvincing sound full of strange artifacts. Having said that, I can be pretty sure you are all going to try it on your next mix. That's okay, because music and music technologies reward that sort of innovation and chance taking. But it's important to know when you are stretching boundaries and what to look out for.

So what do we do with a long verb, a short verb, and so on? That's a little bit like asking "What's a D minor 7 chord for?" You use it when it sounds right to you. And you can use it when the theory supports it. What follows is some discussion of good uses of different types of reverbs. Listen carefully to recordings you like and learn by example.
Try similar approaches, and armed with that experience, create your own bag of reverb tricks.

Magic dust
Sprinkle long reverb onto a vocal or a piano or a string pad for some hype, polish, and glitter. It will almost certainly put the studio stamp on your recording, but the slickness of a huger-than-huge reverb can add a bit of professionalism to the recording you are trying to make. Typical modifications to the standard large hall come courtesy of the bass ratio control (discussed in last month's column) and good ol' equalization. Brighten it, warm it up, or both.

Bright reverbs are often a standard patch in your digital reverberation device. The slightly peculiar thing is that they don't really exist in natural spaces. As sound travels through the air, the highest frequencies attenuate first. As the sound propagates, it is the lowest frequencies you hear last.
Figure 2
You've heard the dominance of low frequencies over high frequencies if you've ever stood beside a busy street and listened to the sound of the car radios leaking out of the vehicles. You can hear the thump and rumble of the kick and bass, but not much of the rest of the music, from one car. As for the talk radio addict sitting in the other car nearby, it sounds a lot like the teacher in Charles Schulz's Peanuts cartoons: "Wawa waaaawuh waaa wo wo wawa waaaaa." That's the sound of speech that is mostly vowels (lower frequencies) and that lacks consonants (higher frequencies). As sound breaks out of these cars and into your neighborhood, the low frequencies start to dominate; the high frequencies start to evaporate.

Believe it or not, the bright reverb, full of sizzle and shimmer, is a rock and roll protest. It is the sound of an acoustic space that doesn't naturally exist. It's what it would sound like if high frequencies won out over lows. And for some applications it sounds pretty good. Paul Simon has such good diction that, rumor has it, he is de-essed at tracking, mixing, and mastering. Using his superhuman S's to zing a bright reverb was too interesting an effect to pass up. Listen to the down tempo songs on Rhythm of the Saints. A shot of high frequency energy ripples through the reverb with each hard consonant Paul sings.

The other option, if you aren't brightening the reverb, is to fatten it. Adding a low frequency bias to the sound of your long hall reverb patch adds a warm, rich foundation to your mix. This comes closer to physical, architectural reality, as it is often a design goal of performance halls to have the low frequency reverb time linger a bit longer than the mid frequency reverb time. And if it's good enough for Mozart, it's good enough for pop. Naturally, we are allowed to select all of the above for a warm and sparkly reverb sound. Be careful, though. If the decay of the reverb fills the entire spectral range of your mix, high and low, it will leave no room for the bass, the cymbals, the vocals, the strings, and so on. Divvying up the spectral real estate is a constant challenge in pop music mixing. And while it might always be tempting to use a full-bandwidth reverb that sings across the entire audible spectrum, it can be wiser to limit the harmonic size of the sound of the reverb and assemble a full multitrack arrangement that, in sum, fills the spectral landscape.

The third principal variable after reverb time and reverb tone is predelay. That gap in time between when a sound begins and when a physical space is energized and starts reverberating is an excellent parameter to manipulate. To change it in physical space requires moving walls and raising ceilings. The results are ethereal. Think ballad. Start with a long reverb preset on a voice, maybe the "Oooh" or "Aaaah" of a background vocal. Listen carefully as you stretch the predelay from maybe 20 milliseconds to 40 milliseconds, 60 milliseconds, on out to 100 milliseconds or more. The feeling of reverberation certainly increases as you lengthen the predelay. So does the feeling of distance and loneliness. Here we've stumbled onto one of the most interesting parts of the recording craft. By manipulating predelay, which is a variable in the studio (but not in the opera house), we've created the feeling of a longer reverb
without lengthening the reverb. If it sounds like we get to violate the laws of physics and architecture in the studio, it's because we do.

If you've ever suffered from a mix that became overly crowded, confusing, and messy as all the tracks and effects were added, you may wish to remember this: predelay can be used to separate the reverb tail from the direct sound by a little extra bit of time. This slight separation makes the reverberation easier to hear. The result is the addition of extra reverb in feeling, without the actual addition of mix-muddying extra reverb in reality.

Far out
Adding reverb to some tracks is like adding garlic to some sauces: yum. Sometimes, though, we are a little more strategic in our motivations to use reverb. With the help of Figure 2, picture in your mind's ears the sound of a voiceover you just recorded in your studio. For this example we close-miked the talent in a relatively dead room. Play back the track and you hear, well, the sound of that person speaking, and he or she sounds nearby. Recorded by a microphone maybe six inches away from the voiceover artist, it isn't surprising that the voice sounds close and intimate.

Now add a good dose of reverb (hall-type patch with a reverb time of about 2.0 seconds). Perceptually, the voice now sounds more distant. The loudspeakers didn't move, but our image of the sound coming out of them sure did. As we use pan pots to locate discrete tracks of audio left to right, we use reverb to locate elements of the music front to back. Your mixes take on an unreal depth as you master this technique.

When you find yourself noticing and liking the ambient sound of a room, capture it in your recording. Two approaches: place microphones so as to capture a satisfying blend of the instrument and the room, or place microphones to just plain capture the room.

The first approach is one of the joys of recording. To record the music and the room, you abandon the pop music tradition of close miking and start recording instruments from a distance. Ambient miking approaches abound and are a topic of an upcoming Nuts & Bolts column. It is worth mentioning that this ain't easy. To pull the microphones away from the instrument is to abandon some control and consistency in our recording craft. Perhaps you've recorded your husband's ukulele a million times and know exactly where to put the mic to capture the sweet ukulele tone that always satisfies your clients. You've worked hard to find that perfect mic placement location that works anytime, anywhere, any gig. It is no doubt a mic position placed very close to the instrument, so close to the ukulele itself that it ignores the sound of the room. There is comfort in the close miking approach. But exploring ambient miking techniques will pay dividends, sometimes setting the vibe for the entire tune. Capturing those tracks requires experience, quality equipment, and good acoustics, and a bit of good luck doesn't hurt. Explore this path only when a project has the time and creative motivation to do so.

The second, slightly safer option for capturing actual acoustic reverb instead of simulating it is to record the ambience of the room onto separate tracks. Place a mic or two anywhere in the room: the other side of the room, on the floor, at the ceiling, in a closet, down the hall... Record the room in a location you think offers a musical contribution to the sound of the instruments. Of course you need spare tracks for this, but it enables you to close-mike the instruments as you always have and to capture some of the sound of the room too. You may end up with the opportunity to create unique sounds on mixdown.

Gel
The sound of the immediate space around a band can be very evocative of, um, the band in a room. Common on drums and almost any section (strings, horns, choir, kazoos), room ambience can help unite 32 tracks of overdubs into a single, compelling whole. Dial in a room patch with a reverb time of about 1.3 seconds or less and start gluing tracks together. The trombone lines that were recorded two months and two hundred miles away from the original saxophone parts will fall into the mix.

As reverb gets this short, it is time to ask ourselves "Why synthesize it?" Recording studios, large living rooms, converted garages, and renovated barns can make a contribution to the sound you are recording. It makes sense, therefore, to record it.

Spring, Plate, Large Hall, and Small Room. Those are the obvious reverbs. And they offer a limitless set of sonic possibilities. Next month we'll look at the more bizarre reverb tactics: to reverse it, distort it, compress it, and who knows what else. Hopefully the audio police won't pull us over.

Major warning: a classic mistake that inexperienced recording engineers make is to add too much reverb. For me, learning how to use reverb was a little bit like when I learned to make chocolate milk a couple (maybe more) years ago. On the second try (without mom watching) I doubled the recipe. On the third try (sorry mom), the chocolate to milk ratio went decidedly in favor of chocolate (who needs the milk part anyway?). Such is the life of a kid. This more-is-better approach to life might work for chocolate milk, but it doesn't work for reverb. Too much of a good thing sounds cheap and poorly produced. It's literally the calling card of a young engineer.

Don't sweat it, though. Reverb will fool you the first few times, but here's how to outsmart it. Do a mix and add as much reverb as you want. Don't hold back. Turn up the reverb until you hear it and like it. Print the mix. Three days later, listen to the mix. There's nothing like the passage of time to clear our ears and let us hear things as we've never heard them before. You'll say "What was I thinking?" as your mix swims in reverberant ooze. We've all been there. It's pretty fascinating that we could be in the studio, leaning into the speakers, ears wide open, adding what sounds like an appropriate amount of reverb only to discover, well, oops. It's something of an audio illusion. The more you listen for it, the harder it is to hear it. You get control of the reverb (and other effects) in your tracks only when you learn to listen confidently. Relaxed, you'll hear everything you need to hear, and, with experience, you'll know how to adjust the equipment accordingly.

The fact is, reverb is something we have to learn to hear. For most humans reverb is not a variable, it is a fact. Our hearing system hasn't evolved with the concept that reverberation is adjustable. Recording engineers must discover and develop this ability. So much of audio (especially compression and equalization) is this way. Give yourself the chance to learn by making some fat, juicy mistakes!

Alex Case wonders: before Reverb, does it just Verb? Offer help via [email protected].
BY ALEX CASE
We know that on most sessions, adding reverb to a track is usually a straightforward task. We have an almost infinite range of conventional solutions to choose from, but there are also some long-standing and unusual studio reverb concoctions worthy of study. We'll start with backwards reverb, lovingly called breveR.

Analog tape machines reward exploration. Tape can be cut, spliced, sped up, slowed down, and, yes, played backwards. Try it. Put your multitrack tape on upside down (swapping the supply and take-up reels) and roll it. It won't hurt the tape or the tape machine. And, as Jimi, the Beatles, Michael Penn, and others have shown, it can sound pretty cool indeed.

Here's the reverb part. With the multitrack tape playing backwards, add and record some reverb. First and most important: find an empty track. Be very, very sure it's empty. This isn't easy when you've flipped the tape over for reverse play. If you have an 8-track analog multitrack recorder, track one is on top and track eight is on the bottom. Track one of the tape machine, which you are probably monitoring on channel one of your mixer, is actually playing back track eight off tape. Track two moves to seven, three to six, and so on. If you have the privilege of using an analog 24-track multitrack tape machine, it gets even more confusing. And it isn't easy to identify tracks just by pushing up the faders and listening. Kick, snare, bass, and piano sound close to what you might expect. But it is darn difficult to identify vocals. Is this take one, take two, or what? For reverse effects, I temporarily label the track sheet with the new track numbers by manually starting at the highest track, labeling it Reverse Track One, and counting up from there. Then my track sheet makes clear that vocal take two on track 17 will appear backwards on track 8.

Once you know exactly what track you are going to use, push up the source signal fader. Use an aux send to get it into your reverb of choice. And record the output of your reverb to the empty track(s). A good starting
point is to use an instrument prevalent throughout the song, say a snare track or vocal. Maybe the singer sang "La la, Baby." Played backwards you hear the nonsensical "ybaB al aL." Add verb and there is a decaying sound after each backwards word. Print that reverb. Now the fun part: flip the tape back over and play the multitrack as originally recorded. The transcendental line is restored: "La la, Baby." But push up the faders controlling the backwards reverb you just recorded, and a weird "this doesn't happen in nature" sort of thing happens. The decay now comes before the word that caused it. Reverse reverb is an effect that strangely anticipates the sound about to happen.

Figure 1 shows what's going on. For simplicity we consider a basic snare back beat falling on beats two and four (Figure 1a). In forward play, the reverb you add decays after each hit of the snare (Figure 1b). This creates the expected combination: a dry, close miked snare plus reverb (Figure 1c). That's the typical approach. This type of reverb adds a natural ambience or perhaps a hyped explosiveness to the mix.

Let's follow these same steps for breveR (i.e. reverse reverb). When playing the tape backwards, we observe our snare hitting on beats four and two (Figure 1d). That's the same back beat, only backwards. Record some reverb from this backwards playing snare (Figure 1e). Return to normal, forward play and check out how the backwards reverb now occurs before each snare hit (Figure 1f).

Figure 1

This elaborate process is tedious and more than a little disorienting at first. Don't experiment with this for the first time in a high pressure session in front of your most difficult client. And definitely don't attempt this at 3 a.m. after an 18-hour session. The risk of accidentally erasing a track while recording on an upside-down reel is too great. But after some practice on other sessions or on your own music, you'll be able to reach for this approach comfortably and add a bit of uniqueness to part of the project.
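If your multitrack is a DAW rather than a reel, the whole upside-down ritual collapses to a few lines. A minimal sketch, with the reverb itself left abstract:

```python
def brever(track, reverb_fn):
    """Backwards reverb without the tape flip: reverse the track,
    print reverb against it, then reverse the result so the decay
    arrives before each hit. reverb_fn is any array-in, array-out reverb."""
    backwards = track[::-1]          # 'play the tape' in reverse
    wet = reverb_fn(backwards)       # record the reverb of the reversed audio
    return wet[: len(track)][::-1]   # flip it forward: the wash now leads the hit
```

Mix the returned wash in under the forward-playing track, just as you would the printed reverb track on tape.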
It takes a fair amount of trial and error to get the effect you want. It's hard to predict how it will sound when you dial in a reverb while the vocal sings gibberish: "htnom yreve stlob dna stun daer." It's not until you record the reverb and play it back forwards that you can really tell if you like the reverb type, reverb time, predelay, bass ratio, etc. Used carefully and sparingly, you can offer your listeners a wild ride. (This works with echo, too, by the way; check out the incoming/outgoing vocal echo effects on "It Can Happen" on 90125 by Yes.)

Variations on the theme
Backwards-like reverb effects appear as presets on some reverb devices. Often called non-linear reverbs, these reverbs don't decay from loud to soft after a sound. In fact, they do the opposite. Instead of getting gently softer as they decay, non-linear reverbs get louder as they decay. Say what? I know it's weird. Since digital reverbs are controlled by software, not room acoustics, they can do some pretty bizarre, nonintuitive things. A regular decaying reverb can be compressed and amplitude-modulated (with a single cycle of a sawtooth wave) as shown conceptually in Figure 2, making a reverb swell soft to loud. Patch this up or look for a preset in your digital reverb to create this effect.

Figure 2

Of course, you can use non-linear reverb wherever you like, but look first at percussion instruments in pop music settings. The sound of a conga, triangle, clave, or other sharp percussion instrument lasts mere milliseconds. It is a mixing challenge to make such a short waveform noticeable in a crowded pop mix full of synths, strings, guitars, and layers of background vocals. Use the non-linear verb to lengthen the perceived duration of the percussion event slightly, making it easier to hear and therefore easier to slide into the mix. A heavy dose of the non-linear reverb sounds like a wacky effect, sometimes appropriate, sometimes not. A subtle dose can retain the naturalness of the instrument and still accomplish the mix goal of getting the sound noticed. Create the sound you like best for the tune at hand.
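Conceptually, the non-linear patch is nothing more than an envelope forced onto an ordinary reverb tail. A sketch of the soft-to-loud ramp (the swell length is an arbitrary starting value):

```python
import numpy as np

def nonlinear_swell(wet, sr, swell_ms=300.0):
    """Impose one rising sawtooth cycle on a decaying reverb tail so the
    wash gets louder, then stops dead: the classic non-linear sound."""
    n = min(len(wet), int(sr * swell_ms / 1000.0))
    env = np.zeros(len(wet))
    env[:n] = np.linspace(0.0, 1.0, n)  # soft-to-loud ramp...
    return wet * env                    # ...then an abrupt cutoff
```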
Playing tape backwards to create reverb that in turn is played forwards is a lot of trouble. Tape machine manufacturers have sometimes built in the ability to play and record backwards to make this exercise a little easier. But armed with a sampler or digital audio editor, you can record, reverse, cut, and paste with ease. All your effects units just doubled the number of patches they have. You can sample them and play them backwards.

Squished reverb
The inexplicable magic of the delicate decay of sound within an ornate European music performance hall also responds well to (I'm serious here) compression. Why the heck not? We discussed the Nuts & Bolts of Compression back in Part 9, 3/2000. Using compression to alter the way a waveform attacks and decays is old hat. Reverb is the decay of a sound. Compressing reverb enables you to change the decay of this decay. As the compressor changes the amplitude of the reverberant wash, the musical impact of the reverb changes too.

For example, it is perfectly normal to record a tambourine in a dry (i.e. no natural reverberation) booth, bedroom, or basement. No problem. Add some bright hall to it at mixdown, right? So far so good. But maybe you've experienced the problem of a distant, weak reverb. That is, adding reverb to a rock 'n' roll tambourine diminishes the impact of the percussion instrument, adding distance between the tambourine and your listener. This isn't surprising. As we discussed last month, we sometimes use reverb with the intent of pushing a particular sound farther back toward the sonic horizon. Adding reverb to our tambourine can rob it of its power, sliding its contribution to the groove away from the rhythm section and away from the listener. Slamming drums, huge bass, a wall of guitars, screaming vocals... and that dude way back over there tapping his tambourine. Not so compelling, as rock and roll statements go.

Compression to the rescue. Slam the reverb through a compressor, and it turns into an entirely new kind of sound. Low threshold, high ratio, fast releasing compression changes reverb into a burst of noise and energy associated with every hit of the tambourine (or slam of the snare, or strum of the guitar...). With apologies to the engineers who so carefully figured out how to digitally simulate the sound of that gorgeous symphony hall, squish it hard with compression. Change the sound of your dry tambourine into a driving, grooving, agitating, in-your-face tambourine surrounded by the surging, distorting, fizzling sonic aura of compressed reverb.

Gated reverb
Send the snare drum to an aggressively compressed, very long reverb patch (maybe a plate program modified to a ridiculous reverb time of five seconds or so) and you
can create a bed of noise that seems never to decay. Each snare hit re-energizes the reverb. The long reverb time, altered by heavy compression, makes sure the sound lasts and lasts. Do this in a mix, and you'll find that after snare hit number one it is no longer possible to hear the guitars or understand the vocals. Bad news. This reverb takes over, obliterating all delicate elements of your arrangement that dare to come near it. The reverb essentially becomes a new, loud noise floor.
Seems a little irrational to add noise to a mix, doesn't it? Yup. So when you add this much noise to a mix, also use a noise gate. Gates get rid of the noise. First we add an insanely long reverb to the mix. Then we compress it to bring the level of that reverb/noise up. And finally we add a noise gate to get rid of most of the wacky reverb we created. The result is a gated reverb, shown in Figure 3. The snare drum hits. The noise gate opens up (triggered by the snare). The burst of reverb commences. An instant later (at a time set by you on the gate) the noise gate closes. The noise goes away, revealing those other elements of the mix (ya know, like the vocals). The snare hits again. Repeat.
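Here is the whole compress-then-gate chain in miniature. Everything about it is deliberately crude: the "compressor" is a simple waveshaper that pulls up the quiet tail, the gate is an envelope trigger with a fixed hold time, and every value is invented for illustration:

```python
import numpy as np

def gated_reverb(snare, wet, sr, hold_ms=120.0, threshold=0.1, squash=0.4):
    """snare: the dry track keying the gate. wet: the return of a very
    long reverb fed by that snare. Returns the gated burst to mix in."""
    # Crude heavy compression: a power-law curve boosts low-level
    # material, keeping the decaying tail loud (standing in for low
    # threshold, high ratio, fast release, plus makeup gain).
    squashed = np.sign(wet) * np.abs(wet) ** squash

    # Gate keyed from the dry snare: open on a hit, hold, then slam shut.
    hold = int(sr * hold_ms / 1000.0)
    gate = np.zeros(len(wet))
    open_until = -1
    for i, level in enumerate(np.abs(snare[: len(wet)])):
        if level > threshold:
            open_until = i + hold   # re-trigger on every hit
        gate[i] = 1.0 if i <= open_until else 0.0
    return squashed * gate
```

Set hold_ms to a note value at the session tempo (a dotted eighth at 120 bpm is 375 ms) and the burst grooves instead of just blaring.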
Figure 4: Signal Flow

Adding a gated burst of reverberation through this fairly elaborate signal path can convert a wimpy snare blip into the powerful snare of God. It's common to set the gated reverb to a musical note value, maybe giving the decay on the snare a dotted eighth note time feel, for example. Then the gated reverb isn't just loud and energetic, it's also grooving hard. Explore compressed and gated reverb and you'll see how the non-linear reverb patches we discussed above are created. They don't play reverberation backwards, they just aggressively manipulate the loudness of the decaying reverb over time.

As Figure 4 shows, there is a lot to patch up to make it work. It also takes time to tweak it into control. You've got to find a good sounding reverb. Gated reverb rarely sounds natural, so you are free to choose a wild sounding reverb patch to start with; skip the sweet, high fidelity ones and go for the rowdy, out of this world sounds. Next, you've got to dial in the right amount of compression. Set the threshold well below the level of the initial burst of reverb so that the compressor is still attenuating the signal well after the initial snare sound hits. Dial in a very fast release time so that the compressor pulls up the sonic detail of the decaying tail of the reverb. Finally, hardest of all, you've got to get the noise gate to cooperate so that it opens only on the snare. If it's MIDI tracks you're using, it's pretty straightforward to find the threshold, attack, hold, and release times for the noise gate that make musical sense. If you're using live drum tracks, the trick is to make sure the gate isn't fooled into opening when other nearby instruments play, like the kick or the hi-hat that might be leaking into the snare mic. Often a simple filter set lets you remove those sounds that are mostly lower (e.g. kick drum) or mostly higher (e.g. hi-hat) than the instrument you are using to open the gate (e.g. snare drum). Filter out the lows of the kick and the highs of the hat that leaked into the snare signal you are using to trigger the gate, and you'll be able to get the gate to cooperate.

And what's good enough for artificial reverberation is good enough for natural reverberation. If you have recorded some natural room sound onto other tracks during the session, remember it will respond well to compression and gating too. In the end, reverb isn't an effect. It's a family of effects: some obvious, some not so obvious. It rewards those who take the presets in different directions and those who dare to combine it with some eq, compression, gating, delay, flanging, distortion, and so on. There are no boundaries.

Alex Case ([email protected]) is an architectural acoustician at Cavanaugh Tocci Associates in New England. Don't tell his boss what's in this article, especially the backwards reverb part. Thanks.
In the next two columns we'll look at ways to document every detail of each studio project. Take sheets, setup sheets, and recall sheets are all useful parts of the well-documented studio, and we'll get to those next month. This month we begin with the best-known of all studio documents, the track sheet.
Identifying tracks
The track sheet's most obvious and vital function: identifying what's been recorded on which tracks. What's on track 1? "Hi-hat." What's on track 19? "Background vocal #3, Low Part." This labeling must be done so meticulously that to see an empty space on a track sheet is to know with 100% certainty that it is a blank track available for recording.

Then there is other information that belongs on a track sheet, mostly fairly obvious items that nevertheless sometimes are omitted. What good is it to know that track 1 contains the hi-hat when you can't tell what the song is? So start filling in your track sheet by writing down the song title. Don't leave it at that: list the artist, producer, engineer, and assistant engineer. On the off-chance that the track sheet gets separated from the multitrack tape (something that should never happen), all this information will come in handy.

If you are the studio, the engineer, the producer, and/or the artist, put a phone number, email address, or both on every single document having anything at all to do with the project. Make it easy for anyone who finds the document to find you. You can buy blank tape for $X. But once you start putting music and studio time on tape, that tape quickly becomes nearly priceless, literally and figuratively. By including all of this information you minimize the chance of losing your investment.

All this is important, but the point of this article is the not-so-apparent information that should be included on each and every track sheet. Of course, not every project is recorded on tape, let alone analog tape, and digital audio workstations take care of a lot of the housekeeping for you. But the central concepts should be obvious enough that you can apply them to other media.

How fast was I going, Officer?
It is essential that the playback speed of the tape be clearly indicated. Can you actually play back a tape at the wrong speed? Yep. Does it really ever happen? You betcha.

On analog machines, that means noting the speed in inches per second (ips). Typical speeds are 7-1/2 ips, 15 ips, and 30 ips. Generally speaking, the faster tape speeds lead to increased dynamic range. But rolling tape at faster speeds also leads to higher tape costs; each tick up in speed will double your tape costs. If the project is on a tight budget or if the band is long-winded and aiming for a double album, this can be a big deal. You make this decision before the first session, and then you document it on every track sheet.

There is a similar parameter on digital tape and hard disk recorders: sample rate, which must be noted (in kHz). Most common are 44.1 kHz, 48 kHz, and increasingly 96 kHz. As with tape speed, higher sample rates arguably lead to better sounding master recordings. But the higher sample rates require more tape or hard disk space to store the increased data. The machine will usually know at once what the sample rate is, but you don't, so if you need to match rates from tape to tape (or disk to disk), write it down.

Figure 1

Are you my master?
Note on Figure 1 that the tape machine used is identified (just above the sample rate). I can't overemphasize this point: always, always note the make and model number of the machine that creates any master tape, be it 24-track, 8-track, or even 2-track. This not only identifies the format (um, it won't fit in an ADAT-type machine), it also identifies the specific model number.
Can you actually play back a tape at the wrong speed? Yep. Does it really ever happen? You betcha.
In a perfect world this wouldnt be necessary. All tapes played on all compatible tape machines would perform without a hitch. Bad news: its not exactly a perfect world. Sometimes a tape recorded on one machine wont play back on another machine without glitches. If you keep track of the type of machine used, you can lower the odds that this problem will haunt you. When the tape wont play on Bobs machine, it may be because it is a different model. Find, rent, or borrow a machine of the same make and model originally used during tracking and the tape might play back again without errors and drop outs. Sometimes the only solution is to go back to the original recording machine itself. In this case, make a safety copy onto a different machine as soon as you can. And be thankful your track sheet identified the source machine. In the analog tape machine world, identifying the source machine is
RECORDING FEBRUARY 2001
Figure 1
arguably even more important. Analog tapes will play back fine on most any type of machine. The dramatic muting on and off and the signature zipper noise that only digital recordings gone wrong can make wont dog your analog project. But analog recordings generally sound best when played back on the same type of machine that did the recording. Mastering studios often have several different makes of analog tape machines for this reason. They can match the same make and model you used to get the best sound off tape possible. Or the mastering engineer can resort to a different tape machine on purpose (not by accident) to find a different sound. As you can see, noting the tape machine used is a good idea.
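One aside for the digitally inclined: the storage penalty of a higher sample rate is easy to estimate, since uncompressed audio piles up at sample rate times bytes per sample times track count. A quick sketch in Python; the 16-bit word length and 24-track count are example numbers only, not anything the formats dictate:

    # Rough uncompressed storage estimate: sample_rate * bytes_per_sample * tracks
    def megabytes_per_minute(sample_rate_hz, bit_depth, tracks):
        bytes_per_second = sample_rate_hz * (bit_depth // 8) * tracks
        return bytes_per_second * 60 / 1e6

    for rate in (44100, 48000, 96000):
        # A 24-track project at 16 bits, purely as an example
        print(rate, round(megabytes_per_minute(rate, 16, 24)), "MB/min")

As with tape speed, doubling the rate doubles the bytes.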
How 'bout a date?

Note the date of the first tracking on the track sheet. As you get into overdubs later, capture the date of those individual tracks too. Having the date can help you hunt down and identify problems. Months after making these recordings, you will start mixing them. You may notice at mixdown that the acoustic guitar sounds brighter in one song than in another. A little investigation reveals that the strings were brand new on one song, and two days of heavy playing older on another. This is an important observation. When you start mixing a third song, you can glance at the date of the acoustic guitar overdub and know before listening whether you have a bright or dull tone to start with. The date of each track can answer a range of other, similar questions: For a given piano track, how long had it been since the piano was tuned? Was this backing vocal cut before or after she had her cold? Was that track recorded before or after we cleaned the heads on the multitrack? The dates essentially provide an audit trail, should you want to answer some of these kinds of questions as sonic peculiarities unfold. It is quite possible you'll never need the dates. Keep track of them just in case. Some problems are darn subtle and might go unnoticed for days, weeks, or even months. But once you discover that the pedal on the kick drum has developed a faint but powerfully annoying squeak, you'll want to figure out when in the course of the project this started, what songs might need fixing, and which ones are safe.
Signal path

As you can see from the hieroglyphs on Figure 1, we squeeze still more information onto the track sheet. Ideally, try to describe the settings of each piece of gear in the signal path. The lead vocal on track 24 offers a good example. This particular overdub was recorded through an AKG 414 in cardioid pattern, without a pad, and without a roll-off. The microphone preamp settings and compressor settings are shown too. Granted, it is shown in a very abbreviated form, but it tells me what I need to know. Develop your own detailed code.

If eq had been used, I'd turn the track sheet over and make notes there too. Should we have to retrack part of the vocal, which could easily happen (the songwriter changes a line, the singer wants to change the phrasing, a previously unnoticed mistake now seems unbearable and must be fixed), we'll be able to match the sound pretty closely and record any changes we wish. The entire signal path has been documented. Match those settings on the equipment, let the singer do a few takes to match his or her earlier performance, and you are ready to rerecord any or all of the vocal track.

As you can see, for this session I always documented the vocal tracks fully. That is standard operating procedure; the vocal tracks are important enough to demand it. The tambourine track, on the other hand, only indicates the mic and date. I'm not really worried that I'll have to modify a piece of this track. Noting the mic reminds me of what sort of sound we were going for, and I can get close enough to that sound again if need be.

The electric guitar (noted EGT on track 10) needs a fuller description. The guitarist brought in maybe half a dozen guitars, and two amps. Moreover, the studio has five guitars and three other amps. The track sheet therefore notes the guitar, the amp, the microphones, and any signal processing going on. Of course, guitarists do a lot to shape their sound through the various tone and pick-up settings on the instrument, as well as the many settings on the amp and any stomp boxes being used. This gets tricky. Most guitarists I've had the pleasure of working with have given a lot of thought to their tone. They've mapped out all these settings for each and every song they track. They can dial them up consistently without writing them down. In this case, I let them keep track of their settings on the guitar rig mentally, and the assistant and I make notes of our settings in the studio manually. Less experienced guitarists might need you to capture their settings too.
Figure 2: Common track sheet abbreviations.

Kick Drum - K
Snare Drum - Sn
Hi-Hat - HH
Drum Overhead Microphones - O/H
Rack Tom 1 - R1
Rack Tom 2 - R2
Floor Tom - Fl
Acoustic Guitar - AGT
Electric Guitar - EGT
Piano - PNO
Tambourine - Tambo
Lead Vocal - LV
Background Vocals - BGV
Double - DBL
Do Not Use - DNU
Do Not Erase - DNE
To Be Erased - TBE
Serve Pickles Often - SPO
This can slow down a session significantly, especially if you don't have an assistant engineer. In these situations, I encourage the guitarist to work with the guitar tone controls set to wide open (turned all the way up so that the tone controls aren't shaping the signal). This typically leads to a better tone anyway, and it is easier to repeat at a later date. The various settings on the amp are manually transcribed onto a sheet of paper. This sort of note taking in a session will be discussed in detail next month. But be forewarned: it is often necessary to write down the settings of guitar amps, compressors, equalizers, etc. These notes are taken on a specialized studio document called a recall sheet. This enables you to, you guessed it, recall any studio setting that you may have recorded.
On the track sheet we've noted a good deal of information about the project and each individual track recorded. When there isn't room to document the entire signal path for a given track, we turn the track sheet over or reach for recall sheets. In this way, we have the paper-based support information needed for every bit of audio we are putting on tape or disk. For the session shown in Figure 1, each drum track has recall sheets associated with it documenting the settings of the equipment we used that day.
A common approach to the guitar solo overdub is to record several passes, each onto its own track, keeping all the takes. This process is often an effective way to capture a guitarist's best solo. The reason this is important is that unlike live performances, which disappear as soon as they occur, a recorded performance must stand up to repeated playing. It is essential to the success of the recording that listeners continue to enjoy the solo even after they've heard it on the radio 17 times. And the psycho-loyal fans are going to copy, transcribe, and critique the performance note by note, string bend by string bend.

Collecting takes onto different tracks is a decent approach. You can even edit together the best parts of various takes into a single meta-solo. However, and this is very, very important, the process is a total failure if you don't take the necessary second step. Step Two: after you select the keeper take, get rid of the others. Filling up the multitrack with safety solos that you are afraid to erase will come back to haunt you. In no time you'll have six tracks dedicated to the guitar solo, and a dozen tracks for "alternative, possible, I think so" lead vocal tracks.

This doesn't leave room for the other elements of the arrangement. The featured performer, the engineer, the band, and/or the producer should commit to the take as soon as possible. I recommend designating the favorite solo right there at the overdub session. At my most generous, I might let the band think about it and listen to it overnight. But the next session begins with a designation of the keeper track, and all the others get labeled TBE (to be erased).

It is pretty common during a project to invite/hire a special guest to sing or play across a number of tunes on the album. You've got maybe eleven different songs. In the course of this overdub session the guest talent flies from one song to the next. "Nice take, let's try that sort of thing on the other ballad." You've got to zip to the next song, pull up a great sounding rough mix in the control room, dial up a terrific sounding mix in the headphones, and prepare to record the dub onto a free track. That's a lot to do all at once. The track sheet needs to communicate clearly exactly where all the tracks of audio are, which tracks to use, which not to use, what can be erased, and so on.

It is wise to allocate tracks as consistently as possible across a project so that, for example, the snare is always on track three and the lead vocal on track 24. Allocate the more variable musical elements to other tracks. Not every song has piano. Some use clavinet, some just use guitar, etc. Good habits laying out the track sheet consistently from song to song reduce the effort associated with advancing to the next song for the next overdub. With maybe drums, bass, and rhythm guitar already set up and sounding balanced for control room and headphone monitoring, you can tweak the didgeridoo and trombone as required for this particular song and get on with the overdub.

Clear notes like TBE communicate exactly which tracks can be nuked if necessary to accommodate additional overdubs. The session loses momentum if you have to pause the overdub session and look for an available track. "Umm, it says here, tambo, take 3. I think we're going with take 2. Hang on a minute while the producer and I listen to all five tambourine parts and figure out which one we can erase. Oh! You're sounding great. Love the energy in that last take. Give us five or ten minutes and we'll do another."

And all of this goes double for those nifty modern hard disk recorders that let you save gazillions of takes per final track! Some manufacturers, like Roland, provide you with a certain number of alternate takes per track; others, like Akai, offer a generalized pool of available edits and alternate tracks. Either way, you're now talking about dozens, or hundreds, of takes, and even if you don't have to erase them to make room on your disk, you'd darned well better have a good way to know which are the keepers, and fast.

Push the decision makers to decide. If you are the producer or if it's your music, step up to the plate. But even if you are just acting in an engineering capacity, help the session by coaxing these sorts of commitments out of the key players. Hedging your creative bets by archiving countless mediocre takes will needlessly increase the studio time (a budget breaker) at the very least. Worse, and more likely, it will rob the project of its creative and performance edge. Safe albums don't usually sell.

Media

The track sheet, and all studio documents, for that matter, works best when it is recorded by hand, in pencil. It is tempting, in this age of slick computer graphics, to transcribe your track sheets into some sort of computer generated format. After a session, you kindly take the track sheets into the office and type them into the computer. Nifty. Cut and paste some graphics, select a cool font, and the printouts will look slick. Please, don't do it! The track sheet is a living document. At any point in the project, from the basics session to the mastering session, the track sheet should welcome creative and free thinking. If the music suggests you should erase the cello and track a triangle, then do it. If you've gone to the trouble to type all the tracks into the computer, you'll hesitate an extra bit. Replacing the cello with a triangle means that tonight, after the session, you'll have to type in the change and print out a new one. That's a chore. And it just isn't necessary. Moreover, if it diminishes, in any way whatsoever, the creative energy of the project, then it is a mistake. The manual track sheet system is the preferred approach. In addition, a good track sheet has little scribbles and notes that, though meaningful to the engineer, may not seem important to the assistant transferring it into the computer. In computerizing it, some of that information is inevitably lost. Stick with handwritten track sheets.

Some people, though talked out of using a computer for keeping track sheets, make a worse mistake: they use ink. Ink doesn't erase. Tape does. Use pencil. We record on tape or hard disk because it's easy to erase and record new ideas. Erasing and re-recording is an everyday part of modern multitrack music production. The track sheet should follow. Consider it law: track sheets (and all studio documents) should be done in pencil. Pens and laserjets are too permanent. They are strictly forbidden.

Next month we discuss the rest of the studio documents: take sheets, setup sheets, and recall sheets. Good studio documents are a session tool you can have without parting with too much money. Sure it would be more fun to buy another microphone or compressor, but it's worth the effort to develop and use these documents thoughtfully.

Paper

These documents get pretty rough treatment. You'll use them at the basics session, at every overdub session, and finally during the mix sessions. They'll get written on, erased, written on again, and erased, and written on again.... It's inevitable that they'll be used as coasters, scratch paper, and note paper. They'll document audio tracks, phone messages, and food orders. If your track sheet does all these things, it's a session asset. And it will better survive all this abuse if you print it onto heavy paper. Even card stock isn't a bad idea.

Alex Case doesn't stop at studio documents. You should see his grocery list. Send questions and suggestions to [email protected].
Studio Documentation, Part 2: The Take Sheet, Setup Sheet, and Recall Sheet
BY ALEX CASE
Though we are engineers and not bankers, we've got a lot of documentation to complete. Studio paperwork essentials include the track sheet, the take sheet, the setup sheet, and recall sheets. We talked at length about the importance and application of track sheets last month; we now pick up where we left off and continue our discussion with the take sheet. Like the track sheet, the take sheet serves a misleadingly straightforward function: it lists the takes recorded. One goal of this month's N&B is to reveal some of the hidden benefits that come from giving the take sheet a little extra care and attention. Figure 1 shows the take sheet we use at my studio, Fermata. On the top we find very nearly the same information that capped the track sheet. The project is identified by artist name (written in large print), the producer, the engineer, the assistant, and the date the project commenced. The reason for all this information is self-evident. The heart of the take sheet is what comes next.
Odometer

The principal role of the take sheet is to identify the precise location of each and every take of each and every song recorded. It's risky to rely on memory. It's foolish to rely on the assistant engineer's memory. It's flat out dangerous to rely on the drummer. And above all we must try to avoid torturing the client with "Wait, let me find it. Here it is. No, wait. Is that take two? Not sure. Hold on." It's even worse on those sessions where the vocal doesn't get recorded until some future overdub session. Tonight you might just be looking for that killer take for the rhythm section only. Without a vocal track it will be
difficult to distinguish verse one from verse two and take one from take two. For a smooth session it is positively vital to keep a thorough and accurate take sheet, starting with the song title and start time. Beyond this basic bookkeeping, the take sheet serves a valuable production function. During the course of a session we use the take sheet to keep track of which songs have been recorded and which songs have not. Continuing last month's session with the band Scribe, we see the four songs they are working on are scribbled down at the bottom of the take sheet (see Figure 1). Typically this is done on the back of the track sheet; we show it here for illustrative purposes. As the band completes a take that everyone likes, the engineer checks that tune off. For 12-song projects, this sort of thing is very useful. The take sheet does more than just list the tunes tracked and their start times. We note the take number, which as we'll see below is important information for monitoring the health and productivity of a session. We also note the approximate end time for each take (rounded off to the nearest 5-second increment) and calculate the length of the song. Watch and compare these numbers to track how the session is brewing. There is also a Notes column on the take sheet. Naturally, here you make notes of critical observations offered by the producer or the band members. It's important to keep track of comments such as "the producer likes the bridge," "the drummer loves the solo," "the singer hates verse 3," etc.
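If you're curious how that length column gets filled in, it's just end time minus start time, snapped to the 5-second grid. A tiny sketch in Python, assuming the counter reads minutes:seconds (adjust for whatever your machine displays):

    # Song length from take sheet start/end times, rounded to 5-second increments
    def take_length(start, end):
        def to_seconds(t):               # "mm:ss" -> seconds
            m, s = t.split(":")
            return int(m) * 60 + int(s)
        length = to_seconds(end) - to_seconds(start)
        return 5 * round(length / 5)     # nearest 5-second increment

    print(take_length("12:40", "16:23"))  # -> 225 seconds, i.e. 3:45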
Figure 1
Figure 2
In addition, every take gets one of three codes:

C: Complete. This is a complete take, top to bottom. It does not reflect an opinion on whether or not this is the preferred, selected, sure-to-win-a-Grammy take, just that it is complete.

I: Incomplete. The band aborts the take somewhere along the way. There's enough good stuff in it, though, that rather than erase it and lose it forever you save it.

FS: False Start. The tune didn't start cleanly; maybe someone missed a cue. The band stops and immediately counts it off, launching right back into the tune again. We consider all of this part of the same take and just note the start time of the next downbeat without stopping tape and interrupting the groove.

When the band lays down a take that everyone knows is the one, circle the take number to designate it the selected take. At the end of the basics session, there should be only one circled take for each song title. Titles without circled takes aren't done yet.

Barometer

A good engineer and producer will watch the take sheet for clues about how the band is feeling. You might easily need three, four, or more takes of the first tune as the band warms up, the engineer gets the sounds under control, and everyone gets used to the studio and each other. When the band starts getting things in one or two takes, they are in the zone. At this point they should not be interrupted for anything short of a pending nuclear disaster or a really good episode of The Simpsons.

If the session persists with multiple unsatisfactory takes ("That isn't happening. Let's move on to the next song") song after song, your take sheet is trying to tell you there's a problem. Check to make sure the headphone mix sounds great, review the studio setup and make sure the players can see each other, and most importantly manage the session mood to help people relax, forget the studio, and just play the music.

Looking at Figure 1, you'll see we were having trouble with the tune "Notoriety." Take 1 was just okay; no one really liked it. Takes 2 and 3 were incomplete, with a false start in between. The band keeps hitting a snag and aborting the take. "No problem," the producer said, "that interlude is a tricky section. Let's come back to it later." So the band proceeds to one-take the next two songs. A good sign. Back to "Notoriety" Take 4, and problems resume. A careful look at the timing of the aborted takes reveals that the band keeps stopping at the same point, the interlude about 90 seconds into the tune. The producer and band have a musical roadblock to solve; time to rehearse, rewrite, or remove the trouble spot.

Figure 3

Speedometer

By song number 14, "Notoriety" Take 7, the band has progressed beyond the musical train wrecks that caused the whole take to stop. But the finished takes are getting
longer and longer. The tune is dragging. No one seems thrilled with the feel of the take. Maybe the song is too hard; maybe the band is too tired. The take sheet points out the problem. The producer must decode this and direct the session accordingly. Sometimes the take sheet simply hints that the band needs a break, and they should have one, preferably before they notice they need it. The easy way to take a break without undermining the band's confidence is to announce "Pizza's here" or "Hey, we've got a fresh pot of coffee."

Don't fail to notice the opposite trend: speeding up. No matter how talented they are, bands are prone to rushing the tempo as they fight their way through a complicated arrangement while the studio clock ticks and the adrenaline flows. Good producers already have a target beats-per-minute goal for each tune. Let the take sheet help you measure the tempo of the tune so you know when the band is sprinting instead of grooving.

Setup sheets

For any session other than a single overdub, it makes sense to document the general layout of the studio: the equipment used, the location of the players, the placement of the microphones, etc. The heart of the setup sheet is simply a list of what microphone and signal processing was used in the recording of each and every track. This sheet acts as an equipment road map both during and after the session. Follow along on Figure 2.

During the session, when the studio is crowded with microphones, buzzing with musicians, and tangled with countless mic cables snaking their way around the studio, it can be difficult indeed to find and fix problems. If, for example, you hear the dreaded crackle and crunch of a failing microphone cable on the floor tom, good luck replacing it. Unless, of course, you've got an accurate and current setup sheet that tells you that the one plugged into microphone input number eight is the floor tom mic. Unplug it. Leave it in the rat's nest of cables and just add another. The problem is solved and little time was wasted. (Ideally you'll also mark the guilty cable with a piece of tape so you know which one gets repaired later.)

If you are lucky enough to own four identical compressors (same make and model number) it can be hard to remember which Squish-o-matic Tormentor Mark IV was on the snare. Setup sheet to the rescue again.

When you are sitting at a console full of twitching meters spitting out the sound of the band rehearsing their first number, it can get confusing and more than a little intimidating. You might find yourself unable to locate the fader that controls the electric guitar signal. You're trying to send it to track ten, but the meter on the multitrack doesn't budge. Instead of panicking and madly pushing faders, throwing switches, and cranking microphone preamps up to their maximum gain settings hoping to hear some guitar, you can instead calmly trace the problem from its source. Did someone forget to plug in the EGT mic? It's a simple, common mistake. The setup sheet makes it clear: the electric guitar microphone should be plugged into microphone input number 22.

After the session the setup sheet guides you through the many things you accomplished. Inevitably the producer says something like "Give us that killer guitar sound you had during tracking. Didn't you use the big microphone that looks like a giant Tylenol?" And this will be a good ten days after the basics session. To have a fighting chance of satisfying this request, you'll need a setup sheet that archives the basic elements of the signal you put on tape: mic(s) and any compression, equalization or other effects.

The other side of the setup sheet has a floor plan of your studio for you to make notes on where you set the various players up in the room. Here you note the approximate location of the drum kit in the big space, the location of gobos around the guitar amp, that the singer was in the booth but his amp was out in the hallway, and so on. For super tweaky sessions you might even make measurements of the locations of key microphones. Photos (made especially easy in this age of digital photography) help, but a sketch of the session layout on the back of the setup sheet is a useful way to document what happened.

Recalls

If you've had the pleasure of working in a full-rate studio that assigns an assistant engineer to your project, you know the benefits this brings. There's a team of runners and assistants getting food, etc. The assistant also takes care of the essential thorough note taking that goes on during your session. Professional facilities will document every setting of every single piece of gear you use. This documentation makes it possible, at least theoretically, to recall at a later date any sound you record at any time throughout the project.

If you select a microphone and record a track straight to tape without effects, that is noted. If you add some compression, equalization, de-essing, phasing and/or any other effects, this is also noted. The documentation must also capture the exact settings of each piece of gear. Two things help: recall sheets and recallable consoles.

For each piece of gear you own, it is wise to create a recall sheet that lays out every knob, switch, and editable window in the device. Figures 3 and 4 show a common pair of recall sheets I use. Figure 3 is the recall sheet for a mic pre/eq, the Geoffrey Daking model referred to in Figure 2. The page visually shows the knobs. Here we added sparkle to the lead vocal by pushing up the super high frequency range at 15 kHz. Presence was helped by a little boost at 3 kHz. The other bands of equalization and the lowpass and highpass filters were not used (noted "out").

Figure 4

Figure 4 shows the philosophical opposite, a digital multieffects device, the venerable Yamaha SPX90. The recall sheet for this device changes form to accommodate the lack of knobs but wide range of editable windows. This much detail is only necessary in situations where your patches can get overwritten by someone else and you're not running some sort of SysEx librarian on the studio computer to save your own patches away from the effects unit.

During a basics or live-to-two session, there will be a fair amount to document; the typical overdub is generally much simpler. But during mixdown you may have to document settings on every piece of gear you own.

Beyond recall sheets, we have the ability on many consoles and pretty much all digital audio workstations to store the many settings and effects on the mixer. A vocal overdub might get some equalization from your DAW. Documenting those settings is as easy as a Save As... command. At the end of a project a single song might have well more than a dozen saved versions.

Taking note

Projects end not only with a stack of master tapes, but also with a thick file full of documents. The track sheet (discussed last month), the take sheet, the setup sheet and all those recall sheets are an essential part of the recording craft. These documents help you get more out of your equipment and communicate a higher level of professionalism to clients.
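A present-day footnote to the recall sheet idea: for outboard gear with no Save As... of its own, the same information translates naturally into a structured record you can stash alongside the session files. A minimal sketch; the device name and settings shown are invented placeholders, not any standard format:

    import json

    # One recall entry per device per track; every knob and switch gets a field
    recall = {
        "song": "Notoriety",
        "track": 24,
        "device": "mic pre/eq",          # hypothetical example entries
        "settings": {"gain_db": 35, "hf_boost_db": 4, "hf_freq_khz": 15,
                     "lowpass": "out", "highpass": "out"},
        "date": "2001-02-10",
    }
    print(json.dumps(recall, indent=2))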
Alex Case reminds you that while you can track a take sheet, and you can take a track sheet, you can't sheet a take track. Send questions and suggestions for Nuts & Bolts to [email protected].
BY ALEX CASE
This month we apply some of our studio tools in ways that might seem like some sort of trick. We review some unlikely, unbelievable, or at least counterintuitive approaches to using effects. The beauty of this April Nuts & Bolts column is, it ain't no joke.
Compressualization

When is a compressor not a compressor? When it's an equalizer, of course. A de-esser, to be exact. De-essers attenuate the "ess" sounds in a vocal track made by the letter S. A loud, strong ess of a vocal can zap you with an ear-ringing, pain-inflicting burst of high frequency energy. Using eq to attenuate the problematic high frequencies associated with the esses will also rob the vocal of its airy, shimmery, voice-of-the-pop-music-gods quality that you've gone to so much trouble to create. The fact is, the vocal probably sounds great, if not perfect, whenever the singer isn't singing words with the dreaded letter S. To get an edgy, emotion-filled vocal that cuts through a mix crowded with fuzzy guitars, hissing cymbals, and shimmering strings, you've got to go for a bright vocal sound from the start; it influences mic selection, mic placement, and of course the effects you add. These overly bright esses are an almost unavoidable side effect of otherwise good recording practice. The solution is to use a compressor instead of an equalizer. The goal is to run the vocal through a compressor that attenuates the vocal only on the problematic ess sounds; the rest of the time, the compressor should not change the magic vocal one iota. Trouble is, no amount of fiddling with the threshold, attack, release, and ratio controls will accomplish this. These ess sounds happen so quickly that only an extremely fast compressor attack time could grab them.
Moreover, even though they are perceptually very loud and annoying, the typical changes in loudness that occur from verse to chorus, line to line, and even word to word are much greater swings in amplitude than a little ol' letter S. As a result, the compressor reacts to the louder parts of the singing, not the individually, perceptually louder sizzling sounds of the esses. We need a way to warn the compressor that an S is happening, despite the expressive changes in dynamics of the vocal track. We accomplish this through clever use of the compressor's side-chain. The side-chain offers an alternative input into the compressor, an input that won't have a corresponding output into the mix. This other input is just used to tell the compressor when and when not to compress. To get rid of the esses in the vocal, we route a copy of the vocal signal, with the esses emphasized, into the side-chain input of the compressor. As shown in Figure 1, we split the lead vocal, send it to a parametric eq, and then to the compressor side-chain input. Set the eq to a narrow (high Q) but large boost (+12 dB or maybe more) at the problematic frequency range. To find the exact frequency range, you can hunt around from about 2 kHz to 8 kHz until the compressor starts to react to the esses. You'll find you can zero in on other sibilant problems that might arise; it's not just for esses. You can de-F, de-X, de-T, de-Ch, de-Sh... this basic signal flow structure is effective at removing many related problems. A sharp boost enables the compressor to duck the signal in reaction to a single spectral spot.
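For readers who think in code as well as patch cables, here is a minimal sketch of the same trick in Python: the detector listens to a copy of the vocal with the esses emphasized (a highpass filter stands in for the parametric boost), and the gain reduction lands on the unprocessed vocal. Every number here (corner frequency, threshold, ratio, time constant) is a placeholder to be tuned by ear:

    import numpy as np
    from scipy.signal import butter, sosfilt

    def deess(vocal, fs, corner_hz=5000.0, threshold=0.05, ratio=4.0):
        # vocal: float numpy array. Side-chain: emphasize the esses.
        sos = butter(4, corner_hz, btype="highpass", fs=fs, output="sos")
        key = sosfilt(sos, vocal)

        # Crude envelope follower on the side-chain (one-pole smoothing)
        env = np.zeros_like(vocal)
        alpha = np.exp(-1.0 / (0.002 * fs))   # ~2 ms time constant
        level = 0.0
        for i, k in enumerate(np.abs(key)):
            level = max(k, alpha * level)     # fast attack, smoothed release
            env[i] = level

        # Gain computer: duck the *full* vocal only while the key is hot
        gain = np.ones_like(vocal)
        hot = env > threshold
        gain[hot] = (threshold + (env[hot] - threshold) / ratio) / env[hot]
        return vocal * gain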
You can broaden the bandwidth of the side-chain parametric eq to catch a range of sibilant sounds. This can be pushed to other applications: de-squeak an acoustic guitar, de-thump a piano... If you find the de-Suck setting, email me.

Some way cool compressors have a switch that lets you monitor the side-chain. This enables you to really fine tune the triggering frequency that gets the hyper-boost. Once you get the compressor to react to the esses, you must then use your good judgement to set the compression ratio just right. Too high, and the compressor overreacts to each S, literally giving the lead singer a lisp. Too low, and the esses continue to annoy. Like many mix moves, it is sometimes useful to tweak it too far (where the de-esser is audible and unnatural) and then back off until you imagine that you can't quite hear it working. In the end you should be able to push the eq on the actual lead vocal hard, without fear of sibilant destruction. Then your lead vocal can have all the grit, gasp and guts that pop music demands. Betht of luck.

Figure 1: De-essing a lead vocal relies on a side-chain input with boosted esses. Patch this up. Add delays, reverb, and sundry. Win Grammys.

Eqwahlization

Wah-wah. What an effect. Used tastefully, it can give a tune that perfect extra push toward, well, whatever you're aiming for. How's it done? With a Cry Baby effects pedal (or one of its siblings), naturally. But what if you don't have one? What if you do have one but the last nine-volt battery in the Tri-State area just pooped out?

This effect is nothing more than variable eq. If you've a parametric equalizer handy, patch the electric guitar or keyboard track through it. Dial in a pretty sharp midrange boost (high Q, 1 kHz, +12 dB). As the track plays, manually sweep the frequency knob with one hand and salute Jimi and Stevie with the other. Hip DAWs with automated equalizers make it easy to program this sort of eq craziness. Without automation, you just print your wah-wah performance to a spare track.

It's also worth exploring other frequency ranges. Try cuts as well as boosts; use narrow and broad bandwidths; try sweeping a highpass or a lowpass filter or shelving eq. And perhaps most importantly, apply it to any track. Piano offers a welcome wah-wah opportunity. It seems perfectly appropriate to wah-wah a cello or a snare drum; absolutely anything.
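And if your DAW won't automate the sweep for you, the whole trick fits in a few lines of code. A rough block-by-block sketch: a resonant bandpass filter whose center frequency is swept by a slow sine, mixed with the dry track. The sweep rate, range, and mix are arbitrary starting points, and a real implementation would carry filter state across blocks, which this toy version ignores:

    import numpy as np
    from scipy.signal import butter, sosfilt

    def wah(track, fs, rate_hz=0.5, lo=400.0, hi=2000.0, mix=0.6, block=512):
        out = np.copy(track)              # any trailing partial block stays dry
        for start in range(0, len(track) - block, block):
            t = start / fs
            # LFO sweeps the center frequency between lo and hi
            center = lo + (hi - lo) * 0.5 * (1 + np.sin(2 * np.pi * rate_hz * t))
            sos = butter(2, [0.8 * center, 1.25 * center],
                         btype="bandpass", fs=fs, output="sos")
            seg = track[start:start + block]
            out[start:start + block] = (1 - mix) * seg + mix * sosfilt(sos, seg)
        return out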
Stimulator

Amp simulators have been a boon to the home recordist. Some (most, actually) guitar amps only sound good when they are cranked up to ear-splitting levels. Something musical happens as the amp reaches its limits: electronically, mechanically, physically, and metaphysically. But what is an up-all-night home studio to do? Record direct and achieve that guitar amp near-death experience courtesy of amp simulation hardware/software. Neato.

Perhaps you use DI boxes when recording bass. That is pretty common practice these days. Great sounding bass amps require money, care, strength, space, a good bass guitar, an excellent bass player, and massive amounts of acoustic isolation when tracking (there's that "the amp's too loud" problem again). The direct inject device makes it possible to use the signal coming out of the bass guitar itself for recording onto tape or disk.

Think about it. We stick microphones in front of instruments to convert the noise they make in the air into an electrical signal on a wire. Once the music is in that mic cable, we can run it through our racks of audio equipment. Such an approach makes a lot of sense for voice and piano. But electric basses are, um, electric. Why not skip that whole electrical signal to amp to acoustic noise to microphone to electricity thang? This simple view motivates the DI.

A DI has to take care of some electricity bookkeeping: it lowers the voltage, lowers the impedance, and balances the signal so that what comes out of the DI behaves very much like the signal that comes out of most microphones. Off it goes into the rest of the recording chain: equalize it, compress it, and print it to tape.

The DI is quite effective on bass. The sound can be tight, crisp, and rich with low end warmth. In fact, even when you have the luxury of recording the bass through an amp, it is common practice to simultaneously record the bass with a DI onto a separate track. With both an amp sound and a direct sound on tape, you have more options for creating a powerful bass sound at mixdown.

Record electric guitar through a direct box and, blip, boink, flirp, ouch. Sounds thin, perky, silly, [other colorful descriptions thoughtfully deleted. Ed.]. It won't wake the neighbors, but it won't sell any records either; the guitar amp is too much a part of the tone equation. Perhaps you've tried to play a CD through a guitar amp. Notice how bad it always sounded? It's honky, with no highs, muddy lows, and zero dynamic range. That is what the amp does to the signal coming from the electric guitar. And the electric guitar just isn't an electric guitar without it; I don't think Leo Fender ever wanted us to hear the sound coming out of the guitar itself. So whenever a session forces us to resort to recording electric guitar direct to let the neighbors sleep, it is essential that we grab the amp simulator.

The amp simulator offers us a single stomp box, rack space, or pulldown menu that throws in a ton of distortion, compression, equalization, and god only knows what else. This effect begs for experimentation!

Don't let anyone pull a fast one on you. You can use compression to equalize, equalization to wah-wah-ize, and amp simulation to improvize. Happy signal processing.

Alex Case makes the wah-wah face whenever he uses the electric pencil sharpener. Offer therapy via [email protected].
If you want to be able to handle the sort of music in which multiple players are recorded simultaneously, you'll need extra mic pres. A classic example is the power trio, maybe blues: drums, bass, and electric guitar; they all want to jam together. You'll need enough mic preamps to get them all to tape or disk simultaneously. Want to record a big band? You'll need even more.

But another reason to go out and buy another mic preamp is just for variety. As is true for mics, loudspeakers, and compressors, no two mic preamps sound exactly alike. They have their own signature or flavor that can sound exactly right, or exactly wrong, when paired with a certain mic on a certain instrument for a certain kind of tune.

One final factor pressures us to acquire additional mic preamps: session flow. Keeping the session moving efficiently saves the band money and makes the studio a more creative place to work. A common approach, even when the session is a string of single-mic overdubs, is to leave each signal chain up and unchanged as you move on to the next overdub. When the piano overdub is complete and you're moving on to a few cymbal swells, the mics and mic pre settings on the piano stay where they are. You use different ones on the cymbal overdub. Not only does that avoid stopping the session to set up for the cymbal, it means you're ready to go if someone wants to change the piano part in the bridge.

Created equal

Equalization is a fundamental part of the music recording craft, so it too is a part of the channel strip. But please don't reach for the knobs of the equalizer too soon. There is definitely no substitute for good mic selection and placement. If you're lucky, patient, and smart enough to have a beautiful sounding instrument that you can place cleverly in a great sounding, well-controlled recording room, using excellent mics placed in that ever-elusive sweet spot, you may never use eq.
The rest of the time, we get by with a little help from our equalizer. Add punch, remove shrillness, add sparkle, remove muddiness. Equalizers are an essential part of getting our projects ready for prime time. But even more than mic preamplifiers, equalizers of different types from different manufacturers can sound quite different from one another. Using the same equalizer on most every overdub you do starts to give everything the same sonic aftertaste.
We need a better strategy than just randomly buying a few different equalizers. And snapping up the latest eq du jour won't guarantee we'll end up with a coordinated set. I suggest diversifying your equalizer collection based on the technology employed and the functional type of equalization: software versus hardware, solid-state versus tube, integrated circuit versus all discrete, digital versus analog, among others. Over time you'll learn to hear the subtle sonic differences between them. A session starts. You hear the singer's tone. And a bell goes off. Instantly you intuit the right choice of equalizer for this overdub.

Beyond technology, it makes sense to enrich your equalizer collection based on functional capabilities: parametric, semi-parametric, graphic, or plain old program eq. The parametric equalizer offers the most precise control for spectral manipulation, with three different parameters (hence the name) for your knob-tweaking pleasure. All the other types of equalizers (semi-parametric, graphic and program equalizers) have some subset of these three parameters available for adjusting on the front of the box or in the pulldown menu; the missing parameters are fixed by the manufacturer. When you learn how to use a parametric equalizer, you are learning how to use all types of equalizers.

Probably the most obvious parameter needed on an equalizer is the one that selects the center frequency you wish to attack. In search of shimmer, we might dial up an eq shape focused on 10 kHz. We've got to listen carefully, though, because the shimmeriness may be better at 12 kHz for today's particular track.

In parallel, we must determine how much to alter the frequency we are selecting. The addition (or subtraction) of frequencies happens via adjustment of a separate parameter: cut/boost. It indicates the amount of decrease or increase in amplitude at the center frequency you dialed in on the first parameter just discussed above. To take the muddiness out of a piano sound, select a low-ish frequency (around 250 Hz maybe) and cut a small amount, maybe about 3 to 6 dB. To add warmth and punchiness, boost maybe 9 to 12 dB at the low frequency that sounds best, perhaps somewhere between 40 and 120 Hz. As you can see, these two parameters alone, frequency select and cut/boost, give you a terrific amount of spectral flexibility.

The final parameter available on a parametric eq, bandwidth (a.k.a. Q), determines the width of the cut or boost. That is, as you boost the frequency selected by the amount shown on the cut/boost knob, how much are the neighboring frequencies affected? A narrow bandwidth (high Q) is very focused on the center frequency, and it introduces a sharp spike or notch to the frequency content of the signal being equalized. A wide bandwidth (low Q) takes a broader brush approach, pulling up a wide region of adjacent frequencies along with the center frequency being tweaked. Obviously, different bandwidth settings have different uses. During the course of a project you'll often find the need for a range of bandwidth settings.

A 4-band parametric eq has 12 controls on it, so you can select four different spectral targets and shape each of them. This gives us the ability to effect a tremendous amount of change to the frequency response of a track. The terrific amount of sonic shaping power that four bands of parametric equalization offer makes it a favorite part of the channel strip. But other options exist, offering their own benefits.

Some equalizers have fixed bandwidth; the bandwidth is determined by the designers of the equipment. This type of equalizer gives the recordist the freedom only to adjust the frequency and cut/boost parameters. Because of the downgrade from three parameters to two, this type of eq is sometimes called a semi-parametric equalizer. Alternatively, they are often called sweepable eq, highlighting the fact that the frequency you are cutting or boosting can be adjusted. This configuration in which only two parameters (frequency and cut/boost) are adjustable is very appealing because it's perfectly intuitive to use. More importantly, the sweepable eq is still very musical and useful in the creation of multitrack recordings.

Down one more level in flexibility (though not intrinsically in sound quality), sometimes an equalizer only allows control over the amount of cut or boost, and can adjust neither the frequency nor the bandwidth of the equalization shape. Generally called program eq, this is the sort of equalizer found on home stereos (labeled treble and bass). You also see this type of eq on many consoles, vintage and new. It appears most often in a 2- or 3-band form: high, mid, and low. In the case of your console's channel strip, this same equalizer is repeated over and over on every channel of the console. If it costs an extra $20 to advance the functional capability of the equalizer from program eq to sweepable, that translates into a bump in price of more than $600 on a 32-channel mixer. The good news is that well-designed program eq can sound absolutely gorgeous. And it often offers frequencies that are close enough to the ideal spectral location to get the job done on many tracks; often you don't even miss the frequency select parameter.

A slight twist on the idea above leads us to the graphic equalizer. Like program eq, this device offers the engineer only the cut/boost decision, fixing bandwidth and frequency.
On a graphic eq, several frequency bands are presented as sliders rather than knobs. The faders provide a good visual description of the frequency response modification that is being applied, hence the name graphic. Handy also is the fact that the faders can be made quite compact. It is not unusual for a graphic equalizer to have from 10 to upwards of 30 bands that fit into one or two rack spaces. Graphic eq is extremely intuitive and comfortable to work with. Being able to see an outline of what you hear will make it easier and quicker to dial in the sound you are looking for. Turning knobs on a 4-band parametric equalizer is more of an acquired taste, and that degree of control isn't always necessary.
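For the technically curious, those three parametric knobs map one-for-one onto the inputs of the standard digital peaking filter (the widely circulated Audio EQ Cookbook formulation, not anything specific to the analog boxes discussed here). A sketch:

    import math

    def peaking_coeffs(fs, f0, gain_db, Q):
        # RBJ Audio EQ Cookbook peaking (bell) filter coefficients
        A = 10 ** (gain_db / 40)           # amplitude from dB of boost/cut
        w0 = 2 * math.pi * f0 / fs
        alpha = math.sin(w0) / (2 * Q)     # higher Q -> narrower bell
        b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
        a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
        return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

    # e.g. take 4 dB of mud out of a piano at 250 Hz with a fairly wide bell
    b, a = peaking_coeffs(44100, 250, -4.0, 0.7)

Frequency select, cut/boost, and Q: nothing more, nothing less.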
Compressor liberty

When signals as variable, emotional, and dynamic as music signals must be squeezed into our audio electronics, they often need to be brought under control. They don't fit naturally into the constraints associated with storing a signal on tape or modulating it for broadcast out into the ether (that's radio or Internet broadcast). This is bad news for common musical elements like sax solos and drum fills. When the signals get too quiet, the music is obliterated by the hiss, rumble, hum, and buzz of our recording system. Distortion will occur if they get too loud. The extraordinarily delicate timbre of a glass harmonica or the subtly rich decay of a piano risks being lost entirely. We have no choice when recording very loud or very soft tracks. The loud stuff needs to be turned down to avoid distortion, while the quiet stuff needs to be turned up to avoid the noise floor. The compressor/limiter automatically tames music ever so slightly. It takes up residence on the channel strip because of this fundamental capability.

Nuts & Bolts has raved about the creative applications of compression. It is used to sharpen the transient attack of a sound; to lengthen its decay; to extract all those breaths, grunts, and rattles that performers and instruments make. Trouble is, compression generally can't be taken away, only added. This sort of compression, therefore, doesn't generally happen during recording. Radical compression more typically happens during mixdown, when there is time reserved for tweaking the compressor until it sounds just right. Unless aggressive compression is a key part of the sound (tracking piano with fierce compression on purpose, for the timbre it creates, for example; see The Nuts & Bolts of Compression in the 3/00 issue), it's best to defer such an extreme tone alteration until you are sure it sounds right for the whole tune. Think of creative compression as a special effect.

More conservative compression (a ratio of about 4:1 or less), on the other hand, is a common part of the recording path. It is an important part of your channel path. As with eq, it's useful to have a variety of compressors around for different applications; they all sound different from one another.

Pursuit of happiness

A few channel strips of very good quality that simultaneously offer a degree of sonic variety can give a small studio the recording vocabulary of the big studios. You don't necessarily need a humongonormous mixer. Acquire or improve your channel strip strategically.

Alex Case mistakenly got cable TV, misunderstanding what they meant by 200+ channels. Help him find the faders at [email protected].
A sense of balance

Consider the first step in building a mix. Carefully, systematically, and iteratively you adjust and readjust the volume and pan position of each track until the combination starts to make musical sense. At that point the mix is balanced: the song can stand on its own, and every track contributes to the music without obliterating other parts.

In pop music, usually the vocal and the snare sit pretty loud in the mix, dead center, with the other pieces of the arrangement (tracks and effects) filling in around and underneath. If the guitar is louder than the vocals, you're probably going to have trouble selling records. If you can't hear the piano when the sax plays, the song loses musical impact. So you work hard to find a balance that's fun to listen to, supports the music, and reveals all the complexity and subtlety of the song.

This first step of a mix session is really a part of every session. For tracking and overdubbing, the players can't play, the engineer can't hear, and the producer can't produce until the signals from all the live microphones, recorded tracks, and effects are brought into some kind of balance. Relying almost entirely on volume controls, balancing a mix is one of the most important skills an engineer must master.

On the level

If music is picked up with a microphone, you'll need a microphone preamplifier. Guess what? Mic preamps are nothing more than volume devices. And we've got to set the volume just right when we record to tape or hard disk (see sidebar).

Because all equipment has some noise, we naturally try to record music at as high a level as possible so that the musical waveforms drown out the noise floor. So it seems true that louder is indeed better. The question is, how loud? There are two different strategies for setting recording levels, depending on whether the storage format is digital or analog.

You've undoubtedly heard that for digital recording, the goal is to print the signal as hot as possible without going over. Let's think a little bit about what that means. Pressure in the air becomes voltage on a wire (thanks to the microphone), which then becomes numbers on tape or disk (thanks to the analog-to-digital converter). As the music gets louder in the air, the corresponding voltage gets higher on the mic cable. But at some point the numbers getting stored by the digital system can't get any bigger; it maxes out in much the same way that a child counting on his or her fingers runs out at ten.
At that point the digital data no longer follows the musical waveform (see Figure 1). This is a kind of distortion known as hard clipping. The peaks are clipped off, gone forever. Obviously, the way to prevent this kind of distortion is to make sure the analog levels going into the digital recorder never force the system past its maximum. The meters will help you here. Digital systems generally have meters that measure the amplitude of the signal in decibels below full scale, which references the ten-fingers point at which the digital system has reached its maximum digital value. If you are intrigued by the waveform shown in the lower part of Figure 1 and are wondering what it sounds like, you might want to overdrive the digital system on purpose. Be my guest, but be careful. First, monitor at a low level. This kind of distortion is full of high frequency energy that can melt tweeters.
Second, listen carefully. This type of distortion is extremely harsh; it's not a particularly musical effect, so it's best used sparingly if at all. But of course it's not strictly forbidden; music tends to rebel.

On analog magnetic recording systems, you typically record as hot as possible, and occasionally go over. Unlike digital audio, analog audio doesn't typically hit such a hard and fast limit; instead, it distorts gradually as you begin to exceed its comfort range. This gradual distortion at the peaks is called soft clipping, shown in Figure 2. At lower amplitudes, the analog magnetic storage medium tracks very accurately with the waveform. As the audio signal starts to get too loud, the analog storage format can't keep up. It starts to record a signal that's not quite as loud. As it runs out of steam, it does so gracefully. Look carefully and you might notice that overdriven analog tape looks a lot like compression. A quick glance at my effects rack reminds me: compression is an effect. I've bought rack spaces and pull-down menus full of compression. Can you overdrive analog magnetic recorders for an effect? You betcha. So we find ourselves using volume as an effect simply by setting levels as we record music.

Analog machines, with faint tape hiss, prefer audio waveforms without quiet passages (low volume). While digital systems don't have tape hiss, they do introduce other sonic artifacts at low levels, as we'll discuss in a future Nuts & Bolts article. Still, this low noise floor was a driving force in the transition from analog to digital audio. Classical and jazz engineers have to record acoustic music with a wide dynamic range, music that sometimes has long, open, quiet spaces. For this genre of recording, the nearly silent noise floor of digital storage was a dream come true. Rock and roll, on the other hand, tends to have a much more narrow dynamic range. The song kicks in and rarely lets up; hiss can't raise its ugly head over the screaming vocals and grinding guitars. Moreover, as we know from listening to radio, listening to great mixes, and experimenting in our own studios, rock and roll also loves a bit of compression. As a result, even in this very digital age, many pop records are still recorded onto analog tape. Adding further irony, these days digital audio devices are consistently less expensive to own and operate than professional analog audio tape machines. Today, we essentially must pay extra for the tape compression effect.
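The two flavors of "too hot" are easy to demonstrate. A small sketch, with a hyperbolic tangent standing in, very loosely, for tape's gradual rounding; real tape saturation is far more complicated:

    import numpy as np

    fs = 44100
    t = np.arange(fs) / fs
    x = 1.5 * np.sin(2 * np.pi * 100 * t)   # a sine driven 50% past full scale

    hard = np.clip(x, -1.0, 1.0)            # digital: peaks sheared off flat
    soft = np.tanh(x)                       # analog-ish: peaks rounded gradually

    # Meter reading in decibels relative to full scale (0 dBFS = maximum code)
    dbfs = 20 * np.log10(np.max(np.abs(hard)))
    print(round(dbfs, 1))                   # -> 0.0, i.e. pinned at full scale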
Given a choice, the sound quality differences between analog and digital recorders as they react to recording level are a key factor in selecting which format to use on a project. And here's another clear case of using the volume knob as an effect.

Flavor

In the recording studio, we generally run into two types of analog volume control: the variable resistor and the voltage controlled amplifier.
Electrical resistance is a property of all materials describing how much they restrict the flow of electricity. Materials with very high resistance are classified as insulators; they pretty much don't conduct electricity at all. We appreciate this property when we handle things like power cables. At the other extreme, devices with very low resistance fall into the category of conductors. Copper wire is a convenient example. The copper within that power cable conducts electricity from the wall outlet to the piece of audio equipment, getting the LEDs to flicker, motivating the meters to twitch, enabling us to make and record music.

The volume knob on a home stereo, electric guitar, or analog synthesizer is (with a few model-specific exceptions) a variable resistor. Set to a high resistance, electricity has trouble flowing and the volume is attenuated. To turn up the volume, lower the resistance and let the
audio waveform through. Variable resistors are also called potentiometers; we typically think of them as simple volume controls. In the recording studio, we have to look more closely at our volume controls because there is a second type: the voltage controlled amplifier. Hep cats resort to the three-letter acronym: VCA. The idea behind them is simple and clever. If the fader on an analog console is a potentiometer, it makes sense to
picture the fader as a variable resistor. But in the case of a VCA, the fader that sits on the console is separated from the audio by one layer. Instead of having that slider on the console physically adjust the resistance in a potentiometer, it adjusts a control voltage. This control voltage in turn adjusts the amount of gain on an amplifier.
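To make the distinction concrete, here is a small Python sketch (my own illustration, with an invented 10 dB-per-volt scaling; no real console is being modeled): the potentiometer scales the audio directly, while the VCA's fader only produces a control voltage that the amplifier translates into gain.

```python
import numpy as np

def potentiometer(signal, position):
    """A variable resistor sitting in the signal path: the knob position
    (0.0 = maximum resistance, silence; 1.0 = no attenuation) scales the
    audio directly."""
    return signal * position

def vca(signal, control_voltage, db_per_volt=10.0):
    """A voltage controlled amplifier: the fader never touches the audio.
    It produces a control voltage, and that voltage sets the amplifier's
    gain (here, a made-up 10 dB per volt)."""
    gain_db = control_voltage * db_per_volt
    return signal * 10 ** (gain_db / 20)

audio = 0.1 * np.random.randn(48000)   # stand-in for one second of audio

quieter = potentiometer(audio, 0.5)    # knob at halfway
louder = vca(audio, 0.6)               # 0.6 V of control voltage = +6 dB of gain
```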
Most compressors use VCAs, which are capable of reacting to voltage changes very quickly. And for consoles, the only other way to have something other than the engineer adjust the level would be to stick a motor on the fader. This is a pricey, complicated option, but motorized faders are certainly available, and at an ever-decreasing price.

Automation

Mix automation can do many things these days. If you have a hip digital audio workstation or digital console, you can automate it so that it wakes you to music first thing in the morning (noon), starts the coffee maker, and draws a warm bath. While this is all quite useful, automation is almost always just used for two very simple processes: fader rides and mutes. The point of pushing faders and pressing mute buttons? Controlling volume.

Not too long ago even the fanciest consoles offered the ability to automate only the faders and the cut (mute) buttons. Studios spent a few hundred thousand dollars on a top-of-the-line, state-of-the-art console and still couldn't automate pan pots, aux sends, equalizers, compressors, or reverbs.
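In a digital audio workstation those two moves reduce to simple arithmetic. As a rough sketch (my own, assuming NumPy and made-up break points), a fader ride is a gain envelope multiplied against the track, and a mute is the same idea taken all the way to zero:

```python
import numpy as np

fs = 48000
track = 0.1 * np.random.randn(8 * fs)        # stand-in for an 8-second track

# Fader ride: interpolate gain between automation break points
# (times in seconds, gains as linear multipliers), then multiply.
times = np.array([0.0, 2.0, 4.0, 8.0])
gains = np.array([1.0, 1.0, 0.5, 0.5])       # ease the track down 6 dB between 2 s and 4 s
envelope = np.interp(np.arange(track.size) / fs, times, gains)
ridden = track * envelope

# Mute: a gain of zero. Cut the track entirely from 6 s to 7 s.
ridden[6 * fs : 7 * fs] = 0.0
```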
But as we know from all the music released from the beginning of time up to about 1995, extraordinarily elaborate and complicated mixes were built with this relatively limited amount of automation capability. Clever volume effects, mostly using VCA-based automation, are the key.

For example, using the humble mute switch, the mix engineer controls the multitrack arrangement. Cut the bass in the extra bar before the chorus, pull the flute out of the horn part until the last chorus, etc. This sort of mix move happens throughout pop music. But check out an extreme example by listening to U2's Achtung Baby. The album begins with some heavy cut activity as the drums and bass enter at the top of the first tune of the album, Zoo Station.

Automating fader rides in support of the arrangement is a natural application of automation. Maybe it makes sense to push the guitar up in the choruses, pull the Chamberlin down during the guitar solo, and such. Ideally, the band (maybe with the advice of a producer) gets these dynamics right in their performance. But in the studio, the full arrangement of the song may not come together for several months as overdubs are gradually added to the tune. Fader rides may be just the ticket to help this assembly of tracks fall into a single piece of music.

Volume changes are automated just to keep the song in balance as multitrack components of the song come and go. But it's usually a good idea to keep these moves quite subtle; they're aimed at the musical interpretation of the mix, trying to make the song feel right. With few exceptions, it should pretty much never sound like a fader was moved. Listeners want to hear the music, not the console.

Another automated volume effect is the Automated Send. Some very sophisticated mix elements can be created this way. Automation is employed to add rich and spacious reverb to the vocal in the bridge only, introduce rhythmic delay to the background vocals on key words, increase the chorus effect on the orchestral strings in the verses, add distortion to the guitar in the final chorus, etc. The automated send, just another volume effect, offers a way to layer in areas of more or less effects, using nothing more than straightforward faders and cuts automation.

We'll keep digging deeper into volume next month, moving beyond faders and exploring the finer points of compression, expansion, gating, and tremolo, and how volume affects the eq curve. Stay tuned.

Getting paid to play the volume control is why Alex Case became a recording engineer. He used to do it for free. Speak up to case@recordingmag.com.

Dynamic Range

Musical dynamics are so important to composition and performance that they are notated on every score and governed closely by every band leader, orchestra conductor, and music director. Making clever use of loud parts and soft parts is a fundamental part of composition and arranging. In the studio we must concern ourselves with a different sort of dynamics: audio dynamics. Follow along in Figure 3 as we keep careful control over the range of amplitudes that we encounter when recording audio signals.

Exploring the upper limit of dynamic range comes naturally to most of us. We turn it up, whatever it is, until it hurts our ears, our equipment, or the music. Cranking it till it distorts. It seems to be the sole determinant for the position of the volume knob on most guitar amps (including mine), car radios (at least for the car in the lane next to me), portable stereos (the jogger who just passed me), home stereos (my neighbor in my freshman year college dorm)... Here we have encountered a basic property of all audio equipment: turn it up too loud, and distortion results.

At the other extreme (turning it down too much) lives a different audio challenge: we start to hear the inherent noise of the audio equipment we are using. All audio equipment has a noise floor: equalizers, compressors, microphones, and even patch cables. Yup. Even a cable made of pure gold manufactured in zero gravity during the winter solstice of a non-leap year will still have a noise floor, however faint.

A constant part of the recording craft is using our equipment in the safe zone between these two extremes. This is the dynamic range, and it's quantified in decibels (dB). The target nominal level is typically labeled 0 VU (that's a zero, not an O). At 0 VU the music gets through well above the self-noise of the equipment, but safely under the point where it starts to distort.

If we recorded pure sine waves for a living, we'd turn the signal up right to the point of distortion, back off a smidge in level, and hit Record. However, the amplitude of a real life musical waveform races wildly up and down due to both the character of the particular musical instrument and the way it is being played. Electric guitar amps cranked to the limit, at that much savored edge of becoming fire hazards, have very little dynamic range. If you haven't already witnessed this yourself, record a guitar the way Spinal Tap's Nigel Tufnel does, with the amp set to eleven. You'll observe the meters on your console and multitrack zip up at the downbeat. And they barely move until the end of the song. Percussion, on the other hand, can be a complicated pattern of hard hits and delicate taps. Such an instrument is a challenge to record well. The musical dynamic range of the instrument must somehow be made to fit within the audio dynamic range of your studio's equipment.

Accommodating the unpredictability of all musical events, we record at a level well below the point where distortion begins. The amplitude distance (expressed in decibels) between the target operating level, 0 VU, and the onset of distortion is called headroom. This gives us a safety cushion to absorb the musical dynamics without exceeding the audio dynamic range of the gear.

The relative level of the noise floor compared to 0 VU, again expressed in decibels, is the signal-to-noise ratio. The trick, of course, is to send your audio signal through at a level well above the noise floor so that listeners won't even hear that hiss, hum, grit and gunk that might be lurking down in the depths of each piece of equipment.

Making effective use of dynamic range influences not just how we record to tape, but how we use a compressor, a de-esser, a reverb, or any other piece of gear.
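To put numbers on those two distances, here is a small worked example in Python; the figures are invented for illustration, not specs from this column or any particular machine:

```python
# All levels in dB relative to the 0 VU operating level.
nominal_level = 0.0          # 0 VU, the target operating level
onset_of_distortion = 14.0   # assumed: distortion begins 14 dB above 0 VU
noise_floor = -70.0          # assumed: the hiss sits 70 dB below 0 VU

headroom = onset_of_distortion - nominal_level     # 14 dB of safety cushion
signal_to_noise = nominal_level - noise_floor      # music rides 70 dB above the noise
dynamic_range = onset_of_distortion - noise_floor  # 84 dB of usable window

print(f"Headroom: {headroom:.0f} dB")
print(f"Signal-to-noise ratio: {signal_to_noise:.0f} dB")
print(f"Dynamic range: {dynamic_range:.0f} dB")
```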