Dr. Strangevox


In the back room of a studio in North Austin lit only by the glow of computer screens, a producer and singer who calls himself Madd Creole lets out a string of gospel-inflected vocal improvisations. As his voice wavers and slides from note to note, it’s shadowed by a ghostly shimmer from the speakers, giving his soulful crooning a shiny, robotic skin. “It keeps taking my voice to these strange minor notes,” he says afterward. “It’s killing me, almost like having two drummers play at once, but it’s also giving me some new ideas.”

Michael May discussed Auto-Tune with the American Public Media host John Moe.

I’m here at ATV Recordz to try to come to terms with Auto-Tune, a piece of Texas-bred technology that has touched, and in some cases transformed, almost every genre of music. The software, released in 1997, lets producers seamlessly correct a bad note, which turned out to be as irresistible as the power of invisibility would be for most mortals. Almost every studio in the world now uses Auto-Tune; often it’s just used to nudge the occasional sour note into tune. In some cases, it’s used throughout a song to cover up a weak performance. I won’t name names, but it’s safe to say some of our recent pop stars owe their careers to the software.

Auto-Tune was intended as a pitch-correction tool, but it comes with unintended consequences. Madd Creole’s producing partner, Swole, opens a pop-up window on his computer with a graph that traces the notes that Creole is singing, and another line for the notes that Auto-Tune is producing. He can change the scale from major to minor, and the software will bend the notes to that key. The software has a virtual dial that controls how fast the notes are corrected. When Swole spins the “Retune Speed” knob all the way to the left, Creole’s voice no longer slides naturally along the melody, but is jerked forward from note to note, erasing all subtle transitions.
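Swole’s two controls can be sketched in miniature. The snippet below is a toy illustration of the ideas he describes — snapping a sung pitch to the nearest note of a chosen scale, then correcting toward it at an adjustable speed — not Antares’s actual algorithm; the function names, the scale encoding, and the simple linear retune rule are my own assumptions.

```python
import math

A4 = 440.0  # tuning reference: MIDI note 69

def snap_to_scale(freq_hz, scale_degrees=(0, 2, 4, 5, 7, 9, 11)):
    """Return the frequency of the nearest allowed scale note.

    scale_degrees defaults to C major; pass (0, 2, 3, 5, 7, 8, 10)
    for C natural minor -- the major-to-minor flip Swole performs.
    """
    midi = 69 + 12 * math.log2(freq_hz / A4)   # continuous MIDI pitch
    # candidate notes within an octave either side, filtered to the scale
    candidates = [n for n in range(int(midi) - 12, int(midi) + 13)
                  if n % 12 in scale_degrees]
    target = min(candidates, key=lambda n: abs(n - midi))
    return A4 * 2 ** ((target - 69) / 12)

def retune(midi_in, midi_target, speed):
    """One correction step. speed = 1.0 snaps instantly (the hard,
    robotic effect); small values nudge the pitch gently toward the
    target, preserving natural slides between notes."""
    return midi_in + speed * (midi_target - midi_in)
```

With `speed` near zero the correction is inaudible; dialed to 1.0, every frame lands squarely on a scale note, which is exactly what erases the subtle transitions in Creole’s singing.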

Much like nuclear fission, I’ve long been convinced that Auto-Tune in the wrong hands could destroy the world as we know it. Any tool that can tweak the human voice into airbrushed perfection can also strip away its humanity and soul. So I wasn’t surprised to learn that Auto-Tune, like most fiendish plots, was hatched deep, deep in the ground. Oil deep. Auto-Tune inventor Andy Hildebrand worked for Exxon Mobil Corp. for 18 years in seismic data exploration. His research involved sending sound waves deep into the subsurface strata of the Earth, recording them as they reflected back, and then using a technique called autocorrelation to find oil reserves. Hildebrand left the oil industry to study composition at the Shepherd School of Music at Rice University in Houston, and began to experiment with applying autocorrelation to music. He found that the technique could pinpoint pitch just as accurately as it had pinpointed oil. Hildebrand has said he created Auto-Tune to solve a common recording problem: a singer’s first take is often the most passionate and energetic, but also full of mistakes. He released the software, and popular music hasn’t suffered another wrong note since.
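The idea Hildebrand carried from oil fields to vocal booths can be shown in a few lines. The sketch below is a minimal illustration of autocorrelation pitch detection — comparing a signal against delayed copies of itself and reading the pitch off the lag where the match peaks — not Hildebrand’s proprietary method; the function name, frequency bounds, and frame size are my own assumptions.

```python
import numpy as np

def detect_pitch(frame, sample_rate, fmin=80.0, fmax=1000.0):
    """Estimate the fundamental frequency of a mono audio frame.

    Autocorrelation slides the signal past a delayed copy of itself;
    the delay (lag) at which they line up best is one period of the
    pitch, so frequency = sample_rate / best_lag.
    """
    frame = frame - np.mean(frame)              # remove DC offset
    corr = np.correlate(frame, frame, mode="full")
    corr = corr[len(corr) // 2:]                # keep non-negative lags
    lag_min = int(sample_rate / fmax)           # shortest period allowed
    lag_max = int(sample_rate / fmin)           # longest period allowed
    best_lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sample_rate / best_lag               # estimated pitch in Hz

# A pure 220 Hz sine wave (the A below middle C) as a sanity check.
sr = 44100
t = np.arange(2048) / sr
freq = detect_pitch(np.sin(2 * np.pi * 220.0 * t), sr)
```

The same math that located echoes in seismic strata locates the period of a sung note; once the pitch is known, correcting it is the easy part.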

Auto-Tune was designed as a subtle studio tonic, but it can also transform a voice into something not quite human. Its toxic capabilities were unleashed on the world by none other than Cher in her 1998 megahit, “Believe.” The producers turned Auto-Tune on full blast, transforming Cher’s vocal lines into a strange, metallic garble. When she sings “I can’t break through,” it seems as much a metaphor for the coldness of digital music as for failed love. It took me only one listen to determine that this was the most annoying song in the history of pop music. When the hit faded from the airwaves six months later, it felt like a bad fever had finally broken.

The respite would be brief. In 2005, T-Pain would meld soul and Auto-Tune and submerge the subtle textures of the African-American vocal tradition in a suffocating gloss. It can be hard to find a song on an urban radio station these days that doesn’t have the Cher effect. There’s even an Auto-Tune application for the iPhone called “I am T-Pain,” which democratizes mediocrity by allowing one to sing synthetically corrected harmonies over slow jam beats. So it’s a little ironic that just weeks after rap elder statesman Jay-Z addressed the audience at the BET awards and declared in song the “Death of Auto-Tune,” I’m finally willing to give Auto-Tune a second chance.

It’s hard to pinpoint when I realized that Auto-Tune might not be the death of pop music, but simply the latest example of musicians doing what they’ve always done: using the tools at hand to create new ways of shaping sound. It could have been hearing the New Orleans rapper Lil Wayne send his lazy drawl through the effect, giving an unearthly melody to his raps. Or it could have been T-Pain’s ode to Texas rap called “Chopped and Screwed,” where he melds the slowed-down analog sound of Houston’s DJ Screw with an aggressive Auto-Tune hook that’s chopped in pieces and repeated. The effect is bizarre, practically avant-garde.

This is the view Jace Clayton takes. Clayton is a blogger and DJ in Brooklyn, and unlike most critics, who despise the software, he has become one of Auto-Tune’s most intellectually rigorous defenders. He argues that Auto-Tune is the antithesis of a studio gimmick. While effects like reverb or compression merely affect the sound in a static way, Auto-Tune responds to the human voice, creating a sort of duet between man and machine. “A straight, clean vocal performance is not going to get you what you want from Auto-Tune,” he says. “But when you bend a pitch slightly out of tune, you hear the software immediately respond. This is not just like putting on makeup. This is more of a symbiotic relationship, a conversation. And it’s this strange embrace that’s fascinating to me.”

Clayton introduced me to “Youchkad Zin,” a song by Hafida, a virtuosic Berber singer from Morocco. Hafida’s voice is drenched so thoroughly in Auto-Tune that it makes T-Pain sound almost human. As her voice climbs the complex quarter-tones of Berber music, Auto-Tune whips her tones around like a leaf in a cyclone. She sounds like some strange fusion of robot and violin, while the chorus of voices behind her is untreated. “It really foregrounds her shrill, otherworldly glissandos,” says Clayton. “Every song on this CD uses Auto-Tune this heavily. In fact, it’s hard to find Berber music now that doesn’t have Auto-Tune.”

Another Auto-Tune masterpiece is “Baako,” by DJ Champion from the Ivory Coast. Champion, in a moment of hallucinatory brilliance, ran the sound of a baby crying through Auto-Tune. The effect is unnerving to the bone. “We’re hardwired to react to a baby crying,” says Clayton. “We want someone to come and soothe it and make it stop. But Auto-Tune has made the cry musical. It almost evokes a synthetic emotion. This track is eerie.”

Eerie, yes. But this wailing yet somehow musical cry exemplifies the creative possibilities of Auto-Tune. It can help anyone sound like a singer, even a crying baby. It’s a blessing and a curse. Producers Swole and Madd Creole say Auto-Tune has made some singers lazy. That’s why they prefer to use a Talk Box, a difficult-to-use analog tool that involves piping sounds through a tube and into a singer’s mouth, which Peter Frampton used to get a similar effect 30 years ago. “But it’s unfair to say that Auto-Tune sucks just because it’s too easy,” says Swole. “Remember, the people who came before us had to use tape reels, while we can cut and paste tracks on a computer screen. Auto-Tune isn’t going to kill anything. Tools will keep coming along to correct or change the sound of music, and not everything will be used exactly how it’s intended. And that’s beautiful.”

I’m still an analog guy at heart, but I’ve made my peace with Auto-Tune. It’s the inevitable melding of man and machine, a cyborg new wave. And in the age of Twitter and Facebook, we may be becoming more comfortable communicating through computers than face to face. In a sense, Auto-Tune is nothing less than a bright, sonic reflection of that. It’s who we’re becoming.

Michael May is a former Observer managing editor. He’s now a freelance journalist based in Cambridge, Massachusetts.