Polling: A Survival Guide


Ever want to take an adventure? For those of us constrained by jobs, families and other such inconsequential details, we can’t sail down the Amazon or some other exotic river. But we can go deep into the heart of political polling. Take care though—it’s dangerous. One false assumption, and you can be totally misinformed.

Just consider the first half of September. Poll numbers from the Texas gubernatorial race started pouring in and seemed to confuse everyone. One poll showed incumbent Gov. Rick Perry ahead by 12 points, two showed a six-point gap between Perry and Democratic candidate Bill White, and one showed the race as a virtual tie. Some folks chose to believe whichever mirage-like poll showed the results they wanted; others simply wrote them all off and wandered away.

But armed with some basic information (although a machete might also come in handy), people might have been able to make use of the polls. What follows is a basic survival guide—some tips for evaluating polls and knowing what they tell you. Of course, just as there are no rules in the wilderness, there are few absolutes in polling. No poll will ever tell you absolutely and precisely what will happen on election day. But if you stay alert and keep your eyes open, it’s easy to evaluate whether a poll is in safe territory—or whether you need to run for cover.


The Supplies Store

Pioneers had the Sears-Roebuck catalogue and we have websites (less kitsch appeal, I’ll grant you). Most political tourists love fivethirtyeight.com. Nate Silver, the site’s creator and main writer, got his start analyzing fantasy baseball numbers. He brings the same sports enthusiasm to political horse racing. The site does a great job keeping the numbers exciting, emphasizing the current forecasts and predictions for different elections. Using polling data and its own prediction models, the site gives readers gambler’s odds—currently, it gives Perry an 86 percent chance of winning in Texas. Silver also makes value judgments about polls, ranking them based on methodology and noting which are partisan. Check out his guide to polling firms whenever you feel lost in the wilderness.

But for those of us who sat in the front of the class with glasses on, Pollster.com is an excellent option. Full of raw source material specific to Texas, with an archive that goes back years, the website generally links to press releases and, where available, the surveys themselves. There’s less synthesizing than on fivethirtyeight—instead, the site gives you more information for making decisions on your own. This is the cantina for supernerds—discussions on methodology abound, as well as lots of graphs, charts and the like. Both sites feature writing from pollsters and academics. “Nate Silver’s operation is about as good as we have,” says Richard Murray, the director of the Survey Research Institute at the University of Houston.

Now that you’re equipped, grab your pocket-protector.


Beware the Quicksand!

Most stories about a poll tell you two things: the results and the margin of error. We assume that the margin of error is a way to judge the validity of the poll—and you know what you do when you assume. Take the recent Wilson Research Strategies poll for the Republican group GOPAC. The poll showed 50 percent of respondents favoring Perry and 38 percent preferring White. The margin of error was plus or minus 3.1 percentage points. If we take the margin of error as the only marker of legitimacy, then this poll looks pretty good—3.1 is low as far as margins of error go. And just like that, we inadvertently wander into no man’s land. “These partisan polls like GOPAC, usually they end up exaggerating the advantage for the candidate or their party,” says Murray. Tread carefully.

Let’s get one thing straight. Because no one can ask every single voter what he or she will do, surveys rely on samples—small groups of people meant to represent the entire population. “Public opinion researchers liken it to making a big pot of soup,” explains the site Public Agenda. “To taste-test the soup, you don’t have to eat the whole pot, or even a whole bowl’s worth. You only have to try a bite.” Of course, we’re never going to pick a group that represents the population with 100 percent accuracy, but we assume that the larger the sample, the more representative it will be of the general population. The margin of error simply quantifies that idea—for the most part, it tells you how many people were surveyed. (For you nerds out there, a common approximation is one divided by the square root of the sample size.)

For almost all competitive and semi-competitive races, the margin of error simply tells you this: if you repeated the exact same survey with other random samples from the same population, 95 percent of the time your results would come within that margin. The bigger the sample, the smaller the margin of error—although once samples get past 1,000, the improvements get smaller and smaller.
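For the nerds in the back row, the rule of thumb above is easy to check yourself. Here is a minimal sketch in Python (the function name is my own, not from any polling toolkit) showing both the one-over-square-root arithmetic and the diminishing returns of bigger samples:

```python
import math

def margin_of_error(sample_size: int) -> float:
    """Rough 95 percent margin of error for a result near 50 percent,
    using the 1/sqrt(n) rule of thumb."""
    return 1.0 / math.sqrt(sample_size)

# A sample of roughly 1,000 respondents gives about a 3.2-point margin,
# close to the 3.1 points reported in the GOPAC poll:
print(round(margin_of_error(1000) * 100, 1))  # 3.2

# Diminishing returns: quadrupling the sample only halves the margin.
print(round(margin_of_error(4000) * 100, 1))  # 1.6
```

Note the design consequence: because accuracy scales with the square root of the sample, pollsters rarely pay for samples much beyond 1,000 respondents.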

Okay, that’s what the margin of error does tell you. But that’s it. Margin of error is not a proxy for the trustworthiness of a poll, and it doesn’t tell you anything about a pollster’s method for picking the sample. In fact, it’s relatively easy to manipulate results without showing any change in the margin of error. To really evaluate a poll, first check fivethirtyeight.com or pollster.com to find out whether the firm conducting the poll is trustworthy—and don’t assume that just because a group is partisan, it’s inherently biased. For instance, the Hill Research Consultants firm—which conducted a poll showing the candidates in a virtual tie—is Republican but nonetheless has a relatively good ranking on fivethirtyeight. Once you know you’re on solid ground, then you have to start digging.


I need to ask you a question?

As with any expedition, you’re going to need some confidence. So how’s this: You are probably more informed than the average voter. Feel that self-esteem rising? The average voter doesn’t necessarily know that much about the candidates he or she is about to support. That’s important to remember because, while you might be an avowed Rick Perry fan who would never consider voting for a candidate who hasn’t shot a coyote while jogging, you can be easily influenced. That’s why actually reading a survey is integral to evaluating its overall fairness. Of course this isn’t always possible, but approach with caution any survey that doesn’t give you much information. After all, in this wilderness, things are not always as they seem.

For instance, everyone loves to demand a survey’s cross tabs—the results broken down by groups like gender or party—but few demand to see the survey itself. How pollsters choose to write their questionnaire can have major implications for the results. The American Association for Public Opinion Research (AAPOR) offers some key warning signs for unreliable questions. First, there’s the order of the questions. If the poll begins by asking which candidate someone will vote for, it risks favoring the most well-known candidate: the incumbent. That’s particularly true early in a race, before the challenger has been able to establish him- or herself. But if a survey asks the central question—whom will you support?—later in the poll, the earlier questions may sway respondents.

Are you with me? Here’s an easy example: If a pollster asks about a popular Rick Perry policy—say, do you support the tax cuts he initiated?—and then follows up by asking which candidate the respondent will vote for, the respondent is more likely to say Perry than they otherwise would be. If a poll asks about an unpopular Perry initiative, like the Trans-Texas Corridor, you’ll see the opposite effect. This may be a sad commentary on our public opinion, but it nonetheless accounts for some bias in polls. Keep your binoculars out for these kinds of problems.

Then there are the questions themselves. Good questions should not favor one side over the other—that’s a sure way to manipulate a poll. A question can introduce bias either by using loaded language or by offering different numbers of options on either side of an issue. For instance, “Do you support Governor Perry?” is not as balanced as a question that offers both options (“Do you support or oppose Governor Perry?”).

Stay alert for more subtle problems as well. Some questions are double-barreled—“Do facts 1, 2 and 3 make you more or less likely to support Bill White?” If the respondent finds facts 1 and 2 favorable but hates fact 3, there’s no good answer.

If you find a questionnaire with some of these problems, that doesn’t invalidate the whole thing. But it does mean you should tread carefully and recognize the potential bias in favor of one candidate or the other.


Who are these people? Likely voters vs. Registered voters

You aren’t out of danger yet. Not even close. Now, brave explorer, you must consider who is answering the survey. I could ask the men standing on the street corner whom they’ll vote for, but I doubt I could extrapolate much from that.

There are three general types of groups surveyed: “registered voters,” “eligible voters” and “likely voters.” Registered voters are easy to define—they are people who have, at some point, registered to vote. These folks have taken one step toward voting, but that’s all we know. Bigger elections draw larger turnout, and in a midterm election like this year’s, without any national candidates, fewer Texans will ultimately make it to the polls. So in surveys of registered voters, we’re seeing the opinions of people who could vote, though many will not. Eligible-voter surveys are even more inclusive—they cover anyone who could vote. At this point in the election season, there’s still time to register, so anyone who resides legally in this state is eligible for the poll. Registered- and eligible-voter surveys help us see bigger populations—what will happen if White can drive big turnout among low-participation populations like Hispanics?

There are obvious problems with these inclusive approaches. Clearly, not all of these people will vote—in fact, in a midterm election like 2010, we’ll be lucky to hit a 40 percent turnout rate. So these groups aren’t representative of the people who will actually pull levers and punch holes for Perry or White, and that limits how much we can extrapolate from the results. Furthermore, such surveys tend to slightly advantage Democratic candidates.

But wading into who is a “likely voter” can bring you into dangerous territory as well. According to AAPOR, “Most polls ask a combination of questions that cover self-reported vote intention, including measures of engagement (‘Are you following the election closely?’) and past behavior (‘Have you voted in prior elections?’).” Ultimately, determining “likely voters” comes down to figuring out their level of enthusiasm. And—as anyone excited to see Inception can tell you—enthusiasm can change quickly.
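To see how those AAPOR ingredients fit together, here is a toy likely-voter screen in Python. Everything in it (the questions, the weights, the cutoff) is invented for illustration; real firms keep their actual likely-voter models proprietary, and this is only a sketch of the general shape such a screen might take:

```python
def likely_voter_score(respondent: dict) -> int:
    """Score a respondent on the three kinds of questions AAPOR
    describes: vote intention, engagement, and past behavior.
    Weights here are hypothetical."""
    score = 0
    if respondent.get("intends_to_vote"):        # self-reported intention
        score += 2
    if respondent.get("following_closely"):      # engagement with the race
        score += 1
    # Past behavior: one point per prior election voted in, capped at 3.
    score += min(respondent.get("past_votes", 0), 3)
    return score

def is_likely_voter(respondent: dict, cutoff: int = 4) -> bool:
    """Respondents at or above the (arbitrary) cutoff make the sample."""
    return likely_voter_score(respondent) >= cutoff

sample = [
    {"intends_to_vote": True, "following_closely": True, "past_votes": 3},
    {"intends_to_vote": True, "following_closely": False, "past_votes": 0},
]
print([is_likely_voter(r) for r in sample])  # [True, False]
```

Notice what the cutoff does: the second respondent *says* he’ll vote, but with no history and no engagement he gets screened out—which is exactly the judgment call that makes these models contentious.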

This year, for instance, Republicans are more excited about the races than Democrats, which makes them better informed and more likely to turn out. But that doesn’t mean a major media event—a debate or a series of ads—couldn’t get Democrats revved up. If that happened, more Democrats would become likely voters. And you don’t have to take my word for it—smart professors at New York University and Oxford University made the same argument in a dour paper entitled “Likely (and Unlikely) Voters and the Assessment of Campaign Dynamics.” If you’re dying to read it, it’s in the Winter 2004 edition of Public Opinion Quarterly.

Murray agrees that in low-turnout elections, predicting who will vote is difficult. “It’s tougher to poll in midterm elections because the electorate is small,” he explains.

Many pollsters and academics recognize the value of likely voter models yet remain skeptical of them. “People who vote are people who vote,” says Hill Research pollster David Benzion. The recent Hill Research poll targeted people who had voted in 2004, 2006 or 2008. Benzion shrugs off such models as “intellectual exercises.”

“If you’re actually running for office,” he says, “it’s malpractice to base your strategy on assuming you can do anything at all about a voter model.”

But malpractice or not, likely voter models are common—groups like Rasmussen, Public Policy Polling, and Gallup all rely on them to give a sample that they hope will look more like the people who show up on Election Day.

Well my friend, you are through the worst of it. There’s more adventure to be had, of course—random versus non-random sampling, web surveys versus phone. But best not to jump in all at once. After all, it’s a jungle out there.