Is Political Polling Still Useful?

Compared to any available alternative, yes.

Steven Taylor’s “Split Polls” post from this morning, based on a NYT/Siena poll, got me digging into the poll’s methodology and a related FAQ. Given the pattern of commenter reaction to any post on polling, I thought some of what I found there would be useful to highlight.

CC0 Public Domain photo by Mohamed Hassan from PxHere

The New York Times/Siena College Poll is conducted by phone using live interviewers at call centers based in Florida, New York, South Carolina, Texas and Virginia. Respondents are randomly selected from a national list of registered voters, and we call voters both on landlines and cellphones. In recent Times/Siena polls, more than 90 percent of voters were reached by cellphone.

One of the most common questions we get is how many people answer calls from pollsters these days. Often, it takes many attempts to reach some individuals. In the end, fewer than 2 percent of the people our callers try to reach will respond. We try to keep our calls short — less than 15 minutes — because the longer the interview, the fewer people stay on the phone.

This has long been my understanding of the practice. But, of course, if only 2 percent of those contacted will respond, we’re left with a self-selected sample rather than a representative one. To the extent that the kind of person who will volunteer to take the time to participate in a ~15-minute survey is different from the other 98 percent, that’s a problem.

Phone polls used to be considered the gold standard in survey research. Now, they’re one of many acceptable ways to reach voters, along with methods like online panels and text messages. The advantages of telephone surveys have dwindled over time, as declining response rates increased the costs and probably undermined the representativeness of phone polls. At some point, telephone polling might cease to be viable altogether.

But telephone surveys remain a good way to conduct a political survey. They’re still the only way to quickly reach a random selection of voters, as there’s no national list of email addresses, and postal mail takes a long time. Other options — like recruiting panelists by mail to take a survey in advance — come with their own challenges, like the risk that only the most politically interested voters will stick around for a poll in the future.

In recent elections, telephone polls — including The Times/Siena Poll — have continued to fare well, in part because voter registration files offer an excellent way to ensure a proper balance between Democrats and Republicans. And perhaps surprisingly, a Times/Siena poll in Wisconsin had similar findings to a mail survey we commissioned that paid voters up to $25 to take a poll and obtained a response rate of nearly 30 percent.

That telephone surveys remain the most efficient means available to conduct survey research is not the same thing as telephone surveys being an effective means of conducting said research. What does “fare well” even mean here? Presumably, they’re useful only insofar as they’re predictive of actual behavior.

Our best tool for ensuring a representative sample is the voter file — the list of registered voters that we use to conduct our survey.

This is a lot more than a list of phone numbers. It’s a data set containing a wealth of information on 200 million Americans, including their demographic information, whether they voted in recent elections, where they live and their party registration. We use this information at every stage of the survey to try to ensure we have the right number of Democrats and Republicans, young people and old people, or even the right number of people with expensive homes.

On the front end, we try to make sure that we complete interviews with a representative sample of Americans. We call more people who seem unlikely to respond, like those who don’t vote in every election. We make sure that we complete the right number of interviews by race, party and region, so that every Times/Siena poll reaches, for instance, the correct share of white Democrats from the Western United States, or the correct share of Hispanic Republicans in Maricopa County, Ariz.

Once the survey is complete, we compare our respondents to the voter file, and use a process known as weighting to ensure that the sample reflects the broader voting population. In practice, this usually means we give more weight to respondents from groups who are relatively unlikely to take a survey, like those who didn’t graduate from college.
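
To make the mechanics concrete, here is a minimal sketch of post-stratification weighting with made-up numbers for a single variable (education). This illustrates the general technique the FAQ describes, not the actual Times/Siena pipeline, which rakes across many variables at once:

```python
# Minimal sketch of post-stratification weighting (hypothetical numbers).
# Each respondent gets a weight equal to their group's share of the voter
# file divided by that group's share of the completed interviews.

population_share = {"college": 0.35, "no_college": 0.65}   # voter-file shares
sample_share     = {"college": 0.55, "no_college": 0.45}   # college grads over-respond

weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Hypothetical unweighted candidate support within each group.
support = {"college": 0.52, "no_college": 0.44}

unweighted = sum(sample_share[g] * support[g] for g in support)
# sample_share * weight sums to 1 by construction, so no renormalization needed.
weighted   = sum(sample_share[g] * weights[g] * support[g] for g in support)

print(f"unweighted: {unweighted:.3f}, weighted: {weighted:.3f}")
```

The effect is that under-responding groups count for more, so the weighted estimate matches what the sample would show if it had the voter file’s composition.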

So, again, this comports with my understanding of how a nonbiased survey is conducted by professional pollsters. But it doesn’t solve the problem I mentioned earlier: How do we know that those who opt in to surveys are representative of their subgroups?

In 2022, we did an experiment to try to measure the effect nonresponse has on our phone polls. In our experiment, we sent a mail survey to voters in Wisconsin and offered to pay them up to $25 to respond. Nearly 30 percent of households took us up on the offer, a significant improvement over the 2 percent or so who typically respond by phone.

What we found was that, overall, the people who answered the mail survey were not all that dissimilar from the people we regularly reach on the phone, on matters including whom they said they would vote for. However, there were differences: The respondents we reached by mail were less likely to follow what’s going on in government and politics; more likely to have “No Trespassing” signs; and more likely to identify as politically moderate, among other things.

But the truth is that there’s no way to be absolutely sure that the people who respond to surveys are like demographically similar voters who don’t respond. It’s always possible that there’s some hidden variable, some extra dimension of nonresponse that we haven’t considered.

So, that’s somewhat comforting. Phone surveys are a lot cheaper, easier, and faster to conduct than mail surveys—let alone those that pay $25 a pop for responses. If the results are essentially the same, then phone surveys are obviously preferable. Still, that’s not the same thing as saying they’re good.

Granting that one’s stated preference and one’s behavior aren’t necessarily going to align—a lot of people who state a preference for either Trump or Biden won’t bother to vote at all and, I suspect, many who say they prefer RFK Jr. will either stay at home or vote for their least unfavorite of the major party candidates—the main value of election polling is to gain an understanding of the mood of the electorate to predict election outcomes.

In the 2022 midterm elections, Times/Siena poll results were, on average, within two points of the actual result across the races we surveyed in the last weeks of the race. That helped make The Times/Siena Poll the most accurate political pollster in the country, according to the website FiveThirtyEight.

At the same time, all polls face real-world limitations. For starters, polling is a blunt instrument, and as the margin of error suggests, numbers could be a few points higher or a few points lower than what we report. In tight elections, a difference of two percentage points can feel huge. But on most issues, that much of a difference isn’t as consequential.

Historically, national polling error in a given election is around two to four percentage points. In 2020, on average, polls missed the final result by 4.5 percentage points, and in some states the final polls were off by more than that. In 2016, national polls were about two percentage points off from the final popular vote.
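
For a sense of scale, the “margin of error” the FAQ mentions follows the standard formula for a proportion. The sketch below uses a hypothetical sample of 1,000 respondents; real polls typically report somewhat larger margins because weighting reduces the effective sample size:

```python
# Textbook 95 percent margin of error for a simple random sample of size n.
# A rough sketch; actual polls adjust upward for design effects from weighting.
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the 95% confidence interval for a proportion p."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"+/- {margin_of_error(1000):.1%}")  # about +/- 3.1 points for n = 1,000
```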

Being that close in midterm elections is an indicator that polling remains valuable. Midterms have always been harder to predict than presidential elections precisely because of the preference-behavior gap addressed previously. In the main, people are far more likely to vote in a presidential election than in a midterm, but the degree to which that is the case varies considerably.

FILED UNDER: Public Opinion Polls, US Politics
James Joyner
About James Joyner
James Joyner is Professor and Department Head of Security Studies at Marine Corps University's Command and Staff College. He's a former Army officer and Desert Storm veteran. Views expressed here are his own. Follow James on Twitter @DrJJoyner.

Comments

  1. al Ameda says:

    Being that close in midterm elections is an indicator that polling remains valuable. Midterms have always been harder to predict than presidential elections precisely because of the preference-behavior gap addressed previously. In the main, people are far more likely to vote in a presidential election than in a midterm, but the degree to which that is the case varies considerably.

    I’ve had many ‘Are Polls Relevant’ and ‘Polls Are Bullsh*t’ conversations with my Republican family members and others. By the way, no surprise, it’s almost always a discussion with conservatives, many of whom were shocked that Obama beat McCain and Romney, and that Biden beat Trump. Since the 2016 election and Trump ‘surprise,’ many conservatives are convinced that the ‘surprise’ proves that polling is ‘bullsh*t’ otherwise why didn’t polling predict Trump’s ‘victory’?

    I’m agnostic on this. I see value in polling. Some organizations are better than others to be sure.

    The shift from landlines to smartphones probably caused (forced?) polling organizations to modify their methods for securing a good sample size from which they do their analysis. While there are probably more problems now than two or three decades past, I continue to see value in polling – it’s a snapshot in time, in the moment, one that tells us and the candidates where we generally are on the various candidates and the issues of the day.

  2. Andy says:

    Polls have a margin of error for a reason.

    The problem is that people see, and the media and even pollsters focus on, a precise result – for example, “51 – 49” – without including the margin or range of error. This communicates false precision.

  3. steve says:

    Any thoughts on the betting sites like PredictIt? There is talk about regulators doing away with these sites on the theory that they corrupt elections. There are reported cases where election advisors were using inside info to bet against their own campaigns. However, it’s not common. The betting sites supposedly provide info analogous to polls, but I suspect they have even more of a bias issue and AFAICT they are only useful to some people who write about elections.

    Steve

  4. Mister Bluster says:

    In recent elections, telephone polls — including The Times/Siena Poll — have continued to fare well, in part because voter registration files offer an excellent way to ensure a proper balance between Democrats and Republicans.

    Our best tool for ensuring a representative sample is the voter file — the list of registered voters that we use to conduct our survey.
    This is a lot more than a list of phone numbers. It’s a data set containing a wealth of information on 200 million Americans, including their demographic information, whether they voted in recent elections, where they live and their party registration.


    Illinois does not have a political party registration system. However, in a primary election, you must select one political party ballot to vote or request a non-partisan ballot (public questions only) if available. You have the freedom to change your party choice in each primary election.
    Source

    Since Illinois, where I live, does not register voters by political party does this mean that pollsters do not gather information from Illinois residents?

  5. Kylopod says:

    @steve:

    Any thoughts on the betting sites like PredictIt?

    1) They should not be viewed as having any predictive value.

    2) They can be useful as a way of measuring the conventional wisdom of the moment, by a group of people who pay attention to politics and have a stake in at least aiming for accurate predictions. Even that has caveats (the ways in which people manipulate their votes), but I think it’s generally true.

    3) The edge percentages (say, 5% that the winner of the election will be Michelle Obama) are consistently ridiculous and should be discounted at the outset.

  6. Jen says:

    In the 2022 midterm elections, Times/Siena poll results were, on average, within two points of the actual result across the races we surveyed in the last weeks of the race.

    This is the key consideration–the timing. During my time in politics, my bosses–who had worked many elections–would point out that the further out the polling was, the less we should think about it. When do people start to cement an opinion, or decide to vote? About 2-3 weeks out.

    Polling this far ahead of time drives clicks, and it can allow for a campaign to course correct or adjust messaging, but it’s not an accurate predictor of outcomes that are months away.

  7. gVOR10 says:

    @steve: Yesterday Tyler Cowen at Marginal Revolution did a thing about this. He’s agin’ banning it, for no reason I can see except that he’s agin’ all regulation. I found it amusing. Not long ago we had a strong, largely religious, consensus that gambling is immoral and it was largely banned. And here’s Cowen, a high priest of the libertarian faith, taking the position that gambling is good. And for no apparent reason except that someone wants to ban it.

  8. DrDaveT says:

    we sent a mail survey to voters in Wisconsin and offered to pay them up to $25 to respond. Nearly 30 percent of households took us up on the offer, a significant improvement over the 2 percent or so who typically respond by phone.

    If the results of this sample were close to the usual, that’s actually informative — it tells us that the self-selection going on is highly correlated with “people for whom < $25 is a significant amount of money."

    That ain't me. I'm pretty sure it ain't Michael, or most of the other regular commenters here. Or the hosts, for that matter. Is the union of "people who will answer a bulk snail mail poll for $25" and "people who will respond to a cold call poll" representative of America? I have no idea, but I am skeptical. I'd expect both of those sets to skew elderly and poor – a combination that also implies less educated.

  9. James Joyner says:

    @Mister Bluster:

    Since Illinois, where I live, does not register voters by political party does this mean that pollsters do not gather information from Illinois residents?

    Indeed, I’ve never lived in a state with party registration and I’ve lived in quite a few. In the case of Illinois, though, it’s very solidly Democratic so not included in a swing state poll.

    @Jen: Oh, for sure. It’s useful for spotting trends but it’s just a snapshot of where the electorate is at any given moment.

    @gVOR10:

    Not long ago we had a strong, largely religious, consensus that gambling is immoral and it was largely banned. And here’s Cowen, a high priest of the libertarian faith, taking the position that gambling is good.

    I’m confused by this juxtaposition. Libertarianism isn’t based on Christian doctrine. There are all sorts of things Christianity is against (alcohol, drugs, prostitution, pornography, non-procreative sex, abortion, etc.) that libertarians think should be free from government regulation.

  10. Jen says:

    One other really random thought on this: I wonder if the “polling isn’t accurate” sentiment comes from the sheer volume of polls that make news for month after month, with ones that offer surprising results, which are likely outliers, taking up more space in our heads than they should.

    Basically, there are too many polls done too early. They might be baselines or snapshots, but the number of them is likely contributing to people’s distrust of the results.

  11. Kylopod says:

    @Jen: I have definitely thought about the fact that there are just so many goddamn pollsters, so people can always cherry-pick whatever evidence they need to bolster their preferred narrative. And I’m not just talking about pointing to specific polls to suggest that a candidate is winning or losing–the favorite activity of every hack pundit and partisan. I’m also talking about the poll doubters who dismiss the entire enterprise with sweeping statements about how “the polls are nonsense”–to which my response is always, “Which polls?” In 2022 there were some very questionable pollsters flooding the averages, most notably Trafalgar.

    I also think the question of polling accuracy gets conflated too easily with whether polls several months out from an election are predictive. For example, I would consider 2004 an example of an election with fairly accurate polling. And what I’m referring to is the fact that polls taken shortly before Election Day mostly showed a narrow lead for Bush in the popular vote, which is exactly what he got. However, in the spring and early summer, Kerry had been leading in most of the polls. Moreover, the exit polls that year were notoriously off–they showed Kerry ahead in several states he went on to lose. Because of that, and also because of Kerry’s polling lead earlier in the year, a lot of people point to 2004 as proof of the inaccuracy of polls.

    While I admit election polling has plenty of problems, I think the criticisms are often overstated by people failing to contextualize what (or whom) they are criticizing.