Frustrated With Polling? Pollsters Are, Too

Mr. Bui is the deputy graphics director for Opinion. From 2015 to 2022, he was a graphics editor for The Upshot.

Pollsters are holding their breath. Their time-tested method of randomly dialing up people isn’t working as it used to. Voter turnout in the past two national elections dwarfed that of years past. Donald Trump’s most enthusiastic supporters seem to be shunning calls to participate in polls.

But what’s really troubling pollsters going into this election is that it’s unclear how much more error these problems will add during this cycle. In fact, many think it’s unknowable.

To be fair to pollsters, many Americans demand more certainty and precision from political polls than they do from other disciplines of social science. Just a couple of percentage points can make all the difference in an election.

I talked to 10 of the country’s leading pollsters to discuss the midterm elections and what worries them the most about polling. Most understand the public’s frustrations. Some are experimenting with new approaches. Others are concerned that the problems are deeper than what their current toolkit can fix. Spend several hours talking to them, and there’s only one conclusion you can reach: The same cross-currents of mistrust, misinformation and polarization that divide our nation are also weakening our ability to see it for what it is. The stronger those forces grow, the worse our polling gets.

And like you, pollsters are anxiously waiting for Nov. 8. Not necessarily to see if a specific candidate wins but to see if election polling will live another day.

What follows is analysis and edited excerpts from our interviews.

Turnout troubles

Changes in voter turnout drive one major source of error in polls. To accurately survey the electorate, most pollsters have to make an educated guess about who is going to show up on Election Day. Some use voter lists, others use algorithms, and still others rate people on their likelihood to vote.

But if voter turnout is much higher — like the record-breaking years of 2018 and 2020 — then there’s a good chance their polls will be off, particularly if the surge in new voters is from demographic groups that pollsters didn’t expect to vote.
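To see how much a turnout assumption alone can move a result, here is a minimal sketch in Python of the kind of propensity weighting pollsters describe. The respondents and turnout probabilities are invented for illustration; this is not any pollster’s actual model.

```python
# Illustration only: invented respondents and turnout probabilities,
# not any real pollster's likely-voter model.
respondents = [
    ("D", 0.9), ("D", 0.5), ("D", 0.3), ("D", 0.2),
    ("R", 0.9), ("R", 0.8), ("R", 0.7), ("R", 0.4),
]

def topline(respondents, turnout_boost=0.0):
    """Weight each respondent by an (optionally boosted) turnout probability."""
    totals = {"D": 0.0, "R": 0.0}
    for choice, p_vote in respondents:
        totals[choice] += min(1.0, p_vote + turnout_boost)
    grand = sum(totals.values())
    return {k: round(100 * v / grand, 1) for k, v in totals.items()}

print(topline(respondents))                     # baseline model: R ~60, D ~40
print(topline(respondents, turnout_boost=0.3))  # turnout surge: R ~56, D ~44
```

No respondent changes their mind here; only the model’s guess about who shows up changes, and the topline still moves by about four points.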

There’s a lot of cover-your-ass going on.
Nobody knows who’s going to vote. In fact, right now, every poll could be wrong, because there are some people, both Democrats and Republicans, who think that we’re going to have higher turnout than 2018. No model is prepared for that. Most of the models are prepared for higher Republican turnout.

There’s a lot of what I call cover-your-ass going on. A lot of people are making their polls more conservative because it’s better to be wrong and your person wins than to be wrong and your person loses.

Anna Greenberg, Democratic campaign pollster


It’s all really an understanding of who’s actually going to show up to vote.
There are very few actual persuadable voters out there, and it’s all really an understanding of who’s actually going to show up to vote.

That’s where we get into the greater importance of likely voter models rather than the basic underpinnings of polling.

You create a sample of known people, but then you create a model where you throw out some people to give you my best guess of which of these voters is going to show up. And that’s not polling. That is just statistical modeling.

Patrick Murray, academic pollster


There are some assumptions that people have made that end up making their polls go awry.
A thing people like to do is use voter lists. You might want to weight the number of registered Republicans and Democrats to balance your sample. [“Weighting” means making some responses count more toward the final result in order to account for small discrepancies in your sample.] Well, until recently, in Iowa there was an even number of Republicans, Democrats and independents. But Republicans were more likely to turn out. So they might be 33 percent of the list; they were 38 percent of the people who actually voted. So they would underestimate Republicans.

There was an assumption that if you made it look like this known list, then you’ve done your job — and that just isn’t the way voting works.

Ann Selzer, Iowa-based private pollster
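Ms. Selzer’s Iowa example can be worked through directly. Here is a minimal sketch in Python using the 33 and 38 percent figures from her account; the candidate splits are invented for the illustration.

```python
# Illustration of Ms. Selzer's Iowa example. The 33 and 38 percent figures
# are from her account; the candidate splits below are invented for the sketch.
def gop_candidate_share(gop_weight_target):
    """Topline for the Republican candidate after weighting Republicans
    to gop_weight_target, assuming Republicans split 90/10 for their
    candidate and everyone else splits 40/60 (assumed numbers)."""
    gop = gop_weight_target
    rest = 1.0 - gop
    return round(100 * (gop * 0.90 + rest * 0.40), 1)

print(gop_candidate_share(0.33))  # weighted to the registration list: 56.5
print(gop_candidate_share(0.38))  # weighted to who actually voted:    59.0
```

Under these assumptions, the list-weighted poll lands about two and a half points low on the Republican candidate, which is exactly the kind of systematic miss she describes.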


Call waiting

Another problem pollsters face is actually reaching voters. The irony, of course, is that while we live in a world where our personal data seems to be online everywhere, actually getting in touch with us has gotten much more difficult.

The gold-standard approach has historically been random digit dialing, in which a pollster’s interviewers call randomly generated phone numbers. But the explosion of unused numbers, the portability of cellphone numbers (I live in New York but frequently receive calls from pollsters in Wisconsin) and people’s greater reluctance to answer their phones have driven response rates off a cliff. For a recent poll for The New York Times, Nate Cohn mentioned that it took two hours of dialing just to complete one interview.

As a result, pollsters are looking at other ways to get in touch. Some are sending surveys to people through a link via text message. Others rely on an online stable of people who are paid to take surveys. Some survey people through an automated voice messaging system.

But most admit that they are still in the early stages of figuring out how well these new approaches will work for elections. Some pollsters, like Ann Selzer, aren’t convinced that any of these methods are the right way forward.

It’s too expensive and takes too long to do the traditional random digit dialing.
Think about the explosion of the numbers of phones that we have compared to 30 years ago. If you’re randomly dialing numbers, how many of those numbers are associated with something that isn’t a person? It’s enormous.

Barbara Carvalho, academic pollster


Do I feel like there is a doomsday clock ticking? Yeah, I kind of do.
There isn’t a pollster who is telling the truth who doesn’t worry all the time about [falling response rates]. I like to quote Tennessee Williams, that we rely on the kindness of strangers. A stranger will pick up the phone, and after they hear who you are, they will continue to talk to you — for no payment! That’s not a business model that I see has an extensively long future.

[My approach] is — for now — working. Do I feel like there is a doomsday clock ticking? Yeah, I kind of do. But there isn’t an alternative that I find convincing. I get people calling me all the time: “How are you going to change your methods this year?” Well I’m not. At some point those will be famous last words. I just hope it won’t be this election.

Ann Selzer, Iowa-based private pollster


We use elections as essentially our canary in the coal mine.
In the last 20 to 30 years the ways of collecting data have dramatically changed. We use elections as essentially our canary in the coal mine on what methods work in some states and what methods don’t work. You can do an online poll. How do you know if it works? Well, we have an election to compare it with.

Each election cycle is an opportunity to learn more about the methods.

Spencer Kimball, academic pollster

Methods that the Emerson College Polling Institute is experimenting with

Text-to-web
What is it? Sending people links to an online survey through a text message.
Trade-offs: Might be the most promising method because it seems able to capture both younger and older people.

Interactive voice response
What is it? People respond to an automated message with their keypad. Limited to landlines due to F.C.C. rules.
Trade-offs: Better at reaching older people, people with a high school degree or less and people living in rural places.

Online panels
What is it? A preselected group of people who take online surveys.
Trade-offs: Biased toward urban, more educated and younger people, so not necessarily representative of the electorate as a whole. And respondents are paid or given “incentives” to participate.


There’s a paradigm shift in how we do data collection for polling.
The ground is constantly shifting under your feet as you’re trying to adopt best practices going forward. The best practices for web-based surveys I don’t think have been developed yet. We’re still in that research mode.

Andrew Smith, academic pollster


Just because you put the right ingredients in a bowl doesn’t mean you’re going to end up with a cake.
There are some difficulties with throwing things together like, “let’s do some with text,” “let’s do some with a panel,” “let’s do some with interactive landline.” I say often, just because you put the right ingredients in a bowl doesn’t mean you’re going to end up with a cake.

I’m not comfortable with that. And for now, I’m doing OK. Someday it will all explode in my face. That’ll be the end.

Ann Selzer, Iowa-based private pollster


The move to online polling seems so inexorable.
In this era of mixed-mode polling, there are dramatic differences in terms of where you get your sample — whether it’s an online survey, whether it’s a phone survey tied to the voter file or whether it’s a phone survey done through random digit dialing. All of those are going to give you vastly different result distributions. [A voter file is a list of registered voters.]

I think a lot of that has been swept under the rug because the move to online polling seems so inexorable.

Patrick Ruffini, Republican campaign pollster


The Trump effect

One of the trickiest problems pollsters have had to reckon with over the last few elections is that Donald Trump’s most dedicated supporters won’t respond to their surveys — an error pollsters call nonresponse bias. The key issue is that the demographically similar Republicans who do talk to pollsters haven’t voted the same way as the ones who don’t.

This effect was particularly acute in states like Wisconsin, Ohio and Pennsylvania in 2020, where the polls were heavily biased toward the Democrats.

You don’t know exactly who you’re missing.
If there is a big miss this cycle, it’s likely to be driven by voters who are less educated not participating in the polls. The people you do get on the phone, you can always weight them up [“weighting up” means counting a response more heavily toward the final result], but you don’t know exactly who you’re missing. And that’s always been the kind of thing that will keep a pollster up at night. That has certainly been more the case since 2016 with President Trump’s first run.

But the numbers were really good in 2018. Are we in that sort of midterm where Trump is not on the ballot and missing those people isn’t as much of a concern?

Jon McHenry, Republican campaign pollster
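Mr. McHenry’s worry is worth making concrete: if the less-educated voters who answer the phone vote differently from the demographically identical ones who don’t, weighting them up fixes the composition of the sample but not the bias. A small simulation in Python, with invented numbers, purely to illustrate:

```python
import random

random.seed(0)

# Invented numbers, purely illustrative. Suppose 40% of the electorate is
# non-college and backs the GOP 65/35, but the non-college voters who are
# willing to take a survey back it only 55/45. College voters split 50/50
# and respond normally.
def simulate_poll(n=100_000):
    gop_votes = 0
    for _ in range(n):
        if random.random() < 0.40:                # non-college share is correct...
            gop_votes += random.random() < 0.55   # ...but only atypical ones answer
        else:
            gop_votes += random.random() < 0.50
    return 100 * gop_votes / n

true_gop = 100 * (0.40 * 0.65 + 0.60 * 0.50)      # what the electorate actually does
print(f"poll says {simulate_poll():.1f}, truth is {true_gop:.1f}")
# The sample has the right share of non-college voters, so no demographic
# weighting scheme flags a problem, yet the poll runs ~4 points too low.
```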


Partisan nonresponse is the one that looms largest because there’s no real way to correct for it.
Since you don’t know what’s not observed.

Patrick Ruffini, Republican campaign pollster


How do I force people to talk to us?
In 2020 we encountered some new problems. Not the least of which we now know is something called nonresponse bias, which is a group of voters not speaking to us. That’s probably the most challenging of all the potential errors in a poll, because how do you fix that? How do I force people to talk to us? It was made worse under the influence of Donald Trump and his conversation of: Don’t trust the media, don’t trust the polls, don’t trust academics, don’t trust anyone.

We believe that a third of the error is caused by turnout and about two-thirds of the error is caused by nonresponse bias [in states like Wisconsin, Ohio and Pennsylvania in the 2020 presidential election].

A number of folks on the Democratic side have gotten together to try to figure this out. But we spent over a year trying to figure out certain things we could do to alleviate the problem of nonresponse bias. No regular readers will see that because most of what we do stays with the campaign.

Jef Pollock, Democratic campaign pollster


You’re writing survey questions about politics when someone might be living in a completely different political world.
People are operating off of information that you’re unaware of. Pollsters don’t know what people are seeing, hearing and reading. And so you’re writing survey questions about politics when someone might be living in a completely different political world than the one you’re writing about. That has made it very complicated to measure what’s happening out there.

Anna Greenberg, Democratic campaign pollster


A blurry snapshot

These are the kinds of issues that are on pollsters’ minds when they release their polls, leading them to read polling results far less literally than the average news reader, seeing polls less like a studio portrait and more like a blurry Polaroid.

It’s true that polls include a margin of error, which is essentially the mathematical error inherent in using a small, randomly selected group of people to describe a much larger one. But that is just one source of error. It and the problems mentioned earlier add up to a broader measure known as total survey error, which, according to an analysis of polls from 1998 to 2014, is roughly twice as large as the published margin of error.

That was before the rise of Mr. Trump, the further erosion of response rates and recent record turnout. It’s probably higher now. Which is why some pollsters, like Patrick Ruffini, are more circumspect about how their work is read and shared.
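For reference, the published margin of error is just the textbook sampling formula, roughly 1.96 times the square root of p(1−p)/n at 95 percent confidence, and the analysis above suggests roughly doubling it to gauge total survey error. A minimal sketch in Python (the sample size of 800 is an assumption for illustration):

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """95% sampling margin of error, in percentage points, for a proportion p."""
    return 100 * z * sqrt(p * (1 - p) / n)

n = 800                                   # assumed sample size, for illustration
moe = margin_of_error(n)                  # about 3.5 points
print(f"published margin of error: +/-{moe:.1f} points")
print(f"rough total survey error:  +/-{2 * moe:.1f} points")  # the ~2x figure above
```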

Pollsters who work for political campaigns, like Robert Blizzard, aren’t as worried. Their job is to help their candidate win. For them, it’s less important for their polling to be right on Election Day than it is to be right about messaging, policy and spending. They claim to have much more data than what’s available to the public, but of course, we’ll never know.

People need to expand their tolerance for error.
I think people need to have a sense that the margin of error is a highly misleading statistic, and I frankly sometimes don’t know why we publish it anymore. It’s highly technical.

It doesn’t measure the totality of the error that could happen. People could change their minds, or you could be surveying a completely different sampling universe of people that actually show up on Election Day.

Patrick Ruffini, Republican campaign pollster

What “true” error could look like

Total survey error is much larger than people may assume, and there is no good way to quantify exactly how large it might be. For a poll with a published margin of error of plus or minus 2 percentage points, the components look something like this:

Margin of error: the statistical uncertainty deriving from the poll’s sample size.

Late break: a shift in voter preferences after the poll is conducted.

Different turnout: voter turnout was different than anticipated.

Nonresponse: the pollster missed a segment of voters in a way that biased the results.


What is error? How are we defining errors?
That’s part of the problem I have with polling aggregators. You’re treating all polls fairly and equally. That’s not the case. I know at FiveThirtyEight you can search by A-rated or B-rated, but I don’t know how they’re coming up with these ratings.

If I release a poll for a race and it shows my candidate down two and we end up winning by two points, was my poll wrong, or was my advice really good?

I’m not trying to predict. I’m trying to figure out how we move the needle.

Robert Blizzard, Republican campaign pollster


If there wasn’t polling in the public domain, I would be a happy person.
It’s a weird thing in this business, where things that are meant to be strategic documents are in the public domain. You know, the Eagles don’t have to deal with the Giants having their playbook.

If I had my druthers, it wouldn’t exist. But the readers are interested in the horse race, and the rumor is that it drives clicks once or twice.

Jef Pollock, Democratic campaign pollster


Polls don’t forecast. Forecasters forecast.
That’s a whole different animal, even though they use polls to create their forecasts. It’s very different. Unfortunately, I think polls are being evaluated just as much on those forecasts as the numbers that we create through our research.

Barbara Carvalho, academic pollster


How to read the polls

What should we make of all this? Is there a good way to read polls without having spent decades in the field? The pollsters shared a few rules of thumb: Look at the context around the poll results. How large is the share of independent voters? Who is the incumbent? What kinds of people are being surveyed?

A registered voter poll is going to look more Democratic than the turnout would indicate.
Some polls do not report likely voters; they just report registered voters. Mentally I add two or three Republican points to that poll. I know it’s true in Iowa. I don’t know if it’s true elsewhere, but that’s kind of my working hypothesis.

Ann Selzer, Iowa-based private pollster


Independent voters are the ones that typically decide where things are going.
And right now, they should be — based on everything we’ve seen — heavily Republican.

Because, you know, they strongly disapprove of Joe Biden. They think that the country is off on the wrong track. From an issue set, they are much more in line with the priorities of Republicans on inflation and the economy and, to some extent, crime and public safety.

Robert Blizzard, Republican campaign pollster


Focus on the share of the vote. Not the margin.
We do a pretty good job of measuring the percentage that Democrats get, and we haven’t done a great job measuring the percentage that Republicans get.

So if a Democrat is at 49 and the Republican is at 40 and it’s a Democratic incumbent, don’t talk about that Democrat being ahead nine points. Talk about the fact that they’re under 50. Because the undecideds usually break against the incumbent.

Anna Greenberg, Democratic campaign pollster
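Ms. Greenberg’s rule of thumb is simple arithmetic. A minimal sketch in Python using her 49-40 example, with assumed rates at which undecideds break to the challenger:

```python
# Ms. Greenberg's 49-40 example: a Democratic incumbent at 49,
# the Republican challenger at 40, 11 points undecided.
dem, rep = 49, 40
undecided = 100 - dem - rep

# How hard the undecideds break to the challenger is an assumption;
# her point is that they usually break against the incumbent.
for share_to_challenger in (0.5, 0.75, 1.0):
    d = dem + undecided * (1 - share_to_challenger)
    r = rep + undecided * share_to_challenger
    print(f"{share_to_challenger:.0%} break -> D {d:.1f}, R {r:.1f}")
# At a full break the "nine-point lead" becomes a 49-51 loss,
# which is why the incumbent's share matters more than the margin.
```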


A candidate in a statewide office generally can only run a handful of points higher than their party.
If on Election Day [in Pennsylvania], Biden is at 40, then it’s extremely difficult for Fetterman to win that race. Because Fetterman has to run nine or 10 points ahead of Joe Biden.

Robert Blizzard, Republican campaign pollster


Make sure that a survey is consistent.
Look at the party balance on a survey. See if it lines up with past surveys you’ve seen. I mean, expectations do get defied, but make sure that a survey is consistent.

I remember, maybe in 2014, a national poll from The Washington Post had shown a huge sea change in Democratic fortunes from one survey to the next. But what it all boiled down to was that the party ID was just off. They had a 12-point jump in Democratic participation in their survey, and therefore they thought that Democrats were 12 points better off.

Jon McHenry, Republican campaign pollster


Even if you average the polls, it doesn’t really remove all these other errors with the polls.
I think we need to both be skeptical about polls in general and know that they can be subject to all the same kinds of errors that we saw in 2020 and 2016. That could very well happen again even if you’ve done everything right.

Patrick Ruffini, Republican campaign pollster
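Mr. Ruffini’s caveat about averaging can be shown in a few lines. A small simulation in Python, with invented numbers: averaging many polls shrinks random sampling noise but leaves a bias shared by every pollster fully intact.

```python
import random
from statistics import mean

random.seed(1)

TRUE_SUPPORT = 52.0   # the (unknowable) truth; invented for the illustration
SHARED_BIAS = -3.0    # every pollster misses the same voters in the same way

def one_poll():
    sampling_noise = random.gauss(0, 2.5)   # independent poll-to-poll error
    return TRUE_SUPPORT + SHARED_BIAS + sampling_noise

polls = [one_poll() for _ in range(20)]
print(f"average of 20 polls: {mean(polls):.1f}")  # ~49: the noise averages out,
print(f"true support:        {TRUE_SUPPORT:.1f}") # but the 3-point bias remains
```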


If they’re not transparent, I wouldn’t pay any attention to them.
I’m always dubious of anybody that actually does work for candidates or political parties. You get what you pay for, essentially. If a candidate releases polls about their own race, they’re doing it to influence coverage of the campaign.

Andrew Smith, academic pollster