Pollsters aim to learn from 2016

Campaign signs for Bernie Sanders and Hillary Clinton are plastered outside the Gaillard Center, which will host the Democratic presidential debate, on Jan. 17, 2016 in Charleston, S.C. The issues that led battleground polls to miss the mark in the 2016 election could happen again if they’re not addressed directly, experts warn. Travis Dove/New York Times

Meetings of the American Association of Public Opinion Research tend to be pretty staid affairs. But when members of the group gathered for a conference call at this time in 2016, the polling industry was experiencing a crisis of confidence.

Donald Trump had swept most of the Midwest to win a majority in the Electoral College, a shocking upset that defied most state-by-state polls and prognoses. An association task force, which was already working on a routine report about pre-election poll methodologies, was suddenly tasked with figuring out what had gone wrong.

“We moved from doing this sort of niche industry report to almost like more of an autopsy,” said Courtney Kennedy, the director of survey research at Pew Research Center, who headed the task force. “Something major just happened, and we have to really understand from A to Z why it happened.”

The group released its report the following spring. Today, with the next presidential election less than a year away, pollsters are closely studying the findings of that document and others like it, looking for adjustments they can make in 2020 to avoid the misfires of 2016.

“Polling is one of those things like military battles: You always re-fight the last war,” said Joshua D. Clinton, who co-directs Vanderbilt University’s poll and served on the AAPOR committee. The 2020 election “might have a different set of considerations,” he said, but pollsters have an obligation to learn from the last cycle’s mistakes.

By and large, nationwide surveys were relatively accurate in predicting the popular vote, which Hillary Clinton won by 2 percentage points. But in crucial parts of the country, especially in the Midwest, individual state polls persistently underestimated Trump’s support. And election forecasters used those polls in Electoral College projections that gave the impression Clinton was a heavy favorite.

AAPOR’s analysis found several reasons the state polls missed the mark. Certain groups were underrepresented in poll after poll, particularly less educated white voters and those in counties that had voted decisively against President Barack Obama in 2012. Respondents’ unwillingness to speak honestly about their support for Trump may have also been a factor.

These and other issues could reappear in 2020, pollsters warn, if they’re not addressed directly.

‘Weighting’ by education

To make sure their results reflect the true makeup of the population, pollsters typically “weight” their data, adding emphasis to certain respondents so that a group that was underrepresented in the random sample still has enough influence over the poll’s final result. Polls typically weight by age, race and other demographic categories.

But some state-level polls in 2016 did not weight by education level, thereby giving short shrift to less educated voters, who tend to be harder to reach.

This often understated Trump’s support, since he was markedly more popular than past Republican nominees among less educated voters — and noticeably less popular among college-educated voters, who research suggests are more likely to participate in polls.

The AAPOR analysts found that many polls in swing states would have achieved significantly different results had they been weighted for education. This, in turn, would have noticeably decreased Clinton’s lead in much-watched polling averages and forecasts of these states.
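The weighting adjustment described above can be sketched as a simple post-stratification step. This is a minimal illustration with made-up population shares and a made-up sample, not any pollster's actual method:

```python
# Toy post-stratification sketch: reweight a sample so education
# groups match their (hypothetical) population shares.

# Hypothetical population shares by education level.
population = {"no_college": 0.60, "college": 0.40}

# Hypothetical raw sample: (education, candidate) per respondent.
# Less educated voters are underrepresented (40% of sample vs. 60% of population).
sample = (
    [("no_college", "Trump")] * 25 + [("no_college", "Clinton")] * 15 +
    [("college", "Trump")] * 25 + [("college", "Clinton")] * 35
)

n = len(sample)
sample_share = {g: sum(1 for e, _ in sample if e == g) / n for g in population}

# Weight for each group = population share / sample share.
weights = {g: population[g] / sample_share[g] for g in population}

def weighted_support(candidate):
    total = sum(weights[e] for e, _ in sample)
    return sum(weights[e] for e, c in sample if c == candidate) / total

unweighted = sum(1 for _, c in sample if c == "Trump") / n
print(f"Unweighted Trump share: {unweighted:.1%}")
print(f"Weighted Trump share:   {weighted_support('Trump'):.1%}")
```

In this invented example, weighting up the underrepresented, less educated group moves Trump's share from 50% to about 54% — the same direction of change the AAPOR analysts found in the real swing-state polls.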

A Michigan State University poll that found Clinton holding a 17-point lead in that state just before the election did not weight by education. If it had, her lead would have dropped to 10 points in that poll, the AAPOR researchers found — still a far cry from predicting Trump’s narrow victory there, but a significant change.

And a University of New Hampshire poll put Clinton up by 16 points in that state on the eve of the election, though in the end she barely won it. That poll’s gap would have closed entirely if its analysts had weighted for education, according to the AAPOR report.

Some polling firms that did not weight by education in 2016 have since taken up the practice, but not all of them. Mark Blumenthal, former head of polling at SurveyMonkey and a member of the AAPOR task force, said that weighting by education ought to be accepted as necessary. “I think that’s a reasonable line to draw,” he said.

‘Shy Trump’ voters

A pre-election study by Morning Consult warned that wealthier, more educated Republicans appeared slightly more reluctant to tell phone interviewers that they supported Trump, compared with similar voters who responded to online polls.

Pollsters refer to this phenomenon as the “shy Trump” effect, or — in academic parlance — a form of “social-desirability bias.” Studies have affirmed that in races where a candidate or cause is perceived as controversial or otherwise undesirable, voters can be wary of voicing their support, especially to a live interviewer.

Charles Franklin, director of the Marquette University Law School poll of Wisconsin voters, said he worried that the shy Trump effect had played a role in skewing the poll’s results away from Trump in 2016.

Franklin, who was a member of the AAPOR team, suggested how telephone interviewers might confront the issue with respondents next year: “When they indicate they’re undecided or maybe considering a third-party vote, maybe push people a little more on whether they could change their mind,” he said.

One polling firm that showed Trump narrowly leading in some of the most inaccurately polled states — Michigan, Pennsylvania and Florida, all of which he won — was Trafalgar Group, a Republican polling and consulting firm that uses a variety of nontraditional polling methodologies.

It sought to combat the shy Trump effect by asking respondents not only how they planned to vote but also how they thought their neighbors would vote — possibly offering Trump supporters a way to project their feelings onto someone else.

The AAPOR report posited that the neighbor question could help overcome shyness among Trump supporters, particularly in phone interviews. It “warrants experimentation in a broad array of contests,” the report said.

Who’s voting?

That was not the only way Trafalgar innovated. Polls typically use a formula based on past elections to determine which voters are likely to show up on Election Day. They then discard or devalue responses from those who seem unlikely to vote — typically people without much history of voting, or who don’t express much enthusiasm about politics.
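A likely-voter screen of the kind described above can be sketched as a scoring rule. The weights, scale and cutoff here are purely hypothetical — this is not Trafalgar's or any real firm's formula:

```python
# Toy likely-voter screen: score respondents on vote history and
# self-reported enthusiasm, then down-weight low scorers.
# All thresholds are invented for illustration.

def likely_voter_weight(past_votes, enthusiasm, cutoff=0.5):
    """past_votes: elections voted in out of the last 4; enthusiasm: 0-10 scale."""
    score = 0.6 * (past_votes / 4) + 0.4 * (enthusiasm / 10)
    # A strict screen would drop low scorers entirely; here we just devalue them.
    return score if score >= cutoff else score * 0.5

respondents = [
    {"past_votes": 4, "enthusiasm": 9},  # habitual, enthusiastic voter
    {"past_votes": 0, "enthusiasm": 8},  # new but energized voter
    {"past_votes": 1, "enthusiasm": 2},  # infrequent, disengaged voter
]

for r in respondents:
    w = likely_voter_weight(r["past_votes"], r["enthusiasm"])
    print(r, "-> weight", round(w, 2))
```

Note how a history-heavy screen sharply devalues the new-but-energized voter. A more inclusive model, in the spirit Cahaly describes, would lower the cutoff or lean more on enthusiasm than on past turnout.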

Trafalgar used a generously inclusive model, with a particular eye toward less frequent voters whom Trump’s anti-establishment campaign had drawn in.

“With Trump, we saw in the primary how new people were being brought into the process, and so we widened the net of who we reached out to,” Robert C. Cahaly, a pollster at Trafalgar, said in an interview.

When the Census Bureau in 2017 released detailed voting information from the 2016 election, it revealed that turnout had surged in many counties that Obama had lost by 10 points or more in 2012 — particularly in Michigan, Pennsylvania and Wisconsin. It is a reminder that who voted in the previous election is not always a good indicator of who will vote the next time.

“This is where the art comes in, and it’s hard to know until it actually happens which approach is the best approach,” Joshua Clinton said, referring to how polling firms construct their likely-voter models.

The polls from 2016 make clear that finding a representative sample is both the hardest and the most important part of conducting an effective survey. This is not new knowledge for public-opinion professionals, but many said it was a lesson worth relearning.

Late deciders

Compounding all the other factors in 2016 was the simple fact that — in a race with two historically unpopular candidates — many voters didn’t reach a decision until just before Election Day.

In Michigan, Pennsylvania and Wisconsin, between 13% and 15% of respondents in exit polls said they had decided in the last week of the campaign. Those voters broke for Trump by a wide margin; in Wisconsin, it was about 30 points.

Pew researchers also called back respondents from their pre-election polls and found that many had changed their minds and voted differently than they’d said they would, which is not uncommon. But these voters broke for Trump by a 16-point margin — a heavier tilt than in any other year on record.

So, in a volatile election, even a perfectly effective poll might not be able to gauge the outcome; a poll can only take the pulse of where voters’ feelings lie in a particular moment.

That points to a major source of agita for some observers of the 2016 election: electoral forecasts in the news media and elsewhere that used polling data to suggest Clinton was highly likely to win. Most of them put her chances at somewhere between 70% and 99%.

“I’m not sure people understand how these probabilistic projections are produced or what they mean,” Gary Langer, a pollster who works with ABC News, said in an email. “I’d suggest that predicting election outcomes is the least important contribution of pre-election polls. Bringing us to a better understanding of how and why the nation comes to these choices is the higher value that good-quality polls provide.”

Election forecasters do not mean to convey absolute certainty. Just before Election Day, The New York Times’ Upshot forecast gave Trump a 15% chance of winning, and FiveThirtyEight’s model put his chances at 29%, indicating that a Republican win was not out of the question.

But the Princeton Election Consortium, which had predicted the 2012 results with striking accuracy, was more certain of a Clinton win, giving her a 99% chance in the days leading up to the election.

Sam Wang, a neuroscientist who runs the Princeton model, said in an email that in 2016 he had not factored in enough potential “systematic error” — a catchall variable that accounts for imperfections in individual polls. In 2016, he never set that variable higher than 1.1 percentage points, but in 2020 he plans to set it at two points.

“That will increase the uncertainty much more,” he said, “which will set expectations appropriately in case the election is close.”
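Wang's point can be illustrated with a toy Monte Carlo forecast. The lead, noise levels and structure here are hypothetical — this is not the Princeton model — but it shows how a larger shared "systematic error" term turns the same polling lead into a much less certain win probability:

```python
import random

def win_probability(poll_lead, poll_sd, systematic_sd, trials=100_000, seed=0):
    """Toy forecast: the candidate leads by poll_lead points; each simulated
    election draws one shared systematic error plus ordinary sampling noise."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        systematic = rng.gauss(0, systematic_sd)  # shared error across all polls
        noise = rng.gauss(0, poll_sd)             # ordinary sampling error
        if poll_lead + systematic + noise > 0:
            wins += 1
    return wins / trials

lead = 2.0  # hypothetical 2-point polling lead
print("systematic sd 1.1:", win_probability(lead, 1.5, 1.1))
print("systematic sd 2.0:", win_probability(lead, 1.5, 2.0))
```

With these invented numbers, raising the systematic-error term from roughly 1.1 to 2 points pulls the win probability noticeably closer to a coin flip — the "increase in uncertainty" Wang describes.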

New York Times
