Key things to know about election polling in the United States
Pew Research Center
https://www.pewresearch.org/fact-tank/2020/08/05/key-things-to-know-about-election-polling-in-the-united-states/

A robust public polling industry is a marker of a free society. It’s a testament to the ability of organizations outside the government to gather and publish information about the well-being of the public and citizens’ views on major issues. In nations without robust polling, the head of government can simply decree citizens’ wants and needs instead.

After the 2016 presidential election, some observers understandably questioned whether polling in the United States is still up to the task of producing accurate information. Errors in 2016 laid bare some real limitations of polling, even as clear-eyed reviews of national polls in both 2016 and 2018 found that polls still perform well when done carefully.

One way to help avoid a repeat of the skepticism about surveys that followed the last presidential election is to narrow the gap between perception and reality when it comes to how polling works. People have many notions about polling – often based on an introductory statistics class, sometimes on even less – that are frequently false. The real environment in which polls are conducted bears little resemblance to the idealized settings presented in textbooks.

With that in mind, here are some key points the public should know about polling heading into this year’s presidential election.

Different polling organizations conduct their surveys in quite different ways. Survey methodology is undergoing a period of creative ferment. Currently, CNN and Fox News conduct polls by telephone using live interviewers, CBS News and Politico field their polls online using opt-in panels, and The Associated Press and Pew Research Center conduct polls online using panels of respondents recruited offline. There is even a fourth group of pollsters that combines methods, such as robocalls and online surveying with opt-in samples. These different approaches have consequences for data quality as well as for accuracy in elections.

The barriers to entry in the polling field have disappeared. Technology has disrupted polling in ways similar to its impact on journalism: by making it possible for anyone with a few thousand dollars to enter the field and conduct a national poll. As with journalism, there are pluses and minuses to this democratization. There has been a wave of experimentation with new approaches, but there has also been a proliferation of polls from firms with few or no survey credentials or track record. In 2016, this contributed to a state polling landscape overrun with fast and cheap polls, most of which made a preventable mistake: failing to correct for an overrepresentation of college-educated voters, who leaned heavily toward Hillary Clinton. Some newcomer polls might provide good data, but poll watchers should not take that on faith.

A poll may label itself “nationally representative,” but that’s not a guarantee that its methodology is solid. When applied to surveys, the phrase “nationally representative” sounds like a promise of a poll’s trustworthiness. But the term doesn’t convey any specific technical information or come with any guarantees. Surveys can be sampled and adjusted to represent the country on certain dimensions, so any person can make this claim about any poll, regardless of its quality. Unfortunately, this is part of a broader trend in which the lingo used to promote surveys (“organic sampling,” “next-gen sampling” or “global marketplace,” for example) can on some occasions obscure flawed methodologies that lead to bias. Poll watchers would do well to focus on key questions for vetting polls, such as those included in this guide for reporters published by the American Association for the Advancement of Science’s SciLine, or Pew Research Center’s own field guide to polling.

The real margin of error is often about double the one reported. The notion that a typical margin of error is plus or minus 3 percentage points leads people to think that polls are more precise than they really are. Why is that? For starters, the margin of error addresses only one source of potential error: the fact that random samples are likely to differ a little from the population just by chance. But there are three other, equally important sources of error in polling: nonresponse, coverage error (where not all of the target population has a chance of being sampled) and mismeasurement. Not only does the margin of error fail to account for those other sources of potential error, it can leave the public with the false impression that they do not exist.

Several recent studies show that the average error in a poll estimate may be closer to 6 percentage points, not the 3 points implied by a typical margin of error. While polls remain useful in showing whether the public tends to favor or oppose key policies, this hidden error underscores the fact that polls are not precise enough to call the winner in a close election.
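
For readers who want to see where the reported figure comes from, the conventional margin of error reflects sampling variability alone. The sketch below is a minimal illustration in Python, assuming a simple random sample, a 95% confidence level and an evenly split question; it puts the roughly 3-point reported figure alongside the roughly 6-point average total error described above.

```python
import math

def sampling_margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion p from a simple random sample
    of size n. This reflects sampling variability only, not nonresponse,
    coverage error or mismeasurement."""
    return z * math.sqrt(p * (1 - p) / n)

# A 1,000-person poll on an evenly split question reports roughly +/- 3 points...
reported = sampling_margin_of_error(p=0.5, n=1000)
print(f"Reported margin of error: +/- {reported * 100:.1f} points")
# ...but the studies cited above put the average total error closer to
# 6 points once the other error sources are factored in.
```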

Huge sample sizes sound impressive, but sometimes they don’t mean much. Students learning about surveys are generally taught that a very large sample size is a sign of quality because it means that the results are more precise. While that principle remains true in theory, the reality of modern polling is different. As Nate Cohn of The New York Times has explained, “Often, the polls with huge samples are actually just using cheap and problematic sampling methods.”

Adding more and more interviews from a biased source does not improve estimates. For example, online opt-in polls are based on convenience samples that tend to overrepresent adults who self-identify as Democrats, live alone, do not have children and have lower incomes. While an online opt-in survey with 8,000 interviews may sound more impressive than one with 2,000 interviews, a 2018 study by the Center found virtually no difference in accuracy.
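
A small simulation can make this concrete. The sketch below uses entirely hypothetical numbers (not the Center's data): a sampling method that overrepresents one group produces roughly the same skewed estimate whether it collects 2,000 or 32,000 interviews.

```python
import random

random.seed(42)

# Hypothetical population: group A is 30% of adults and 60% support the
# candidate; group B is 70% with support set so that true overall support
# is exactly 50%. (Illustrative numbers, not real survey data.)
GROUP_A_SHARE, GROUP_A_SUPPORT = 0.30, 0.60
GROUP_B_SUPPORT = (0.50 - GROUP_A_SHARE * GROUP_A_SUPPORT) / (1 - GROUP_A_SHARE)

def biased_poll(n: int, group_a_sample_share: float = 0.50) -> float:
    """Simulate a poll whose recruitment overrepresents group A
    (50% of interviews instead of its 30% population share)."""
    support = 0
    for _ in range(n):
        in_group_a = random.random() < group_a_sample_share
        p = GROUP_A_SUPPORT if in_group_a else GROUP_B_SUPPORT
        support += random.random() < p
    return support / n

for n in (2_000, 8_000, 32_000):
    print(f"n={n:>6}: estimate {biased_poll(n):.1%} (true value 50.0%)")
# The estimates hover around 53% at every sample size: the margin of error
# shrinks as n grows, but the bias from the flawed sample does not.
```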

There is evidence that when the public is told that a candidate is extremely likely to win, some people may be less likely to vote. Following the 2016 election, many wondered whether the pervasive forecasts all but guaranteeing a Clinton victory – two modelers put her chances at 99% – led some would-be voters to conclude that the race was effectively over and their vote would not make a difference. Now there is scientific research to back up that logic. A team of researchers found experimental evidence that when people have high confidence that one candidate will win, they are less likely to vote. This helps explain why some analysts of polls say elections should be covered using traditional polling estimates and margins of error rather than speculative win probabilities (also known as probabilistic forecasts).

Estimates of the public’s views of candidates and major policies are generally trustworthy, but estimates of who will win the “horse race” are less so. Taking 2016 as an example, both Donald Trump and Clinton had historically poor favorability ratings. That turned out to be a signal that many Americans were struggling to decide whom to support and whether to vote at all. By contrast, a raft of state polls in the Upper Midwest showing Clinton with a lead in the horse race proved to be a mirage.

Leaving aside the fact that the national popular vote for president doesn’t directly determine who wins the election, there are several reasons why the final vote margin is harder to gauge accurately, starting with the fact that it is notoriously difficult to figure out which survey respondents will actually turn out to vote and which will not. This year, there will be added uncertainty in horse race estimates stemming from possible pandemic-related barriers to voting. Far more people will vote by mail – or try to do so – than in the past, and if fewer polling places than usual are available, lines may be very long. All of this is a reminder that the real value in election polling is to help us understand why people are voting – or not voting – as they are.

All good polling relies on a statistical adjustment called “weighting” to make sure that samples align with the broader population on key characteristics. Historically, public opinion researchers have relied on the ability to adjust their datasets using a core set of demographics to correct imbalances between the survey sample and the population. There is a growing realization among survey researchers that weighting a poll on just a few variables like age, race and sex is insufficient for getting accurate results. Some groups of people – such as older adults and college graduates – are more likely to take surveys, which can lead to errors that are too sizable for a simple three- or four-variable adjustment to work well. Pew Research Center studies in 2016 and 2018 found that adjusting on more variables produces more accurate results.

A number of pollsters take this lesson to heart. The high-caliber Gallup and New York Times/Siena College polls adjust on eight and 10 variables, respectively. Pew Research Center polls adjust on 12 variables. In a perfect world, it wouldn’t be necessary to have that much intervention by the pollster – but the real world of survey research is not perfect.
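
One common way to perform this kind of multivariable adjustment is raking (iterative proportional fitting), in which respondents’ weights are repeatedly scaled until the weighted sample matches population benchmarks on each adjustment variable. The sketch below is a simplified illustration with two made-up variables and invented benchmarks, not any specific pollster’s procedure.

```python
from collections import defaultdict

# Toy sample that overrepresents college graduates and older adults.
respondents = (
    [{"educ": "college", "age": "65+"} for _ in range(30)]
    + [{"educ": "college", "age": "18-64"} for _ in range(30)]
    + [{"educ": "no_college", "age": "65+"} for _ in range(20)]
    + [{"educ": "no_college", "age": "18-64"} for _ in range(20)]
)
for r in respondents:
    r["weight"] = 1.0

# Hypothetical population benchmarks (shares within each variable).
targets = {
    "educ": {"college": 0.35, "no_college": 0.65},
    "age": {"65+": 0.20, "18-64": 0.80},
}

def rake(respondents, targets, iterations=25):
    """Iterative proportional fitting: rescale weights one variable at a
    time until weighted shares match every benchmark."""
    total = len(respondents)
    for _ in range(iterations):
        for var, shares in targets.items():
            weighted = defaultdict(float)
            for r in respondents:
                weighted[r[var]] += r["weight"]
            for r in respondents:
                r["weight"] *= shares[r[var]] * total / weighted[r[var]]
    return respondents

rake(respondents, targets)
college_share = sum(r["weight"] for r in respondents if r["educ"] == "college") / len(respondents)
print(f"Weighted college-graduate share: {college_share:.0%}")  # ~35%, matching the benchmark
```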

Failing to adjust for survey respondents’ education level is a disqualifying shortfall in present-day battleground and national polls. For a long time in U.S. politics, education level was not consistently correlated with partisan choice, but that is changing, especially among white voters. As a result, it’s increasingly important for poll samples to accurately reflect the composition of the electorate when it comes to educational attainment. Since people with higher levels of formal education are more likely to participate in surveys and to self-identify as Democrats, the potential exists for polls to overrepresent Democrats. But this problem can easily be corrected through adjustment, or weighting, so the sample matches the population. The need for battleground state polls to adjust for education was among the most important takeaways from the polling misses in 2016.
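
As a simple illustration of how that correction works (with invented numbers, not figures from any real poll), suppose college graduates make up 35% of the electorate but 60% of a poll’s respondents. Post-stratification gives each respondent a weight equal to the group’s population share divided by its sample share, which pulls the estimate back toward the population.

```python
# Hypothetical shares, for illustration only.
population_share = {"college": 0.35, "no_college": 0.65}
sample_share = {"college": 0.60, "no_college": 0.40}
candidate_support = {"college": 0.58, "no_college": 0.44}  # support for Candidate A by group

# Unweighted estimate overstates Candidate A because college grads are oversampled.
unweighted = sum(sample_share[g] * candidate_support[g] for g in sample_share)

# Weight each group by population share / sample share, then re-estimate.
weights = {g: population_share[g] / sample_share[g] for g in sample_share}
weighted = sum(sample_share[g] * weights[g] * candidate_support[g] for g in sample_share)

print(f"Unweighted: {unweighted:.1%}, weighted: {weighted:.1%}")
# Unweighted: 52.4%, weighted: 48.9%. The adjustment removes the skew
# toward the group that was overrepresented in the sample.
```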

Transparency in how a poll was conducted is associated with better accuracy. The polling industry has several platforms and initiatives aimed at promoting transparency in how polls are conducted, including the American Association for Public Opinion Research’s Transparency Initiative and the Roper Center archive. FiveThirtyEight’s Nate Silver found that polling firms participating in these organizations have less error on average than those that don’t. Participation in these transparency efforts does not guarantee that a poll is rigorous, but it is undoubtedly a positive signal. Transparency in polling means disclosing essential information, including the poll’s sponsor, the data collection firm, where and how participants were selected, the mode of interview, field dates, sample size, question wording and weighting procedures.

The problems with state polls in 2016 do not mean that polling overall is broken. Yes, polls in the Upper Midwest systematically underestimated support for Trump, but experts figured out why: Undecided voters ultimately broke heavily for Trump; most state polls overrepresented college graduates; and turnout was higher than expected in many rural counties but lower in urban ones. Lost in the shuffle, meanwhile, was that national polls in 2016 were quite accurate by historical standards. Clinton’s advantage in the national popular vote ended up being 2 percentage points, compared with 3 points in the final polling average.

The 2018 midterms brought further evidence that polling still works well when done carefully. The Democratic Party’s advantage nationally in the U.S. House of Representatives ended up being 9 points in the final vote, versus an average of 7 points in the final polls.

Evidence for “shy Trump” voters who don’t tell pollsters their true intentions is much thinner than some people think. Do people sometimes lie to pollsters? Sure. But the notion that Trump supporters were unwilling to express their support to pollsters was overblown, given the scant evidence to support it. A committee of polling experts evaluated five different tests of the “shy Trump” theory and turned up little to no evidence for any of them. Later, a Yale researcher and Pew Research Center separately conducted tests that also found little to no evidence in support of the claim. The “shy Trump” theory might account for a small amount of the error in 2016 polls, but it was not among the main reasons.

A systematic miss in election polls is more likely than people think. House Speaker Tip O’Neill famously said that “all politics is local.” But that has become less and less true in the U.S. over time. State-level outcomes are highly correlated with one another, so polling errors in one state are likely to repeat in other, similar states.

As Nate Silver has explained, if Clinton was going to fall short of her standing in the polls in Pennsylvania, she was also likely to underperform in demographically similar states such as Wisconsin and Michigan. In 2016, most of the forecasters trying to predict the election outcome underestimated the extent to which polling errors were correlated from one state to another. Forecasters are more aware of this issue than they were four years ago, but they do not have a foolproof way to overcome it.
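
To see why correlated errors matter so much for forecasts, consider the hypothetical simulation below (the lead, error size and correlation level are illustrative assumptions, not estimates from 2016). When each state’s polling error is independent, a simultaneous miss in three similar states is rare; when most of the error is shared, it becomes far more common.

```python
import math
import random

random.seed(1)

POLL_LEAD = 0.03       # hypothetical 3-point polling lead in each of three similar states
TOTAL_ERROR_SD = 0.03  # typical size of a state polling error
TRIALS = 100_000

def three_state_upset_rate(correlation: float) -> float:
    """Share of simulations in which polling errors erase the lead in all
    three states at once. `correlation` is the share of error variance
    that is common to the states rather than state-specific."""
    common_sd = TOTAL_ERROR_SD * math.sqrt(correlation)
    state_sd = TOTAL_ERROR_SD * math.sqrt(1 - correlation)
    upsets = 0
    for _ in range(TRIALS):
        common = random.gauss(0, common_sd)
        if all(POLL_LEAD + common + random.gauss(0, state_sd) < 0 for _ in range(3)):
            upsets += 1
    return upsets / TRIALS

print(f"Uncorrelated state errors: {three_state_upset_rate(0.0):.1%} three-state upset rate")
print(f"Highly correlated errors:  {three_state_upset_rate(0.8):.1%} three-state upset rate")
```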

National polls are better at giving Americans equal voice than predicting the Electoral College. The 2000 and 2016 presidential elections demonstrated a difficult truth: National polls can be accurate in identifying Americans’ preferred candidate and yet fail to identify the winner. This happens when the national popular vote winner (e.g., Al Gore, Hillary Clinton) differs from the Electoral College winner (e.g., George W. Bush, Donald Trump).

For some, this raises the question: What is the use of national polls if they don’t tell us who is likely to win the presidency? In fact, national polls try to gauge the opinions of all Americans, regardless of whether they live in a battleground state like Pennsylvania, a reliably red state like Idaho, or a reliably blue state like Rhode Island. In short, national polls tell us what the entire citizenry is thinking. If pollsters only focused on the Electoral College, the vast majority of Americans (about 80%) who live in uncompetitive states would essentially be ignored, with their needs and views deemed too unimportant to warrant polling.

Fortunately, this is not how most pollsters view the world. As the noted political scientist Sidney Verba explained, “Surveys produce just what democracy is supposed to produce – equal representation of all citizens.”