About Polling Discrepancies

Straw poll at the 2015 Conservative Political Action Conference (CPAC) in National Harbor, Maryland. Photo by Gage Skidmore, via Flickr, Creative Commons Attribution-ShareAlike 2.0

Steven Miner, Social Policy Contributor
 
Opinion -- Public polling has been an integral part of politics for decades. Recently, however, cynicism has become a common reaction to polls, and it seems to grow with every election cycle. Each election, polling errors get picked apart and attributed to some kind of bias. With this article, I will try to explain how the public began losing faith in polls, why individual polls can differ so much, and why polls are still valuable tools for predicting election outcomes.

1. How We Lost Faith in Polls
 
I could go back a long way to show why people in the United States and the rest of the developed world increasingly distrust polls, but I’ll start with the 2014 election cycle. That year, polls were skewed toward Democratic candidates. According to Nate Silver of FiveThirtyEight, Senate polls “overestimated the Democrat’s performance by four percentage points,” while gubernatorial polls overestimated “the Democrat’s performance by 3.4 points.” This means that some Republican candidates who were predicted to lose ended up winning, while other races predicted as toss-ups were won handily by Republicans. Then there was the Virginia Senate race, which Democratic Senator Mark Warner was predicted to win by nine points but ended up winning by less than one.
 
Polling organizations have made several other prediction errors since 2014. In Britain’s 2015 general election, polls showed Labour and the Conservatives running neck and neck; the election instead produced an outright Tory majority, with David Cameron remaining Prime Minister. The following year, 52 percent of British voters chose to leave the European Union after polls had suggested “Britain would overwhelmingly vote to remain in the European Union.”

More recently, Donald Trump’s victory over Hillary Clinton stirred controversy about the accuracy of polls. Nearly every poll and every expert predicted that Clinton was going to win, some with near 100 percent certainty. It was pretty awkward for them when she lost. In Wisconsin, Michigan, and Pennsylvania in particular, Trump outperformed the polls by 4.3 to 6.4 percentage points. Clinton had consistently been predicted to win each of those states relatively comfortably. In fact, her campaign was so confident of victory in Michigan and Wisconsin that it devoted very few resources to either state.
 
The 2016 election dealt perhaps the most significant blow to public faith in polling. Criticism erupted from all quarters after the election. News outlets blamed polling for their own overzealous predictions. The New York Times published a typical article under the headline “How Data Failed Us in Calling an Election,” describing data science, including polling, as a “blunt instrument, missing context and nuance.” Many who read such articles come away with less faith in polls and the data behind them.
 
2. Context and Nuance of Polling
 
Given that track record, it’s understandable that people have started doubting polls. However, I want to provide some context as to why different polls might portray different outcomes, and why polls might wrongly predict an election.
 
In candidate preference polls, it might seem pretty simple to ask people which candidate they like better: “In this particular race, do you prefer candidate A or candidate B?” However, polling is a science and, as with any science, every variable can affect the result. The way a question is worded, the order in which the candidates are listed, whether party affiliation appears next to each candidate, and whether third-party candidates are included can all influence the results.
 
Along with the way the question is phrased, the demographics of the sample shape the results. Choosing the proportion of Republican versus Democratic voters, different age groups, ethnicities, and other demographic groups is crucial to obtaining an accurate poll. It is not enough to simply match the statistics of the area being polled; pollsters must also predict who is actually likely to vote in the coming election and figure out how to reach those people. Different pollsters use different methods, whichever they judge most accurate, to make such predictions, as the sketch below illustrates.
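
To make weighting concrete, here is a toy sketch in Python. The group shares and support numbers are invented for illustration, and real pollsters weight across many demographics at once, but the mechanism is the same: each respondent’s answer counts a bit more or less so the sample matches the expected electorate.

```python
# Toy demographic weighting -- illustrative numbers, not any pollster's data.
sample_share = {"women": 0.60, "men": 0.40}      # who actually answered the poll
electorate_share = {"women": 0.52, "men": 0.48}  # who is expected to vote

# Weight each group so the weighted sample matches the expected electorate.
weights = {g: electorate_share[g] / sample_share[g] for g in sample_share}

# Hypothetical raw support for Candidate A within each group:
support_for_a = {"women": 0.55, "men": 0.45}

raw = sum(sample_share[g] * support_for_a[g] for g in sample_share)
weighted = sum(sample_share[g] * weights[g] * support_for_a[g] for g in sample_share)

print(f"Unweighted support for Candidate A: {raw:.1%}")      # 51.0%
print(f"Weighted support for Candidate A:   {weighted:.1%}")  # 50.2%
```

Even this tiny adjustment moves the topline by nearly a point, which is one reason two perfectly honest pollsters can publish different numbers from similar raw data.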
 
Even if pollsters do everything right in setting up their polls, they are not out of the woods. Sometimes recent events can move poll numbers without actually reflecting the mood of the electorate. For instance, after Barack Obama’s poor showing in the first 2012 presidential debate against Mitt Romney, Democratic voters “became less likely to answer surveys, as they just didn’t want to talk about politics, while newly enthusiastic Republicans did.” Polls therefore showed a surge for Romney that didn’t correspond to any real change in support. Similarly, a poll conducted only in English may underrepresent Hispanics (or other groups) who don’t speak English. And in some cases, poll respondents simply lie about their preferences.
 
3. Understanding the Value of Polling
 
Despite the recent backlash against data, and polling in particular, polls are still extremely useful tools. It is important to keep in mind that a single poll is simply a snapshot of the electorate at that moment, and it can be affected by any number of things. One poll may not accurately predict the eventual election outcome, but it is usually very useful for gauging the state of the campaign at that point in time. A campaign can then use that information to determine what steps to take going forward.
 
Even if individual polls are inaccurate, they usually provide a pretty good estimate when taken collectively. Of course, many would point to the 2016 presidential election as evidence to the contrary, but plenty of other examples show that polls have correctly predicted election outcomes most of the time. Besides, polls are not meant to be guarantees; they come with margins of error and confidence levels. If Donald Trump received 51 percent support in a poll with a margin of error of plus or minus three points at a 95 percent confidence level, the poll implies his true support falls somewhere between 48 and 54 percent, and even then, about one poll in 20 is expected to miss that range.
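
Where does the familiar “plus or minus three points” come from? Here is a minimal sketch of the textbook formula for a simple random sample (real polls use more elaborate designs and weighting, so treat this as an approximation):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Textbook margin of error for a simple random sample.

    p: candidate's share in the poll (e.g., 0.51)
    n: number of respondents
    z: z-score for the confidence level (1.96 for 95 percent)
    """
    return z * math.sqrt(p * (1 - p) / n)

p, n = 0.51, 1000                      # a 1,000-person poll at 51 percent
moe = margin_of_error(p, n)
print(f"{p:.0%} +/- {moe:.1%}")        # 51% +/- 3.1%
print(f"95% interval: {p - moe:.1%} to {p + moe:.1%}")  # 47.9% to 54.1%
```

A sample of about a thousand respondents is roughly what it takes to get the three-point margin quoted in most national polls.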
 
Besides, 2016 wasn’t a total loss for polling. The national polls came within about one percentage point of the final result. Donald Trump did outperform the polls in several states, including Wisconsin, Michigan, and Pennsylvania. Nevertheless, his performance was within the margin of error of most polls in all three of those states. In fact, he was even ahead in some late polls in Michigan and Pennsylvania, even though he still trailed in the averages. Nate Silver of FiveThirtyEight also points out that this election had far more undecided voters in its closing days than previous ones: where polls in the 2012 presidential election generally showed only about three percent of the electorate undecided, 2016 showed around 12 percent. That meant a high level of uncertainty that polls, by their nature, cannot resolve.

Therefore, the mistake last election lay not so much with the polls as with analysts who took the polls and made overly confident predictions. Some absurdly put the odds of a Clinton victory at 99 percent. Analysts with better track records, such as Nate Silver, gave Clinton a 71 percent chance of winning. Although Trump ended up winning, that forecast was one of the more accurate ones, especially since his victory was considered an upset. Why couldn’t prediction models, such as Silver’s, call the election? When it comes down to it, according to MIT professor Erik Brynjolfsson, if the “chance that something will happen is 70 percent, that means there is a 30 percent chance it will not occur.” The election outcome, he says, is “not really a shock to data science and statistics. It’s how it works.”
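
To see why a 71 percent forecast losing once isn’t a statistical scandal, consider a minimal simulation (purely illustrative; this is not FiveThirtyEight’s model):

```python
import random

random.seed(42)          # fixed seed so the illustration is reproducible

FORECAST = 0.71          # hypothetical win probability for the favorite
TRIALS = 100_000         # simulated "elections"

# Count how often the underdog wins when the favorite's true odds are 71%.
upsets = sum(random.random() >= FORECAST for _ in range(TRIALS))
print(f"The favorite loses {upsets / TRIALS:.1%} of the time")  # roughly 29%
```

Run enough elections with those odds and the favorite loses nearly three times in ten; judging a single forecast by a single outcome tells you almost nothing about whether the probability was right.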
 
Ultimately, polls are simply predictions. They are snapshots in time, not portraits of the entire election, and they need to be read in the context of that election. They are extremely useful tools for campaigns, analysts, members of the media, and anyone else who follows politics. Like any tool, though, they must be used correctly. And when polls are presented to the general public, they need to be covered accurately. Nate Silver has expressed his frustration that this did not happen during the last election: Clinton’s lead, according to Silver, was misreported as a “sure thing,” while any minor polling error became a “massive failure of the data.” He argues that the best way to restore faith in polling, and in its coverage, is for journalists to stop overrating polls’ precision and then “being shocked when [they’re] even a couple of percentage points off.” That would certainly be a step in the right direction.
 
Follow Steven on Twitter @StevenMiner14
 
