Nate Cohn at NYT: The polls were one of the big winners of the 2012 presidential election. They showed Barack Obama ahead, even though many believed a weak economy would propel Mitt Romney to victory.
The polls conducted online were among the biggest winners of all.
The most prominent online pollsters — Google Consumer Surveys, Reuters/Ipsos and YouGov — all produced good or excellent results. With the right statistical adjustments, even a poll of Xbox users fared well.
These successes seemed to herald the dawn of a new era of public opinion research, one in which pollsters could produce accurate surveys cheaply, by marrying online polls with big data and advanced statistical techniques.
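The Xbox study's published approach was multilevel regression and poststratification (MRP). As a rough illustration of the poststratification step, which reweights survey cells to their known shares of the electorate, here is a minimal Python sketch; every number in it is invented, and real MRP also fits a multilevel regression model to smooth the estimate within each cell.

```python
# A minimal sketch of poststratification, the adjustment at the core of
# approaches like the Xbox study's MRP. All figures are hypothetical.
# Idea: estimate support within demographic cells, then reweight each
# cell to its known share of the electorate (e.g., from census data).

# Hypothetical raw poll: cell -> (respondents, share supporting candidate A)
poll_cells = {
    ("male", "18-29"):   (4000, 0.48),
    ("male", "30+"):     (1500, 0.44),
    ("female", "18-29"): (600,  0.55),
    ("female", "30+"):   (400,  0.52),
}

# Hypothetical population shares for the same cells (sum to 1).
population_shares = {
    ("male", "18-29"):   0.10,
    ("male", "30+"):     0.38,
    ("female", "18-29"): 0.11,
    ("female", "30+"):   0.41,
}

# Unadjusted estimate: dominated by whoever happens to be on the panel
# (here, young men, as in the actual Xbox sample).
total_n = sum(n for n, _ in poll_cells.values())
raw = sum(n * share for n, share in poll_cells.values()) / total_n

# Poststratified estimate: weight each cell's support by its share of
# the population rather than its share of the sample.
adjusted = sum(
    population_shares[cell] * share
    for cell, (_, share) in poll_cells.items()
)

print(f"raw estimate:            {raw:.3f}")   # ~0.480
print(f"poststratified estimate: {adjusted:.3f}")  # ~0.489
```

In the Xbox data, young men were wildly overrepresented; poststratification shrinks their influence back down to their actual share of the electorate.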
A decade later, the new era has arrived — and has fallen far short of its promise. Ever since their 2012 breakout performance, the public polls relying exclusively on data from so-called online opt-in panels have underperformed the competition.
Only YouGov, long at the cutting edge of this kind of polling, is still producing reasonably accurate results with these panels.
Many of the online pollsters who excelled in 2012 have left public polling altogether:
Google Consumer Surveys — by 538’s reckoning perhaps the best poll of 2012 — was arguably the single worst pollster in the 2016 election, and it has stepped out of the political polling game.
The Xbox poll did not return. The researchers behind it used a different online survey platform, Pollfish, to predict Hillary Clinton victories in Texas and Ohio in 2016; Donald Trump carried both states.
And last year, Reuters/Ipsos abandoned opt-in, or nonprobability, polling. There are still Reuters/Ipsos polls, but they’re now traditional surveys with panelists recruited by mail.
Nonetheless, a majority of polls are now conducted in exactly this way: fielded online using people who joined (that is, opted into) a panel to take a survey, whether by clicking on a banner ad or via an app. Traditional polling, in contrast, attempts to take a random sample of the population, whether by calling random phone numbers or mailing random addresses.
The newer opt-in pollsters haven't fared any better. SurveyMonkey and Morning Consult, two of the most prolific opt-in pollsters to emerge since 2012, have posted well-below-average results among major pollsters since their inception, despite having established pollsters and political scientists leading their political polling.
More recently, a whole new wave of online pollsters has popped up. In just the last few months, pollsters like SoCal Strategies, Quantus Polls, ActiVote and Outward Intelligence have published dozens of polls, often with scant methodological detail. Maybe one of these firms is a diamond in the rough, but history offers no reason to expect it.
Online opt-in pollsters have fared so poorly in recent cycles that they receive less weight than other surveys in our polling averages.
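To make "less weight" concrete, here is a toy sketch of a methodology-weighted polling average. The weights, polls and margins are all invented for illustration and do not describe the actual formula behind any outlet's average.

```python
# Illustrative only: one way an average could down-weight opt-in polls.
# The weights and poll numbers below are made up, not any outlet's method.

polls = [
    # (margin for candidate A, methodology)
    (+3.0, "live-phone"),
    (+2.0, "mail-recruited panel"),
    (+6.0, "online opt-in"),
    (+5.5, "online opt-in"),
]

# Hypothetical methodology weights reflecting past accuracy.
method_weight = {
    "live-phone": 1.0,
    "mail-recruited panel": 1.0,
    "online opt-in": 0.5,
}

weighted = sum(m * method_weight[kind] for m, kind in polls)
total_w = sum(method_weight[kind] for _, kind in polls)
print(f"weighted average margin: {weighted / total_w:+.1f}")  # +3.6

# For comparison, the unweighted mean, which the opt-in polls pull up.
print(f"unweighted mean margin:  {sum(m for m, _ in polls) / len(polls):+.1f}")  # +4.1
```

The point is only directional: a lower methodology weight means an opt-in house effect moves the average less than the same result from a probability-based poll would.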
Why are these polls faring poorly? The core challenge was always obvious: how to find a representative sample without the benefit of random sampling, in which everyone has an equal chance of being selected for a poll. Over the last decade, this has gotten harder and harder. Even the best firms have struggled to keep up; for the rest, it’s hard to tell how much they’re even trying.
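A toy simulation makes the core challenge concrete. If willingness to opt into a panel is correlated with vote choice in a way no weighting variable captures, the bias never washes out, no matter how large the sample grows; all parameters below are invented.

```python
# Toy simulation of the core problem: when joining an opt-in panel is
# correlated with vote choice itself, and nothing you can weight on
# accounts for that, the estimate stays biased at any sample size.
# All parameters are invented for illustration.

import random

random.seed(0)

TRUE_SUPPORT = 0.50          # true share supporting candidate A
N_POPULATION = 200_000

population = [random.random() < TRUE_SUPPORT for _ in range(N_POPULATION)]

# Suppose A's supporters are twice as likely to opt in to the panel,
# a selection effect invisible to demographic weights.
def opts_in(supports_a: bool) -> bool:
    return random.random() < (0.02 if supports_a else 0.01)

opt_in_sample = [v for v in population if opts_in(v)]

# Random sampling gives everyone the same chance of selection.
random_sample = random.sample(population, len(opt_in_sample))

print(f"true support:  {TRUE_SUPPORT:.3f}")
print(f"random sample: {sum(random_sample) / len(random_sample):.3f}")
print(f"opt-in sample: {sum(opt_in_sample) / len(opt_in_sample):.3f}")
# Expect the opt-in estimate near 2 / (2 + 1) = 0.667, not 0.500.
```

With random sampling, both groups are selected at the same rate, so the estimate lands near the truth; with opt-in selection, the estimate converges confidently to the wrong number.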