What The Hell Is Happening With These Alabama Polls?


Somebody’s going to be wrong in Alabama.

We’ve already urged caution when interpreting polls of Alabama’s special election to the U.S. Senate, which will be held on Tuesday. Some of that is because of the media’s usual tendency to demand certainty from the polls when the polls can’t provide it. And some of it is because of the circumstances of this particular race: a special election in mid-December in a state where Republicans almost never lose but where the Republican candidate, Roy Moore, has been accused of sexual misconduct toward multiple underage women.

What we’re seeing in Alabama goes beyond the usual warnings about minding the margin of error, however. There’s a massive spread in results from poll to poll — with surveys on Monday morning showing everything from a 9-point lead for Moore to a 10-point advantage for Democrat Doug Jones — and the results reflect two very different approaches to polling.
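To see why a spread that wide is hard to pin on sampling error alone, here is a rough back-of-the-envelope sketch in Python. It assumes a hypothetical poll of about 500 likely voters split roughly evenly between the candidates; actual sample sizes vary from poll to poll.

    import math

    # Rough sampling-error arithmetic for a two-candidate race, assuming a
    # hypothetical poll of n = 500 likely voters split roughly 50/50.
    n = 500
    p_moore, p_jones = 0.5, 0.5

    # Standard error of the margin (Moore minus Jones) within one poll:
    # Var(p1 - p2) = [p1(1-p1) + p2(1-p2) + 2*p1*p2] / n, which is 1/n at 50/50.
    se_margin = math.sqrt((p_moore * (1 - p_moore)
                           + p_jones * (1 - p_jones)
                           + 2 * p_moore * p_jones) / n)
    print(f"95% MoE on one poll's margin: +/- {1.96 * se_margin * 100:.1f} points")

    # Two independent polls: the standard error of the gap between their margins.
    se_gap = math.sqrt(2) * se_margin
    print(f"95% MoE on the gap between two polls: +/- {1.96 * se_gap * 100:.1f} points")

At that sample size, sampling error alone would comfortably explain a gap of perhaps 12 points between two polls’ margins, not the 19-point gulf between a Moore +9 result and a Jones +10 result. Something methodological is going on.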

Most polls of the state have been conducted using automated scripts (these are sometimes also called IVR or “robopolls”). These polls have generally shown Moore ahead and closing strongly toward the end of the campaign, such as the Emerson College poll on Monday that showed Moore leading by 9 points. Recent automated polls from Trafalgar Group, JMC Analytics and Polling, Gravis Marketing and Strategy Research have also shown Moore with the lead.

But when traditional, live-caller polls have weighed in — although these polls have been few and far between — they’ve shown a much different result. A Monmouth University survey released on Monday showed a tied race. Fox News’s final poll of the race, also released on Monday, showed Jones ahead by 10 percentage points. An earlier Fox News survey also had Jones comfortably ahead, while a Washington Post poll from late November had Jones up 3 points at a time when most other polls showed the race swinging back to Moore. And a poll conducted for the National Republican Senatorial Committee in mid-November — possibly released to the public in an effort to get Moore to withdraw from the race — also showed Jones well ahead.[1]

What accounts for the differences between live-caller and automated polls? There are several factors, all of which are potentially relevant to the race in Alabama:

  1. Automated polls are prohibited by law from calling voters on cellphones.
  2. Automated polls get lower response rates and therefore may have less representative samples.
  3. Automated polls may have fewer problems with “shy” voters who are reluctant to disclose their true voting intentions.
  4. Automated pollsters (in part to compensate for issues No. 1 and 2 above) generally make more assumptions when modeling turnout, whereas traditional pollsters prefer to let the voters “speak for themselves” and take the results they obtain more at face value.

Issue No. 1, not calling cellphones, is potentially a major problem: The Fox News poll found Jones leading by 30 points among people who were interviewed by cellphone. Slightly more than half of American adults don’t have access to a landline, according to recent estimates by the federal Centers for Disease Control and Prevention, which also found a higher share of mobile-only households in the South than in other parts of the country. Moreover, voters with landline service are older than the voting population as a whole and are more likely to be white — characteristics that correlate strongly with voting Republican, especially in states such as Alabama.

Pollsters are aware of these problems, so they use demographic weighting to try to compensate. Even if you can’t get enough black voters on a (landline) phone, for instance, you may have some reasonable way to estimate how many black voters there “should” be in the electorate, based on Census Bureau data or turnout in previous elections — so you can weight the black voters you do get on the phone more heavily until you get the “right” demographic mix.
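As a concrete (and deliberately oversimplified) illustration, here is a minimal sketch of that kind of weighting in Python. Every number below is hypothetical rather than taken from any actual Alabama poll, and real pollsters typically weight on several variables at once rather than just one.

    # Post-stratification sketch with hypothetical numbers: the landline sample
    # under-represents black voters relative to an assumed target electorate,
    # so each group's respondents are weighted up or down to hit the targets.
    sample_share = {"white": 0.80, "black": 0.20}   # share of the raw sample
    target_share = {"white": 0.72, "black": 0.28}   # assumed electorate

    moore_support = {"white": 0.70, "black": 0.05}  # hypothetical support rates

    unweighted = sum(sample_share[g] * moore_support[g] for g in sample_share)

    weights = {g: target_share[g] / sample_share[g] for g in sample_share}
    weighted = (sum(sample_share[g] * weights[g] * moore_support[g] for g in sample_share)
                / sum(sample_share[g] * weights[g] for g in sample_share))

    print(f"Moore support, unweighted: {unweighted:.1%}")  # 57.0%
    print(f"Moore support, weighted:   {weighted:.1%}")    # 51.8%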

This sounds dubious — and there are better and worse ways to conduct demographic weighting — but it’s a well-accepted practice. (Almost all pollsters use demographic weighting in some form.) And sometimes everything turns out just fine — automated polls don’t have a great track record, but firms such as Trafalgar Group that do automated polling generally performed pretty well in 2016, for example. Some automated firms have also begun to supplement their landline samples with online panels in an effort to get a more representative sample. Still, cell-only and landline voters may differ from one another in ways that are relevant to voting behavior but don’t fall into traditional demographic categories — cell-only voters may have different media consumption habits, for instance. If nothing else, failing to call cellphones adds an additional layer of unpredictability to the results.

Apart from their failure to call mobile phones, automated polls have lower response rates (issue No. 2) — often in the low single digits. This is because voters are more likely to hang up when there isn’t a human on the other end of the line nudging them to complete a survey. Also, many automated polls call each household only once, whereas pollsters conducting traditional surveys often make several attempts to reach the same household. Calling a household only once could bias the sample in various ways — for instance, toward whichever party’s voters are more enthusiastic (probably Democrats in the Alabama race) or toward whoever tends to pick up the phone in a particular household (often older voters, rather than younger ones).

As for issue No. 3, proponents of automated polls — and online polls — sometimes claim that they yield more honest responses from voters than traditional polls do. Respondents may be less concerned about social desirability bias when pushing numbers on their phone or clicking on an online menu as opposed to talking to another human being. That could be particularly relevant in the case of Alabama if some voters are ashamed to admit that they plan to vote for Moore, a man accused of molesting teenagers.

With that said, while there’s a rich theoretical literature on social desirability bias, the empirical evidence that it affects election polls is somewhat flimsy. The Bradley Effect (the supposed tendency for polls to overestimate support for minority candidates) has pretty much gone away, for instance. There’s been no tendency for nationalist parties to outperform their polls in Europe. And so-called “shy Trump” voters do not appear to have been the reason that Trump outperformed his polls last year.[2]

Finally (No. 4), automated and traditional pollsters often take different philosophical approaches to working with their data. Although they probably wouldn’t put it this way themselves, automated pollsters know that their raw data is somewhat crappy — so they rely more heavily on complicated types of turnout and demographic weighting to make up for it. Automated pollsters are more likely to weight their results by party identification, for instance — by how many Republicans, Democrats and independents are in their sample — whereas traditional pollsters usually don’t do this because partisan identification is a fluid, rather than a fixed, characteristic.
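To make the party-ID point concrete, here is a small hypothetical sketch showing how the same raw interviews can produce different toplines depending on which partisan mix a pollster decides to weight to. The support rates and mixes below are invented for illustration.

    # Hypothetical sketch: one set of interviews, two assumed party mixes.
    support_for_moore = {"R": 0.90, "D": 0.03, "I": 0.45}  # within each group
    raw_mix           = {"R": 0.38, "D": 0.34, "I": 0.28}  # share of raw sample

    def topline(party_mix):
        """Weighted Moore share under an assumed partisan composition."""
        return sum(party_mix[g] * support_for_moore[g] for g in party_mix)

    print(f"Raw sample mix:           Moore {topline(raw_mix):.1%}")
    print(f"Weighted to a redder mix: Moore {topline({'R': 0.46, 'D': 0.28, 'I': 0.26}):.1%}")

Because party ID is fluid, the “right” mix is itself a judgment call, which is exactly the kind of assumption traditional pollsters are reluctant to build in.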

Although I don’t conduct polls myself, I generally side with the traditional pollsters on this philosophical question. I don’t like polls that impose too many assumptions on their data; instead, I prefer an Ann Selzer-ish approach of trusting one’s data, even when it shows an “unusual” turnout pattern or produces a result that initially appears to be an outlier. Sometimes what initially appears to be an outlier turns out to have been right all along.

With that said, automated pollsters can make a few good counterarguments. Traditional polls also have fairly low response rates — generally around 10 percent — and potentially introduce their own demographic biases, such as winding up with electorates that are more educated than the actual electorate. Partisan non-response bias may also be a problem — if the supporters of one candidate see him or her get a string of bad news (such as Moore in the Alabama race), they may be less likely to respond to surveys … but they may still turn up to vote.

Essentially, the automated pollsters would argue that nobody’s raw data approximates a truly random sample anymore — and that even though it can be dangerous to impose too many assumptions on one’s data, the classical assumptions made by traditional pollsters aren’t working very well, either. (Traditional pollsters have had a better track record over the long run, but they also overestimated Democrats’ performance in 2014 and 2016.)

So, who’s right? There’s a potential tiebreaker of sorts, which is online polls. Online polls potentially have better raw data than automated polls — they get higher response rates, and there are more households without landline access than without internet access. However, because there’s no way to randomly “ping” people online in the same way that you’d randomly call their phone, online surveys can’t ensure a truly random probability sample.

To generalize a bit, online polls therefore tend to do a lot of turnout weighting and modeling instead of letting their data stand “as is.” But their raw data is usually more comprehensive and representative than that of automated polls, so they have better material to work with.

The online polls also come out somewhat in Moore’s favor. Recent polls from YouGov and Change Research show him ahead by 6 points and 7 points, respectively; in the case of the Change Research poll, this reflects a reversal from a mid-November poll that had Jones ahead.

But perhaps the most interesting poll of all is from the online firm SurveyMonkey. It released 10 different versions (!) of its recent survey, showing everything from a 9-point Jones lead to a 10-point Moore lead, depending on various assumptions — all with the same underlying data.

Although releasing 10 different versions of the same poll may be overkill, it illustrates the extent to which polling can be an assumption-driven exercise, especially in an unusual race such as Alabama’s Senate contest. Perhaps the most interesting thing SurveyMonkey found is that there may be substantial partisan non-response bias in the polling — that Democrats were more likely to take the survey than Republicans. “The Alabama registered voters who reported voting in 2016 favored Donald Trump over Hillary Clinton by a 50 to 39 percentage point margin,” SurveyMonkey’s Mark Blumenthal wrote. “Trump’s actual margin was significantly larger (62 to 34 percent).”

In other words, SurveyMonkey’s raw data was showing a much more purple electorate than the solid-red one that you usually get in Alabama. If that manifests in actual turnout patterns — if Democrats are more likely to respond to surveys and are more likely to vote because of their greater enthusiasm — Jones will probably win. If there are some “shy Moore” voters, however, then Moore will probably win. To make another generalization, traditional pollsters usually assume that their polls don’t have partisan non-response bias, while automated polls (and some online polls such as YouGov) generally assume that they do have it, which is part of why they’re showing such different results.
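Here is a rough sketch of what that kind of adjustment looks like, using the recalled-vote and actual 2016 shares quoted above. The current-race preferences within each 2016-vote group are hypothetical placeholders, not SurveyMonkey’s figures.

    # Recalled-2016-vote adjustment, sketched with hypothetical support rates.
    recalled_2016 = {"trump": 0.50, "clinton": 0.39, "other/none": 0.11}  # in sample
    actual_2016   = {"trump": 0.62, "clinton": 0.34, "other/none": 0.04}  # real result

    jones_support = {"trump": 0.15, "clinton": 0.95, "other/none": 0.50}  # hypothetical

    def jones_share(mix):
        return sum(mix[g] * jones_support[g] for g in mix)

    print(f"Unadjusted (sample looks too purple): Jones {jones_share(recalled_2016):.1%}")
    print(f"Weighted to the actual 2016 result:   Jones {jones_share(actual_2016):.1%}")

Under these made-up numbers, the adjustment pulls Jones down by about 6 points, which is the basic mechanism by which a pollster’s assumption about partisan non-response can swing the topline toward Moore.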

Since you’ve read this much detail about the polls, I don’t want to leave you without some characterization of the race. I still think Moore is favored, although not by much; Jones’s chances are probably somewhere in the same ballpark as Trump’s chances of winning the Electoral College were last November (about 30 percent).

I say that because, in a state as red as Alabama, Jones needs two things to go right for him: He needs a lopsided turnout in his favor, and he needs pretty much all of the swing voters in Alabama (and there aren’t all that many of them) to vote for him. Neither of these is all that implausible. But if either one goes wrong for Jones, Moore will probably win narrowly (and if both go wrong, Moore could win in a landslide). The stakes couldn’t be much higher for the candidates — or for the pollsters who surveyed the race.
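As a purely illustrative bit of arithmetic (the individual probabilities below are placeholders I’m assuming for the sketch, not estimates from any model), if each of those two conditions is a bit better than a coin flip for Jones and they are treated as roughly independent, the joint chance lands near that 30 percent figure.

    # Placeholder arithmetic, not a real model: two conditions, each assumed to
    # be slightly better than a coin flip for Jones, treated as independent.
    p_turnout_edge = 0.55  # hypothetical: turnout is lopsided in Jones's favor
    p_swing_sweep  = 0.55  # hypothetical: Jones carries nearly all swing voters
    p_jones_win = p_turnout_edge * p_swing_sweep
    print(f"Jones win probability under these assumptions: {p_jones_win:.0%}")  # ~30%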

Footnotes

  1. There weren’t many details released to the public about the methodology of the NRSC poll, but the party committees generally have a lot of money and prefer to conduct traditional, live-caller polling when possible.

  2. Bigger problems were pollsters failing to weight by education levels and undecided voters breaking toward Trump in swing states.

Nate Silver founded and was the editor in chief of FiveThirtyEight.
