Politics Moves Fast. Peer Review Moves Slow. What’s A Political Scientist To Do?

Politics has a funny way of turning arcane academic debates into something much messier. We’re living in a time when so much in the news cycle feels absurdly urgent and partisan forces are likely to pounce on any piece of empirical data they can find, either to champion it or tear it apart, depending on whether they like the result. That has major implications for many of the ways knowledge enters the public sphere — including how academics publicize their research.

That process has long been dominated by peer review, which is when academic journals put their submissions in front of a panel of researchers to vet the work before publication. But the flaws and limitations of peer review have become more apparent over the past decade or so, and researchers are increasingly publishing their work before other scientists have had a chance to critique it. That’s a shift that matters a lot to scientists, and the public stakes of the debate go way up when the research subject is the 2016 election. There’s a risk, scientists told me, that preliminary research results could end up shaping the very things that research is trying to understand.

Take, for instance, two studies that hit the press in late September. One was a survey of nonvoters in Wisconsin that seemed to show that the election could have swung President Trump’s way because of voter ID laws that kept people from the polls. The other was an analysis of junk news shared on Twitter that offered evidence of misinformation being targeted at people living in swing states in a way that implied a strategic effort. Neither had gone through peer review before receiving largely uncritical write-ups in major publications like The New York Times and The Washington Post. Both contained the sort of everyday flaws that the peer review process is designed to catch — flaws that undermined the reliability of the results.

But political scientists, and social scientists who study science as an industry, told me that the choice to publish before peer review isn’t rare — and isn’t necessarily even all that problematic. Across the sciences, it’s increasingly normal for research to appear in publicly accessible places — on research archives, Twitter and Facebook, blogs — and, from there, find its way to the media before it’s been vetted by anyone other than the people who wrote it. Political scientists disagree broadly on whether that’s a good thing, a bad thing, or a little of both.

Results that make it onto the public radar can play a big role in shaping how people think and what they believe, even if that research turns out to be wrong later.

Historically, most research hasn’t been presented to the public until after peer review. What comes out the other side is not guaranteed to be correct — in fact, individual peer-reviewed papers often turn out to be wrong. But, in aggregate, 100 studies that have been peer-reviewed are going to produce higher-quality results than 100 that haven’t been, said Justin Esarey, a political science professor at Rice University who has studied the effects of peer review on social science research. That’s simply because of the standards that are supposed to go along with peer review — clearly reporting a study’s methodology, for instance — and because extra sets of eyes might spot errors the author of a paper overlooked.

The debate over peer review’s role takes on a more expansive meaning in political science, where the results of a study can quickly shape public opinion and public policy. For example, the Trump administration has used one peer-reviewed study from 2014 as a major piece of evidence for claiming that American elections are undermined by illegal voting — going so far as to set up a commission to study the issue. That a majority of researchers have found no evidence that fraudulent voting is widespread or likely to have a big impact on elections doesn’t seem to matter when politicians want evidence to justify what they already believe.


The afterlife of that voter fraud study demonstrates how political science research — peer-reviewed or not — can have immediate political implications. And that creates dueling incentives for political science: Is it more important to get work into the public while it is most relevant, or is it more important to go through the often slow process of peer review and hope that makes the work more accurate? Ten or 15 years ago, the answer would have clearly been to wait for peer review, said Nicholas Valentino, professor of political science at the University of Michigan. But he, and other political scientists I spoke with, said that norm has shifted, and relevancy is now much more important than it used to be.

Those two studies that were released in September are great examples of this trend. Both involved research that is deeply relevant to current political news, and — according to researchers I spoke with — both are flawed in ways that peer review might have caught.

“I don’t know what the right answer to this is. And I have colleagues I deeply respect on either side. I switch sides.”

Take that survey on voter suppression in Wisconsin. Kenneth Mayer, professor of political science at the University of Wisconsin-Madison, was the lead researcher on a project that sent surveys to 2,400 people in two counties who hadn’t voted in the 2016 election, then published the results as a press release. Twelve percent of people replied to the survey, and by extrapolating those 288 responses to all people in those counties who were registered to vote but did not, Mayer’s team estimated that between 11,000 and 23,000 Wisconsinites could have been deterred from voting because of the state’s ID law.

But Nathan Kalmoe, a professor of political communication at Louisiana State University, said the survey left a lot of room for small measurement errors to make a big difference in the results. The survey showed that voter ID-related issues played only a small role in respondents’ decisions not to vote. For instance, 33 percent of respondents1 said the primary reason they didn’t vote was that they didn’t like the candidates. Just 1.4 percent were told at the polling place that their ID was inadequate.

That means we’re talking about very small numbers of people — so small that it would only take a couple of measurement errors to alter the outcome. Say one person massaged her answers to make the socially undesirable choice of not voting seem a little less like her fault. Or another accidentally filled in a bubble he didn’t intend to. All of a sudden, the results could shift. “I view the result as additional evidence that voter ID laws probably demobilized some people, but that the magnitude is probably less than the press release indicates,” Kalmoe told me.
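To get a feel for why that sensitivity matters, here is a rough, unweighted back-of-the-envelope sketch in Python. The 288 responses come from the article; the size of the registered-nonvoter pool and the number of “deterred” answers are assumed placeholders for illustration, not figures from Mayer’s study, which used weighted responses and a more careful design.

```python
# Back-of-the-envelope sketch of how a small survey rate extrapolates into a
# county-wide estimate, and how sensitive that estimate is to a few responses.
# Values marked "assumed" are illustrative placeholders, not the study's data.

responses = 288                  # completed surveys (12 percent of 2,400 mailed, per the article)
registered_nonvoters = 200_000   # assumed: registered-but-didn't-vote pool in the two counties

def extrapolate(deterred_responses: int) -> float:
    """Scale the share of respondents reporting ID-related deterrence up to the full pool."""
    rate = deterred_responses / responses
    return rate * registered_nonvoters

# Suppose 20 of the 288 respondents (about 7 percent, assumed) reported being
# deterred by the ID law.
baseline = extrapolate(20)

# Now suppose just two of those answers were misreported or mis-marked.
shifted = extrapolate(18)

print(f"baseline estimate: {baseline:,.0f} deterred voters")
print(f"with 2 fewer 'deterred' responses: {shifted:,.0f}")
print(f"swing from two answers: {baseline - shifted:,.0f}")
```

None of those particular numbers is the study’s; the point is only that when the underlying counts are this small, a couple of answers can shift the headline estimate by more than a thousand people.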

The other September study focused on misleading “junk news” shared on Twitter. Led by Philip Howard, an Oxford University professor of internet studies, this project tracked the locations people were tweeting from in the days leading up to the 2016 election and found, on average, a higher concentration of junk news posts in swing states. That could be read as evidence that propaganda and misleading information played a role in the outcome of the election. But the way the study was conducted calls that kind of claim into question, said Brendan Nyhan, professor of government at Dartmouth.

Most Twitter users don’t include information about their location, and Twitter itself isn’t used by most Americans. Both of those things make it difficult to take what the study found and extrapolate it into meaningful facts about what was happening nationally, Nyhan said. And Howard agreed with that assessment. Ideally, Howard told me, he’d like to see political scientists stop studying Twitter altogether, but Twitter’s data is free to use, and many other social networks’ data is not. “[We] hope the things we learn about social networks on Twitter matter to Facebook,” Howard said. But he suspects they don’t. Twitter is a bad proxy for social media use, but it’s the proxy everyone is using.

The problem was exacerbated by the fact that the study focused on tweets sent from a state, not what was actually being read or engaged with by people in that state. Even if junk news was being posted in swing states, that’s not a clear indicator of the impact it had. “This is a supply-side analysis, not demand side,” Nyhan said.


Both these studies were legitimate research conducted by respected scientists, and neither was flawed in any spectacular or unique way. Mayer told me that he thought his data was strong enough to withstand peer review — and it could well have been. So why release it before that process had a chance to happen?

The answer comes down to timing. “We wanted to contribute to public discussion,” Mayer said. “If you waited until an article has actually been published … you’re talking about a year and a half, maybe two years before the information is out there.” Political science isn’t the only field where publication before peer review is increasingly common: Biologists now “pre-publish” more than 1,000 new articles every month, more than 10 times the monthly average of a decade ago. Nor is political science the only field where researchers can struggle with long wait times before their work is published through the traditional peer review process. But the political scientists and social scientists I spoke to described a particularly uncomfortable tension between the feeling that the information they had gathered was deeply important to pressing questions and publication wait times that could keep that information out of public view for as long as two years.

Social media and blogging have really become political scientists’ solution to slow peer review.

That long wait time could be a result of the length of political science research papers — upwards of 10,000 words long, compared with the 3,500-word articles more common in the physical and life sciences. There also just isn’t that much space to publish research. Poli sci journals tend to come out quarterly, and one recently reported a record number of submissions: nearly 1,000 articles in 10 months, for a journal that publishes only about a dozen articles each issue. And the problem could also have to do with the fact that there’s more than one valid methodology for studying a question in political science, Esarey said. So peers don’t always agree on whether someone is “doing it right.”

But this issue with timing, combined with the desire to make research results available when they are most relevant to the public discourse, helps explain why there doesn’t seem to be a strong consensus within political science about whether releasing data before peer review is a good idea. The 12 political and social scientists I spoke with presented a wide range of opinion. “I don’t know what the right answer to this is,” Valentino told me. “And I have colleagues I deeply respect on either side. I switch sides.”

Regardless of their stance, almost all of them described having made research public prior to peer review themselves at some point or another — whether by speaking with a reporter, writing a blog post or sending a tweet. They told me that bypassing peer review was sometimes necessary, enabling scientists to get publicly funded research to the public when it was most important and even improving research by allowing peers to weigh in, critique one another and craft better papers before a formal peer review.

But most of those same scientists also believed there were serious risks to bypassing peer review, and that those risks were particularly relevant for political science. The problem is that the public — and the press — tend to consider individual studies on their own and not in the context of all the other research being done on the same subject, said Dominique Brossard, a professor of life sciences communication at the University of Wisconsin-Madison who studies the public communication of science. That’s especially true when individual papers end up politicized by partisan stakeholders. Journalists can, and certainly do, write articles about individual papers where a range of scientists are given the chance to comment on and critique the work — almost like a sort of public peer review. But that doesn’t always happen, even in the most-respected newspapers. So results that make it onto the public radar can play a big role in shaping how people think and what they believe, even if that research turns out to be wrong later. That’s also true for work that’s been peer-reviewed, but if we think peer review adds any element of quality control at all, bypassing it is likely to mean more wrong information shaping public life. Not less.

And that’s particularly risky for controversial subjects like the effects of voter ID laws. While Mayer doesn’t consider his survey the definitive answer to the broad question of how those laws affect voter turnout, media reports on the survey didn’t mention that most of the research that has been done suggests the laws don’t have a very big impact. There are solid ethical reasons why you would want to be against voter ID laws, Valentino said, and there’s solid evidence that those laws are meant to keep large numbers of people from voting, whether they actually do or not. But if a study like Mayer’s is easy to pick apart, Valentino worried it could end up undermining trust in that other evidence.

Kalmoe and Esarey told me that political science journals are trying to speed the publication process up — incentivizing faster turnaround on reviewing and revising, and publishing articles online rather than holding them until there’s room in a print issue. But social media and blogging have really become political scientists’ solution to slow peer review, they said. So it’s likely that we will continue to encounter situations where research reaches the public before it reaches peer review. And the basic fact is that, while scientists can speculate about risks and rewards, we don’t really know what the outcomes of this change will be. Ironically, what happens when scientists bypass the imperfect, slow process of peer review is a new frontier, one scientists are really only just beginning to study, Brossard said. “People are looking at the production of scientific knowledge and how those new communication processes may be changing, but it’s still a lot on the thinking phase … and not much in very good data,” she said. “But it’s clear that it’s changing.”


Read more: The Tangled Story Behind Trump’s False Claims Of Voter Fraud

Footnotes

  1. Percentages are based on weighted responses.

Maggie Koerth was a senior reporter for FiveThirtyEight.
