The Supreme Court Is Allergic To Math

The Supreme Court does not compute. Or at least some of its members would rather not. The justices, the most powerful jurists in the land, seem to have a reluctance — even an allergy — to taking math and statistics seriously.

For decades, the court has struggled with quantitative evidence of all kinds in a wide variety of cases. Sometimes justices ignore this evidence. Sometimes they misinterpret it. And sometimes they cast it aside in order to hold on to more traditional legal arguments. (And, yes, sometimes they also listen to the numbers.) Yet the world itself is becoming more computationally driven, and some of those computations will need to be adjudicated before long. Some major artificial intelligence case will likely come across the court’s desk in the next decade, for example. By voicing an unwillingness to engage with data-driven empiricism, justices — and thus the court — are at risk of making decisions without fully grappling with the evidence.

This problem was on full display earlier this month, when the Supreme Court heard arguments in Gill v. Whitford, a case that will determine the future of partisan gerrymandering — and the contours of American democracy along with it. As my colleague Galen Druke has reported, the case hinges on math: Is there a way to measure a map’s partisan bias and to create a standard for when a gerrymandered map infringes on voters’ rights?

The metric at the heart of the Wisconsin case is called the efficiency gap. To calculate it, you take the difference between each party’s “wasted” votes — votes for losing candidates and votes for winning candidates beyond what the candidate needed to win — and divide that by the total number of votes cast. It’s mathematical, yes, but quite simple, and aims to measure the extent of partisan gerrymandering.
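
To make that arithmetic concrete, here is a minimal sketch of the calculation in Python. The function, the two-party simplification and the toy vote totals are illustrative assumptions for this article, not code or figures from the Wisconsin case, and the sign convention (a positive number meaning the map favors Party A) is just one common choice.

```python
# Minimal sketch of the efficiency gap, assuming a two-party race and
# district-level vote totals. Names and numbers here are hypothetical.

def efficiency_gap(districts):
    """districts: a list of (party_a_votes, party_b_votes), one tuple per district."""
    wasted_a = wasted_b = total_votes = 0
    for a_votes, b_votes in districts:
        district_total = a_votes + b_votes
        needed_to_win = district_total // 2 + 1  # simple-majority threshold
        if a_votes > b_votes:
            wasted_a += a_votes - needed_to_win  # the winner's surplus votes
            wasted_b += b_votes                  # every vote for the loser is wasted
        else:
            wasted_b += b_votes - needed_to_win
            wasted_a += a_votes
        total_votes += district_total
    # A positive result means Party B wasted more votes, i.e. the map favors Party A.
    return (wasted_b - wasted_a) / total_votes

# Toy example: Party A wins two districts narrowly and is packed into a loss in the
# third, turning about 43 percent of the statewide vote into two of three seats.
print(round(efficiency_gap([(55, 45), (55, 45), (20, 80)]), 3))  # 0.303
```

A gap near zero means the two parties are wasting votes at roughly equal rates; the larger the gap, the more the map tilts toward one party.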

Four of the eight justices who regularly speak during oral arguments[1] voiced anxiety about using calculations to answer questions about bias and partisanship. Some said the math was unwieldy, complicated, and newfangled. One justice called it “baloney” and argued that the difficulty the public would have in understanding the test would ultimately erode the legitimacy of the court.

Justice Neil Gorsuch balked at the multifaceted empirical approach that the Democratic team bringing the suit is proposing be used to calculate when partisan gerrymandering has gone too far, comparing the metric to a secret recipe: “It reminds me a little bit of my steak rub. I like some turmeric, I like a few other little ingredients, but I’m not going to tell you how much of each. And so what’s this court supposed to do? A pinch of this, a pinch of that?”

Justice Stephen Breyer said, “I think the hard issue in this case is are there standards manageable by a court, not by some group of social science political ex … you know, computer experts? I understand that, and I am quite sympathetic to that.”

And Chief Justice John Roberts, most of all, dismissed the modern attempts to quantify partisan gerrymandering: “It may be simply my educational background, but I can only describe it as sociological gobbledygook.” This was tough talk — justices had only uttered the g-word a few times before in the court’s 228-year history.[2] Keep in mind that Roberts is a man with two degrees from Harvard and that this case isn’t really about sociology. (Although he did earn a rebuke from the American Sociological Association for his comments.) Roberts later added, “Predicting on the basis of the statistics that are before us has been a very hazardous enterprise.” FiveThirtyEight will apparently not be arguing any cases before the Supreme Court anytime soon.

This allergy to statistics and quantitative social science — or at least to their legal application — seems to present a perverse incentive to would-be gerrymanderers: The more complicated your process is, and therefore the more complicated the math would need to be to identify the process as unconstitutional, the less likely the court will be to find it unconstitutional.


But this trouble with math isn’t limited to this session’s blockbuster case. Just this term, the justices will encounter data again when they hear a case about the warrantless seizure of cell phone records. The Electronic Frontier Foundation, the Data & Society Research Institute, and empirical scholars of the Fourth Amendment, among others, have filed briefs in the case.

“This is a real problem,” Sanford Levinson, a professor of law and government at the University of Texas at Austin, told me. “Because more and more law requires genuine familiarity with the empirical world and, frankly, classical legal analysis isn’t a particularly good way of finding out how the empirical world operates.” But top-level law schools like Harvard — all nine current justices attended Harvard or Yale — emphasize exactly those traditional, classical legal skills, Levinson said.

In 1897, before he had taken his seat on the Supreme Court, Oliver Wendell Holmes delivered a famous speech at Boston University, advocating for empiricism over traditionalism: “For the rational study of the law … the man of the future is the man of statistics and the master of economics. It is revolting to have no better reason for a rule of law than that so it was laid down in the time of Henry IV.” We hadn’t made much progress in the 500 years between Henry IV and Holmes, and we haven’t made much in the 120 years between Holmes and today. “What Roberts is revealing is a professional pathology of legal education,” Levinson said. “John Roberts is very, very smart. But he has really a strong anti-intellectual streak in him.”

I reached Eric McGhee, a political scientist and research fellow at the Public Policy Institute of California who helped develop the central gerrymandering measure, a couple days after the oral argument. He wasn’t surprised that some justices were hesitant, given the large amount of analysis involved in the case, including his metric. But he did agree that the court’s numbers allergy would crop up again. “There’s a lot of the world that you can only understand through that kind of analysis,” he said. “It’s not like the fact that a complicated analysis is necessary tells you that it’s not actually happening.”

During the Gill v. Whitford oral argument, the math-skeptical justices groped for an out — a simpler legal alternative that could save them from having to fully embrace the statistical standards in their decision-making. “When I read all that social science stuff and the computer stuff, I said, ‘Is there a way of reducing it to something that’s manageable?’” said Justice Breyer, who is nevertheless expected to vote with the court’s liberal bloc.

It’s easy to imagine a situation where the answer for this and many other cases is, simply, “No.” The world is a complicated place.


Documentation of the court’s math problem fills pages in academic journals. “It’s one thing for the court to consider quantitative evidence and dismiss it based on its merits” — which could still happen here, as Republicans involved in the Wisconsin case have criticized the efficiency gap method — “but we see a troubling pattern whereby evidence is dismissed based on sweeping statements, gut reactions and logical fallacies,” Ryan Enos, a political scientist at Harvard, told me.

One stark example: a 1987 death penalty case called McCleskey v. Kemp. Warren McCleskey, a black man, was convicted of murdering a white police officer and was sentenced to death by the state of Georgia. In appealing his death sentence, McCleskey cited sophisticated statistical research, performed by two law professors and a statistician, that found that a defendant in Georgia was more than four times as likely to be sentenced to death if the victim in a capital case was white compared to if the victim was black. McCleskey argued that that discrepancy violated his 14th Amendment right to equal protection. In his majority opinion, Justice Lewis Powell wrote, “Statistics, at most, may show only a likelihood that a particular factor entered into some decisions.” McCleskey lost the case. It’s been cited as one of the worst decisions since World War II and has been called “the Dred Scott decision of our time.”

Another instance of judicial innumeracy: the Supreme Court’s decision on a Fourth Amendment case about federal searches and seizures called Elkins v. United States in 1960. In his majority opinion, Justice Potter Stewart discussed how no data existed showing that people in states that had stricter rules regarding the admission of evidence obtained in an unlawful search were less likely to be subjected to these searches. He wrote, “Since, as a practical matter, it is never easy to prove a negative, it is hardly likely that conclusive factual data could ever be assembled.”

This, however, is silly. It conflates two meanings of the word “negative.” Philosophically, sure, it’s difficult to prove that something does not exist: No matter how prevalent gray elephants are, their numbers alone can’t prove the nonexistence of polka-dotted elephants. Arithmetically, though, scientists, social and otherwise, demonstrate negatives — as in a decrease, or a difference in rate — all the time. There’s nothing special about these kinds of negatives. Some drug tends to lower blood pressure. The average lottery player will lose money. A certain voting requirement depresses turnout.
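
A worked example makes the distinction plain. The sketch below uses invented turnout numbers — not data from any of these cases or from the research discussed here — to show that estimating a “negative,” in the sense of a decrease in a rate, is routine statistics:

```python
# Hypothetical illustration: measuring a "negative" as a drop in a rate.
# The turnout figures are made up for the example.
import statistics

turnout_without_rule = [0.62, 0.58, 0.65, 0.60, 0.63]  # comparison precincts
turnout_with_rule    = [0.54, 0.51, 0.56, 0.53, 0.55]  # precincts where the requirement applies

effect = statistics.mean(turnout_with_rule) - statistics.mean(turnout_without_rule)
print(f"estimated effect on turnout: {effect:+.3f}")  # about -0.078, a measurable decrease
```

Whether an estimate like this is convincing turns on the usual questions — sample size, confounding, uncertainty — not on the philosophical impossibility of proving that something does not exist.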

Enos and his coauthors call this the “negative effect fallacy,” a term they coined in a paper published in September. It’s just one example, they wrote, of an empirical misunderstanding that has spread through decades of judges’ thinking, affecting cases concerning “free speech, voting rights, and campaign finance.”

Another example of this fallacy, they wrote, came fifty years later in Arizona Free Enterprise v. Bennett, a 2011 campaign finance case. The topic was Arizona’s public campaign financing system, specifically a provision that provided matching funds to publicly financed candidates. The question was whether this system impinged on the free speech of the privately funded candidates. A group of social scientists, including Enos, found that private donations weren’t following the kind of patterns they’d expect to see if the public funding rule were affecting how donors behaved. The Supreme Court didn’t care and ultimately struck down the provision.

In his majority opinion, John Roberts echoed Stewart and repeated the fallacy, writing that “it is never easy to prove a negative.”


So what can be done?

McGhee, who helped develop the efficiency gap measure, wondered if the court should hire a trusted staff of social scientists to help the justices parse empirical arguments. Levinson, the Texas professor, felt that the problem was a lack of rigorous empirical training at most elite law schools, so the long-term solution would be a change in curriculum. Enos and his coauthors proposed “that courts alter their norms and standards regarding the consideration of statistical evidence”; judges are free to ignore statistical evidence, so perhaps nothing will change unless they take this category of evidence more seriously.

But maybe this allergy to statistical evidence is really a smoke screen — a convenient way to make a decision based on ideology while couching it in terms of practicality.

“I don’t put much stock in the claim that the Supreme Court is afraid of adjudicating partisan gerrymanders because it’s afraid of math,” Daniel Hemel, who teaches law at the University of Chicago, told me. “[Roberts] is very smart and so are the judges who would be adjudicating partisan gerrymandering claims — I’m sure he and they could wrap their minds around the math. The ‘gobbledygook’ argument seems to be masking whatever his real objection might be.”

But if the chief justice hides his true objections behind a feigned inability to grok the math, well, that’s a problem math can’t solve.

Footnotes

  1. Justice Clarence Thomas famously abstains from speaking during oral arguments, except in extremely rare cases.

  2. We can only find four other instances of justices using the word. The late Justice Antonin Scalia used it in a dissenting opinion footnote in 2008 to describe a standard put forth by the solicitor general. He also uttered it in a 2006 oral argument and the 2015 oral argument over the Affordable Care Act. Justice Elena Kagan dropped a g-bomb during a 2015 oral argument in a case about lethal injections.

Oliver Roeder was a senior writer for FiveThirtyEight. He holds a Ph.D. in economics from the University of Texas at Austin, where he studied game theory and political competition.
