No Technology — Not Even Tesla’s Autopilot — Can Be Completely Safe

When I read the headlines Friday about the fatal crash of a Tesla vehicle while in self-driving mode, I immediately thought about Three Mile Island.

It’s not that Tesla’s Autopilot mode is the vehicular equivalent of a nuclear meltdown.

As the company would very much like you to note, self-driving cars are doing better, statistically speaking, than human drivers. Tesla says Autopilot was used for 130 million miles’ worth of driving before this fatal crash. Human-driven cars in the U.S. average 1.08 deaths for every 100 million miles driven.
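To put those numbers on the same footing, here’s the back-of-the-envelope arithmetic (my own rough comparison, counting this crash as the only known death over those 130 million Autopilot miles):

```python
# Normalize both fatality rates to "per 100 million miles" so they're comparable.
# Assumes exactly one death over the 130 million Autopilot miles Tesla reported.

AUTOPILOT_MILES = 130_000_000   # miles driven with Autopilot engaged (Tesla's figure)
AUTOPILOT_DEATHS = 1            # the crash described in this story

HUMAN_DEATHS_PER_100M_MILES = 1.08  # U.S. traffic deaths per 100 million miles driven

autopilot_deaths_per_100m = AUTOPILOT_DEATHS / (AUTOPILOT_MILES / 100_000_000)

print(f"Autopilot:     {autopilot_deaths_per_100m:.2f} deaths per 100 million miles")  # ~0.77
print(f"Human drivers: {HUMAN_DEATHS_PER_100M_MILES:.2f} deaths per 100 million miles")
```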

Instead, what a car crash and a nuclear power plant have in common is an engineering conundrum shared by many other complex technological systems. You can engineer a system to be safer, but you can’t engineer it to be completely safe. It’s like casual sex, but with even more moving parts.

The safer you make a technological system — the more obvious, known risks you eliminate — the more likely it is that you’ve accidentally inserted unlikely, unpredictable failure points that could, someday, come back to bite you in the ass. All the more so because the people operating the system don’t expect the failure.

That seems to be what happened in the Tesla crash, based on what we know. A 40-year-old man named Joshua Brown was killed in May on a divided highway when a tractor-trailer made a left turn in front of his car at an intersection and Brown’s car didn’t stop, perhaps because it read the trailer as a highway sign. Autopilot is programmed to ignore highway signs that hang above the road — nobody wants a car that lays on the brakes in the middle of the interstate.

The crash reminded me of “Normal Accidents,” a 1984 book by Yale sociologist Charles Perrow. The book grew out of Perrow’s work on the President’s Commission on the Accident at Three Mile Island. In that case, disaster happened both despite and because of a complex chain of safety systems.

The underlying problem was small. No big deal. Just a pump that broke in the non-nuclear part of the power plant. To be on the safe side, though, the entire system went into automatic shutdown. There was a system to stop fission in the nuclear reactor. That worked. There was a system to reduce pressure in the reactor as the fission reaction slowed. That worked, too. There was a system to close the pressure-relief valve once the reactor was back at safe levels. All the computers told the human operators that was working, as well. But it wasn’t, and there was no way to double-check the computer’s report. There was no reason to even assume the computer might be wrong. So steam kept escaping, and the reactor kept losing coolant. Meanwhile, the system that was supposed to add more coolant automatically in an emergency had just been tested to make sure it was working — and somehow, during that test, the workers had failed to reset it properly.

People designed Three Mile Island to be safe. They designed systems that would automatically solve a whole host of foreseeable problems. But nobody expected that those systems might, under just the right set of unforeseen circumstances, crash headlong into one another, creating a disaster nobody could have predicted. And when that happened, the people in control of the systems couldn’t wrap their heads around what was going on fast enough to stop it.

Perrow’s book argues that the people who fail to catch a system as it fails aren’t the bad guys, and they aren’t being stupid. Instead, they’re faced with a problem that Perrow calls “incomprehensibility”: in a high-stress situation, your brain reverts to known facts and practiced plans. You respond as you have been taught to respond. But in these “normal” accidents, the problem is almost never the thing you’ve planned for.

And that matters in the Tesla case, as well. Tesla’s Autopilot is designed to make sure that the human driver is nominally still in charge. Move your hands off the steering wheel, for instance, and it gradually slows down until you put your hands back on. We don’t yet know how well this system was working. Maybe it failed. But the point is that it didn’t have to. All that had to happen was for the driver to believe his car was going to stop for obvious hazards in the road — until it didn’t.
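To make that hand-off concrete, here is a minimal sketch of the kind of driver-attention watchdog described above (a generic illustration with invented names and thresholds, not Tesla’s actual implementation):

```python
from typing import Optional, Tuple

# Hypothetical driver-attention watchdog, loosely modeled on the behavior described
# above: warn when hands leave the wheel, then gradually slow the car until the
# driver takes hold again. All thresholds here are made up for illustration.

HANDS_OFF_WARNING_S = 5.0     # seconds hands-off before showing a warning
HANDS_OFF_SLOWDOWN_S = 15.0   # seconds hands-off before the car starts slowing
SPEED_STEP_MPH = 5.0          # how much speed to shed on each control-loop tick

def watchdog_step(hands_on_wheel: bool, hands_off_since: Optional[float],
                  target_speed_mph: float, now: float) -> Tuple[Optional[float], float]:
    """One control-loop tick: returns the updated (hands_off_since, target_speed)."""
    if hands_on_wheel:
        return None, target_speed_mph    # driver is engaged; reset the timer
    if hands_off_since is None:
        return now, target_speed_mph     # hands just left the wheel; start timing
    elapsed = now - hands_off_since
    if elapsed > HANDS_OFF_SLOWDOWN_S:
        # Bleed off speed gradually until the driver takes over (or the car stops).
        return hands_off_since, max(0.0, target_speed_mph - SPEED_STEP_MPH)
    if elapsed > HANDS_OFF_WARNING_S:
        print("Please keep your hands on the wheel.")  # stand-in for a dashboard alert
    return hands_off_since, target_speed_mph
```

A real system would run a check like this many times per second; the point is that the whole design leans on the driver staying engaged and expecting to take over.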

Normal accidents are a part of our relationship with technology. They are going to be a part of our relationship with driverless cars. That doesn’t mean driverless cars are bad. Again, so far the statistics suggest they’re safer than human drivers. But complex systems will never be completely safe. You can’t engineer away the risk. And that fact needs to be part of the conversation.

Maggie Koerth was a senior reporter for FiveThirtyEight.
