A Better Way To Find The Best Flights And Avoid The Worst Airports

Get stuck at the airport last year? You weren’t alone. In 2014, the 6 million domestic flights the U.S. government tracked required an extra 80 million minutes to reach their destinations. At 80 passengers per flight, that works out to an extra 12,000 years in transit. Hope you brought a good book along.

Our new fastest flights interactive, which launched Wednesday, may help you spend more time at your destination and less at the Sbarro in Terminal D. It can tell you which airlines perform the best on which routes and which airports you should avoid if you can.

Our method operates on the same principle you use when you’re angry because your flight is late: It finds someone to blame.

Here’s one example: Your American Airlines flight from Chicago O’Hare (ORD) to Dallas (DFW) is hopelessly late. After a first look at the airline’s on-time data, you might blame American. Thirty-two percent of its flights were late,1 canceled or diverted on the ORD-DFW route in 2014, considerably higher than the 24 percent that airlines averaged across all routes last year.

But consider American’s main competitor at O’Hare, United Airlines. It was even worse; 40 percent of United’s flights from ORD to DFW were late, canceled or diverted. So should you blame American Airlines for that late flight? Or chalk it up to O’Hare, and count your blessings that you’re not flying United?

The government has one way of assigning blame. It asks airlines to report the reasons for their delays in five categories: security, weather, national aviation system, late arriving aircraft and carrier. You might think of the first three categories as “no-fault” delays (as in: no fault of the airlines), while the airlines deserve blame for the other two.

The problem is that these categories are fuzzy. Did that aircraft arrive late because of a thunderstorm at the previous airport (presumably not the airline’s fault) or a mechanical problem (presumably so)? Some airlines are better than others at keeping their networks intact under challenging weather or air-traffic conditions.2 Plus, the causes are reported by the airlines themselves.

We need a better way to measure airline performance, one that adjusts for the routes the airlines fly and doesn’t allow for any gaming of the data. Sure, it’s not easy to fly out of O’Hare, but is that enough to redeem United and American?

There are a few more problems to resolve before we take the step of apportioning blame between airports and airlines. This post about the methodology provides more detail on these, but I’ll cover them briefly here to explain how they affect our rankings.

Problem #1: Not all delays are equal

The majority of flights — 54 percent — arrived ahead of schedule in 2014. (The 80 million minutes figure I cited earlier is a net number: It consists of about 115 million minutes of delays minus 35 million minutes saved from early arrivals.) The government treats all those flights as “on time,” along with anything that arrives up to 14 minutes behind schedule. Anything after that counts as late.

But while a 15-minute delay is an annoyance, a two-hour delay is a scheduling disaster. And a cancellation or a diversion is even worse — equivalent to an average delay of four or five hours by our method (see the methodology post for how this is calculated).

Most of the 80 million minutes of delays every year come from flights like these.

Severely delayed, canceled and diverted flights represent less than 5 percent of flights but account for 75 to 80 percent of net delay minutes.

For the most part, airlines with a lot of short delays also experience more long delays and cancellations. But there are some exceptions. In the next chart, I’ve compared the share of an airline’s flights that landed between 15 and 119 minutes behind schedule against the proportion that were at least two hours late, canceled or diverted.

[Chart: Share of each airline’s flights arriving 15 to 119 minutes late vs. the share at least two hours late, canceled or diverted]

Frontier and Southwest have lots of short delays but only an average number of very long delays and cancellations. So they’ll come out looking a little better in our analysis, which is based on the average number of minutes lost or saved by the airlines, rather than an arbitrary cutoff at 15 minutes.
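
For readers who want to see what that minutes-based metric looks like in practice, here is a minimal Python sketch, not the code behind our interactive. It assumes a BTS-style table with one row per flight and columns named carrier, arr_delay, cancelled and diverted (the column names are assumptions), and it charges cancellations and diversions a flat 270 minutes as an illustrative stand-in for the four-to-five-hour equivalent described above.

```python
# A minimal sketch, not the code behind the interactive. Assumes a
# BTS-style on-time table with columns `carrier`, `arr_delay` (minutes,
# negative if early), `cancelled` and `diverted`; column names are
# assumptions, and the 270-minute penalty is an illustrative stand-in
# for the four-to-five-hour equivalent delay described above.
import pandas as pd

flights = pd.read_csv("ontime_2014.csv")  # hypothetical file name

CANCEL_PENALTY = 270  # assumed minutes charged to a cancellation or diversion

def minutes_lost(row):
    """Minutes lost (or saved, if negative) by a single flight."""
    if row["cancelled"] or row["diverted"]:
        return CANCEL_PENALTY
    return row["arr_delay"]

flights["minutes_lost"] = flights.apply(minutes_lost, axis=1)

# Average minutes lost per carrier (the measure used here) next to the
# government-style share of flights arriving at least 15 minutes late.
summary = flights.groupby("carrier").agg(
    avg_minutes_lost=("minutes_lost", "mean"),
    pct_late_15plus=("arr_delay", lambda d: (d >= 15).mean()),
)
print(summary.sort_values("avg_minutes_lost"))
```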

Problem #2: Many flights are flown by regional carriers — and regional carriers are slow

Did you know you just flew from LaGuardia to Bangor, Maine, on Air Wisconsin? Probably not: You bought the ticket from US Airways, and the flight looked and felt like a US Airways flight. About half of U.S. domestic flights are like this — flown by regional carriers on behalf of major airlines.

Our method groups these regional flights together with the major airlines they’re flown for. Unfortunately, some regional carriers (including Air Wisconsin) are too small to meet the government’s reporting requirements. But we do have data for the largest three: Envoy Air, ExpressJet Airlines and SkyWest Airlines.

It’s worth going through this trouble because some of the most delayed flights are operated by regional carriers, especially when flown on behalf of United and American. Envoy flights have an average delay of 26 minutes relative to their scheduled times, for instance, as compared with 13 minutes on routes flown by American itself.
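
To give a flavor of the grouping step, here is a heavily simplified sketch. In reality the assignment is messier, because carriers like SkyWest and ExpressJet fly for several majors at once, so treat the flat lookup below as an illustration rather than the mapping we actually use.

```python
# A heavily simplified sketch of folding regional carriers into the
# majors they fly for. The flat lookup is an assumption for illustration;
# SkyWest and ExpressJet actually fly for several majors, so the real
# assignment has to consider the route and marketing carrier as well.
REGIONAL_TO_MAJOR = {
    "MQ": "AA",  # Envoy Air, flying as American Eagle
    "EV": "UA",  # ExpressJet (also flies for Delta and American)
    "OO": "UA",  # SkyWest (also flies for Delta, US Airways and others)
}

flights["major_carrier"] = flights["carrier"].replace(REGIONAL_TO_MAJOR)
```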

Problem #3: Airlines set their own schedules — and some pad them

Some airlines schedule more time to complete the same journey. One can view this practice cynically. About 4 percent of flights arrive somewhere between 15 and 19 minutes late, just past the threshold for what the government considers an on-time arrival. If an airline added five minutes to its scheduled flight times, all of those “late” flights would suddenly be considered “on time.”3

How much effect does this have in practice? I’ve calculated which airlines pad their schedules the most and what effect this has on their on-time performance. The calculations are made by a regression analysis wherein I compare scheduled flight times against the distance and direction of travel and the airports that an airline flies into and out of. I’m being circumspect about describing the method since it will come into play again later, as you’ll see. But the basic idea is to account for how much time an airline adds or subtracts from its schedule relative to other airlines on the same routes.

[Chart: Minutes each airline adds to or subtracts from its schedules relative to other airlines on the same routes]

Frontier operates the tightest schedules, subtracting about six minutes relative to other airlines on the same routes. United runs the slackest, adding about two minutes per flight.

That’s not a huge difference, but it has a noticeable effect on on-time performance. Relative to the schedule each airline publishes, United flies on time 72 percent of the time and Frontier 74 percent of the time by the government’s definition.4 If you standardize their schedules,5 Frontier’s on-time percentage would rise to 79 percent while United’s would fall to 70 percent.
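
For the technically inclined, here is roughly what that padding regression could look like in Python with statsmodels, under the same assumed column names as before. The east_west variable is an assumed proxy for direction of travel and would be computed from airport coordinates; this is a sketch of the idea, not our exact specification.

```python
# A sketch of the schedule-padding regression, not the exact specification.
# `sched_minutes` is the scheduled gate-to-gate time; `distance` is route
# miles; `east_west` is an assumed signed east-west displacement used as a
# crude proxy for the jet stream; origin, dest and carrier enter as dummies.
import statsmodels.formula.api as smf

padding_model = smf.ols(
    "sched_minutes ~ distance + east_west + C(origin) + C(dest) + C(carrier)",
    data=flights,
).fit()

# The carrier coefficients estimate how much time each airline adds to, or
# subtracts from, its schedules relative to the baseline carrier on the
# same routes: its schedule padding.
print(padding_model.params.filter(like="C(carrier)").sort_values())
```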

From on-time percentage to fastest flights

So what’s our solution? We don’t look at airlines’ scheduled flight times at all. Instead, we compare them against the distances they fly and adjust for the airports they fly into and out of.

First, we define something called target time, which is based on only the distance and direction of travel. The direction of travel matters because of the jet stream, which flows from west to east and therefore makes eastbound flights faster — usually somewhere between 45 minutes and an hour faster on a transcontinental route like LAX to JFK.

The more important thing: Target time is calibrated such that if flights hit their target, they’ll run exactly on time on average relative to airlines’ published schedules.6 But flights run late, on average: about 14 minutes late, relative to their published schedules. So likewise, they run 14 minutes late relative to target time. The question is whether to blame the airports or the airlines for these delays.
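
Continuing the sketch from earlier, target time can be approximated by regressing scheduled times on distance and direction alone. Because an ordinary least squares fit with an intercept has zero average residual, the prediction automatically matches the average schedule, which is the calibration footnote 6 describes.

```python
# A sketch of target time, continuing the assumed columns from above.
# Fitting scheduled minutes on distance and direction only (no airports,
# no airlines) yields a prediction whose average equals the average
# schedule, the calibration described in footnote 6.
import statsmodels.formula.api as smf

target_model = smf.ols("sched_minutes ~ distance + east_west", data=flights).fit()
flights["target_minutes"] = target_model.fittedvalues

# Minutes over (or under) target, charging cancellations and diversions
# the flat penalty from the first sketch.
flights["vs_target"] = (
    flights["sched_minutes"] + flights["minutes_lost"] - flights["target_minutes"]
)
```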

Here’s our approach — I hinted at it before. We run a regression analysis where the dependent variable is the difference between the actual time a flight took, including delays, and its target time. The independent variables are a series of dummy variables indicating the origin and destination airports and the airline that flew the route.7 This allows us to apportion blame for delays between the airports and the airlines. (We run the regression one month at a time, which allows us to remove potential effects due to the seasonal timing of flights.8)
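
In code, the apportionment step might look something like the sketch below, again under assumed column names; the real model is run one month at a time, per footnote 8.

```python
# A sketch of the apportionment regression: minutes over target explained
# by dummies for origin airport, destination airport and carrier. Column
# names are assumed; the real model is run one month at a time (footnote 8).
import statsmodels.formula.api as smf

delay_model = smf.ols(
    "vs_target ~ C(origin) + C(dest) + C(carrier)",
    data=flights,
).fit()

# Airport dummies estimate the minutes an airport adds on departure or
# arrival; carrier dummies estimate the minutes an airline adds or saves
# once its airports are accounted for.
origin_effects = delay_model.params.filter(like="C(origin)")
dest_effects = delay_model.params.filter(like="C(dest)")
carrier_effects = delay_model.params.filter(like="C(carrier)")
print(carrier_effects.sort_values())
```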

By default, we blame the airports. As I mentioned, the average flight runs 14 minutes behind schedule, so we assign seven minutes of that to the origin airport and the other seven to the destination airport.

But some airports are much worse than others, of course. The chart below lists the time added on departure and arrival for the 30 largest U.S. airports:

[Chart: Minutes added on departure and arrival at the 30 largest U.S. airports]

To the surprise of nobody, the three major New York airports and Chicago O’Hare contribute substantially to delays. Besides the New York trio, other large airports on the Eastern Seaboard, like Philadelphia (PHL) and Washington-Dulles (IAD), are pretty bad. Chicago-Midway (MDW) is a little better than O’Hare, but not much. West Coast airports are a lot better on the whole, although foggy San Francisco (SFO) is an exception.

On the flip side, a handful of airports are so efficient that they actually subtract minutes from target time.9 Honolulu (HNL) is the most prominent example among large airports.

We’re almost done. The last step in the calculation is what we call typical time, which is target time plus the delays associated with the origin and destination airports.

Take that flight from O’Hare to Dallas, for example. The target time on this route is 2:22. American flies it, on average, in 2:45, while United flies it in 2:53, both well above the target.

But O’Hare is a pretty awful airport to fly out of (it adds 16 minutes as an origin airport), and DFW isn’t a great one to fly into (it adds 11 minutes as a destination). That makes the typical time on the route 2:49 instead. American is four minutes faster than the typical time, and United is four minutes slower than it.

We perform this calculation for every route in our database; it yields a plus-minus statistic we call time added. In this case, negative numbers are good. For instance, American’s time added is -4 minutes on the ORD-DFW route. It’s not a particularly fast flight, but it’s saving you time relative to the typical (bad) conditions at O’Hare and DFW.
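
Here is the O’Hare-to-Dallas example worked through as plain arithmetic, using the figures quoted above.

```python
# The ORD-to-DFW example from the text, worked through in minutes.
target = 2 * 60 + 22        # target time on the route: 2:22
ord_as_origin = 16          # minutes O'Hare adds as an origin airport
dfw_as_destination = 11     # minutes DFW adds as a destination airport

typical = target + ord_as_origin + dfw_as_destination  # 169 minutes, i.e. 2:49

american_actual = 2 * 60 + 45  # American's average on the route: 2:45
united_actual = 2 * 60 + 53    # United's average on the route: 2:53

print(american_actual - typical)  # -4: American's time added on ORD-DFW
print(united_actual - typical)    # +4: United's time added on ORD-DFW
```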

What happens when you aggregate the calculation over every route an airline flies?10 You get the results you see in the table below — what we think is the best overall measure of airlines’ on-time performance.

[Table: Average time added per flight, by airline]

We’ve gone a long way toward reducing the differences between the airlines. For example, as compared to target times, Hawaiian Airlines is about 40 minutes faster than United on average. But when you account for the fact that it’s much easier to fly in Hawaii than out of some of United’s hubs — O’Hare, Newark, San Francisco — the difference is reduced to nine minutes.

Nonetheless, United and American rate as the slowest airlines, in that order. Maybe it’s just a dumb idea to have a hub at O’Hare. Perhaps you can forgive them for flying late out of O’Hare itself — everyone does. But if your flight from Denver to Phoenix is late because the plane that was supposed to take you is still stuck at the hub in Chicago, you can’t make as many excuses for the airline. It could have put its hub at a more efficient airport.

Besides, other airlines do perfectly well despite flying year-round out of some really busy airports. US Airways and Delta rate well by our method. So does Virgin America, which flies almost exclusively out of busy, coastal airports. In 2014, Virgin subtracted seven minutes from the average flight, ranking it as the fastest airline in the U.S.

We’ll be tracking these rankings and how they change over time. JetBlue’s numbers improved over the course of the year, and it should move up in the rankings once its disastrous January 2014 rolls out of the sample. US Airways’ performance became somewhat worse as the year wore on, perhaps an early sign that its ongoing merger with American will slow it down. And Spirit Airlines will begin reporting its data to the government for the first time; it’s a good bet to finish somewhere near the bottom of the table, as it does in many customer-service categories.

Footnotes

  1. By the government’s definition of arriving at least 15 minutes behind schedule.

  2. In fact, there’s a strong correlation in how the airlines perform in each category. Among the 10 airlines we track, the percentage of flights with a “no fault” delay has a 0.7 correlation with the percentage of flights with an “airline’s fault” delay.

  3. There might also be some downside to this. When booking tickets, some passengers might be sensitive to the listed arrival times — on the margin, preferring a flight that claims it will land at 10:57 instead of 11:02. And airline crews rely on those schedules to turn flights around; if the schedule is padded, their actual operations might eventually slow too.

  4. Note that I include regional carriers that fly on behalf of United in United’s total, while the government does not.

  5. That is, if you add six minutes to the scheduled time for each Frontier flight and subtract two minutes for each United flight, counteracting their padding.

  6. More specifically, target time is calibrated so that it’s the same as the scheduled time on average for all routes in the U.S. It won’t necessarily match the scheduled time for any particular route.

  7. The approach is computationally intensive — there are several hundred dummy variables in the regression — but with more than 6 million observations, we can afford it.

  8. If an airline flies from New York to Bangor only in the summer, it’ll look a lot better than if it flies year-round and has to deal with Maine winters. Our method adjusts for this. Specifically, average flight times are “seasonally adjusted” relative to the average of all carriers flying the same route. For example, an airline that flies the Bangor route only in the summer will have its flight time adjusted upward relative to one that flies it during the winter, too.

  9. Of course, everything is relative. If every airport were as fast as Honolulu, airlines would operate on tighter schedules.

  10. In calculating an airline’s overall plus-minus score, the calculation is weighted based on the number of flights on each route.

Nate Silver founded and was the editor in chief of FiveThirtyEight.
