Within the last month, King County Metro quietly released the Fall 2010 route performance report, an annual public report that attempts to objectively distill the performance of a route down to a handful of numbers, reasonably understandable to the lay person. While the intent of this year’s report is the same as before, the metrics and categories used have changed significantly, and in this post I’m going to examine these changes in depth.

The simplest and perhaps most significant change is that the concept of a “subarea” is gone. Rather than the old East/South/West division, the performance tables are now divided according to whether or not a route serves the “Seattle core,” defined to include the CBD, First Hill, Capitol Hill, Uptown, and the U-District. This does not imply that service will be allocated according to some arbitrary formula based on this criterion; rather, the division allows more reasonable comparisons and rankings among Metro’s many routes, since service that avoids the densely urbanized (and traffic-snarled) Seattle core will typically exhibit very different performance characteristics from service that does not, however well or poorly the individual routes are designed.

The old reports, such as this 2009 report, relied upon a smorgasbord of individual metrics:

Rides/revenue hour: A “revenue hour” is an hour that a bus is in service and accepting passengers; contrast this with a “platform hour,” which is any hour when the bus is outside the base. This metric heavily favors dense urban areas, where short trips and a constant churn of passengers are the norm. Because it considers revenue hours only, it fails to account for the efficiency or inefficiency of the route design, a flaw that a rides/platform-hour metric would not suffer.

Farebox revenue/operating expense (FR/OE): Fairly self-explanatory. This metric does consider the full cost of operation, but it conflates that cost with the amount paid by riders, and expresses it in a way that is hard to compare directly with the other, hours-based metrics.

Passenger miles/revenue hour: Just as rides/revenue hour is skewed toward urban service, this metric tells as much about the character of a route as about the quality of its design, favoring long freeway-running suburban commuter routes that load up downtown, then drive far and fast before unloading at a park & ride.

Passenger miles/platform mile: A platform mile is analogous to a platform hour: a mile traveled by a bus after leaving the base, whether deadheading or in service. This metric is thus essentially the average occupancy of the bus, counting the miles when the bus is deadheading and its occupancy is presumed to be zero. It properly accounts for the full cost of service and the efficiency of the route design, and it provides a fair comparison between routes of differing lengths and running speeds.

Route effectiveness sum: Perhaps the most misleading of all, this number was a sum of four other numbers derived from the route’s comparative performance on each of the other metrics. The fundamental problem is that a bus route cannot be reduced to a single “effectiveness” number: some routes, depending on the built environment and economic context they serve, will do better on some metrics than on others, and that may well be perfectly okay; every route has to be evaluated with that context in mind, against an appropriate selection of peer routes.
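To make the relationships among these old metrics concrete, here is a short sketch with entirely made-up figures for a hypothetical route; none of these numbers come from the report, and the field names are my own:

```python
# Hypothetical route-level figures, for illustration only -- not actual
# Metro data.  Units: boardings, hours, miles, dollars.
route = {
    "rides": 2800,             # daily boardings
    "revenue_hours": 95.0,     # hours in service, accepting passengers
    "platform_hours": 110.0,   # hours outside the base (adds deadhead/layover)
    "passenger_miles": 9500.0,
    "platform_miles": 1400.0,  # miles outside the base, incl. deadhead
    "fare_revenue": 2100.0,
    "operating_expense": 9800.0,
}

def old_metrics(r):
    """The 2009-style individual metrics described above."""
    return {
        "rides_per_revenue_hour": r["rides"] / r["revenue_hours"],
        "fr_oe": r["fare_revenue"] / r["operating_expense"],
        "pax_miles_per_revenue_hour": r["passenger_miles"] / r["revenue_hours"],
        "pax_miles_per_platform_mile": r["passenger_miles"] / r["platform_miles"],
    }

for name, value in old_metrics(route).items():
    print(f"{name}: {value:.2f}")
```

Note that only the last metric divides by a platform quantity; the first three would not budge if the route spent twice as long deadheading to and from the base.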

I think it’s clear that, while the goal here is laudable and each metric has some value, this set of metrics has real problems. The metrics are not consistent or comparable in scope, painting a picture that is at once incomplete and cluttered with extraneous, distracting information. The 2010 metrics are simultaneously simpler and more complete:

Rides/platform hour: Still a metric that favors urban service, but one that now reflects the true cost of the route, including deadhead and layover time.

Passenger miles/platform mile: Identical to prior years.

No effectiveness score is presented, but as in prior years, individual metrics that are in the top or bottom quartile are highlighted.
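The 2010 approach can be sketched in a few lines: compute the two metrics for a set of routes, then flag the values that fall in the top or bottom quartile. All route labels and figures below are invented for illustration, and `statistics.quantiles` is just one reasonable way to draw the quartile boundaries; the report does not specify Metro’s exact method:

```python
import statistics

# Made-up figures for four hypothetical routes (not Metro data).
routes = {
    "A": {"rides": 3200, "platform_hours": 110.0,
          "passenger_miles": 9500.0, "platform_miles": 1400.0},
    "B": {"rides": 900, "platform_hours": 80.0,
          "passenger_miles": 11000.0, "platform_miles": 1600.0},
    "C": {"rides": 450, "platform_hours": 70.0,
          "passenger_miles": 2200.0, "platform_miles": 900.0},
    "D": {"rides": 2100, "platform_hours": 95.0,
          "passenger_miles": 6800.0, "platform_miles": 1200.0},
}

def metrics(r):
    """The two 2010-style metrics for a single route."""
    return {
        "rides_per_platform_hour": r["rides"] / r["platform_hours"],
        "pax_miles_per_platform_mile": r["passenger_miles"] / r["platform_miles"],
    }

def quartile_flags(values):
    """Label each route 'top', 'bottom', or '' by quartile membership."""
    q1, _, q3 = statistics.quantiles(values.values(), n=4)
    return {k: ("top" if v >= q3 else "bottom" if v <= q1 else "")
            for k, v in values.items()}

for name in ("rides_per_platform_hour", "pax_miles_per_platform_mile"):
    per_route = {rt: metrics(r)[name] for rt, r in routes.items()}
    print(name, quartile_flags(per_route))
```

With only two metrics, both on a per-platform basis, a route’s row in the report is directly comparable to its peers’ without the apples-to-oranges mixing of the old format.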

I think the results these metrics yield are very sensible. Routes on the Downtown-Belltown-Uptown corridor are still shown to be perhaps the hardest-working routes in the county. Routes known to be little-used, such as the odious Route 42, consistently show up in the bottom quartile. The reader is invited to look up their neighborhood route and comment on how accurately they feel the report quantifies its ridership. Further simplifying the discussion, the report no longer breaks out turnback and shuttle variants, a distinction that is not of interest to most readers.

In closing, I note that page 11 of the report contains an intriguing typo: Route 71 is described as “Wedgwood and U-District via Latona,” an obviously incorrect description that nonetheless hints at where Metro’s planners might be going with this route in the next big service change.