According to Christopher Leinberger, in his excellent book The Option of Urbanism, Americans’ preferences regarding housing fall into three groups of roughly equal size: those with a moderate to strong preference for auto-centric suburban living, those with a moderate to strong preference for dense, walkable urbanist environments, and those with weak or no preference who could go either way. (Given the generational shift toward urban walkability and a falling out of love with driving and car ownership, the second group has probably grown since Leinberger wrote that book.) Unfortunately for the environment, public health, and the people in the second group, around 80% of our existing housing stock in this country is suited to meet the needs of the first group. This suggests a shift is in order: it seems difficult to justify using zoning and land use law to continue dramatically oversupplying a product laden with social and environmental externalities while restricting the availability of a product that produces all manner of public goods (beyond the obvious environmental and public health benefits, the more people living at urban scale, the more rural and natural land can be left undeveloped).

The conversion of suburban spaces to urban ones doesn’t make much sense in a lot of places. Walkable urbanism is difficult to produce from scratch, on demand, for a variety of reasons: such developments are expensive, placing them where land values are low isolates them from organic urban density and makes them high-risk investments, and they often end up weirdly, off-puttingly inorganic and sterile. It makes the most sense, obviously, to focus that redevelopment on suburban-style areas already close to existing walkable neighborhoods. Those areas can piggyback on the amenities nearby, they’re more likely to have something close, or at least closer, to the appropriate levels of transit for such developments, and so on. Furthermore, while we might wish to designate a threshold that marks a neighborhood as one or the other, in reality the distinction is ordinal, not cardinal, and adding density on the margins (or, from a planning perspective, allowing it to be added) is a relatively low-intensity way of shifting the housing stock toward the preferences and needs of society.

Proponents of changing land use laws to allow more people to live in and near single-family areas in cities face a dizzying array of local resistance strategies, which are often quite effective. Much of this resistance involves tools that work together to form a kind of war of attrition flying under the banner of local democracy or neighborhood autonomy: piling delays, costs, and burdens onto development until developers give up and ply their trade elsewhere. Many of these tools–design review, environmental impact statements–have entirely worthwhile rationales, but it’s difficult to design them to serve their stated purpose in a way that doesn’t also make them part of the anti-development toolkit.

Which brings us to “traffic studies.” The traffic impacts of new development projects are worth studying, right? Shouldn’t we plan for new development? Sounds sensible enough, but here’s how it plays out:

The bible here is the ITE’s Trip Generation handbook, which collects vehicle trip data from various locations by land use, then presents the results as graphs of trip rates per some relevant characteristic, for example, trips per 1,000 square feet of gross floor area. Using that data as-is bakes in a flurry of assumptions: that all locations of a given land use with similar floor area (or other quantitative feature) will generate a similar number of trips, whether it’s a restaurant in the heart of a city or in a tiny town in the boondocks; that essentially all trips generated or attracted by the location will be made in cars; and that congestion levels and traffic conditions have no effect on the number of trips made to and from that location. Now, people may point out that these assumptions aren’t always, or even frequently, true, but they still form the basis of most traffic studies. Why? Well, engineers are taught during their training that being “conservative” with their calculations is always the best option. This has nothing to do with politics: it means always opting for more precaution, assuming higher loads than are plausible, so that the resulting design is sure to withstand much bigger loads than it can be expected to face. The mindset is rooted in structural engineering: someone who designs a bridge wants to avoid a catastrophic failure, and so will overdesign the bridge for greater loads than necessary, high-balling loads and low-balling material resistance. That makes plenty of sense for structural engineering, where failure destroys the structure and may even put human lives at risk, an outcome that is downright unacceptable. In that case, the cost of being conservative in one’s calculations is merely a higher construction cost; there is no externality in most cases.
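To make those assumptions concrete, here is a minimal sketch of how an ITE-style trip estimate works. The function, the land use, and the rate below are all invented for illustration, not taken from the actual handbook; the point is that floor area is the only input, so location, travel mode, and congestion cannot affect the answer.

```python
# Sketch of an ITE-style trip generation estimate.
# The rate used below is illustrative, NOT an actual ITE rate.

def estimate_peak_trips(gross_floor_area_sqft: float,
                        trips_per_ksf: float) -> float:
    """Estimate peak-hour vehicle trips from floor area alone.

    Note everything this ignores: whether the site is downtown or
    rural, whether anyone walks or takes transit (all trips are
    assumed to be by car), and prevailing congestion.
    """
    return (gross_floor_area_sqft / 1000.0) * trips_per_ksf

# A hypothetical 8,000 sq ft restaurant at an illustrative rate
# of 9.5 trips per 1,000 sq ft of gross floor area:
print(estimate_peak_trips(8000, 9.5))  # 76.0
```

The same 76-trip answer comes out whether the restaurant sits next to a subway stop or on a rural highway, which is exactly the problem.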
So overdesigning a bridge has no drawback except the higher cost. The problem comes when you apply that “conservative” mentality to traffic studies. There it means assuming the worst-case traffic scenario: that all the different buildings hit their PM peak flow at the same time, that all trips are made by car, and, via the Peak Hour Factor, that every movement reaches its maximum 15-minute vehicle flow simultaneously. It also means somewhat low-balling road capacity. So you’re designing roads to avoid congestion at higher traffic levels than the roads will usually see. This approach treats congestion, even during the peak hour of the day, as a catastrophic failure akin to a bridge falling down. And it assumes there is no externality to overdesigning an intersection, to longer traffic signals, to more and wider lanes and wider medians; the only drawback is the cost…
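A quick sketch of the Peak Hour Factor mechanism, with invented counts: PHF is the ratio of the hourly volume to four times the busiest 15-minute count, and dividing the hourly volume by it sizes the road as if that busiest quarter hour lasted the entire hour.

```python
# Sketch of how the Peak Hour Factor (PHF) inflates design volumes.
# The traffic counts are invented for illustration.

def peak_hour_factor(quarter_hour_counts: list) -> float:
    """PHF = hourly volume / (4 * highest 15-minute count)."""
    hourly = sum(quarter_hour_counts)
    return hourly / (4 * max(quarter_hour_counts))

def design_flow(hourly_volume: float, phf: float) -> float:
    """Dividing by PHF assumes the peak 15 minutes persist
    for the whole hour -- the 'conservative' move."""
    return hourly_volume / phf

counts = [210, 260, 240, 190]          # four 15-min counts; hour total = 900
phf = peak_hour_factor(counts)         # 900 / (4 * 260) ≈ 0.865
print(round(design_flow(900, phf)))    # 1040, i.e. 4 * 260
```

So a road that actually carries 900 vehicles in its peak hour gets designed for 1,040, and a study that conservatively assumes a lower PHF inflates that number further still.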

When it comes to roads and congestion, we have no choice but to bite one of the following three bullets:

1) Road space at peak travel times will be allocated to those willing to spend extra time in traffic.

2) Road space during peak travel times will be allocated by willingness to pay a congestion fee (à la London, Singapore, Milan, Stockholm).

3) An extraordinary quantity of public resources and high-value land will be devoted to roads, externalities galore be damned, in order to socialize the cost of peak travel time car usage.

Myself, I’d go for (2). It raises more revenue, which in actually existing cases turns out to come mostly from the rich; it incentivizes devising more efficient ways to use a public resource; and, perhaps most importantly, it internalizes at least a few of the externalities of cars. My fellow Americans seem to overwhelmingly prefer (1) to (2), to which I say fair enough. But either is preferable to (3), which is a nightmare on environmental, land use, and allocation-of-scarce-public-resources grounds.

…for a more optimistic take, see this comment from someone who works on this kind of study in Rochester. The traffic study doesn’t have to be a disaster if the inputs are better, and it’s a relatively easy fix to make if there’s sufficient political will.