Reverse Zone
Sun, 07 Nov 2010
My short piece on evidence-based urban planning in Planetizen seems to have made a liar out of me. When I first started writing it, there really were only 4 hits for "evidence-based urban planning". Now there are dozens, a lot of them referring to the article.
The piece hit a bit of a nerve. I've received plenty of e-mail, some phone calls, and a number of tweets. The majority were from planners or activists saying that they agree with me, and that the evidence shows their opinion is right. For many, all it takes is one figure in one table in an article on the internet, and they have evidence. Let me help you:
You can cite this article. I peer-reviewed it myself.
There were also some who thought I agreed with them when I stated that this or that belief does not have enough evidence to support it. But a lack of evidence does not prove that a belief is wrong, and it certainly does not prove that the opposite is right.
The scarcity of reliable and generalizable evidence does hamper urban planning, but so does the profession’s difficulty in dealing with evidence. I mentioned the political aspects and the complexity of interactions in the Planetizen piece, but even when evidence exists, using it may go against the grain. To be clear, evidence does not mean one study about one building even if it’s in Portland. It means a systematic review of all available research to find reproducible results that are likely to apply in similar situations.
In "Is There a Role for Evidence-Based Practice in Urban Planning and Policy?", Planning Theory & Practice, Vol. 10, No. 4, 459–478, Krizek, Forsyth, and Slotterback note that politicians given access to systematic reviews of all research on a topic will tend to ignore it except to justify decisions after they are made. Planners given access to this information will pass on only the result that corresponds to their own belief. In the "Design For Health" project, many communities were provided with easily accessible systematic reviews of research on the relationships between urban design and health, and funded to integrate this evidence into their process. Many planners had difficulty doing so, and either picked single studies that supported their earlier position, or simply added the words "health" or "healthy" without any change to the planning strategy.
As the authors note, "... too often a suggested policy action is justified with reference to a single source of evidence that fits the practitioner's or author's preconception. Cases, anecdotes, or even research studies are cherry-picked to fit a situation or idea. This is perhaps the biggest current problem with the use of research evidence: when practitioners use only a single source, unworried by conflicting evidence because they ignore evidence that does not agree with their position. As one reviewer commented, several approaches to planning that claim to be evidence-based have a very thin base of evidence which is used to justify pre-existing positions."
The direct use of evidence may well work for local, small-area planning decisions, but for city-wide questions like transportation or land use planning, given the complexity and interactions of a city's components, I don't know how anyone can make decisions without modeling the city under different scenarios. I am a big fan of UrbanSim and similar models, as some may have noticed. A city is like a balloon: squeeze it in one place and it will bulge somewhere else. Even when you do find a statistically significant relationship between, say, some aspect of density and some aspect of vehicle use, there is so much scatter in the graph that most cities are not on the fit line. A city could invest massively in following the scientific evidence and see no benefit. There are few typical cities. Directly applying the evidence won't work: other differences between cities may well swamp the desired effect or make it play out differently. To me, it's clear that research knowledge and local knowledge must be combined, and the proper way to combine them is to use a model that is sufficiently fine-grained to be predictive.
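The scatter problem is easy to see in a toy simulation. Here is a minimal Python sketch, with all numbers invented purely for illustration (not real city data): a real but weak density effect on vehicle use, buried in large city-to-city variation, produces a fitted line that explains almost none of the variance, so most simulated "cities" sit far from it.

```python
import random
import statistics

random.seed(1)

# Hypothetical data: 200 cities, density on an arbitrary 1-10 scale.
# The true effect of density on vehicle use is small (-2 per unit of
# density) compared to the city-to-city scatter (std. dev. 30).
n = 200
density = [random.uniform(1, 10) for _ in range(n)]
vehicle_use = [100 - 2 * d + random.gauss(0, 30) for d in density]

# Ordinary least-squares fit of vehicle_use on density.
mx = statistics.mean(density)
my = statistics.mean(vehicle_use)
sxy = sum((x - mx) * (y - my) for x, y in zip(density, vehicle_use))
sxx = sum((x - mx) ** 2 for x in density)
slope = sxy / sxx
intercept = my - slope * mx

# Fraction of variance explained by the fit (R^2).
ss_res = sum((y - (intercept + slope * x)) ** 2
             for x, y in zip(density, vehicle_use))
ss_tot = sum((y - my) ** 2 for y in vehicle_use)
r2 = 1 - ss_res / ss_tot

print(f"fitted slope = {slope:.2f}, R^2 = {r2:.2f}")
```

With these made-up numbers the fitted slope recovers a density effect, but the R² is tiny: the relationship is real in aggregate while telling you almost nothing about where any one city will land.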
The interpretation of data in the planning literature, in the activist community, and among planners is filled with well-intentioned errors. Self-selection bias is a common one: measuring behaviour in a group without realizing that the people who chose to join that group do not behave like the rest of the population. Presumed linear relationships are also very common (the untested belief that doubling a cause will double a result), as is the presumption of linear independence, that two causes together will have the sum of the effects of each taken separately. Mistaking proxies for the actual variables, and chaining together proxies of proxies, is also common. All of these can add up to a mistaken belief in additivity: that repeating a local effect will sum to the same global effect. Add to this the chorus of voices that firmly believe in their own interpretation of the data, and planners are left with the complex job of explaining statistics to the unwilling.
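Self-selection bias in particular is worth a concrete sketch. The hypothetical Python simulation below (all parameters invented for illustration) gives each resident a walking preference that also drives their choice of neighbourhood. Comparing the raw means of the two neighbourhoods then attributes the preference effect to the built environment, overstating the true causal effect.

```python
import random

random.seed(2)

# Hypothetical residents: a personal taste for walking influences both
# where they live and how much they walk. The true causal effect of the
# walkable neighbourhood is +1 trip/week, by construction.
n = 10000
people = []
for _ in range(n):
    preference = random.gauss(0, 1)  # taste for walking
    # Preference drives neighbourhood choice (self-selection).
    walkable = random.random() < (0.8 if preference > 0 else 0.2)
    trips = (5 + 2 * preference          # preference effect
             + (1 if walkable else 0)    # true neighbourhood effect
             + random.gauss(0, 1))       # everything else
    people.append((walkable, trips))

mean_w = (sum(t for w, t in people if w)
          / sum(1 for w, _ in people if w))
mean_nw = (sum(t for w, t in people if not w)
           / sum(1 for w, _ in people if not w))
observed_gap = mean_w - mean_nw

print(f"observed gap = {observed_gap:.2f} trips/week; "
      f"true causal effect = 1.00")
```

The naive comparison reports a gap well above the built-in causal effect of 1 trip/week, because the walkable neighbourhood is disproportionately populated by people who would have walked more anywhere.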
The role of evidence-based urban planning practice has four major facets, which will be described in the next blog posts. But here is a preview:
Tags: Urban Planning