Reverse Zone
Weblog on urban planning, sustainability, and technology.
Martin Laplante

Sun, 07 Nov 2010

Evidence-based planning

My short piece on evidence-based urban planning in Planetizen seems to have made a liar out of me.  When I first started writing it, there really were only 4 hits for "evidence-based urban planning".  Now there are dozens, a lot of them referring to the article.

The piece hit a bit of a nerve.  I've received plenty of e-mail, some phone calls, and a fair amount of Twitter.  The majority were from planners or activists saying that they agree with me, and that the evidence shows their opinion is right.  For many, all it takes is one figure in one table in an article on the internet, and they have evidence.  Let me help you:

People who think you're right: 92%
Morons: 8%

You can cite this article.  I peer-reviewed it myself.

There were some who thought that I agreed with them when I stated that this or that belief does not have enough evidence to support it.  A lack of evidence does not prove that a belief is wrong, and it certainly does not prove that the opposite is right.

The scarcity of reliable and generalizable evidence does hamper urban planning, but so does the profession's difficulty in dealing with evidence.  I mentioned the political aspects and the complexity of interactions in the Planetizen piece, but even when evidence exists, using it may go against the grain.  To be clear, evidence does not mean one study about one building, even if it's in Portland.  It means a systematic review of all available research to find reproducible results that are likely to apply in similar situations.

In "Is There a Role for Evidence-Based Practice in Urban Planning and Policy?", Planning Theory & Practice, Vol. 10, No. 4, 459–478, Krizek, Forsyth, and Slotterback  note that politicians given access to systematic reviews of all research on a topic will tend to ignore it except to justify decisions after they are made.  Planners given access to this information will pass on only the result that corresponds to their own belief.  In the "Design For Health" project, many communities were provided with easily accessible systematic reviews of research on the relationships between urban design and health, and funded to integrate this evidence into their process.   Many planners had difficulty doing so, and either picked single studies that supported their earlier position, or simply added the words "health" or "healthy" without any change to the planning strategy.

As the authors note, "... too often a suggested policy action is justified with reference to a single source of evidence that fits the practitioner's or author's preconception. Cases, anecdotes, or even research studies are cherry-picked to fit a situation or idea. This is perhaps the biggest current problem with the use of research evidence: when practitioners use only a single source, unworried by conflicting evidence because they ignore evidence that does not agree with their position. As one reviewer commented, several approaches to planning that claim to be evidence-based have a very thin base of evidence which is used to justify pre-existing positions."

The direct use of evidence may well work for local, small-area planning decisions, but for city-wide effects like transportation or land use planning, given the complexity and interactions of the different components of a city, I don't know how anyone can make decisions without modeling the city under different scenarios.  I am a big fan of UrbanSim and similar models, as some may have noticed.  A city is like a balloon: squeeze it in one place and it will bulge somewhere else.  Even when you do find statistically significant relationships between, say, some aspect of density and some aspect of vehicle use, there is so much scatter in the graph that most cities are not on the fitted line.  They could invest massively in following the scientific evidence without any benefit.  There are few typical cities.  Directly applying the evidence won't work; other differences between cities may well overwhelm the desired effect or make it play out in a different way.  To me, it's clear that research knowledge and local knowledge must be combined, and the proper way to combine them is to use a model that is sufficiently fine-grained to be predictive.
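To make the scatter point concrete, here is a minimal sketch with entirely made-up numbers (the density range, the slope, and the noise level are my assumptions, not anyone's data): a regression of vehicle travel on density can be statistically significant and still explain so little of the variance that it says almost nothing about any individual city.

    # Minimal sketch, invented numbers: a "significant" density-vehicle-use
    # relationship can still leave most cities far from the fitted line.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200                                  # hypothetical sample of cities
    density = rng.uniform(10, 100, n)        # dwellings per hectare (illustrative)
    # Weak negative effect of density on vehicle-km travelled, plus lots of
    # city-to-city noise from everything else that differs between cities.
    vkt = 30.0 - 0.08 * density + rng.normal(0, 8, n)

    slope, intercept = np.polyfit(density, vkt, 1)
    r = np.corrcoef(density, vkt)[0, 1]
    print(f"slope = {slope:.3f}, r = {r:.2f}, R^2 = {r**2:.2f}")

    # Residual spread: how far a typical city sits from the fitted line.
    residuals = vkt - (slope * density + intercept)
    print(f"residual std dev = {residuals.std():.1f}")
    # With R^2 under 0.1, the relationship is real but predicts almost nothing
    # about any one city; that is the gap a fine-grained model has to fill.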

The interpretation of data in the planning literature and in the activist community, as well as among planners, is filled with well-intentioned errors.  Self-selection bias is a common one – measuring behaviour in a group without realizing that the group's membership does not behave like the rest of the population.  Presumed linear relationships are very common – the untested belief that doubling a cause will double the result – as is the presumption of linear independence, that two causes together will have the sum of the effects of each taken separately.  Mistaking proxies for the actual variables, and chaining together proxies of proxies, are also very common.  All of these sometimes add up to a mistaken belief in additivity, that the sum of local effects will produce a global effect if repeated.  Add to this the chorus of voices that firmly believe in their own interpretation of the data, and planners are left with the complex job of explaining statistics to the unwilling.
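Self-selection bias in particular is easy to demonstrate with a toy simulation.  The sketch below uses invented parameters (preference rates, trip counts) purely for illustration: households that already prefer walking are more likely to choose dense neighbourhoods, so a naive dense-versus-sprawl comparison of car use mixes the effect of the built form with the effect of who chose to live there.

    # Toy simulation, invented parameters: self-selection bias.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    prefers_walking = rng.random(n) < 0.3           # 30% of households

    # Residential choice depends on that preference (the self-selection step).
    p_dense = np.where(prefers_walking, 0.7, 0.2)
    lives_dense = rng.random(n) < p_dense

    # "True" weekly car trips: density itself removes 2 trips, while the
    # walking preference removes 6 trips regardless of neighbourhood.
    car_trips = 20 - 2 * lives_dense - 6 * prefers_walking + rng.normal(0, 3, n)

    naive_gap = car_trips[~lives_dense].mean() - car_trips[lives_dense].mean()
    print(f"naive dense-vs-sprawl difference: {naive_gap:.1f} trips/week")
    print("true causal effect of density:    2.0 trips/week")
    # The naive comparison more than doubles the apparent effect of density,
    # because the dense group is disproportionately people who drove less anyway.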

Evidence-based urban planning practice has four major facets, which will be described in the next blog posts.  But here is a preview (a rough sketch of how the pieces might fit together follows the list):

  1. Setting objectives
    1. Objectives that are ends in themselves, not presumed means of achieving them
    2. Objectives that are measurable
    3. Objectives that can be achieved in great part through urban planning/urban design
    4. Politically legitimate objectives
  2. Measuring achievement of those objectives before and after
    1. Indicators and proxies whose relationship to the objectives are known
    2. Don't confuse these proxies with the actual objectives
    3. When a weight or combination is used in a proxy or a policy, evidence that this is the optimal weight or combination
    4. Also measure the process and its outputs
  3. Determining what factors affect achievement of those objectives
    1. More precisely, what changes will result in changes to objective variables
    2. If a systematic literature review indicates some factors, do they apply to your city?
    3. Disaggregate the factors appropriately, for instance by demographic group, district or building type
    4. Find indicators and proxies for these factors
    5. Don't forget negative indicators; for instance, if you measure new housing starts, also measure loss of housing stock
    6. Don't confuse these proxies with the actual factors
    7. Don't filter out the counter-intuitive ones
  4. Finding policy instruments that can be used to modify those factors
    1. What will be the response to those instruments – cooperative or reactive?
    2. Model scenarios!  Integrated transportation-land use modeling is required
    3. Equilibrium models are not sufficiently predictive
    4. Predict effect of policy instruments on predictive factors AND on objectives
    5. Predict other impacts positive and negative
    6. The ability of policy instruments to achieve objectives should affect the choice not only of policy instruments but also of predictive factors.
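To make the framework slightly more concrete, here is a rough and entirely hypothetical sketch (the objective, indicators, factors, and instruments are invented for illustration) of how objectives, their indicators and proxies, the factors believed to drive them, and candidate policy instruments could be kept as explicitly linked records, so that a proxy is never quietly promoted into being the objective itself.

    # Hypothetical sketch: objectives, proxies, factors, and instruments as
    # linked records, so proxies stay clearly labelled as proxies.
    from dataclasses import dataclass, field

    @dataclass
    class Indicator:
        name: str
        is_proxy: bool            # True if it only stands in for the objective
        evidence_for_link: str    # what ties this indicator to the objective

    @dataclass
    class Objective:
        statement: str                                         # an end in itself
        indicators: list[Indicator] = field(default_factory=list)
        factors: list[str] = field(default_factory=list)       # what moves it
        instruments: list[str] = field(default_factory=list)   # what moves factors

    walkability = Objective(
        statement="Residents can reach daily needs without a car",
        indicators=[
            Indicator("share of trips under 2 km made on foot", is_proxy=False,
                      evidence_for_link="household travel survey"),
            Indicator("intersections per square km", is_proxy=True,
                      evidence_for_link="systematic review (hypothetical)"),
        ],
        factors=["street connectivity", "destination mix", "perceived safety"],
        instruments=["zoning for corner stores", "traffic calming programme"],
    )

    for ind in walkability.indicators:
        kind = "proxy" if ind.is_proxy else "direct measure"
        print(f"{walkability.statement}: {ind.name} ({kind})")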
