Disasters, Lessons Learned, and Fantasy Documents

Thomas A. Birkland

School of Public and International Affairs, North Carolina State University, Campus Box 8102, Raleigh, NC 27511, USA. E-mail: [email protected]

This article develops a general theory of why post-disaster ‘lessons learned’ documents are often ‘fantasy documents’. The article describes the political and organizational barriers to effective learning from disasters, and builds on general theory building about learning from extreme events to explain this phenomenon. Fantasy documents are not generally about the ‘real’ causes of and solutions to disasters; rather, they are generated to prove that some authoritative actor has ‘done something’ about a disaster. Because it is difficult to test whether learning happened after an extreme event, these post-disaster documents are generally ignored after they are published.

1. Introduction

A staple of crisis management and emergency response is the post-response report, often known as an ‘after action’ report or a ‘lessons learned’ document. Many of these reports are the routine product of organizational self-evaluation and are primarily concerned with operational or ‘tactical’ matters. Indeed, this sort of learning is known to organizational theorists as ‘single-loop learning’ (Argyris & Schön, 1996), which has very important implications for crisis management (see Moynihan’s and Deverell’s papers in this symposium). But I am more concerned with the second loop, as it were, of ‘double-loop learning’, which involves learning about the fundamental assumptions behind policy design at the strategic level. Here, the claims of ‘lessons’ and ‘learning’ have significant implications for the supposed lesson learners and the broader policy system.

Because social and political pressures to create such lessons learned reports are the greatest in the immediate aftermath of the event, while the event’s status on the agenda is freshest, a great deal of attention is paid to ensuring that lessons really are learned, so that the worst effects of the next disaster can be avoided. These pressures also mean that lessons learned reports are usually very quickly generated. It is difficult to claim that any actual learning occurred because insufficient time has elapsed between the event, the creation of the report, and any subsequent tests of the ‘lessons’. Instead, these documents really focus on ‘lessons observed’ or, more simply, the observations that officials and experts made about the preparations before and responses to the crisis or disaster. Moreover, most of the time, these reports are narrow-bore efforts to derive meaning for a particular constituency; in the disaster field, these groups include first responders, communications experts, and public health officials. There are few comprehensive efforts to learn broader strategic lessons about the events based in sound science; this is consistent with the idea that single-loop learning is more common than double-loop learning.

In this article, I borrow concepts and terminology from Lee Clarke, who coined the term ‘fantasy documents’ (Clarke, 1999). I call many lessons learned documents ‘fantasy learning documents’ for the same reason that Clarke terms many pre-disaster plans ‘fantasy documents’: because they are created and disseminated for rhetorical purposes, even if their authors somehow believe that learning has really occurred.

To begin, I review the theories of focusing events and outline a theory of learning from focusing events. I then develop a general theory of why post-disaster lessons learned documents are fantasy documents. This is not true in all cases, of course, but the general trend is towards producing such documents to prove that some authoritative actor has ‘learned its lessons’ about a disaster and that, given this learning, it will not replicate its errors.

2. Overview and definitions

John Kingdon (1995) uses the term ‘focusing event’ in his study of agenda setting and alternative selection to describe a class of political phenomena that can cause an issue to gain attention in the media and among various institutions. In my work (Birkland, 1997, 1998, 2006), I further defined focusing events as events that are sudden, that are known to policy makers and elites simultaneously, that affect a community or a community of interest, and that do actual harm, or that suggest the possibility of greater future harm. My definition of the term ‘focusing event’ is influenced by Cobb and Elder’s (1983) work on agenda setting, in which they call phenomena like focusing events ‘circumstantial reactors’, and Baumgartner and Jones’s (1993) work on the ‘punctuated equilibrium’ model of the policy process, in which public policies remain rather stable until something upsets the system’s equilibrium, yielding change. All of these works acknowledge that sudden events are important examples of agenda drivers, but do not go further than that. My work sought to sharpen the idea of focusing events¹ and to show how focusing events do not influence all policy domains in the same way. On the other hand, my definition of focusing events is rather more restrictive than Kingdon’s; this definitional difference will not be resolved here, but it is important to acknowledge.

Focusing events, by elevating issues on the agenda, can, says Kingdon, open a ‘window of opportunity’ for policy change. This window of opportunity can yield immediate policy change, improved understanding of the social or the natural forces that lead to a disaster, or can be an opportunity for a variety of actors to learn how better to argue for their policy or political interests. Of course, these outcomes are not mutually exclusive, and this knowledge can be accumulated and applied after later focusing events or other change opportunities. Peter May (1992) defines these three types of learning as instrumental policy learning, social policy learning, and political learning. Instrumental policy learning involves learning about the effectiveness of various policy tools applied to problems. Social policy learning relates to learning about the social construction of problems and the interaction of policies with the targets of policies. Political learning involves learning about the effectiveness of rhetorical appeals for policy change, and involves political strategies and tactics at the ideological level, rather than the specifics of public policies. This paper will be mostly concerned with instrumental and social policy learning, although politics and political learning are undeniably important.

Natural disasters, industrial accidents, and acts of terrorism – what are together called ‘extreme events’ – constitute one type of focusing event that can have local and distant social and political effects. Hurricane Katrina was a local event for the Gulf Coast, while the distant impacts of a focusing event are illustrated by the significant loss of life in Thailand in the 2004 tsunami. This disaster killed and injured a great many Swedes on holiday, and the governmental response to it had significant consequences for Swedish politics (Naik et al., 2005; Stromback & Nord, 2006; Widfeldt, 2007).

Because these events are undesirable, humans and their institutions are presumably interested in mitigating them or preventing their damages from happening in the first place. For example, the Air Accidents Investigation Branch in the United Kingdom and its counterpart in the United States, the National Transportation Safety Board, exist to collect a vast amount of information on aviation incidents, ranging from minor mishaps to catastrophic accidents. The catastrophes are the more focal events, but from nearly every major aviation accident we have learned about the causes and ‘cures’ for aviation accidents (Perrow, 1999), such that aviation safety has made remarkable gains (Cobb & Primo, 2003).

Because learning from disasters is usually the result of some sort of intensive investigational and study activity, learning should not be seen as an outcome or a goal of the process, but should be considered an ongoing activity within the policy process. George Busenberg defines the learning process as ‘a process in which individuals apply new information and ideas to policy decisions’ (2001). I accept this definition and suggest that focusing events can provide that new information, although in a relatively raw form. For example, the risk of a catastrophic terrorist attack on the United States was no greater on 12 September 2001 than it was on 10 September, but the September 11 attacks caused the public and elites to be much more attentive to the terrorism problem. The focusing event brings information to the attention of a broader range of people than normally consider the issues.

However, my definition extends somewhat beyond Busenberg’s by focusing more on the outcome of learning than on the process – that is to say, I seek evidence of some sort of change as a result of the new information, while Busenberg’s definition only requires the application of new information, regardless of the policy decision. Policy learning can be identified if there is prima facie evidence of policy changes that are reasonably linked to the causal factors that connected the event under consideration to its harms, and if addressing these factors would be likely to mitigate the problem (Birkland, 2006). For example, we can say that policy learning has occurred in the United States after September 11 through a regulatory requirement that cockpit doors be kept closed and securely locked during flight (Airline Industry Information, 2001; World Airline News, 2001). The new requirement is therefore clearly a response to the insecurity of cockpits pre-September 11. However, it is also true that cockpit intrusions were nothing new, and we can speak of the failure to learn from less catastrophic, but still worrisome, episodes of deranged passengers seeking to enter the flight deck (Air Safety Week, 2000; Richfield, 2000). This is an example of double-loop learning because a small but fundamental policy change occurred that transcended the usual regulatory adjustments that characterize single-loop learning.

However, what looks like policy learning – that is, a change after some sort of external shock – may not be learning at all, for at least two reasons. First, the ‘lessons’ that may be ‘learned’ after an event may not be related to the event at all; rather, the lessons had already been ‘observed’ several times before the event. That existing knowledge was either not taken up by those who could have acted, or the knowledge was available, but policy makers and implementers simply chose not to act on that new knowledge. Examples of this include the significant evidence of security problems in civil aviation well before September 11; it took September 11 to drive these ideas forward on the agenda. This is entirely consistent with Kingdon’s idea that focusing events open the window of opportunity for the joining of problems with pre-existing solutions, such as better checkpoint screening, cockpit security, and the like (Cobb & Primo, 2003; Birkland, 2004, 2006, Chapter 3). Indeed, the oft-stated lament that ‘it takes a disaster to change anything’ is entirely consistent with agenda setting and focusing event theory in a wide range of fields, from the ongoing financial crisis to industrial accidents and natural disasters. Moreover, at least intuitively, we know that ‘big’ events are more likely to yield policy change than are ‘small’ events.

Second, some policy learning is ‘superstitious’ learning, which either attempts to use ‘lesson drawing’ from other places or times, regardless of whether the comparison is apt (Neustadt & May, 1986), or occurs when, in the urge to ‘do something’, policies are adopted that have little or nothing to do with the problem at hand. For example, after the Columbine school shootings near Denver, Colorado, in 1999, some policy makers sought to more closely regulate video games and popular music, which were said, absent sound scientific information, to cause the sorts of behaviours that led to this disaster (Haider-Markel & Joslyn, 2001; Lawrence & Birkland, 2004; Larkin, 2007). While no real social or instrumental policy learning occurred in this incident, there was considerable evidence of political learning, in which all manner of arguments – about popular culture, the availability of guns, the lack of mental health services, and so on – were honed and deployed in a battle of ideas that, ultimately, generated more heat than light.

3. Why are disasters change and learning opportunities?

Disasters are change and learning opportunities because they provide an opportunity for close analysis of the things that happened before the disaster, during the acute phase of the disaster, and in the recovery period. The opportunities for learning and change come because these are extreme events, and therefore gain the attention that routine events do not. These events gain a great deal of media attention and, therefore, public attention. If nothing else, decision makers assume that what is on the media agenda is also high on the public’s agenda. With public attention comes pressure to do something about the event. What that ‘something’ might be is often very murky, because focusing events not only raise an issue on the agenda; they also elevate the manifold constructions of the issue on the agenda. Only those constructions that somehow resonate with the public or elites are elevated, even if these constructions are, in the causal sense, wrong (Hilgartner & Bosk, 1988; Lawrence & Birkland, 2004). Thus, after September 11, there were many ‘new’ problems to be addressed: border and immigration control, flight training, airline security, illicit money transfers, emergency preparedness, seaport security, law enforcement, and so on. Many of these issues were opportunistically advanced on the agenda by interests who had sought policy change for years; in other words, the event did not provide new information, but provided new ways of framing an existing set of policies to achieve a set of goals (in particular, the advancement of the political right’s law enforcement agenda). For these interests, September 11 was an opportunity to tie their issues to the new world of ‘homeland security’.

But it is at the ‘do something’ juncture that the opportunity to learn is manifest; given the haste of the decisions made in the wake of these events, however, the risk of superstitious learning – that is, learning without some sort of attempt to analyse the underlying problem – is greatest. In some cases, pressure to act is so strong that action is taken immediately, as was the case with the enactment of the USA Patriot Act in 2001. This enactment broke the pattern in the United States in which most legislation and regulatory change followed some sort of investigative or ‘after action’ report (Rubin et al., 2003). The quick – or hasty – reaction to the September 11 attacks provides considerable evidence of learning – or of political opportunism – as with the rather stringent changes to criminal law enacted in the Patriot Act that have been more often used in run-of-the-mill criminal cases than in prosecutions of terrorism.

This notion of political opportunism is not meant to be cynical. Rather, it is a reflection of how ideas come to the fore in Cohen, March, and Olsen’s (1972) ‘garbage can’ model of organizational decision making and in its extension to the policy process in Kingdon’s ‘streams’ metaphor. After all, all groups have an ‘agenda’, which, in American politics at least, has come to sound like something sinister (‘the liberal agenda’, ‘the right-wing agenda’) but which really means the pre-existing goals that groups seek to pursue. Clearly, if it is more economical, in terms of political capital and the generation of public interest, to use an event as a way to advance a group’s agenda, groups will do so, as when environmental groups were able to use the Exxon Valdez oil spill to advance claims that further development of oil resources in Alaska would be environmentally damaging (Birkland, 1997, Chapter 4).

Another type of reaction is one through which some sort of learning (sometimes called ‘assessment’ or ‘evaluation’) process is begun, either within or outside an agency, to assess what went well after an event, what did not go well, and what should be improved in the future. Such efforts, if done well, are designed to understand the social, technological, and engineering reasons for major failures that lead to disasters, such as the multiple investigations of the levee failures during Hurricane Katrina conducted by expert investigators. Others, who may not be as familiar with the response as the experts, will develop ‘lessons learned’ documents that focus on particular aspects of their concern, that are based on secondary sources, and that use the event as an exemplar. For example, publications aimed at information technologists will use an event to highlight lessons learned about the physical security of computers, servers, and related infrastructure, even though these ‘lessons’ were well known before the event in question, and there is little reason to believe that action as a result of these efforts will be greater after the report than before. Indeed, we might call all these lessons learned documents ‘lessons observed’.

This is often well known to the participants in these efforts, and it is part of the investigatory process. Leaders of investigative bodies pledge that their report will not join a series of reports that ‘sit on a shelf and collect dust’. Rather, their investigations will yield tangible improvements in the way of policy and practical change. Indeed, some members of the September 11 commission created the nongovernmental Public Discourse Project as a way to keep the recommendations alive and in front of public officials, although this group was disbanded at the end of 2005.

3.1. Potential patterns of ‘lessons learned’ processes

There appear to be five broad patterns of ‘lessons learned’ processes and documents:

- An event happens, and then change happens with little or no effort devoted to learning from the event. A major example is the USA Patriot Act, which was enacted very soon after the September 11 attacks, without any real effort expended to see whether the policy tools contained in that act would really be the most effective in preventing terrorist attacks.

- An event happens, and an investigation is undertaken that is agency serving, is incomplete, or states the obvious, without any evidence of a serious attempt to learn. An example is the Executive Office of the President’s Lessons Learned from Katrina, the point of which is as much rhetoric as it is real learning. Such reports simply hope to, in Schattschneider’s (1975) terms, contain the scope of conflict by creating the appearance of learning or reform. Of course, there may well be some real learning reflected in such reports, but their primary function, ultimately, is public reassurance, not internal evaluation.

- An event happens, and an investigation is initiated, which leads to policy change, but that policy change cannot be linked to the investigation, or policy changes without reference to the changes recommended in the post-event investigation. For example, there were many different attempts to investigate September 11, but it is not clear whether the creation of the Department of Homeland Security was a direct outcome of these investigations, particularly given the thin evidence that such an agency was really necessary (Tierney, 2005). Indeed, DHS was created 2 days before the major investigation – popularly known as the September 11 commission – was established. Its final report was submitted in September 2004.

- An event happens, and a thorough and careful investigation is initiated, but policy change does not result. This may be because of cost, bureaucratic delay, political opposition, or any of the usual reasons for political and policy stasis. For example, the fruits of many NTSB investigations of airplane crashes, including precursors to ValuJet 592, were largely ignored for years by the Federal Aviation Administration (Schiavo, 1997). The same is true for aviation security problems before September 11, where the FAA moved very slowly in the face of what was considered to be a growing threat (Birkland, 2004). However, we might still find the learning process to be functional if the crisis was so anomalous that no intervention could improve policy performance, such as the unforeseen ‘freak accident’, or if the remedy for the problem would create more problems than the original problem itself. For example, we know that some number of people may be trapped in cars by seat belts in accidents, and may perish if the car catches fire. We also know that some very small fraction of people who are vaccinated against diseases may react badly to the vaccine, resulting in illness or death. But we do not generally contemplate removing seat belts or halting vaccinations, because the broader social good these things do far outweighs the small potential harms (while acknowledging, of course, that the harms to those few injured individuals are not small).

- An event happens, and a thorough and careful investigation is initiated, which leads to policy change as a result of careful investigation, assessment, and policy design. An example is the Columbia Accident Investigation Board, which probed the 2003 space shuttle accident. There were changes at NASA as a result of this report, including a much closer inspection of heat shields and, in particular, of potential damage to wings from foam debris falling from the external fuel tank. However, one must not make too much of ‘successful’ learning, because these lessons can decay over time, as they did between the loss of Challenger and Columbia. On the other hand, the second shuttle accident has led to fundamental rethinking about spaceship design, with new craft being simplified and designed to put the crew far forward of the dangerous fuel tanks; this focus on safety and survivability is a function of double-loop learning. However, many careful investigations yield single-loop learning that does yield operational and regulatory change without being elevated to the legislative level. An example is the NTSB’s and the FAA’s investigation of a series of rudder deflection incidents that included the crash of US Airways flight 427 near Pittsburgh in 1994. This investigation ultimately led to the discovery and remedy of a design flaw in the mechanism that controlled the Boeing 737-300 rudder (see http://www.ntsb.gov/events/usair427/items.htm). Indeed, the NTSB’s work on aviation accidents is considered a model of learning from thousands of minor to major incidents that accumulate into a vast body of operational knowledge (Perrow, 1999).

The first four of these examples fall into a class I call ‘fantasy learning’ that generates ‘fantasy lessons learned documents’, although the fourth example might be more a function of bureaucratic delay than of rhetoric. Only one of these scenarios – the fifth – is an example of sound instrumental learning. While this sort of rational, experience- and evidence-based learning is considered by the public and many actors to be a desirable outcome of such events, and describes what we might consider the classical model of learning, this sort of learning is rare. There are many reasons, then, for the production of fantasy lessons learned documents.

4. A model of event-related policy change

The logic model in Figure 1 depicts the ideal process of event-related learning, which can be used to test the patterns of lessons learned processes. In this model, if certain actions occur at points after a focusing event occurs, learning becomes more likely, and policy change as a result of this learning becomes more likely. The model also suggests that it is possible for learning without policy change to occur after one event, or for policy change to result from mimicking or ‘superstitious’ learning. This learning is the result of pressure to ‘do something’ after an event, where issuing a ‘lessons learned’ document is taken to be evidence of at least the beginning of an effort to tackle the failures revealed by the event. Finally, the model acknowledges that not every event will lead to policy change, but that events may contribute to a base of experience that may promote learning from subsequent events as knowledge accumulates, as noted in the feedback arrow. In other words, not all events involve explicit acknowledgement of lesson learning.
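Read as a decision flow, the model can be sketched in code. The following is a minimal sketch, in Python, of how the yes/no branches of Figure 1 might be traced to an outcome; the boolean fields, the function name, and the exact branch assignments are illustrative assumptions of this sketch, not a formal specification of the model.

```python
# A minimal sketch of the decision flow in Figure 1. Field names, the function
# name, and the exact branch assignments are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class PostEventRecord:
    """Observations about a policy domain after a focusing event."""
    agenda_attention: bool       # did the event gain increased agenda attention?
    group_mobilization: bool     # did groups mobilize around the event?
    ideas_discussed: bool        # were policy ideas discussed and debated?
    new_policies_adopted: bool   # were new policies adopted?


def classify_outcome(record: PostEventRecord) -> str:
    """Trace the yes/no branches to one of the model's possible outcomes."""
    if not record.agenda_attention or not record.group_mobilization:
        return "little or no learning"
    if not record.ideas_discussed:
        # Policy change without debated ideas looks like mimicking, not learning.
        if record.new_policies_adopted:
            return "possible superstitious learning or mimicking"
        return "possible accumulation of learning for future policy making"
    if record.new_policies_adopted:
        return "possible instrumental or social learning"
    return "possible political or social learning, applicable to future events"


# Example: mobilization and debate occur, but no new policy is adopted.
print(classify_outcome(PostEventRecord(True, True, True, False)))
```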

In this model, I operationalize learning in the following way. First, I adopt Busenberg’s process-based definition of learning as ‘a process in which individuals apply new information and ideas to policy decisions’, read in light of Kingdon’s streams metaphor and of Cohen, March, and Olsen’s ‘garbage can’ model (Cohen et al., 1972), on which Kingdon relies. However, I modify this definition slightly, defining learning as a process in which individuals apply new information that may be revealed by a disaster, together with new and preexisting information and ideas elevated on the agenda by a recent event, to actual policy change. This redefinition takes into account two factors: the ebb and flow of ideas on the agenda and the accumulation of ideas over time, even as those ideas are not uniformly translated into policy.

I do not claim to be able to measure ‘learning’ at the individual level based on behavioural or cognitive science. Rather, I focus on the apparent lessons of these events, and ask whether it appears that the clear lessons of these events have been learned, as reflected in the policy-making process. In particular, we can say that there is prima facie evidence of learning if policy changes in a way that is reasonably likely to mitigate the problem revealed by the focusing event. This operationalization of learning cedes a great deal of judgement to the researcher making the claim of learning. This is why clear criteria and coding frames are necessary to any detailed study of learning.

4.1. Drivers of the learning process

What is the motive force that advances the learning process? I identify three drivers of this process, all of which can either promote learning or lead to dysfunctional learning. The first driver is the desire to learn, quickly, why a bad thing happened so as to prevent its recurrence. These pressures create hasty attempts to learn from events, which can induce premature attribution of causes, such as the early claims by James Kallstrom, the FBI’s New York bureau chief, that TWA flight 800 was brought down by a bomb in 1996; a careful analysis later found that the plane exploded due to an abundance of explosive vapours in a fuel tank. The news media are notoriously prone to both warning against speculation and then speculating about the causes of airplane crashes, sometimes in the same story.

Self-interest is not simply about attempting to inoculate an agency or a group against criticism. The mirror image of the self-promoting ‘lessons learned’ process is a wildly critical effort that seeks to find fault with everything that everyone did in an event. Few reports are this critical, but the legislative branch is often tempted, for partisan or institutional reasons, to focus on failures and ignore successes. Sometimes, these failures are overstated or personalized, as in Congress’s grilling of former FEMA director Michael Brown after Hurricane Katrina, which attempted to attribute many of the problems encountered in Hurricane Katrina to one person’s purported incompetence, not to systemic failures.

Figure 1. A Model of Event-Related Policy Learning. (Flow diagram with decision points – an event occurs, increased agenda attention, group mobilization, discussion of ideas, new policies adopted? – leading to outcomes including possible instrumental or social learning, possible superstitious learning or mimicking, possible political or social learning that could be applied to future events, possible accumulation of learning for future policy making, and little or no learning, with accumulated experience from prior events feeding back into the process.)

On the other hand, the political and time pressure created by a crisis may create a sense of purpose and urgency that would not otherwise exist without the crisis having happened. The investigations of the losses of space shuttles Challenger and Columbia were driven by the very fact that they led to loss of life (and, less publicly, by the significant costs of losing these spacecraft). Urgency can therefore be a productive or a distorting force.

A second driver of the learning process is individuals’ or groups’ self-interest. The choice to call a document a ‘lessons learned’ document can be strategic and rhetorical, and is revealed by the policy prescriptions to which the report leads. For example, the American Highway Users Alliance commissioned a study (American Highway Users Alliance, 2006) to demonstrate the ‘need’ for better evacuation planning using private automobiles and over-the-road buses to allow entire cities to evacuate, because of what was ‘learned’ about the ‘failed evacuation’ of New Orleans. While this study was triggered by Hurricane Katrina, it was based almost entirely on industry self-interest, was methodologically deficient, and failed to take into account the largely successful evacuation of New Orleans and its environs (Roig-Franzia & Hsu, 2005; Wolshon et al., 2006; Derthick, 2007). The report’s credibility was further undermined by its authorship by a consultant with a strong pro-automobile, anti-transit, and anti-planning bias.

A third driver is the human tendency, under bounded rationality, to attempt to find simple or monocausal explanations for very complex social and political phenomena. Focusing on one or a few aspects of a disaster will not often get to the heart of the problem. For example, the concentration of attention on New Orleanians’ choices to live in the parts of the city resting below sea level seemed to create a causal story that focused on the ‘poor decisions’ of the people who live there, which is another version of ‘operator error’ rather than of systemic error. The implicit lesson is that people should be discouraged from living in vulnerable areas, but this construction of vulnerability fails to account for a wide range of things that create vulnerability. These include complex socioeconomic and demographic factors, the political economy of the region, the physical landform, the roles of other actors (the Orleans Parish Levee Board, the Corps of Engineers, the city and state governments), and so on (Cooper & Block, 2006). The blaming of individuals for the failure, such as the aforementioned criticisms of Michael Brown, is yet another example of monocausal attribution, as are ‘operator error’ causes in complex systems accidents, such as aircraft or nuclear power (Perrow, 1999).

4.2. Propositions about event-driven learning

The goal of the logic model is to generate propositions about after-disaster learning. These propositions also suggest the data needed to understand the phenomenon of interest. I do not claim that these are hypotheses, because further model development and theory building is required. But I advance them as guidance for future research.

The first proposition is that a few events will gain the most attention. The distribution of damage and deaths in disasters and accidents is not statistically normal; rather, the distribution has a long ‘tail’, where a large number of relatively small events garner little attention, and a few events gain a great deal of attention. For example, many tropical storms or hurricanes can strike the nation during the hurricane season, but only the very few largest storms, on the scale of Hurricanes Katrina or Andrew, receive the most attention and can have the greatest influence on learning. Smaller incidents do not gain attention because they place less strain on existing organizations and policies; in other words, they are ‘routine’ disasters to organizations designed to respond to such events. Hurricane Katrina received more attention than did all four of the hurricanes that struck Florida in 2004 because the response to the Florida hurricanes was generally perceived as adequate, and because no individual storm was catastrophic, while Katrina was a catastrophe that overwhelmed the national emergency management system.
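To make the shape of that skewed distribution concrete, here is a small illustrative sketch in Python (synthetic numbers only, not empirical disaster data): it draws hypothetical event ‘damage’ values from a heavy-tailed Pareto distribution and shows that the largest handful of events accounts for most of the total.

```python
# Illustrative only: synthetic heavy-tailed "damage" values, not empirical data.
import random

random.seed(1)
damages = sorted((random.paretovariate(1.2) for _ in range(10_000)), reverse=True)

# Share of total damage accounted for by the largest 1% of events.
top_share = sum(damages[:100]) / sum(damages)
print(f"The largest 1% of events account for roughly {top_share:.0%} of total damage")
```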

The disaster–catastrophe distinction is important because we can think of a disaster as affecting a relatively small area whose emergency response may be strained, but not overwhelmed, while a catastrophe entirely overwhelms the ability of a community or its region to respond (Quarantelli, 2005), as was evident in Hurricane Katrina. The distinction matters because it reflects the greater scale of the catastrophe. In English, this distinction is much more pronounced than in, for example, French, where catastrophe naturelle usually translates to ‘natural disaster’ in English.

The second proposition is that most, if not all, participants in a policy domain want to address or solve the problems revealed by a focusing event, but that the proposed solutions will likely vary with the interests and motivations of the various participants. This reflects the idea that nearly all participants in a domain are goal oriented (Jones, 2001). No legitimate actor in any policy domain wants to see planes hijacked or people displaced due to natural disasters. But the policy instruments with which problems will be prevented or mitigated will differ from participant to participant in the policy process, because the depiction of how problems come to be, and therefore how they are to be solved, will differ based on each participant’s ideological and organizational commitments.


The third proposition, related to the second, is that group mobilization is linked in time to a particular focusing event. In particular, the activities of groups – or the representatives of such groups – will become more evident in news accounts of the crisis or disaster as it unfolds. In congressional hearings (or parliamentary inquiries), particular groups’ representatives will be heard from more often.

The fourth proposition is that group mobilization will be accompanied by an increased discussion of policy ideas. These will include theories about the causes and potential solutions of the problem, and, as such, are primarily social and instrumental policy learning matters. I assume, therefore, that events drive group mobilization, which drives the discussion of policy ideas, again consistent with the ‘garbage can’ model of decision making (Cohen et al., 1972). Evidence of political learning may also exist, but such evidence may be less apparent, given that this learning happens internally within organizations in the policy domain or advocacy coalitions. In any case, policy learning is much less likely without the mobilization of tangible ideas, and ideas are unlikely to come to the fore without some sort of group mobilization.

Thus, the fifth proposition is that there is a relationship between ideas and policy change. In particular, change is more likely when there are ideas triggered because of events, compared with when there are no ideas generated by an event or elevated to a higher position on the agenda. Policy change can occur without ideas, but we can assume that such policy change does not happen because of careful debate of ideas and therefore does not result from learning; instead, it is mimicking or copying without learning (May, 1992). Table 1 shows the types of evidence one would use to illustrate learning as conceptualized in these propositions.

The sixth proposition is that it is possible for the lessons learned to decay over time. While policy change may result from an event, the time that intervenes between one focusing event and another, and the demands placed on policy makers in that intervening period, may cause participants in the policy process to ‘forget’ the lessons that they learned. The effect of Hurricane Katrina, and the fumbled federal, state, and local response to the event, suggested that the putative lessons of Hurricane Andrew were not fully learned, were forgotten over time, or were influenced by the interaction between the natural hazards and the ‘homeland security’ domains. Kingdon calls these interactions between policy domains ‘spillovers’, and such spillovers can theoretically reinforce learning, or can retard it. The focus on homeland security had a corrosive influence on the nation’s preparedness for natural disasters (Tierney, 2005). None of this is to suggest a normative claim that lessons should not decay over time; rather, it is to acknowledge that any lesson will necessarily decay unless it is fully institutionalized into the law, from legislation through regulation to standard operating procedures.

5. Interim observations on the model and propositions

This article started with the idea of the lessons learned document as a ‘fantasy document’. The paper then proceeded to explain a model of crisis-spurred policy learning, including its main drivers and key propositions that derive from the model. Clearly, the entire concept of ‘fantasy learning’ is broader than the actual document itself. Rather, I describe a process where the production of a document is a final or even an interim step along a much longer timeline, where the document might signal the end of a period of significant reflection, or may mark the beginning of further controversy over what was claimed to have been learned. I focus on the document as a key feature of the analysis because the thinking that often goes into such documents reflects both the functional and the dysfunctional features of the learning process I outline here. The functional features include improved policy that yields improved performance; the dysfunctional features are those that impede learning, or that would, for whatever reason, prevent what was learned from being put into practice.

Of course, by contrasting ‘functional’ and ‘dysfunctional’ aspects of learning, I appear to adopt a functionalist perspective on the entire policy process. But scholars of public policy have long known that most policy problems are socially constructed and are embedded in long-standing ideas, norms, and practices. Framing of problems and their solutions is a key part of this process. It is important to acknowledge that the learning described in this article is about lessons that may already be well known, or that were ‘learned before’ but that become dormant between events, and that the very nature of the lesson-learning process will depend on how the original policy failure – the problem itself – is framed. Considerable contention can result when there are different interpretations of the problem, because these different interpretations and claims will greatly influence the claims about what the ‘lessons’ should be. In such an environment, even the claim of ‘fantasy’ learning is contested, because, after all, who is to say that the learning process is ‘real’ vs. ‘fantastic’? This paper suggests, however, that there are important distinctions between learning that is functional in the sense that it yields policy change and improvement, and dysfunctional ‘fantasy’ learning that may be driven by poor causal theory or by narrow self-interest.

Table 1. Typical Evidence of Learning in the Policy Process

Organization or institution: Evidence of learning
News media: Stories about the problem. Changes in the nature of news coverage (people quoted, substance of news coverage).
Interest groups: Change in appearances at congressional hearings. Increased attention from news media (generated by the group).
Congress: Legislative change. Change in the substance of debate. Change in the topic areas of hearings.
Regulatory and implementing agencies: Issuance of new and proposed regulations. Change in the nature and substance of the regulations being issued. Change in procedures and in the interpretation and implementation of statutes and regulations.
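One way to turn Table 1 into the kind of explicit coding frame called for above is to record, for each institution, which of the listed indicators a researcher has observed. The sketch below is a hypothetical illustration: the data structure, the paraphrased indicator wording, and the simple count are assumptions of this sketch, not an instrument from the article.

```python
# A hypothetical coding frame based on Table 1; indicator wording is paraphrased.
EVIDENCE_OF_LEARNING = {
    "news media": [
        "stories about the problem",
        "changes in the nature of news coverage",
    ],
    "interest groups": [
        "change in appearances at congressional hearings",
        "increased attention from news media generated by the group",
    ],
    "congress": [
        "legislative change",
        "change in the substance of debate",
        "change in the topic areas of hearings",
    ],
    "regulatory and implementing agencies": [
        "issuance of new and proposed regulations",
        "change in the nature and substance of regulations issued",
        "change in procedures and in the interpretation of statutes",
    ],
}


def count_indicators(observed: dict) -> int:
    """Count how many of the listed indicators were coded as present."""
    return sum(
        1
        for actor, indicators in EVIDENCE_OF_LEARNING.items()
        for indicator in indicators
        if indicator in observed.get(actor, set())
    )


# Example: a hypothetical case where only legislative change is observed.
print(count_indicators({"congress": {"legislative change"}}))
```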

In working through this model of policy learning, and accounting for the special conditions of learning from crises and disasters, there are important avenues for future research and for refining this model. After all, it is a tall order to expect that a policy network will experience a disaster, will take the necessary steps to learn from it, and then will put those lessons into effect. The first issue deserving of attention is the combined question of time pressures and the overwhelming publicity that surrounds crises and disasters. Indeed, the most relevant feature of large disasters is that they are so huge that their harmful nature is immediately clear to all in the disaster area, and to those who learn of the disaster through the news. Containing the scope and scale of the disaster is the main goal of decision makers during a crisis, but they must work very quickly to achieve this end. They do not have a great deal of time to be reflective and, instead, must often improvise to find good interim solutions to problems that were unanticipated, or to problems that cannot be ameliorated through standard operating procedures in routine times, or even routine emergencies such as a small chemical spill or a relatively minor hurricane.

The second issue is the question of single- vs. double-loop learning. Single-loop learning is generally learning about tactics or operations, and is therefore not a key feature of my model of the policy learning process. I am more concerned with broader strategic learning about the usefulness and appropriateness of policy tools. These policy tools are presumed to have failed in a crisis, and the crisis is, therefore, an opportunity to learn and to improve our knowledge of problem solving at the instrumental level (the policy tool) or at the social level, involving better understandings of cause and effect relationships, rhetoric, or the tractability of public problems. But the line between the types of learning is blurry, at best. Learning about policy tools, even at the legislative level, certainly invokes operational issues. The learning that most interests me in this paper therefore suggests some sort of fundamental rethinking about policy beyond its operational aspects.

This is why I put the ‘fantasy document’ at the start of my investigation of learning in this article; such documents are the end point of an ongoing process. But the real point is less the document than the process that yielded the ultimate document. We might therefore wish to test the process from its outset, by asking whether the process was an ‘honest’ attempt to learn, or whether it was a public relations activity or a ‘whitewash’ intended to burnish the image of an organization, or to absolve it of responsibility for failures. One might approach this question by finding out whose office was ultimately responsible for compiling and disseminating any ‘lessons learned’. If we learn that the public relations staff developed such reports, one might approach the entire process much more sceptically than if one knew that the report was created by a serious internal effort, an external review body, or some combination of the two.

Indeed, this points out a flaw in the idea that there is ‘one’ lessons learned document. Future research should look into the range of ‘lessons’ documents that are produced after a crisis or a disaster. These include anything from changes to standard operating procedures to major statutory changes, as well as internal reports and analyses. There may be some divergence between the public face of an organization and its private deliberations, particularly under conditions of extreme attention and time pressure.

6. Conclusion

To call a ‘lessons learned’ document a fantasy document is to call the entire process by which the document was created a fantasy exercise. This is not true, of course, in all cases – there have been many earnest efforts to improve performance after a crisis or a disaster, and some – but by no means all – of these efforts have improved performance. But, in many cases, when viewed from a political perspective, learning processes are not ‘serious’ in the sense that they are intended to extract lessons from experience and apply them to current and future problems. Instead, many of these documents and the processes that create them are mere reflections of a group’s or interest’s preferred social construction of a problem and its ‘target populations’. Often, these groups will resist serious lesson-learning processes: they will resist the creation of such investigations, or, once an investigation is complete, they will deny the lessons on cost, feasibility, or other grounds, or will simply ignore them. For these reasons, learning is not as common as one might think, even if the participants in these processes sincerely believe that the process in which they are engaged is intended to learn something. Many of these participants learn that they have to communicate ex cathedra if their ideas are to gain attention in future policy debates. More often, these processes simply result in reports that fail to address the real problems revealed by an event or a series of events. The challenge for democracies is to create the sort of public pressure necessary to make learning processes more realistic and responsive to the problems and to the needs of the organizations, communities, regions, and nations in which these events occur. Because many political systems contain features that prevent rather than promote policy change, such learning efforts are doubly challenged, and a great deal of energy is necessary to overcome systemic inertia. But, in some cases, learning can occur, and we can ‘learn’ from these processes how to structure organizations and policy systems that bring serious learning to the fore.

Note

1. In sharpening the definition, I acknowledge that I also narrowed it substantially, thereby ignoring the influence of personal experience among decision makers, among other factors, as a type of focusing event. There is likely some sort of typology of focusing events, which is beyond the scope of this paper.

References

Airline Industry Information (2001), ‘Cockpit Doors will Remain Locked Throughout All Canadian Flights’, Airline Industry Information, 18 September, via LexisNexis.
Air Safety Week (2000), ‘Bar the Door’, Air Safety Week, Volume 14, Number 12, via LexisNexis.
American Highway Users Alliance (2006), Emergency Evacuation Report Card 2006, American Highway Users Alliance, Washington, DC, http://www.highways.org/pdfs/evacuation_report_card2006.pdf (accessed 1 May 2008).
Argyris, C. and Schön, D. (1996), Organizational Learning, Addison-Wesley, Reading.
Baumgartner, F.R. and Jones, B.D. (1993), Agendas and Instability in American Politics, University of Chicago Press, Chicago.
Birkland, T.A. (1997), After Disaster: Agenda Setting, Public Policy and Focusing Events, Georgetown University Press, Washington, DC.
Birkland, T.A. (1998), ‘Focusing Events, Mobilization, and Agenda Setting’, Journal of Public Policy, Volume 18, Number 3, pp. 53–74.
Birkland, T.A. (2004), ‘Learning and Policy Improvement After Disaster: The Case of Aviation Security’, American Behavioral Scientist, Volume 48, Number 3, pp. 341–364.
Birkland, T.A. (2006), Lessons of Disaster, Georgetown University Press, Washington, DC.
Busenberg, G.J. (2001), ‘Learning in Organizations and Public Policy’, Journal of Public Policy, Volume 21, Number 2, pp. 173–189.
Clarke, L.B. (1999), Mission Improbable: Using Fantasy Documents to Tame Disaster, University of Chicago Press, Chicago.
Cobb, R.W. and Elder, C.D. (1983), Participation in American Politics: The Dynamics of Agenda-Building, Johns Hopkins University Press, Baltimore.
Cobb, R.W. and Primo, D.M. (2003), The Plane Truth: Airline Crashes, the Media, and Transportation Policy, Brookings Institution, Washington.
Cohen, M.D., March, J.G. and Olsen, J.P. (1972), ‘A Garbage Can Model of Organizational Choice’, Administrative Science Quarterly, Volume 17, pp. 1–25.
Cooper, C. and Block, R. (2006), Disaster: Hurricane Katrina and the Failure of Homeland Security, Times Books, New York.
Derthick, M. (2007), ‘Where Federalism Didn’t Fail’, Public Administration Review, Volume 67, Number s1, pp. 36–47.
Haider-Markel, D.P. and Joslyn, M.R. (2001), ‘Gun Policy, Opinion, Tragedy and Blame Attribution: The Conditional Influence of Issue Frames’, Journal of Politics, Volume 63, Number 2, pp. 520–543.
Hilgartner, J. and Bosk, C. (1988), ‘The Rise and Fall of Social Problems: A Public Arenas Model’, American Journal of Sociology, Volume 94, Number 1, pp. 53–78.
Jones, B.D. (2001), Politics and the Architecture of Choice: Bounded Rationality and Governance, University of Chicago Press, Chicago.
Kingdon, J.W. (1995), Agendas, Alternatives and Public Policies, Harper Collins, New York.
Larkin, R.W. (2007), Comprehending Columbine, Temple University Press, Philadelphia.
Lawrence, R.G. and Birkland, T.A. (2004), ‘Guns, Hollywood, and Criminal Justice: Defining the School Shootings Problem across Public Arenas’, Social Science Quarterly, Volume 85, Number 5, pp. 1193–1207.
May, P.J. (1992), ‘Policy Learning and Failure’, Journal of Public Policy, Volume 12, Number 4, pp. 331–354.
Naik, G., Sprothen, V. and Crawford, D. (2005), ‘Scandinavians’ Sense of Order is Shattered by Asian Disaster’, Wall Street Journal, New York, 7 January, via LexisNexis.
Neustadt, R.E. and May, E.R. (1986), Thinking in Time: The Uses of History for Decision Makers, Free Press, New York.
Perrow, C. (1999), Normal Accidents: Living with High-Risk Technologies, Princeton University Press, Princeton, NJ.
Quarantelli, E.L. (2005), Catastrophes are Different from Disasters: Some Implications for Crisis Planning and Managing Drawn from Katrina, http://understandingkatrina.ssrc.org/Quarantelli (accessed 21 October 2005).
Richfield, P. (2000), ‘Secret Knock Not So Secret’, Business and Commercial Aviation, 36 August, via LexisNexis.
Roig-Franzia, M. and Hsu, S. (2005), ‘Many Evacuated, but Thousands Still Waiting; White House Shifts Blame to State and Local Officials’, Washington Post, Washington, DC, 4 September, A1, via LexisNexis.
Rubin, C.B., Cumming, W.B., Tanali, I.R. and Birkland, T.A. (2003), Major Terrorism Events and their U.S. Outcomes (1988–2001), Natural Hazards Research and Applications Information Center, Institute of Behavioral Science, University of Colorado, Boulder, CO, http://www.colorado.edu/hazards/wp/wp107/wp107.html (accessed 1 December 2008).
Schattschneider, E.E. (1975), The Semisovereign People, The Dryden Press, Hinsdale, IL.
Schiavo, M. (1997), Flying Blind, Flying Safe, Avon Books, New York.
Stromback, J. and Nord, L.W. (2006), ‘Mismanagement, Mistrust and Missed Opportunities: A Study of the 2004 Tsunami and Swedish Political Communication’, Media, Culture and Society, Volume 28, Number 5, pp. 789–800.
Tierney, K.J. (2005), The Red Pill, http://understandingkatrina.ssrc.org/Tierney/ (accessed 12 November 2008).
Widfeldt, A. (2007), ‘Sweden’, European Journal of Political Research, Volume 46, Number 7/8, pp. 1118–1126.
Wolshon, B., Catarella-Michel, A. and Lambert, L. (2006), ‘Louisiana Highway Evacuation Plan for Hurricane Katrina: Proactive Management of a Regional Evacuation’, Journal of Transportation Engineering, Volume 132, Number 1, pp. 1–10.
World Airline News (2001), ‘International Bodies Take Action To Enhance Security Programs’, World Airline News, 14 September, via LexisNexis.