


Was Three Mile Island a ‘Normal Accident’?

Andrew Hopkins*

Perrow’s normal accident theory suggests that some major accidents are inevitable for technological reasons. An alternative approach explains major accidents as resulting from management failures, particularly in relation to the communication of information. This latter theory has been shown to be applicable to a wide variety of disasters. By contrast, Perrow’s theory seems to be applicable to relatively few accidents, the exemplar case being the Three Mile Island nuclear power station accident in the U.S. in 1979. This article re-examines Three Mile Island. It shows that this was not a normal accident in Perrow’s sense and is readily explicable in terms of management failures. The article also notes that Perrow’s theory is motivated by a desire to shift blame away from front line operators and that the alternative approach does this equally well.

Introduction

In 1984, Charles Perrow published an analysis of the Three Mile Island nuclear power station accident in the US. His argument was that the Three Mile Island incident was a distinctive kind of accident, a ‘normal’ accident, and his theory has become known as Normal Accident Theory. Perrow’s book became an almost instant classic and has had a profound influence on social science research on major accidents. One indicator of its status is the fact that it has just been re-issued (1999), fifteen years after its original publication. Ironic testimony to the influence of normal accident theory is the emergence of a contrasting school of thought known as High Reliability Theory (e.g., La Porte and Consolini, 1991). A lively debate between these contrasting perspectives has ensued (Sagan, 1993; La Porte and Rochlin, 1994; Perrow, 1994).

This article is not intended as a contribution to the debate between these two perspectives. It critiques Normal Accident Theory from a different vantage point, namely, Turner’s (1978) work on man-made disasters, to be outlined below. I have dealt elsewhere with some of the limitations of Normal Accident Theory (Hopkins, 1999a); the present article takes the critique a step further. But before we can begin the critique we need a clear statement of Perrow’s argument.

Perrow on Normal Accidents

Perrow outlines what he means by a ‘normal’ accident in the following passage.

It is termed normal because it is inherent in the characteristics of tightly coupled, complex systems and cannot be avoided. A tightly coupled system is highly interdependent; each part is linked to many other parts, so that a failure of one can rapidly affect the status of others. A malfunctioning part cannot be easily isolated either, because there is insufficient time to close it off or because its failure affects too many other parts, even if the failure does not happen rapidly.

A normal accident occurs in a complex and tightly coupled system when there are unanticipated multiple failures in the equipment, design, or operator actions . . . The crucial point about a normal accident is that unexpected, multiple failures occur. As a result, for some critical period of time, the nature of the accident is incomprehensible to those seeking to control it.

In addition to being unforeseeable, incomprehensible and not amenable to knowledgeable intervention, the normal accident cannot be prevented because it is not possible to create faultless systems.

. . . [T]he [Three Mile Island] accident was unexpected, incomprehensible, uncontrollable and unavoidable; such accidents had occurred before in nuclear plants, and would occur again, regardless of how well they were run (1982: 174, 176, emphasis added).

This passage has been quoted at length so that there can be no doubt about Perrow’s argument: certain technologies make major accidents inevitable, no matter how well managed an operation may be. This is an unashamedly technological determinist argument.

Turner: Warnings Plus Sloppy Management

Standing in sharp contrast to Perrow is Turner'sanalysis, first published in his book Man-Made

* Department of Sociology, Faculty of Arts, Australian National University, Canberra, 0200, Australia. E-mail: [email protected].

© Blackwell Publishers Ltd 2001, 108 Cowley Road, Oxford OX4 1JF, UK and 350 Main Street, Malden, MA 02148, USA. Volume 9 Number 2 June 2001

WAS THREE MILE ISLAND A ‘NORMAL ACCIDENT’? 65


Disasters, in 1978, and augmented in a significant article in 1994. According to Turner, prior to large-scale accidents there are nearly always warning signs that are missed, overlooked or ignored and that, if acted on, would have averted the accident. The central question then is sociological rather than technological: ‘what stops people acquiring and using appropriate advance warning information, so that large-scale accidents and disasters can be prevented?’ (Turner and Pidgeon, 1997: 162). His answer, in its simplest form, is sloppy management (Turner, 1994). Sloppy management fails to put in place adequate information gathering systems. Moreover, sloppy management allows various processes to operate that may nullify warnings: the normalisation of deviance (Vaughan, 1996), group think (Janis, 1982), cultures of denial (Hopkins, 1999b) and so on (Turner, 1994: 218). Good management would have systems designed to override these tendencies and to highlight and respond to warning signs; the failure to establish such systems is a management failure.

Turner showed in his 1978 book that his analysis was apposite for a large number of disasters. Analyses of various disasters since that time confirm the applicability of his approach (Appleton, 1994; Hopkins, 1999b; Vaughan, 1996).

Reconciling the Two Perspectives?

Here, then, are two very different explanations of large-scale accidents, apparently in competition with each other. How shall we make sense of this situation? One possibility is to assume that they apply in different circumstances and are therefore not in competition. After all, not all systems are characterised by tight coupling and complexity. And in complex and tightly coupled systems, accidents may occur for different reasons. Perrow himself suggests that the scope of his explanation is limited in this way. The gas leak from a chemical plant that killed thousands at Bhopal in India, the fiery destruction of the space shuttle, Challenger, the Soviet nuclear reactor accident at Chernobyl from which people are still dying, the Exxon Valdez tanker oil spill in Alaska, an undoubted environmental disaster if not a human one – none of these is a normal or system accident, according to Perrow (1994: 218). They are no more than ‘component failure accidents’, which cannot be analysed in system terms. ‘They are alarmingly banal examples of organisational elites not trying very hard’, he says (Perrow, 1994: 218). Even Seveso, the chemical accident in Italy, which precipitated a new genre of regulation in Europe for dealing with major hazards, was merely a component failure accident in Perrow’s (1984: 295) judgement. In other words, many of the most publicised disasters of our time are not explicable in terms of Normal Accident Theory. The theory is thus not as useful as might at first have been thought. But this does not invalidate it, since, presumably, there is still a class of major accidents caused inevitably by tight coupling and complexity. The paradigm case of such an accident is, presumably, the Three Mile Island incident.

This is the resolution that Turner and his associates seem to have accepted; namely that there are some major accidents to which Turner’s theory does not apply and that are only explicable as ‘normal accidents’. Thus Turner (1994: 218) writes

As our technological systems become increasingly extensive and complex, the possibility grows of some accidents arising from the properties of the system as a whole, often as a result of unforeseen interactions which involve several organisations. To understand some disasters, therefore, it is necessary to recognise that some highly complex systems generate ‘normal accidents’.

Pidgeon seems also to concede some validity to Perrow’s argument, although he notes that ‘the concepts of complexity and coupling have turned out to be difficult to use analytically’ and that the account is overly deterministic (Turner and Pidgeon, 1997: 179).

But is this concession really necessary? Are there really cases of major accidents to which Turner’s theory of ignored warnings does not apply – accidents that were inevitable for purely technological reasons?

In seeking to answer this question it is no use taking as yet unanalysed cases, even in high-tech industry, and showing that Turner’s theory applies, since this does not dent the claim that at least some major accidents are normal in Perrow’s sense. The most effective strategy is to focus on the exemplar case, Three Mile Island. Most commentators writing about major accidents have implicitly accepted Perrow’s own account of this accident. What I propose to do here is to re-analyse this case and show that there was nothing technologically inevitable about the incident. I want to show that it conformed to a remarkable extent to the Turner model of ignored warnings and sloppy management.

The Issue of Multiple Failures

Three Mile Island was a water-cooled nuclear reactor. On 28 March, 1979, a chain of events began that resulted in the escape of the cooling water (a ‘loss of coolant accident’ in the jargon of

66 JOURNAL OF CONTINGENCIES AND CRISIS MANAGEMENT


the industry). The reactor came very close to a meltdown, but in the event no nuclear radiation escaped, and no appreciable consequence to the health of nearby residents occurred. However, the reactor was damaged beyond repair, clean up took more than a decade and cost $1 billion and no reactors have since been built in the US (Rees, 1994: 11).

There were no fewer than four quite separate malfunctions in the accident sequence, followed by a crucial error by control room operators (Perrow, 1984: 18). The issue of ‘operator error’ is of course controversial, and I shall return to it later. But the point to be stressed here is that these malfunctions did not constitute a series of failures following consequentially from a single original cause. Consequential failures are perfectly comprehensible and quite predictable. For example, poor roof design may lead to such an accumulation of snow that the load limits for the building are exceeded, causing it to collapse. In this situation the failure of a well-designed support structure is predictable given the initial roof design failure (hypothetical example inspired by Pidgeon, 1997: 2). The malfunctions at Three Mile Island were not consequential on some original failure in this way: they were independent of each other. No-one could have foreseen the way in which these four discrete failures interacted to produce the loss of coolant accident. The precise configuration of failures was unprecedented and unpredictable.

So far so good. But Perrow concludes from this that the accident was inevitable and not preventable. This is a logical error. It is not necessary to predict the entire accident sequence to avoid such an accident. The point is that had any one of the malfunctions not occurred, the loss of coolant accident would not have occurred. The accident sequence was highly susceptible to interruption, unlike accident sequences that involve consequential failures. Perhaps the best way to make this point is to

draw on Reason’s (1997: 9) concept of an ‘accident trajectory’. He argues that major technological hazards are managed by constructing a series of ‘defences in depth’. Any one of these defences, if it works, will terminate the trajectory of a potential accident. However, defences often have ‘holes’ in them; Reason describes this as the Swiss cheese metaphor. If a potential accident trajectory passes through one of the holes in the first defence, it will encounter the second defence. For an accident to occur, the holes in all defences must line up, that is, the defences in depth must all fail simultaneously.

According to the Swiss cheese model, if only one of the defences had been effective the accident would not have occurred. It makes sense, therefore, to ask of every defence that failed why it failed and whether its failure was predictable. If even one of these failures was predictable, then we can reasonably conclude that the accident itself was preventable. In short, it is not necessary to be able to predict the precise trajectory of an accident in order to be able to prevent it (for a related analysis, see Toft and Reynolds, 1994).

We could go further and argue that the simultaneous failure of all defences in depth is not evidence that accidents are inevitable, but rather that the system of defences is inadequate. One is reminded of Lady Bracknell’s comment in The Importance of Being Earnest: ‘To lose one parent may be regarded as a misfortune, to lose both seems like carelessness’ (Wilde, 1980: 30).
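The arithmetic behind the Swiss cheese metaphor can be illustrated with a small Monte Carlo sketch. This is a hypothetical illustration rather than anything from Reason’s own text: the number of defences and the failure probabilities are invented for the example. The point it demonstrates is the one made above – an accident requires the holes in every layer to line up at once, so restoring even a single defence terminates every accident trajectory.

```python
import random

def accident_rate(hole_probs, trials=100_000, seed=1):
    """Estimate how often a hazard passes through the 'holes' in
    every defence layer simultaneously (the illustrative parameters
    here are invented, not drawn from Reason or Hopkins)."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(trials)
        if all(rng.random() < p for p in hole_probs)
    )
    return hits / trials

# Four independent defences, each failing 10% of the time: all four
# must fail together, so accidents are rare but not impossible.
print(accident_rate([0.1, 0.1, 0.1, 0.1]))   # roughly 0.0001

# A single defence with no holes stops every trajectory.
print(accident_rate([0.1, 0.1, 0.0, 0.1]))   # 0.0
```

The sketch also makes the article’s logical point concrete: preventing the accident does not require predicting which trajectory will occur, only ensuring that at least one defence holds.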

The Failures

In order to be able to consider whether the malfunctions and failures were predictable, and therefore preventable, we will need to describe the process in more detail. Anyone who seeks to describe the Three Mile Island accident confronts the problem of how much detail needs to be conveyed to the reader and just how to convey those details judged to be necessary. Perrow (1984: 15), himself, struggled with this and in the end resolved it as follows:

What I wish to convey is the interconnectedness of the system and the occasions for baffling interactions. This will be the most demanding technological account in the book, but even a general sense of the complexity will suffice if one wishes to merely follow the drama rather than the technical evolution of the accident.

For present purposes it is necessary that readers understand at least some of the detail, but I shall strive to present it as parsimoniously as possible.

Nuclear reactors generate enormous heat and at Three Mile Island this was removed from the reactor core by a primary, internal circuit of coolant. The heat from the coolant in this internal circuit was transferred via a heat exchanger to water in a secondary, external circuit, turning it into steam. This, in turn, drove steam turbines, located in the external circuit, which generated electricity. Two of the four malfunctions occurred in the external circuit and two in the internal circuit. I deal with these in turn.

1) Water used in the steam turbines must be pure, and the external circuit therefore contained a water purifier that needed regular maintenance. It was this maintenance work that triggered the accident sequence. A leaky valve set off a chain of events that caused the main pumps in the external circuit to close down. This loss of flow in the external circuit


meant that heat would no longer be removed from the internal circuit.

2) To overcome this danger, emergency pumps were supposed to come into action to maintain the flow of water in the external circuit. But they had been blocked off two days earlier for maintenance and, by mistake, the blocks had not been removed, rendering the pumps inoperative. The external circuit therefore ceased removing heat from the internal circuit. The failure of the external circuit would not have been critical if automatic safety devices in the internal circuit had functioned as intended. But they did not.

3) The heat was no longer being removed from the internal cooling circuit. As a result, the reactor core began to overheat and the reactor closed down, as it was designed to do. Predictably, however, it continued to emit ‘decay heat’. Pressure in the internal circuit built up to the point where a relief valve in the circuit opened, as it was designed to do. Coolant escaped and the pressure returned to normal. But the valve malfunctioned, failing to close properly, and coolant continued to escape.

4) Because of a design flaw, a light on the control panel indicated that the valve had closed even though it was open. Thus, operators did not know that they were experiencing a loss of coolant accident.

5) Because of the continuing loss of coolant, pressure in the internal circuit dropped to a level that meant that it would be unable to continue conducting heat away from the core. Unless the circuit could be re-pressurised, the core would melt and large amounts of radioactive material might be released. To avoid this danger, another automatic safety device kicked in: high pressure injection pumps came on, forcing additional water into the internal circuit to make up for the loss of coolant. Had the pumps been allowed to continue operating, the accident could still have been avoided, but operators, seeing the pressure rising and knowing that this could damage the system if it continued, manually throttled these pumps back. They were still unaware that a major loss of coolant was occurring and that coolant was not reaching the core at sufficiently high pressure. It was two hours and 20 minutes before operators discovered their error. They immediately reactivated the high pressure injection system and flooded the core with cold water. By this time major and irreversible damage had been done to the core, but by good fortune the situation was brought under control before there had been any release of radioactive material.
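The logic of the sequence just described can be sketched in a few lines of Python. The labels below are my paraphrases of the failures listed above, and the boolean model is of course a drastic simplification; what it captures is the article’s point that the loss of coolant accident required every link in the chain, so negating any single failure interrupts the trajectory.

```python
# Shorthand labels paraphrasing the failures described above.
FAILURES = [
    "main pumps in external circuit shut down",
    "emergency pumps left blocked after maintenance",
    "relief valve stuck open",
    "control-panel light wrongly showed valve closed",
    "operators throttled back high pressure injection",
]

def loss_of_coolant_accident(failures_present):
    """The accident requires the whole chain: every failure must hold."""
    return all(f in failures_present for f in FAILURES)

# The full chain produces the accident...
assert loss_of_coolant_accident(set(FAILURES))

# ...but removing any single failure interrupts the sequence.
for f in FAILURES:
    assert not loss_of_coolant_accident(set(FAILURES) - {f})

print("removing any one failure averts the accident")
```

This is simply the Swiss cheese observation restated in code: the sequence was unprecedented as a whole, yet preventable at each of its independent links.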

Perrow’s claim is that this accident sequence was so complex and the information available to operators so flawed that there was no way they could have been expected to understand what was going on and react in an effective manner. Indeed, he provides rather more detail than has been presented here as to why operators were confused:

110 alarms were sounding; key indicators were inaccessible; repair-order tags covered the warning lights of nearby controls; the data printout on the computer was running behind (eventually by an hour and a half); key indicators malfunctioned; the room was filling with experts; several pieces of equipment were out of service or suddenly inoperative (1982: 180).

We can agree with Perrow that, given the situation the operators found themselves in, there was no way they could have avoided the accident. But it does not follow that the accident was unavoidable. We can legitimately ask: why did operators find themselves in this position? And more pointedly: could management have prevented this accident? This latter question is not seriously raised by Perrow. Had he done so he might have arrived at rather different conclusions about the avoidability of the Three Mile Island accident.

The Warnings

Although the particular sequence of events at Three Mile Island was unprecedented, sections of the event sequence had occurred previously. There had, in short, been warnings. Had these warnings been properly attended to, the Three Mile Island accident would not have occurred.

Consider first the initial failure of the pumps in the external circuit, triggered by maintenance work on the water purifier. During this work, a flow of water through a leaky valve resulted in a spurious electrical signal that automatically closed certain other valves. This interrupted the flow in the external circuit to the main pumps, which then shut down as they were designed to do in these circumstances.

The Report of the Office of the Chief Counsel assisting the Presidential Commission notes that ‘virtually every detail of this . . . sequence had been duplicated in an incident 17 months earlier on 19 October, 1977’ (Gorinson, 1979: 157). Company investigators at the time recognised the potential for this accident to escalate and wrote a memorandum that ‘recommended nine steps that should be acted on to preclude a recurrence’. This memorandum was summarised as it passed up the company hierarchy and the full significance of the event was not appreciated by senior management who therefore failed to provide a satisfactory response. The Chief


Counsel notes that had the company ‘given careful follow-up attention to the 19 October, 1977 incident, the . . . [purifier] might not have malfunctioned on 28 March, 1979 and the accident sequence might never have had a chance to begin’ (Gorinson, 1979: 158).

Although the accident sequence began with failures in the external circuit, the malfunctions and failures in the internal coolant circuit (the circuit through the reactor core) were far more critical, as these failures ultimately allowed the loss of coolant to occur and the reactor to heat up dangerously. Consideration of these failures follows.

The reactor was manufactured and supplied to the utility company by another firm, Babcock and Wilcox, which retained a contractual responsibility for its product. The Presidential Commission found that there had been 11 previous relief valve failures in Babcock and Wilcox reactors, nine of them being failures in the open position. Each of these nine failures amounted to a loss of coolant accident that, fortunately, had been prevented from escalating. One of these failures had occurred at the Three Mile Island reactor a year before the major accident. Babcock and Wilcox was aware of these failures and in some cases recommended remedial action, but it did not systematically notify all its utility customers. Chief Counsel for the Commission was critical of the company for not recommending ‘additional training of operators to ensure that they were aware of a likelihood of a relief valve failure and knew how to identify quickly the resulting small-break loss of coolant accident’ and how to respond effectively (Gorinson, 1979: 151). It was also critical of the fact that Babcock and Wilcox had never recommended a modification of the valve position indicator to ensure that it worked reliably (Gorinson, 1979: 151).

But far more disturbing than these repeated

failures of the relief valve is that almost the entire sequence of events that occurred at Three Mile Island had occurred at another Babcock and Wilcox reactor owned by another utility company, Davis-Besse, 18 months earlier (Gorinson, 1979: 130). This event sequence involved:

● a total loss of circulation in the external circuit;
● a relief valve that failed in the open position in the internal circuit;
● a failure of the operators to recognise that a loss of coolant accident was occurring;
● the premature termination by operators of the high pressure injection of water to compensate for loss of coolant.

This was precisely what occurred at Three Mile Island. According to the Chief Counsel there were only two significant differences between the two accidents. First, operators at Davis-Besse realised after 20 minutes that the relief valve had stuck open, whereas at Three Mile Island this was not understood for two hours and 20 minutes. Second, Davis-Besse was operating at the time at 9 percent power while Three Mile Island was operating at 97 percent.

One month later operators at Davis-Besse again made the mistake of throttling back the high pressure injection in circumstances that might have proved disastrous (Gorinson, 1979: 130). Two engineers at Babcock and Wilcox were concerned about these events and one wrote a memo to his senior management in which, after describing the events, he made the following observation:

Since these are accidents which require the continuous operation of the high pressure injection system, I wonder what guidance, if any, we should be giving to our customers on when they can safely shut the system down following an accident? (Gorinson, 1979: 130)

There was no response to this memo, so three months later the second engineer wrote a more strongly worded memorandum to his superiors entitled ‘operator interruption of high pressure injection’, in which he said in part:

I believe it fortunate that Davis-Besse was at extremely low power. Had this event occurred in a reactor at full power . . . it is possible, perhaps probable, that core uncovery and possible fuel damage would have resulted (Gorinson, 1979: 132).

Here was a clear warning of reactor meltdown. But senior management at Babcock and Wilcox did not attend to this warning and nothing was done to retrain reactor operators or to change their work practices in regard to the termination of high pressure injection. Chief Counsel concluded as follows:

(The) Davis-Besse (incident) revealed that operators had been provided with inadequate procedures for termination of high pressure injection following a Davis-Besse-type loss of coolant accident. At Babcock and Wilcox employees sought to get the company to correct that error. Yet through neglect and bureaucratic mistakes that information was never conveyed to Babcock and Wilcox customers . . . (Gorinson, 1979: 130).

It is true that Babcock and Wilcox periodically sent a bulletin to its utility customers and that the Davis-Besse incident was reported in one of these bulletins. But it was reported only as a loss of coolant accident caused by a failure of the relief valve. No mention was made of the fact that the high pressure injection system, meant to protect against the consequences of loss of coolant, had been terminated prematurely by


operators. In the words of the Chief Counsel, the bulletin ‘did not emphasise the real importance of the event – namely that operator termination of high pressure injection . . . could result in core uncovery and fuel damage (i.e. meltdown)’ (Gorinson, 1979: 149).

Apart from the Davis-Besse incident, there was one other detailed warning. Two years before the Three Mile Island accident an engineer at another utility had carried out an analysis of the circumstances that could lead to such an accident. He predicted that operators were likely to terminate the high pressure injection system prematurely and explained precisely why this could be expected. The analysis was forwarded to Babcock and Wilcox, who effectively did nothing about it. According to Chief Counsel, ‘by the end of May 1978 (10 months before Three Mile Island) Babcock and Wilcox knew that concern about operator interruption of high pressure injection . . . had been expressed from three sources (only two mentioned here) . . . Still nothing happened’ (1979: 155).

Why Were Warnings Ignored?

It is clear that there were plenty of warnings of what might happen at Three Mile Island. Almost every bit of the accident sequence had occurred previously and the Davis-Besse incident was virtually a dress rehearsal for what happened at Three Mile Island. Several people had explicitly and persistently warned of the dangers. Why were all these warnings to no avail? The answer is again provided by the Chief Counsel: none of the companies concerned had adequate organisational procedures for attending to past experience.

Take Babcock and Wilcox first. This company had no individual or department responsible for analysing accidents occurring to its reactors. Moreover, the company received only abbreviated summaries of accidents occurring in its reactors rather than the full text prepared by the utility concerned. Again, Babcock and Wilcox did not incorporate accidents from other nuclear plants or even its own plants into training exercises it ran for plant operators (Gorinson, 1979: 125).

Consider, next, the utility that operated Three Mile Island, Met Ed. The only person responsible for reviewing the experience of other plants was the training manager. He told the President’s Commission that he spent about one tenth of his time reading reports of other accidents. Nine months before the accident, Met Ed’s parent company, General Public Utilities, expressed concern ‘that Met Ed’s internal system was not digesting information received about experiences at other plants’ (Gorinson, 1979: 123). But there was no effective response by Met Ed to this concern and General Public Utilities did not push the point. The president of General Public Utilities concluded later that ‘to me . . . one of the most significant learnings of the whole accident is the degree to which the inadequacies of that experience feedback loop . . . significantly contributed to making us and the plant vulnerable to this accident’ (Gorinson, 1979: 125).

Part of the reason for this complacency was what has since been described as the fossil fuel mentality of the power generating industry (see Rees, 1994: 15). The utilities had traditionally produced their power using the well-established technologies of coal and oil. This was tried, true and safe, and management felt no need to attend to technical details of plant operation. They could afford to remove themselves from technical matters and focus their attention on marketing, cost cutting and profit maximising. Despite changed circumstances, this mentality persisted until the Three Mile Island accident. Utility management simply had not come to terms with the fact that they were now generating power using a relatively untried technology involving hazards of an unprecedented nature that required a far more diligent approach to safety. According to Rees (1994), this is now well understood in the industry, in part because it is obvious that another accident of the type which occurred at Three Mile Island would mean the end of the nuclear power industry in the United States.

The Applicability of Turner's Analysis

It is now clear that the accident at Three Mile Island conforms beautifully to Turner’s account. The exemplar case of a normal accident turns out to be just another case of sloppy management. It is a story of ignored warnings, inadequate communication, and failure by management to focus its attention on safety. This was not an accident rendered inevitable by technological complexity and tight coupling; it was not an accident so different from other major accidents as to require a wholly new explanatory paradigm, the paradigm of the normal accident. Turner’s earlier noted concession that some accidents may not be explicable in terms of sloppy management and may need to be understood as normal accidents, in Perrow’s sense, now appears to be premature. I noted earlier that the category of accidents to which Normal Accident Theory applies is severely limited. The implication of the present discussion is that it may well be impossible to identify any major accidents to which the theory applies.

70 JOURNAL OF CONTINGENCIES AND CRISIS MANAGEMENT

Volume 9 Number 2 June 2001 © Blackwell Publishers Ltd 2001


Perrow's Motivation

All this raises the question of why Perrow chose to analyse the Three Mile Island accident in the way he did. He was certainly aware of all the evidence of warnings and sloppy management that has been presented above and refers to it at various points in his book (1984: 20, 24, 48, 57). As he notes at one point:

Time and again (in the story of Three Mile Island) warnings are ignored, unnecessary risks taken, sloppy work done, deception and downright lying practised (1984: 10).

Perrow appears at one point to play down the significance of warnings. They are ignored, he says, because ‘signals are simply viewed as background “noise” until their meaning is disclosed by an accident’ (Perrow, 1982: 175). It is only with hindsight that they appear as warnings.

This account has an initial plausibility, but it is ultimately unsatisfactory. The key to understanding why is to note Perrow’s use of the passive voice in the above quotation, which implicitly draws attention away from the agent of the act. Suppose we focus on the agent and ask: who viewed the signals as background noise? Clearly not the various engineers who issued the warnings about the possibility that high pressure injection might be terminated prematurely. To these people, the events to which they were reacting were not noise – they were deeply troubling and dramatic indications of just how a major accident might occur. And the memoranda they wrote were quite explicit and incapable of being dismissed as noise by anyone who read them. The failure of more senior management to respond stemmed not from an inability to distinguish noise from relevant signals but from the inadequacy of organisational procedures for attending to past experience.

But it is not this alleged problem of ‘noise’ which accounts for Perrow’s decision to pass over the evidence of ignored warnings, as he explains in the following passage.

. . . warnings occurred well before TMI. A bureaucratic tale worthy of Franz Kafka came out of the investigation of TMI and the warnings, which we shall forego telling so we can stick with the villains of the piece, according to most reports: the hapless operators (Perrow, 1984: 24, emphasis added).

Perrow’s ultimate purpose was to challenge the finding of the President’s Commission that the major cause of the accident was operator error. His principal argument against the thesis of operator error is that the situation confronting the operators at Three Mile Island was so complex and opaque that they could not possibly be expected to understand what was happening or what actions they should take to deal with the problem. Normal Accident Theory was elaborated in order to demonstrate, theoretically, why this was the case. In complex, tightly coupled technological systems, he theorised, ‘for some critical time period the nature of the accident is incomprehensible to those seeking to control it’ (Perrow, 1982: 174). The operators, therefore, are not to blame.

From the perspective of the late 1990s, Perrow’s strategy for combating the theory of operator error as the cause of major accidents seems dated and unnecessary. The dominant accident causation models of today systematically down-play the significance of operator error. They emphasise management failures as the root cause of accidents and, in particular, the root cause of any operator error that may have contributed to an accident (Hale, Baram and Hovden, 1998: 3). It is noteworthy, too, that Reason’s (1990) influential model of latent and active failures focuses attention on latent failures, which are necessarily a management responsibility (Chapter 7). These models provide an obvious explanation of the Three Mile Island accident; one that does not blame the operators.

But at the time Perrow conceived his theory of normal accidents, in 1979, explanations of accidents in terms of management system failures were not well known. (For instance, Turner’s 1978 account, which stressed communication failure as the cause of accidents, was not referenced in Perrow’s 1984 book.) In these circumstances, Perrow’s theory of normal accidents provided an innovative and effective way to shift the blame for accidents away from frontline operators. Today’s theories of management system failure achieve exactly the same effect. Perrow’s theory is therefore no longer necessary from this point of view. It must stand on whatever other merits it may possess.

At this juncture, two points of clarification are in order. First, some readers of an earlier draft of this article have viewed this section as an attack on Perrow’s motivation. That is not my intention. In fact I applaud his purpose. The common tendency to blame frontline operators for accidents is, in my view, unhelpful and unfair. My point is simply that Perrow’s purpose is better achieved in other ways.

Second, some readers have objected to this account of Normal Accident Theory on the grounds that the theory is broader than I have allowed and encompasses questions of management and management failure. This is an important issue that I have canvassed in detail elsewhere (Hopkins, 1999a). It is not appropriate to repeat that discussion here. My conclusion, however, is that attempts to interpret Normal Accident Theory as including questions of management failure end up undermining the theory rather than enriching it.


Conclusion

So was Three Mile Island a normal accident? The accident certainly occurred in a complex and tightly coupled system. But that did not make it inevitable and unavoidable. It was sloppy management and failure to attend to warnings that allowed this accident to occur. It was in no way dissimilar in this respect to the many accidents about which Turner theorised in his book, Man-Made Disasters. Three Mile Island was not a normal accident in the specialised sense in which Perrow uses this term. It is doubtful if any accident is. If accidents are normal, what makes them so is sloppy management, not complexity and tight coupling. And sloppy management of major hazards is what gives us the potential for disaster.

This is not just a theoretical debate. There are practical consequences for the way we go about accident prevention. Normal accident theory suggests a technological approach: reduce complexity and coupling. The alternative approach is to make organisational changes designed to improve flows of information, decision-making processes and so on. It is this latter approach that is likely to be more useful for accident prevention.

References

Appleton, B. (1994), ‘Piper Alpha’, in Kletz, T. (Ed.), Lessons from Disaster: How Organisations Have No Memory and Accidents Recur, Institute of Chemical Engineers, London, pp. 174-184.

Gorinson, S. et al. (1979), Report of the Office of Chief Counsel on the Role of the Managing Utility and its Suppliers, Washington.

Hale, A., Baram, M. and Hovden, J. (1998), ‘Perspectives on Safety Management’, in Hale, A. and Baram, M. (Eds), Safety Management: The Challenge of Change, Pergamon, Oxford, pp. 1-18.

Hopkins, A. (1999a), ‘The Limits of Normal Accident Theory’, Safety Science, Volume 32, Number 2, pp. 93-102.

Hopkins, A. (1999b), Managing Major Hazards: The Lessons of the Moura Mine Disaster, Allen and Unwin, Sydney.

Janis, I. (1982), Groupthink: Psychological Studies of Policy Decisions and Fiascoes, Houghton Mifflin, Boston.

La Porte, T. and Consolini, P. (1991), ‘Working in Practice But Not in Theory: Theoretical Challenges of “High-Reliability Organisations”’, Journal of Public Administration Research and Theory, Volume 1, Number 1, pp. 19-47.

La Porte, T. and Rochlin, G. (1994), ‘A Rejoinder to Perrow’, Journal of Contingencies and Crisis Management, Volume 2, Number 4, pp. 221-227.

Perrow, C. (1982), ‘The President’s Commission and the Normal Accident’, in Sills, D., Wolf, C. and Shelanski, V. (Eds), Accident at Three Mile Island: The Human Dimensions, Westview, Boulder, pp. 173-184.

Perrow, C. (1984), Normal Accidents: Living with High-Risk Technologies, Basic Books, New York.

Perrow, C. (1994), ‘The Limits of Safety: The Enhancement of a Theory of Accidents’, Journal of Contingencies and Crisis Management, Volume 2, Number 4, pp. 212-220.

Perrow, C. (1999), Normal Accidents: Living with High-Risk Technologies, Princeton University Press, Princeton.

Pidgeon, N. (1997), ‘The Limits to Safety? Culture, Politics, Learning and Man-Made Disasters’, Journal of Contingencies and Crisis Management, Volume 5, Number 1, pp. 1-14.

Reason, J. (1990), Human Error, Cambridge University Press, Cambridge.

Reason, J. (1997), Managing the Risks of Organisational Accidents, Ashgate, Aldershot.

Rees, J. (1994), Hostages of Each Other: The Transformation of Nuclear Safety Since Three Mile Island, University of Chicago Press, Chicago.

Sagan, S. (1993), The Limits of Safety, Princeton University Press, Princeton.

Toft, B. and Reynolds, B. (1994), Learning from Disasters, Butterworth Heinemann, Oxford.

Turner, B. (1978), Man-Made Disasters, Wykeham, London.

Turner, B. (1994), ‘Causes of Disaster: Sloppy Management’, British Journal of Management, Volume 5, Number 3, pp. 215-219.

Turner, B. and Pidgeon, N. (1997), Man-Made Disasters, Butterworth Heinemann, Oxford.

Vaughan, D. (1996), The Challenger Launch Decision: Risky Technology, Culture and Deviance at NASA, University of Chicago Press, London.

Wilde, O. (1980), The Importance of Being Earnest (Edited by R. Jackson), Ernest Benn, London.
