
Quality in Health Care 1995;4:80-89

Understanding adverse events: human factors

James Reason, professor
Department of Psychology, University of Manchester, Manchester M13 9PL

A decade ago, very few specialists in human factors were involved in the study and prevention of medical accidents. Now there are many. Between the 1940s and 1980s a major concern of that community was to limit the human contribution to the conspicuously catastrophic breakdown of high hazard enterprises such as air, sea, and road transport; nuclear power generation; chemical process plants; and the like. Accidents in these systems cost many lives, create widespread environmental damage, and generate much public and political concern.

By contrast, medical mishaps mostly affect single individuals in a wide variety of healthcare institutions and are seldom discussed publicly. Only within the past few years has the likely extent of these accidental injuries become apparent. The Harvard medical practice study found that 4% of patients in hospital in New York City in 1984 sustained unintended injuries caused by their treatment. For New York state this amounted to 98 600 injuries in one year and, when extrapolated to the entire United States, to the staggering figure of 1.3 million people harmed annually - more than twice the number injured in one year in road accidents in the United States.1 2

Since the mid-1980s several interdisciplinary research groups have begun to investigate the human and organisational factors affecting the reliability of healthcare provision. Initially, these collaborations were focused around the work of anaesthetists and intensivists,3 4 partly because these professionals' activities shared much in common with those of more widely studied groups such as pilots and operators of nuclear power plants. This commonality existed at two levels.
* At the "sharp end" (that is, at the immediate human-system or doctor-patient interface) common features include uncertain and dynamic environments, multiple sources of concurrent information, shifting and often ill defined goals, reliance on indirect or inferred indications, actions having immediate and multiple consequences, moments of intense time stress interspersed with long periods of routine activity, advanced technologies with many redundancies, complex and often confusing human-machine interfaces, and multiple players with differing priorities and high stakes.5

* At an organisational level these activities are carried on within complex, tightly coupled institutional settings and entail multiple interactions between different professional groups.6 This is extremely important not only for understanding the character and aetiology of medical mishaps but also for devising more effective remedial measures.

More recently, the interest in the human factors of health care has spread to a wide range of medical specialties (for example, general practice, accident and emergency care, obstetrics and gynaecology, radiology, psychiatry, surgery, etc). This burgeoning concern is reflected in several recent texts and journal articles devoted to medical accidents7-9 and in the creation of incident monitoring schemes that embody leading edge thinking with regard to human and organisational contributions.9 One of the most significant consequences of the collaboration between specialists in medicine and in human factors is the widespread acceptance that models of causation of accidents developed for domains such as aviation and nuclear power generation apply equally well to most healthcare applications. The same is also true for many of the diagnostic and remedial measures that have been created within these non-medical areas.

I will first consider the different ways in which humans can contribute to the breakdown of complex, well defended technologies. Then I will show how these various contributions may be combined within a generic model of accident causation and illustrate its practical application with two case studies of medical accidents. Finally, I will outline the practical implications of such models for improving risk management within the healthcare domain.

Human contribution
A recent survey of published work on human factors disclosed that the estimated contribution of human error to accidents in hazardous technologies increased fourfold between the 1960s and 1990s, from minima of around 20% to maxima of beyond 90%.10 One possible inference is that people have become more prone to error. A likelier explanation, however, is that equipment has become more reliable and that accident investigators have become increasingly aware that safety-critical errors are not restricted to the "sharp end." Figures of around 90% are hardly surprising considering that people design, build, operate, maintain, organise, and manage these systems. The large contribution of human error is more a matter of opportunity than the result of excessive carelessness, ignorance, or recklessness. Whatever the true figure, though, human behaviour - for good or ill - clearly dominates the risks to modern technological systems - medical or otherwise.


Not long ago, these human contributions would have been lumped together under the catch all label of "human error." Now it is apparent that unsafe acts come in many forms - slips, lapses and mistakes, errors and violations - each having different psychological origins and requiring different countermeasures. Nor can we take account only of those human failures that were the proximal causes of an accident. Major accident inquiries (for example, those into the Three Mile Island nuclear reactor accident, the Challenger space shuttle explosion, the King's Cross underground fire, the Herald of Free Enterprise capsizing, the Piper Alpha explosion and fire, the Clapham rail disaster, the Exxon Valdez oil spill, the Kegworth air crash, etc) make it apparent that the human causes of major accidents are distributed very widely, both within an organisation as a whole and over several years before the actual event. In consequence, we also need to distinguish between active failures (having immediate adverse outcomes) and latent or delayed action failures that can exist for long periods before combining with local triggering events to penetrate the system's defences.

Human errors may be classified either by their consequences or by their presumed causes. Consequential classifications are already widely used in medicine. The error is described in terms of the proximal actions contributing to a mishap (for example, administration of a wrong drug or a wrong dose, wrong intubation, nerve or blood vessel unintentionally severed during surgery, etc). Causal classifications, on the other hand, make assumptions about the psychological mechanisms implicated in generating the error. Since causal or psychological classifications are not widely used in medicine (though there are notable exceptions, see Gaba4 and Runciman et al9), a brief description of the main distinctions among types of errors and their underlying rationale is given below.

Psychologists divide errors into two causally determined groups (see Reason11), as summarised in figure 1.

[Fig 1 Distinguishing slips, lapses, and mistakes. Errors divide into slips, lapses, trips, and fumbles (execution failures) and mistakes (planning or problem solving failures).]

SLIPS AND LAPSES VERSUS MISTAKES: THE FIRST DISTINCTION
Error can be defined in many ways. For my present purpose an error is the failure of planned actions to achieve their desired goal. There are basically two ways in which this failure can occur, as follows.
* The plan is adequate, but the associated actions do not go as intended. The failures are failures of execution and are commonly termed slips and lapses. Slips relate to observable actions and are associated with attentional failures. Lapses are more internal events and relate to failures of memory.


* The actions may go entirely as planned, but the plan is inadequate to achieve its intended outcome. These are failures of intention, termed mistakes. Mistakes can be further subdivided into rule based mistakes and knowledge based mistakes (see below).

All errors involve some kind of deviation. In the case of slips, lapses, trips, and fumbles, actions deviate from the current intention. Here the failure occurs at the level of execution. For mistakes, the actions may go entirely as planned but the plan itself deviates from some adequate path towards its intended goal. Here the failure lies at a higher level: with the mental processes involved in planning, formulating intentions, judging, and problem solving.

Slips and lapses occur during the largely automatic performance of some routine task, usually in familiar surroundings. They are almost invariably associated with some form of attentional capture, either distraction from the immediate surroundings or preoccupation with something in mind. They are also provoked by change, either in the current plan of action or in the immediate surroundings. Figure 2 shows the further subdivisions of slips and lapses; these have been discussed in detail elsewhere.11

[Fig 2 Varieties of slips and lapses: recognition failures, attentional failures, memory failures, and selection failures.]

Mistakes can begin to occur once a problem has been detected. A problem is anything that requires a change or alteration of the current plan. Mistakes may be subdivided into two groups, as follows.
* Rule based mistakes, which relate to problems for which the person possesses some prepackaged solution, acquired as the result of training, experience, or the availability of appropriate procedures. The associated errors may come in various forms: the misapplication of a good rule (usually because of a failure to spot the contraindications), the application of a bad rule, or the non-application of a good rule.
* Knowledge based mistakes, which occur in novel situations where the solution to a problem has to be worked out on the spot without the help of preprogrammed solutions. This entails the use of slow, resource-limited but computationally powerful conscious reasoning carried out in relation to what is often an inaccurate and incomplete "mental model" of the problem and its possible causes. Under these circumstances the human mind is subject to several powerful biases, of which the most universal is confirmation bias. This was described by Sir Francis Bacon more than 300 years ago: "The human mind when it has once adopted an opinion draws all things else to support and agree with it."12


Confirmation bias or "mindset" is particularly evident when trying to diagnose what has gone wrong with a malfunctioning system. We "pattern match" a possible cause to the available signs and symptoms and then seek out only that evidence that supports this particular hunch, ignoring or rationalising away contradictory facts. Other biases have been discussed elsewhere.11

ERRORS VERSUS VIOLATIONS: THE SECOND DISTINCTION
Violations are deviations from safe operating practices, procedures, standards, or rules. Here, we are mostly interested in deliberate violations, in which the actions (though not the possible bad consequences) were intended.

Violations fall into three main groups.
* Routine violations, which entail cutting corners whenever such opportunities present themselves
* Optimising violations, or actions taken to further personal rather than strictly task related goals (that is, violations for "kicks" or to alleviate boredom)
* Necessary or situational violations, which seem to offer the only path available to getting the job done, and where the rules or procedures are seen to be inappropriate for the present situation.

Deliberate violations differ from errors in several important ways.
* Whereas errors arise primarily from informational problems (that is, forgetting, inattention, incomplete knowledge, etc), violations are more generally associated with motivational problems (that is, low morale, poor supervisory example, perceived lack of concern, the failure to reward compliance and sanction non-compliance, etc)
* Errors can be explained by what goes on in the mind of an individual, but violations occur in a regulated social context
* Errors can be reduced by improving the quality and the delivery of necessary information within the workplace. Violations require motivational and organisational remedies.

ACTIVE VERSUS LATENT HUMAN FAILURES: THE THIRD DISTINCTION
In considering how people contribute to accidents a third and very important distinction is necessary - namely, that between active and latent failures. The difference concerns the length of time that passes before human failures are shown to have an adverse impact on safety. For active failures the negative outcome is almost immediate, but for latent failures the consequences of human actions or decisions can take a long time to be disclosed, sometimes many years.

The distinction between active and latent failures owes much to Mr Justice Sheen's observations on the capsizing of the Herald of Free Enterprise. In his inquiry report, he wrote:

"At first sight the faults which led to this disaster were the ... errors of omission on the part of the Master, the Chief Officer and the assistant bosun ... But a full investigation into the circumstances of the disaster leads inexorably to the conclusion that the underlying or cardinal faults lay higher up in the Company ... From top to bottom the body corporate was infected with the disease of sloppiness."13

Here the distinction between active and latent failures is made very clear. The active failures - the immediate causes of the capsize - were various errors on the part of the ship's officers and crew. But, as the inquiry disclosed, the ship was a "sick" ship even before it sailed from Zeebrugge on 6 March 1987.

To summarise the differences between active and latent failures:

* Active failures are unsafe acts (errors and violations) committed by those at the "sharp end" of the system (surgeons, anaesthetists, nurses, physicians, etc). It is the people at the human-system interface whose actions can, and sometimes do, have immediate adverse consequences
* Latent failures are created as the result of decisions taken at the higher echelons of an organisation. Their damaging consequences may lie dormant for a long time, only becoming evident when they combine with local triggering factors (for example, the spring tide, the loading difficulties at Zeebrugge harbour, etc) to breach the system's defences.

Thus, the distinction between active and latent failures rests on two considerations: firstly, the length of time before the failures have a bad outcome and, secondly, where in an organisation the failures occur. Generally, medical active failures are committed by those people in direct contact with the patient, and latent failures occur within the higher echelons of the institution, in the organisational and management spheres. A brief account of a model showing how top level decisions create conditions that produce accidents in the workplace is given below.
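The three distinctions above lend themselves to structured recording. Purely as an illustration - this sketch is not part of the original paper, and every class and field name in it is invented - an incident monitoring scheme might encode a single contributing failure along the following lines:

    # Illustrative sketch only: one hypothetical way to record the distinctions
    # drawn above (slip/lapse/mistake/violation; active versus latent) in an
    # incident report. All names here are invented for this sketch.
    from dataclasses import dataclass
    from enum import Enum

    class FailureType(Enum):
        SLIP = "slip"                                        # execution failure (attention)
        LAPSE = "lapse"                                      # execution failure (memory)
        RULE_BASED_MISTAKE = "rule based mistake"            # wrong or misapplied rule
        KNOWLEDGE_BASED_MISTAKE = "knowledge based mistake"  # faulty on-the-spot reasoning
        VIOLATION = "violation"                              # deliberate deviation from procedure

    class FailureTiming(Enum):
        ACTIVE = "active"    # committed at the sharp end; immediate adverse outcome
        LATENT = "latent"    # created by decisions higher up; dormant until triggered

    @dataclass
    class ContributingFailure:
        description: str
        failure_type: FailureType
        timing: FailureTiming

    # Example: an active execution failure at the sharp end.
    example = ContributingFailure(
        description="dose keyed into the wrong field of the infusion pump",
        failure_type=FailureType.SLIP,
        timing=FailureTiming.ACTIVE,
    )
    print(example.failure_type.value, example.timing.value)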

Aetiology of "organisational" accidents
The technological advances of the past 20 years, particularly in regard to engineered safety features, have made many hazardous systems largely proof against single failures, either human or mechanical. Breaching the "defences in depth" now requires the unlikely confluence of several causal streams. Unfortunately, the increased automation afforded by cheap computing power also provides greater opportunities for the insidious accumulation of latent failures within the system as a whole. Medical systems and items of equipment have become more opaque to the people who work them and are thus especially prone to the rare, but often catastrophic, "organisational accident". Tackling these organisational failures represents a major challenge in medicine and elsewhere.

Figure 3 shows the anatomy of an organisational accident, the direction of causality being from left to right.


The accident sequence begins with the negative consequences of organisational processes (that is, decisions concerned with planning, scheduling, forecasting, designing, policy making, communicating, regulating, maintaining, etc). The latent failures so created are transmitted along various organisational and departmental pathways to the workplace (the operating theatre, the ward, etc), where they create the local conditions that promote the commission of errors and violations (for example, understaffing, high workload, poor human-equipment interfaces, etc). Many of these unsafe acts are likely to be committed, but only very few of them will penetrate the defences to produce damaging outcomes. The fact that engineered safety features, standards, controls, procedures, etc, can be deficient due to latent failures as well as active failures is shown in the figure by the arrow connecting organisational processes directly to defences.

[Fig 3 Stages of development of an organisational accident: corporate culture and management decisions and organisational processes create, in the local climate of the situation and task, error producing and violation producing conditions; the resulting errors and violations, together with latent failures in the defences, may breach the defences and barriers.]

The model presents the people at the sharp end as the inheritors rather than as the instigators of an accident sequence. This may seem as if the "blame" for accidents has been shifted from the sharp end to the system managers. But this is not the case, for the following reasons.
* The attribution of blame, though often emotionally satisfying, hardly ever translates into effective countermeasures. Blame implies delinquency, and delinquency is normally dealt with by exhortations and sanctions. But these are wholly inappropriate if the individual people concerned did not choose to err in the first place and were not appreciably prone to error.
* High level management and organisational decisions are shaped by economic, political, and operational constraints. Like designs, decisions are nearly always a compromise. It is thus axiomatic that all strategic decisions will carry some negative safety consequences for some part of the system. This is not to say that all such decisions are flawed, though some of them will be. But even those decisions judged at the time as being good ones will carry a potential downside. Resources, for example, are rarely allocated evenly. There are nearly always losers. In judging uncertain futures some of the shots will inevitably be called wrong. The crux of the matter is that we cannot prevent the creation of latent failures; we can only make their adverse consequences visible before they combine with local triggers to breach the system's defences.

These organisational root causes are further complicated by the fact that the healthcare system as a whole involves many interdependent organisations: manufacturers, government agencies, professional and patient organisations, etc. The model shown in figure 3 relates primarily to a given institution, but the reality is considerably more complex, with the behaviour of other organisations impinging on the accident sequence at many different points.

Applying the organisational accident model in medicine: two case studies
Two radiological case studies are presented to give substance to this rather abstract theoretical framework and to emphasise some important points regarding the practice of high tech medicine. Radiological mishaps tend to be extensively investigated, particularly in the United States, where these examples occurred. But organisational accidents should not be assumed to be unique to this specialty. An entirely comparable anaesthetic case study has been presented elsewhere.14 15 Generally, though, medical accidents have rarely been investigated to the extent that their systemic and institutional root causes are disclosed, so the range of suitable case studies is limited. The box describes details of the first case study.

Case 1: Therac-25 accident at East Texas Medical Centre (1986)
A 33 year old man was due to receive his ninth radiation treatment after surgery for the removal of a tumour on his left shoulder. The radiotherapy technician positioned him on the table and then went to her adjoining control room. The Therac-25 machine had two modes: a high power "x ray" mode and a low power "electron beam" mode. The high power mode was selected by typing an "x" on the keyboard of the VT100 terminal. This put the machine on maximum power and inserted a thick metal plate between the beam generator and the patient. The plate transformed the 25 million volt electron beam into therapeutic x rays. The low power mode was selected by typing "e" and was designed to deliver a 200 rad beam to the tumour.

The intention on this occasion was to deliver the low power beam. But the technician made a slip and typed in an "x" instead of an "e." She immediately detected her error, pressed the "up" arrow to select the edit functions from the screen menu, and changed the incorrect "x" command to the desired "e" command. The screen now confirmed that the machine was in electron beam mode. She returned the cursor to the bottom of the screen in preparation for the "beam ready" display showing that the machine was fully charged. As soon as the "beam ready" signal appeared she depressed the "b" key to activate the beam.

What she did not realise - and had no way of knowing - was that an undetected bug in the software had retracted the thick metal plate (used in the x ray mode) but had left the power setting on maximum. As soon as she activated the "b" command, a blast of 25 000 rads was delivered to the patient's unprotected shoulder. He saw a flash of blue light (Cherenkov radiation), heard his flesh frying, and felt an excruciating pain. He called out to the technician, but both the voice and video intercom were switched off.

Meanwhile, back in the control room, the computer screen displayed a "malfunction 54" error signal. This meant little to the technician. She took it to mean that the beam had not fired, so reset the machine to fire again. Once again, she received the "malfunction 54" signal, and once more she reset and fired the machine. As a result, the patient received three 25 000 rad blasts to his neck and upper torso, although the technician's display showed that he had only received a tenth of his prescribed treatment dose. The patient died four months later with gaping lesions on his upper body. His wry comment was: "Captain Kirk forgot to put his phaser on stun."

A very similar incident occurred three weeks later. Subsequently, comparable overdoses were discovered to have been administered in three other centres using the same equipment.


Several latent failures contributed to this accident.
* The Canadian manufacturer had not considered it possible that a technician could enter that particular sequence of keyboard commands within the space of eight seconds and so had not tested the effects of these closely spaced inputs
* The technician had not been trained to interpret the error signals
* It was regarded as normal practice to carry out radiation treatment without video or sound communication with the patient
* Perhaps most significantly, the technician was provided with totally inadequate feedback regarding the state of the machine and its prior activity.

This case study provides a clear example of what has been called "clumsy automation."3 16 17 Automation intended to reduce errors created by the variability of human performance increases the probability of certain kinds of mistakes by making the system and its current state opaque to the people who operate it. Comparable problems have been identified in the control rooms of nuclear power plants, on the flight decks of modern airliners, and in relation to contemporary anaesthetic work stations.17 Automation and "defence in depth" mean that these complex systems are largely protected against single failures. But they render the workings of the system more mysterious to its human controllers. In addition, they permit the subtle build up of latent failures, hidden behind high technology interfaces and within the interdepartmental interstices of complex organisations.

The second case study has all the causal hallmarks of an organisational accident but differs from most medical mishaps in having adverse outcomes for nearly 100 people. The accident is described in detail elsewhere.18

Case 2: Omnitron 2000 accident at Indiana Regional Cancer Centre (1992)
An elderly patient with anal carcinoma was treated with high dose rate (HDR) brachytherapy. Five catheters were placed in the tumour. An iridium-192 source (4.3 curie, 1.6 × 10^11 becquerel) was intended to be located in various positions within each catheter, using a remotely controlled Omnitron 2000 afterloader. The treatment was the first of three planned by the doctor, and the catheters were to remain in the patient for the subsequent treatments.

The iridium source wire was placed in four of the catheters without apparent difficulty, but after several unsuccessful attempts to insert the source wire into the fifth catheter, the treatment was terminated. In fact, a wire had broken, leaving an iridium source inside one of the first four catheters. Four days later the catheter containing the source came loose and eventually fell out of the patient. It was picked up and placed in a storage room by a member of staff of the nursing home, who did not realise it was radioactive. Five days later a truck picked up the waste bag containing the source. As part of the driver's normal routine the bag was then driven to the depot and remained there for a day (during Thanksgiving) before being delivered to a medical waste incinerator, where the source was detected by fixed radiation monitors at the site. It was left over the weekend but was then traced to the nursing home. It was retrieved nearly three weeks after the original treatment. The patient had died five days after the treatment session, and in the ensuing weeks over 90 people had been irradiated in varying degrees by the iridium source.

The accident occurred as the result of a combination of procedural violations (resulting in breached or ignored defences) and latent failures.

Active failures
* The area radiation monitor alarmed several times during the treatment but was ignored, partly because the doctor and technicians knew that it had a history of false alarms
* The console indicator showed "safe" and the attending staff mistakenly believed the source to be fully retracted into the lead shield
* The truck driver deviated from company procedures when he failed to check the nursing home waste with his personal radiation survey meter.

Latent failures
* The rapid expansion of high dose rate brachytherapy, from one to ten facilities in less than a year, had created serious weaknesses in the radiation safety programme
* Too much reliance was placed on unwritten or informal procedures and working practices
* There were serious inadequacies in the design and testing of the equipment
* There was a poor organisational safety culture. The technicians routinely ignored alarms and did not survey patients, the afterloader, or the treatment room after high dose rate procedures.
* There was weak regulatory oversight. The Nuclear Regulatory Commission did not adequately address the problems and dangers associated with high dose rate procedures.

This case study illustrates how a combination of active failures and latent systemic weaknesses can conspire to penetrate the many layers of defences which are designed to protect both patients and staff. No one person was to blame; each person acted according to his or her appraisal of the situation, yet one person died and over 90 people were irradiated.

Principled risk management
In many organisations, managing the human risks has concentrated on trying to prevent the recurrence of specific errors and violations that have been implicated in particular local mishaps. The common internal response to such events is to issue new procedures that proscribe the particular behaviour; to devise engineering "retro-fixes" that will prevent such actions having adverse outcomes; to sanction, exhort, and retrain key staff in an effort to make them more careful; and to introduce increased automation. This "anti-personnel" approach has several problems.
(1) People do not intend to commit errors. It is therefore difficult for others to control what people cannot control for themselves.
(2) The psychological precursors of an error (that is, inattention, distraction, preoccupation, forgetting, fatigue, and stress) are probably the last and least manageable links in the chain of events leading to an error.



(3) Accidents rarely occur as the result of single unsafe acts. They are the product of many factors: personal, task related, situational, and organisational. This has two implications. Firstly, the mere recurrence of some act involved in a previous accident will probably not have an adverse outcome in the absence of the other causal factors. Secondly, so long as these underlying latent problems persist, other acts - not hitherto regarded as unsafe - can also serve to complete an incipient accident sequence.
(4) These countermeasures can create a false sense of security.3 Since modern systems are usually highly reliable, some time is likely to pass between implementing these personnel related measures and the next mishap. During this time, those who have instituted the changes are inclined to believe that they have fixed the problem. But then a different kind of mishap occurs, and the cycle of local repairs begins all over again. Such accidents tend to be viewed in isolation, rather than being seen as symptomatic of some underlying systemic malaise.
(5) Increased automation does not cure the human factors problem, it simply changes its nature. Systems become more opaque to their operators. Instead of causing harm by slips, lapses, trips, and fumbles, people are now more prone to make mistaken judgements about the state of the system.

The goal of effective risk management is not so much to minimise particular errors and violations as to enhance human performance at all levels of the system.3 Perhaps paradoxically, most performance enhancement measures are not directly focused on what goes on inside the heads of single individuals. Rather, they are directed at team, task, situation, and organisational factors, as discussed below.

TEAM FACTORS

A great deal of health care is delivered by multidisciplinary teams. Over a decade of experience in aviation (and, more recently, marine technology) has shown that measures designed to improve team management and the quality of the communications between team members can have an enormous impact on human performance. Helmreich (one of the pioneers of crew resource management) and his colleagues at the University of Texas analysed 51 aircraft accidents and incidents, paying special attention to team related factors.19 The box summarises their findings, where the team related factors are categorised as negative (having an adverse impact upon safety and survivability) or positive (acting to improve survivability). The numbers given in each case relate to the number of accidents or incidents in which particular team related factors had a negative or a positive role.

This list offers clear recommendations for the interactions of medical teams just as much as for aircraft crews. Recently, the aviation psychologist Robert Helmreich and the anaesthetist Hans-Gerhard Schaefer studied team performance in the operating theatre of a Swiss teaching hospital.20 They noted that "interpersonal and communications issues are responsible for many inefficiencies, errors, and frustrations in this psychologically and organisationally complex environment."8 They also observed that attempts to improve institutional performance largely entailed throwing money at the problem through the acquisition of new and ever more advanced equipment, whereas improvements to training and team performance could be achieved more effectively at a fraction of this cost.

Team related factors and role in 51 aircraft accidents and incidents*
* Team concept and environment for open communications established (negative 7; positive 2)
* Briefings are operationally thorough, interesting, and address crew coordination and planning for potential problems. Expectations are set for how possible deviations from normal operations are to be handled (negative 9; positive 2)
* Cabin crew are included as part of the team in briefings, as appropriate, and guidelines are established for coordination between flight deck and cabin (negative 2)
* Group climate is appropriate to operational situation (for example, presence of social conversation). Crew ensures that non-operational factors such as social interaction do not interfere with necessary tasks (negative 13; positive 4)
* Crew members ask questions regarding crew actions and decisions (negative 11; positive 4)
* Crew members speak up and state their information with appropriate persistence until there is some clear resolution or decision (negative 14; positive 4)
* Captain coordinates flight deck activities to establish proper balance between command authority and crew member participation and acts decisively when the situation requires it (negative 18; positive 4)
* Workload and task distribution are clearly communicated and acknowledged by crew members. Adequate time is provided for the completion of tasks (negative 12; positive 4)
* Secondary tasks are prioritised to allow sufficient resources for dealing effectively with primary duties (negative 5; positive 2)
* Crew members check with each other during times of high and low workload to maintain situational awareness and alertness (negative 3; positive 3)
* Crew prepares for expected contingency situations (negative 28; positive 4)
* Guidelines are established for the operation and disablement of automated systems. Duties and responsibilities with regard to automated systems are made clear. Crew periodically review and verify the status of automated systems. Crew verbalises and acknowledges entries and changes to automated systems. Crew allows sufficient time for programming automated systems before manoeuvres (negative 14)
* When conflicts arise the crew remains focused on the problem or situation at hand. Crew members listen actively to ideas and opinions and admit mistakes when wrong (negative 2)
*After Helmreich et al19


As has been clearly shown for aviation, formal training in team management and communication skills can produce substantial improvements in human performance as well as reducing safety-critical errors.

TASK FACTORS

Tasks vary widely in their liability to promote errors. Identifying and modifying tasks and task elements that are conspicuously prone to failure are essential steps in risk management. The following simple example is representative of many maintenance tasks. Imagine a bolt with eight nuts on it. Each nut is coded and has to be located in a particular sequence. Disassembly is virtually error free. There is only one way in which the nuts can be removed from the bolt and all the necessary knowledge to perform this task is located in the world (that is, each step in the procedure is automatically cued by the preceding one). But the task of correct reassembly is immensely more difficult. There are over 40 000 ways in which this assemblage of nuts can be wrongly located on the bolt (factorial 8). In addition, the knowledge necessary to get the nuts back in the right order has to be either memorised or read from some written procedure, both of which are highly liable to error or neglect. Such an example may seem at first sight to be far removed from the practice of medicine, but medical equipment, like any other sophisticated hardware, requires careful maintenance - and maintenance errors (particularly omitting necessary reassembly steps) constitute one of the greatest sources of human factors problems in high technology industries.11
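The arithmetic behind the figure quoted above is simple to check; the following few lines (an illustrative aside, not part of the original paper) spell it out:

    import math

    # One bolt, eight coded nuts: on reassembly every ordering of the eight
    # nuts is possible, but only one ordering is correct.
    total_orderings = math.factorial(8)      # 8! = 40 320 possible sequences
    wrong_orderings = total_orderings - 1    # 40 319 ways to reassemble wrongly

    print(total_orderings, wrong_orderings)  # 40320 40319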

Effective incident monitoring is an invaluable tool in identifying tasks prone to error. On the basis of their body of nearly 4000 anaesthetic and intensive care incidents, Runciman et al at the Royal Adelaide Hospital (see Runciman et al9 for a report of the first 2000 incidents) introduced many inexpensive equipment modifications guaranteed to enhance performance and to minimise recurrent errors. These include colour coded syringes and endotracheal tubes graduated to help non-intrusive identification of endobronchial intubation.21

SITUATIONAL FACTORS

Each type of task has its own nominal error probability. For example, carrying out a totally novel task with no clear idea of the likely consequences (that is, knowledge based processing) has a basic error probability of 0.75. At the other extreme, a highly familiar, routine task performed by a well motivated and competent workforce has an error probability of 0.0005. But there are certain conditions, both of the individual person and of his or her immediate environment, that are guaranteed to increase these nominal error probabilities (table 1). Here the error producing conditions are ranked in the order of their known effects and the numbers in parentheses indicate the risk factor (that is, the amount by which the nominal error rates should be multiplied under the worst conditions).

Table 1 Summary of error producing conditions ranked in order of known effect (after Williams22)
* Unfamiliarity with the task (×17)
* Time shortage (×11)
* Poor signal to noise ratio (×10)
* Poor human-system interface (×8)
* Designer-user mismatch (×8)
* Irreversibility of errors (×8)
* Information overload (×6)
* Negative transfer between tasks (×5)
* Misperception of risk (×4)
* Poor feedback from system (×4)
* Inexperience - not lack of training (×3)
* Poor instructions or procedures (×3)
* Inadequate checking (×3)
* Educational mismatch of person with task (×2)
* Disturbed sleep patterns (×1.6)
* Hostile environment (×1.2)
* Monotony and boredom (×1)

Notably, three of the best researched factors - namely, sleep disturbance, hostile environment, and boredom - carry the least penalties. Also, those error producing factors at the top of the list are those that lie squarely within the organisational sphere of influence. This is a central element in the present view of organisational accidents. Managers and administrators rarely, if ever, have the opportunity to jeopardise a system's safety directly. Their influence is more indirect: top level decisions create the conditions that promote unsafe acts.
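As a worked illustration of how table 1 is meant to be used - the choice of task and condition here is hypothetical, and nothing is implied about how several conditions combine - the nominal probability is simply multiplied by the risk factor for the worst case:

    # Illustrative only: adjusting a nominal error probability by a single risk
    # factor from table 1, "under the worst conditions" as the text puts it.
    nominal_p = 0.0005      # highly familiar, routine task (figure from the text)
    risk_factor = 11        # time shortage (table 1)

    adjusted_p = min(nominal_p * risk_factor, 1.0)   # a probability cannot exceed 1
    print(adjusted_p)       # 0.0055, ie roughly one error in 180 opportunities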

For convenience, error producing conditions can be reduced to seven broad categories: high workload; inadequate knowledge, ability, or experience; poor interface design; inadequate supervision or instruction; a stressful environment; mental state (fatigue, boredom, etc); and change. Departures from routine and changes in the circumstances in which actions are normally performed constitute a major factor in absentminded slips of action.23

Compared to error producing conditions, the factors that promote violations are less well understood. Ranking their relative effects is not possible. However, we can make an informed guess at the nature of these violation producing conditions, as shown in table 2, although in no particular order of effect.

Again, for causal analysis this list can be reduced to a few general categories: lack of safety culture, lack of concern, poor morale, norms condoning violation, "can do" attitudes, and apparently meaningless or ambiguous rules.

Table 2 Violation producing conditions, unranked
* Manifest lack of organisational safety culture
* Conflict between management and staff
* Poor morale
* Poor supervision and checking
* Group norms condoning violations
* Misperception of hazards
* Perceived lack of management care and concern
* Little elan or pride in work
* A culture that encourages taking risks
* Beliefs that bad outcomes will not happen
* Low self esteem
* Learned helplessness
* Perceived licence to bend rules
* Ambiguous or apparently meaningless rules
* Rules inapplicable due to local conditions
* Inadequate tools and equipment
* Inadequate training
* Time pressure
* Professional attitudes hostile to procedures


ORGANISATIONAL FACTORS

Quality and safety, like health and happiness, have two aspects: a negative aspect disclosed by incidents and accidents, and a positive aspect to do with the system's intrinsic resistance to human factors problems. Whereas incidents and accidents convert easily into numbers, trends, and targets, the positive aspect is much harder to identify and measure.

Accident and incident reporting procedures are a crucial part of any safety or quality information system. But, by themselves, they are insufficient to support effective quality and safety management. The information they provide is both too little and too late for this longer term purpose. To promote proactive accident prevention rather than reactive "local repairs," an organisation's "vital signs" should be monitored regularly.

When a doctor carries out a routine medical check he or she samples the state of several critical bodily systems: the cardiovascular, pulmonary, excretory, and neurological systems, and so on. From individual measures of blood pressure, electrocardiographic activity, cholesterol concentration, urinary contents, reflexes, and so on the doctor makes a professional judgement about the individual's general state of health. There is no direct, definitive measure of a person's health. It is an emergent property inferred from a selection of physiological signs and lifestyle indicators. The same is also true of complex hazardous systems. Assessing an organisation's current state of "safety health," as in medicine, entails regular and judicious sampling of a small subset of a potentially large number of indices. But what are the dimensions along which to assess organisational "safety health"?

Several such diagnostic techniques are already being implemented in various industries.24 The individual labels for the assessed dimensions vary from industry to industry (oil exploration and production, tankers, helicopters, railway operations, and aircraft engineering), but all of them have been guided by two principles. Firstly, they try to include those organisational "pathogens" that have featured most conspicuously in well documented accidents (that is, hardware defects, incompatible goals, poor operating procedures, understaffing, high workload, inadequate training, etc). Secondly, they seek to encompass a representative sampling of those core processes common to all technological organisations (that is, design, build, operate, maintain, manage, communicate, etc).

Since there is unlikely to be a single universal set of indicators for all types of hazardous operations, one way of communicating how safety health can be assessed is simply to list the organisational factors that are currently measured (see table 3). Tripod-Delta, commissioned by Shell International and currently implemented in several of its exploration and production operating companies, on Shell tankers, and on its contracted helicopters in the North Sea, assesses the quarterly or half yearly state of 11 general failure types in specific workplaces: hardware, design, maintenance management, procedures, error enforcing conditions, housekeeping, incompatible goals, organisational structure, communication, training, and defences. A discussion of the rationale behind the selection and measurement of these failure types can be found elsewhere.25

Tripod-Delta uses tangible, dimension related indicators as direct measures or "symptoms" of the state of each of the 11 failure types. These indicators are generated by task specialists and are assembled into checklists by a computer program (Delta) for each testing occasion. The nature of the indicators varies from activity to activity (that is, drilling, seismic surveys, transport, etc) and from test to test. Examples of such indicators for design associated with an offshore platform are listed below. All questions have yes/no answers.
* Was this platform originally designed to be unmanned?
* Are shut-off valves fitted at a height of more than 2 metres?
* Is standard (company) coding used for the pipes?
* Are there locations on this platform where the deck and walkways differ in height?
* Have there been more than two unscheduled maintenance jobs over the past week?
* Are there any bad smells from the low pressure vent system?
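Tripod-Delta itself is a proprietary tool and its internals are not described here; purely to illustrate the general idea of yes/no indicator checklists summarised by failure type, a minimal sketch might look as follows (the indicator wordings are adapted from the list above, and the grouping and scoring scheme are assumptions made for this sketch):

    # Hypothetical sketch of an indicator checklist of the kind described above:
    # yes/no indicators grouped by general failure type, summarised as a simple
    # count of problem indications. This illustrates the idea only; it is not
    # the Tripod-Delta program.
    from collections import defaultdict

    # (failure type, indicator question, answer that signals a problem)
    indicators = [
        ("design", "Was this platform originally designed to be unmanned?", "yes"),
        ("design", "Are there locations where the deck and walkways differ in height?", "yes"),
        ("hardware", "Have there been more than two unscheduled maintenance jobs over the past week?", "yes"),
        ("procedures", "Is standard (company) coding used for the pipes?", "no"),
    ]

    def profile(answers):
        """Count problem indications per failure type from {question: "yes"/"no"} answers."""
        counts = defaultdict(int)
        for failure_type, question, bad_answer in indicators:
            if answers.get(question) == bad_answer:
                counts[failure_type] += 1
        return dict(counts)

    example_answers = {question: "yes" for _, question, _ in indicators}
    print(profile(example_answers))   # {'design': 2, 'hardware': 1}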

Table 3 Measures of organisational health used in different industrial settings

Oil exploration and production: hardware; design; maintenance management; procedures; error enforcing conditions; housekeeping; incompatible goals; organisation; communication; training; defences

Railways: tools and equipment; materials; supervision; working environment; staff attitudes; housekeeping; contractors; design; staff communication; departmental communication; staffing and rostering; training; planning; rules; management; maintenance

Aircraft maintenance: organisational structure; people management; provision and quality of tools and equipment; training and selection; commercial and operational pressures; planning and scheduling; maintenance of buildings and equipment; communication


Relatively few of the organisational and managerial factors listed in table 3 are specific to safety; rather, they relate to the quality of the overall system. As such, they can also be used to gauge proactively the likelihood of negative outcomes other than coming into damaging contact with physical hazards, such as loss of market share, bankruptcy, and liability to criminal prosecution or civil law suits.

The measurements derived from Tripod-Delta are summarised as bar graph profiles. Their purpose is to identify the two or three factors most in need of remediation and to track changes over time. Maintaining adequate safety health is thus comparable to a long term fitness programme in which the focus of remedial efforts switches from dimension to dimension as previously salient factors improve and new ones come into prominence. Like life, effective safety management is "one thing after another." Striving for the best attainable level of intrinsic resistance to operational hazards is like fighting a guerrilla war. One can expect no absolute victories. There are no "Waterloos" in the safety war.

Summary and conclusions
(1) Human rather than technical failures now represent the greatest threat to complex and potentially hazardous systems. This includes healthcare systems.
(2) Managing the human risks will never be 100% effective. Human fallibility can be moderated, but it cannot be eliminated.
(3) Different error types have different underlying mechanisms, occur in different parts of the organisation, and require different methods of risk management. The basic distinctions are between:
* Slips, lapses, trips, and fumbles (execution failures) and mistakes (planning or problem solving failures). Mistakes are divided into rule based mistakes and knowledge based mistakes
* Errors (information handling problems) and violations (motivational problems)
* Active versus latent failures. Active failures are committed by those in direct contact with the patient; latent failures arise in organisational and managerial spheres and their adverse effects may take a long time to become evident.

(4) Safety significant errors occur at all levels of the system, not just at the sharp end. Decisions made in the upper echelons of the organisation create the conditions in the workplace that subsequently promote individual errors and violations. Latent failures are present long before an accident and are hence prime candidates for principled risk management.
(5) Measures that involve sanctions and exhortations (that is, moralistic measures directed to those at the sharp end) have only very limited effectiveness, especially so in the case of highly trained professionals.
(6) Human factors problems are a product of a chain of causes in which the individual psychological factors (that is, momentary inattention, forgetting, etc) are the last and least manageable links. Attentional "capture" (preoccupation or distraction) is a necessary condition for the commission of slips and lapses. Yet its occurrence is almost impossible to predict or control effectively. The same is true of the factors associated with forgetting. States of mind contributing to error are thus extremely difficult to manage; they can happen to the best of people at any time.

(7) People do not act in isolation. Their behaviour is shaped by circumstances. The same is true for errors and violations. The likelihood of an unsafe act being committed is heavily influenced by the nature of the task and by the local workplace conditions. These, in turn, are the product of "upstream" organisational factors. Great gains in safety can be achieved through relatively small modifications of equipment and workplaces.
(8) Automation and increasingly advanced equipment do not cure human factors problems, they merely relocate them. In contrast, training people to work effectively in teams costs little, but has achieved significant enhancements of human performance in aviation.
(9) Effective risk management depends critically on a confidential and preferably anonymous incident monitoring system that records the individual, task, situational, and organisational factors associated with incidents and near misses.
(10) Effective risk management means the simultaneous and targeted deployment of limited remedial resources at different levels of the system: the individual or team, the task, the situation, and the organisation as a whole.

1 Brennan TA, Leape LL, Laird NM, Herbert L, Localio AR, Lawthers AG, et al. Incidence of adverse events and negligence in hospitalized patients: results from the Harvard medical practice study I. New Engl J Med 1991;324:370-6.
2 Leape LL, Brennan TA, Laird NM, Lawthers AG, Localio AR, Barnes BA, et al. The nature of adverse events in hospitalized patients: results from the Harvard medical practice study II. New Engl J Med 1991;324:377-84.
3 Cook RI, Woods DD. Operating at the sharp end: the complexity of human error. In: Bogner MS, ed. Human error in medicine. Hillsdale, New Jersey: Erlbaum, 1994:255-310.
4 Gaba DM. Human error in anesthetic mishaps. Int Anesthesiol Clin 1989;27:137-47.
5 Gaba DM. Human error in dynamic medical domains. In: Bogner MS, ed. Human error in medicine. Hillsdale, New Jersey: Erlbaum, 1994:197-224.
6 Perrow C. Normal accidents. New York: Basic Books, 1984.
7 Vincent C, Ennis M, Audley RJ. Medical accidents. Oxford: Oxford University Press, 1993.
8 Bogner MS, ed. Human error in medicine. Hillsdale, New Jersey: Erlbaum, 1994.
9 Runciman WB, Sellen A, Webb RK, Williamson JA, Currie M, Morgan C, et al. Errors, incidents and accidents in anaesthetic practice. Anaesth Intensive Care 1993;21:506-19.
10 Hollnagel E. Reliability of cognition: foundations of human reliability analysis. London: Academic Press, 1993.
11 Reason J. Human error. New York: Cambridge University Press, 1990.
12 Bacon F. In: Anderson F, ed. The new Organon. Indianapolis: Bobbs-Merrill, 1960. (Originally published 1620.)
13 Sheen. MV Herald of Free Enterprise. Report of court No 8074 formal investigation. London: Department of Transport, 1987.
14 Eagle CJ, Davies JM, Reason JT. Accident analysis of large scale technological disasters applied to an anaesthetic complication. Canadian Journal of Anaesthesia 1992;39:118-22.
15 Reason J. The human factor in medical accidents. In: Vincent C, Ennis M, Audley R, eds. Medical accidents. Oxford: Oxford University Press, 1993:1-16.
16 Wiener EL. Human factors of advanced technology ("glass cockpit") transport aircraft. Moffett Field, California: NASA Ames Research Center, 1989. (Technical report 117528.)
17 Woods DD, Johannesen JJ, Cook RI, Sarter NB. Behind human error: cognitive systems, computers, and hindsight. Wright-Patterson Air Force Base, Ohio: Crew Systems Ergonomics Information Analysis Center, 1994. (CSERIAC state of the art report.)
18 NUREG. Loss of an iridium-192 source and therapy misadministration at Indiana Regional Cancer Center, Indiana, Pennsylvania, on November 16, 1992. Washington, DC: US Nuclear Regulatory Commission, 1993. (NUREG-1480.)
19 Helmreich RL, Butler RA, Taggart WR, Wilhelm JA. Behavioral markers in accidents and incidents: reference list. Austin, Texas: University of Texas, 1994. (Technical report 94-3; NASA/University of Texas FAA Aerospace Crew Research Project.)
20 Helmreich RL, Schaefer H-G. Team performance in the operating room. In: Bogner MS, ed. Human error in medicine. Hillsdale, New Jersey: Erlbaum, 1994.
21 Runciman WB. Anaesthesia incident monitoring study. In: Incident monitoring and risk management in the health care sector. Canberra: Commonwealth Department of Human Services and Health, 1994:13-5.
22 Williams J. A data-based method for assessing and reducing human error to improve operational performance. In: Hagen W, ed. 1988 IEEE Fourth Conference on Human Factors and Power Plants. New York: Institute of Electrical and Electronic Engineers, 1988:200-31.
23 Reason J, Mycielska K. Absent-minded? The psychology of mental lapses and everyday errors. Englewood Cliffs, New Jersey: Prentice-Hall, 1982.
24 Reason J. A systems approach to organisational errors. Ergonomics (in press).
25 Hudson P, Reason J, Wagenaar W, Bentley P, Primrose M, Visser J. Tripod Delta: proactive approach to enhanced safety. Journal of Petroleum Technology 1994;46:58-62.
