
Safety Science 51 (2013) 441–453


Review

What have we learned about learning from accidents? Post-disasters reflections

Jean-Christophe Le Coze ⇑

Institut National de l’environnement industriel et des risques, Parc Alata, 60550 Verneuil en Halatte, France

Article info

Article history:
Received 30 September 2011
Received in revised form 14 May 2012
Accepted 31 July 2012
Available online 12 October 2012

Abstract

The disasters of the past years in different high risk industries (e.g. aviation, offshore, nuclear) push for a moment of reflexivity about learning from accidents. In the aftermath of these events, one wonders whether learning from accidents remains a viable endeavour for companies and states, or whether recurring technological disasters such as these seriously and definitively undermine any attempt to prove the feasibility of learning. Progress has certainly been made in the past, but apparently not enough to reach the highest safety levels, even in systems with dedicated resources. As a result of the current situation, some have been able to argue that 'we don't learn about disasters'. Although appealing and right, this is a very generic statement. There are many studies addressing aspects of learning from accidents which are in a position to bring insights about the drawbacks of learning. But this wealth of research is also part of the problem. When one wants to step back and look broadly at the topic, to understand the reason why 'we don't learn', one is left with a fragmented scientific literature covering a very large spectrum of interests and views on the subject. This paper tackles this problem first by designing a framework to organise the diversity of studies and second by extracting four lessons on learning from accidents, putting together for this purpose works in psychology, sociology and political science.

© 2012 Elsevier Ltd. All rights reserved.

Contents

1. Introduction
   1.1. Learning
   1.2. Why 'still a young field'?
   1.3. A constructivist view of learning
2. A fragmented literature
   2.1. A high diversity of studies
   2.2. A diversity of interests and angles
        2.2.1. Steps and industries
        2.2.2. Actors and scientific disciplines
        2.2.3. Countries and intensity of events considered
        2.2.4. An organising framework
3. Political insight into reporting
4. A psychological/cognitive view of the selection step
5. Sociological insight into the investigation step
6. A political view of prevention
7. Discussion
8. Conclusion
References

⇑ Tel.: +33 (0)3 44 55 62 04. E-mail address: [email protected]

0925-7535/$ - see front matter © 2012 Elsevier Ltd. All rights reserved.
http://dx.doi.org/10.1016/j.ssci.2012.07.007


1. Introduction

In these times of post technological disasters in different high-risk industries, including the Fukushima nuclear power plant (2011) and Deepwater Horizon oil rig (2010) explosions, the AF447 flight crash (2009) and, a little earlier, the BP Texas City explosion (2005) and the loss of the Columbia shuttle (2003), one has to reflect. In a wave that clearly recalls the series of the eighties, i.e. Bhopal (1984), Chernobyl (1986), Challenger (1986) and Piper Alpha (1988), which created the impulse for increased research and efforts in the field of safety (e.g. Rasmussen and Batstone, 1989), one wonders about 'learning from accidents'. This paper is thus triggered by the aftermath of this wave of disasters, and by the reactions that one could find in the media from researchers with voices in the field because of their broad philosophical or sociological perspectives on the issues of modernity, risk, science, technology and democracy. Two examples are Beck and Stengers (Beck, 2011; Stengers, 2011). Beck is a sociologist, known for his input in the 1980s on 'risk society' (Beck, 1992). This author will be discussed in more detail below. Stengers is a philosopher, known for a contribution in association with Prigogine about changing times in the scientific realm, based on the principles of self-organisation (Prigogine and Stengers, 1978). She has since been exploring the relationship between science and society (Stengers, 1993), and the place of catastrophes in (post)modern times (Stengers, 2009).

What is striking in these reactions is that the discussion remains at what could be described as a 'macro-level', not very informed by empirical studies on learning from accidents in different high risk industries. These 'macro-level' views lend themselves to criticism. Let us illustrate with Fressoz (2011). This author, a historian, in a stimulating article published in the months following Fukushima, challenges Beck's 'risk society'. Fressoz, based on his more general historical thesis of technological disasters in relation to modernity (Fressoz, 2012), argues that Beck's theory remains part of a teleological discourse of progress. The reflexive modernity of Beck would somehow lead to a heightened consciousness of the limits and risks of society's own techno-scientific developments. 'Since the 1980s, social theory has treated technological disasters as symbols or precursors of an immense historical break: a break with the project of technical mastery of the world, with the idea of progress, with the disregard of nature, with consumerism – in short, a break with everything characteristic of modernity'.

For Fressoz, quite the contrary, Fukushima's nuclear accident demonstrates once again that we are far from this expected next stage of 'reflexive modernity'. We 'do not learn' from the past because technological disasters continue to recur. 'The more disasters there are, the less we seem able to learn from them. Our faith in progress and our concern for economic efficiency make it clear that, contrary to postmodernist claims, we have not escaped from the illusions of modernity.' (Fressoz, 2011). The 'we' in Fressoz's thesis is clearly very broad, and includes regulators, industry and civil society all together. But one could argue that Fressoz's statement about reflexive modernity, although appealing, also lends itself to criticism. Not only for its content, but because of its nature: it is also a macro statement. Of course, it must be seen within the wider debates about 'postmodernity' and the implications for a notion as important as that of 'progress'. Although the claim that 'we don't learn' sounds right, it is not based on in-depth empirical studies about some of the real limits and constraints of learning as described in high-risk systems.

What should one think about this? What do these recent disasters demonstrate in terms of learning from accidents? What do we know today about the limits of this activity that could help to shed light on these recent disasters? One of Sagan's main conclusions, following Perrow's normal accident (Perrow, 1984), was not to expect too much from learning (Sagan, 1993). A lot has been written on the topic since. One problem is nevertheless that asking such a question requires different strands of work to be put together. One is indeed faced with a wide range of approaches, interests and outcomes on this topic. No overview of this diversity can be found, with the exception of a few, so far limited, attempts (Lindberg et al., 2010) that this article wishes to pursue.

It definitely seems to be a problem, as for any scientific field, not to take a step back from time to time.1 This is especially true when a topic is very active and has been growing steadily in the past years. Learning from accidents in high-risk industries is indeed still a young field. Although a pillar of safety management, it is rather scattered, and, as noted by Lindberg et al. (2010, p. 714), 'the scientific literature on experience feedback from accidents has grown significantly in the last few decades. However, this literature is still rather fragmented, and much remains to be done to develop a unified and integrated approach to learning from accident that integrates knowledge and experience from different disciplines and fields of application.' This paper subscribes to this statement and acknowledges that when one wants to look broadly at the topic, there is currently no framework or synthesis available for doing so.

Three points must be made in this introduction before going further. The first pertains to the definition of learning, the second to the relative youth of the field, and the third to the constructivist view of learning on which the paper rests.

1.1. Learning

When one introduces the question of learning, the question of the definition of 'learning' immediately arises. There is obviously no easy answer, as learning is approached from many different disciplinary angles and constitutes a very broad field of investigation. Learning about learning is as old as the first treatises about how humans produce (reliable) knowledge of the world around them. The Greek philosophers are probably the place to start. Plato and Aristotle (if one leaves aside the pre-Socratic philosophers and thinkers from other parts of the world) are philosophers formulating questions and developing answers to the question of what 'learning' is.

By questioning how humans could know nature beyond mythical explanations, these philosophers provided a first literature for a definition of learning. Taking a giant leap forward in history, with the advent of 'modern' science, the names of Popper (1936), Kuhn (1962) or Latour (1987) come to mind for the twentieth century. Questioning induction and deduction, the experimental and mathematical side of scientific theories or their paradigmatic dimensions is an excellent approach for defining and providing examples of learning about (scientific) learning that extend the contributions of the ancient philosophers. Studying science through a philosophical, historical or sociological mode of investigation provides a perfect field of research to refer to for a definition of learning.

However, the field of learning is obviously not limited to the study of science(s). Although the scientific way of learning has very often been seen as the normative reference against which to compare other types of learning, learning is approached in many different fields. Learning has also been explored in the last decades for different objects/subjects from biological, ethological, psychological, organisational, anthropo-social and political viewpoints, and even from an engineering perspective with the attempts to design 'intelligent' systems, e.g. 'self'-autonomous robots.

1 See for example, among others, Miller’s comments on the fragmented approach to cognition, a problem that he associates with many other scientific fields ‘This favoritism for analytic theories is not peculiar to experimental psychologists. All scientists share it. Analysis is the scientific reflex: when you want to understand something, take it apart.’ (Miller, 1986).


Learning in biology is thus closely linked to the ability of organisms, in a very broad sense, to adapt to their environment. Learning in ethology refers to the cognitive skills of animals in their environments. Learning in developmental psychology explores how children go through different learning stages as they grow. Developmental psychology meets behaviourists as well as cognitive psychologists in the study of learning, including, to various degrees between these fields, issues of conditioning, memory, causal reasoning, etc.

Learning in sociology (or in anthropology) can be seen in relation to stages of socialisation of individuals in the course of their lives under the influences of different settings, including institutions (e.g. family, education systems, work), which could also be seen as including a learning aspect when these evolve under social and political changes. Learning in political studies can be seen through many different angles, including the ability of public policies, as well as political parties or politicians themselves, to adapt to new situations. The field of learning in organisations covers a wide range of approaches representing psychological, cognitive, managerial or sociological disciplines.2

As is clear from many of these examples (e.g. biology, sociology), learning can be seen both at individual and collective levels through a circular or relational process. From a sociological viewpoint, individuals learn in a collective context, without which they would not be able to do so, but, at the same time, they provide their own inputs that transform the collective level.

1.2. Why ‘still a young field’?

Because of this background, it may seem very surprising to some to read that learning from accidents is still a young field, and it is necessary for me to explain why I follow Lindberg et al. (2010) in their assessment. One approach is to consider that the maturity of a scientific field can be characterised by its level of institutionalisation. Examples of young fields are ergonomics and management. They began to establish themselves after the Second World War through professional organisations, dedicated journals and conferences; university positions (researchers, professors) as well as industry laboratories followed.

If one accepts, for the purpose of this paper, a highly simplified, sketchy historical account, setting aside differences between countries, here are some of these contextual elements for both fields. Managerial positions in organisations became prominent throughout the 20th century as the contemporary type of organisation as we know it today took shape. As a result, business schools were created to respond to new training and research needs in those areas. Ergonomics established itself as a way to meet the need to improve both the performance and working conditions of workers in very different industrial areas, expanding towards cognitive ergonomics as computers began to colonise industries and businesses in the last decades of the 20th century.

2 'Organisational learning' has been a field of action research for several decades now, including the issue of single and double loop learning (Argyris, 1993). Double loop learning goes to a deeper level of investigation than single loop learning to address the underlying 'causal' models directing interpretations and actions. Authors such as Senge (1990) have relied on system dynamics to indicate ways of reaching this type of double loop learning. Although it is a very fruitful research tradition, this 'organisational learning' framework will not be used for structuring this paper. There are at least two reasons for this. First, the emphasis in this paper is on descriptive empirical works, not managerial or action research ones. Second, the 'organisational learning' framework cannot integrate the variety of concepts introduced by the various disciplines applied in the field of learning from accidents (e.g. psychology, sociology, political sciences), as will be shown in this paper. The idea that depth of investigation is a key approach to preventing accidents (e.g. 'single/double loops') remains, however, strongly in the background of this paper (it is sometimes referred to as investigating 'root causes').

Both management science and ergonomics were accompanied in the second half of the 20th century by a body of theories, methods and concepts established to help delineate the boundaries of these 'disciplines', although determining what lies inside and outside disciplinary frontiers remains a complex question. This is largely a historical and socio-political but also cognitive issue. In both management and ergonomics, associations, journals and university positions followed. As such, because of their histories as very briefly sketched here, ergonomics and management are younger 'disciplines' compared to, say, psychology or sociology.

Sociology and psychology took shape in the 19th century, with different orientations, but were established before ergonomics or management. The judgement as to whether or not a field is young is therefore seen here as incorporating an institutional dimension. Based on this rationale, I consider that learning from accidents (in high risk industries) is still a young field. It is not an independent discipline as such; it is represented by different disciplinary contributions, without an independent status with associated journals, university positions, etc.

One could argue that learning from accidents is part of the wider field of 'safety science', but the status of 'safety science' could also be characterised as young and scattered, as it is approached by many different disciplines, including, for instance, ergonomics, psychology, sociology or management, although it is certainly more established institutionally than learning from accidents may appear to be. This status has always been a significant problem for the issue of modelling major accident dynamics, a point that was well made over a decade ago by Rasmussen (1995, 1997), and a problem that applies similarly to learning from accidents. 'Complex, cross-disciplinary issues, by nature, require an extended time horizon. It takes considerable time to be familiar with the paradigms of other disciplines and often time consuming field studies are required.' (Rasmussen, 1995).

Whether learning from accidents will remain scattered or become more integrated and autonomous in the future will remain an open question at the end of this paper, one that derives quite directly from the more general argument. However, and as a result, this paper subscribes to the previous statement by Lindberg et al. (2010) and acknowledges that when one wants to look broadly at the topic, there is currently no dedicated framework or synthesis available for doing so. Learning from accidents is scattered. This certainly contributes to the low visibility of the field outside a circle of specialists and also, although indirectly, affects its ability to play a role in the way high risk systems are operated.

1.3. A constructivist view of learning

This paper is written from a constructivist position (Le Coze, 2012). This position considers that knowledge can be seen at both individual and social levels, but that knowledge is always to be situated in a specific historical context, in relation to the areas of experience of the individuals, their backgrounds, purposes, and located within communities and networks of practice. This approach clearly complicates the matter, as learning in this case is always to be understood from a specific point of view, and not from a neutral, external and objective perspective. Therefore, when I use the word 'we' in the title of the paper 'What have we learned about learning?', I could just as easily have used 'I' instead. But at the same time, knowledge is a collective process, and this paper draws on the experiences of other researchers to attempt to provide an example of a more integrated view on learning. So the 'I' is also inseparably a 'We'. To put it differently, in Latourian words, the Cartesian 'cogito' is also a 'cogitamus' (Latour, 2011).

The title of this paper, 'What have we learned about learning?', should therefore be read as 'What do I know from empirical studies carried out by authors with different disciplinary backgrounds about how learning operates at different steps and levels of the learning processes in different high risk industries, that I can select and put together through my experience and background in order to extract lessons and share them with other interested people in the field of learning from accidents who read the journal Safety Science, in the light of a series of major disasters in the past 10 years that question the limits of learning from accidents?'. That would indeed be much too long a title.

2. A fragmented literature

2.1. A high diversity of studies

A search in the recent literature reveals without a doubt a very wide range of studies on the topic. Methodologically, the papers selected for this section have been gathered from various journals covering disciplines such as engineering, psychology, ergonomics, management, sociology and political sciences, and published in the past 5–10 years. This list of journals is not to be considered exhaustive but rather indicative of the disciplinary diversity of views on learning from accidents (Table 1). In itself it is an indication of the scattered nature of this topic.

The lines below dedicated to introducing these papers already reveal an implicit framework. Before explaining it more fully, here are its features, illustrated simply with the first selected paper: 'There is, for example, a study by a political scientist about how investigations of disasters (industrial, natural) shape (or not) changes in public policies in the US on the basis of interactions between interest groups, government leaders, policy entrepreneurs, the news media or members of the public (Birkland, 2009)'. In this sentence, there is, first, a scientific discipline (i.e. political science); second, a concerned field (i.e. industrial, natural disasters); third, a degree of intensity of an event (i.e. disasters); and fourth, specific actors (i.e. interest groups, government leaders, policy entrepreneurs, the news media or members of the public). Fifth, there is a learning step (i.e. investigations) and, sixth, a nation (i.e. the US). This framework derives from reading these papers and the literature on learning from accidents. Here is, thus, a selection, in no particular order, of the papers one can find in these many different journals.

There is, for example, a study by a political scientist about how disasters (industrial, natural) shape (or otherwise) changes in public policies in the US on the basis of interactions between interest groups, government leaders, policy entrepreneurs, the news media or members of the public (Birkland, 2009, in the Journal of Contingencies and Crisis Management). There is a survey of the types of models used by investigators in different organisations (investigation bureaux and a private company) performing accident investigations in Sweden, through the analysis of their manuals (Lundberg et al., 2009, in Safety Science).

One can read a sociological perspective on the different investigation logics, between individual and organisational interpretations of accidents across domains, and the difficulty for the latter to be applied in real-life contexts (Catino, 2008, in the Journal of Contingencies and Crisis Management). There is a descriptive, managerial/sociological type of study about accident investigation performed by companies in the petroleum industry in Norway (Okstad et al., 2011, in Safety Science).

Table 1. A diversity of journals covering learning from accidents topics.

Accident Analysis and Prevention
Ergonomics
Human Factors
Human Relations
Journal of Contingencies and Crisis Management
Journal of Hazardous Materials
Journal of Safety Research
Organization Science
Policy and Society
Public Administration Review
Reliability Engineering and System Safety
Safety Science

There is an article about the use of a specific type of investigation model (M-T-O) used by investigators for incidents and accidents in the Swedish nuclear industry over the past 20 years (Rollenhagen, 2011, in Safety Science). One can read a study by a political scientist interested in the relationship between different agencies and private organisations in the process of reporting near misses and incidents in the aviation sector in the US (Tamuz, 2001, in Administration & Society).

One finds a psychological insight into the way in which professional experts treat data on near misses and incidents in private companies in the aviation industry in the UK and Australia (Macrae, 2007, 2009, at the London School of Economics and Political Science, Centre for Analysis of Risk and Regulation). There is a paper with a managerial orientation introducing the content and principles of investigator training for the railway investigation board in the UK (Watson, 2004, in Safety Science).

One can find a presentation of the development of an investigation methodology for safety officers in Denmark (Jorgensen, 2011, in Safety Science). There is a cognitive scientist providing guiding principles regarding the interpretation of 'errors' in investigations performed by professional investigators in transport (Dekker, 2002a, in the Journal of Safety Research).

There are reviews of different methodologies and models for investigating incidents or accidents from a managerial viewpoint (Sklet, 2004, in the Journal of Hazardous Materials), of which some focus more specifically, from both a managerial and sociological perspective, on the modelling rationales and purposes behind models (Le Coze, 2008, in Safety Science). These papers are not restricted to a specific industrial domain or specific actors. There is an article about how managers in organisations, across domains, learn (or fail to learn) from the investigations of rare events (Starbuck, 2009, in Organization Science).

There are articles by political scientists about the flaws in the findings of the Columbia Accident Investigation Board (CAIB) report, more specifically about the status of NASA's organisation with regard to HRO (high reliability organisation) standards (Boin and Schulman, 2009, in Public Administration Review; Boin and Fishbacher-Smith, 2011, in Policy and Society). Another study provides an anthropo-sociological view on the cultural issue of error reporting in the railway industry in Japan, based on an interpretation of the findings of a train crash (Chikudate, 2009, in Human Relations).

It is also the cultural dimension that Strauch discusses in a paper looking into the influence of cultural differences within teams on the likelihood of accidents, to be taken into account in accident investigations performed by professionals (Strauch, 2010, in Human Factors). Of course, there are also papers on the results of (often major accident) investigations, for instance in the railways (Lawton and Ward, 2005, in Accident Analysis and Prevention) or in the pyrotechnics industry (Le Coze, 2010, in Safety Science), with a systemic perspective.

2.2. A diversity of interests and angles

Quite clearly, all these recent studies depend on the interests of the researcher(s) and on his/her/their theoretical (and sometimes practical) orientations. One cannot help being surprised by the diversity of material available. These studies address some, but not all, aspects of learning from accidents. How can this variety be organised? The rationale from which an organising framework can be derived is now explained in a bit more detail.


2.2.1. Steps and industries

There is the proposal by Lindberg et al. (2010) to retain 'reporting, selection, investigation, dissemination, prevention' as the basic steps of the learning process. It is on this basis that they offer their overview of the field. This is often how learning from accidents is described, from a formal or managerial point of view. However, one could argue that the way these steps are implemented across industries reveals distinct stages of development, and therefore different kinds of learning configurations. One can easily understand that, in this matter, a gap exists today between the aviation and the petroleum industry. Some have even seen this gap as an opportunity for the latter to learn from the former, using in this case the near miss management system as a point of comparison (Hopkins, 2009). The aviation industry has developed one of the most resourced and advanced systems for learning from accidents (Pidgeon and O'Leary, 2000). However, distinguishing only industries and steps would fail to convey the diversity of available studies, as the selected papers have shown.

2.2.2. Actors and scientific disciplines

Because, for the same industry and the same step (for example investigating), one can focus on different actors and levels of socio-technical systems, it is necessary to distinguish them (e.g. Rasmussen, 1997; Moray, 1994; Evan and Manion, 2002). For instance, investigations performed by state entities, whether through professional investigators (e.g. US Chemical Safety Board) or through an inspector from a control authority (e.g. OSHA), are of a different kind, for the same event, than investigations performed by private companies or by scholars. Scholars from various scientific backgrounds have indeed maintained a strong interest in accident models and accident investigations for the past 50 years, with a different perspective than other actors (including, for instance, a strong concern for theorising from events).

Yet, once these three distinctions have been made (steps, industries, actors), another factor remains to be considered: the dominant scientific discipline of the study. There are many scientific disciplines liable to provide an interesting point of view on learning from accidents, including, as seen above, engineering, cognitive sciences, ergonomics, sociology, (safety) management science and political science. Each discipline entails a specific interpretative angle. A political scientist therefore looks differently into the issue of learning from accidents than a cognitive scientist. Although complementary in some ways, all these disciplines nevertheless remain interested in different aspects and types of data and phenomena, and approach the learning process differently, at a distinctive time and place.

2.2.3. Countries and intensity of events considered

Two other dimensions are worth distinguishing. One is the nation in which the learning takes place and the other is the intensity of the events studied. In the selected papers, it appears clearly that learning from accidents takes place in the specific contexts of various countries, where history and institutions have shaped the way accidents are dealt with. The history of boards of investigation throughout the world (Stoop and Roed-Larsen, 2011) is a good indication of the national context influencing certain types of learning resources and constraints. These boards have been designed differently, including in their relationships with states and industry. The specific relationship between justice and investigations is another of these features shaping the learning context. Depending on the country, a study on learning from accidents can be expected to meet specifics not found in other countries. Generalising across national boundaries without caution could therefore be misleading.

Finally, the intensity of the event indicates whether a study focuses on weak signals and near misses, incidents and accidents, or disasters, as indicated in the paper summaries. Learning situations analysed at the weak signal or near miss stage are obviously of a different kind to disaster situations. First, detecting a disaster is not a major problem, while detecting and selecting relevant weak signals and near misses is far more of a challenge. Second, the two cases differ in the number of actors involved when it comes to investigation. Third, the extent of damages and the liabilities involved in a disaster create constraints but also opportunities for investigation, for instance the resources available as well as access to data, that are not available following a 'simple' near miss or incident.

2.2.4. An organising framework

Fig. 1 puts these different aspects together. Learning from accidents can be studied in so many ways that, as a result, one finds a very wide range of articles and books on different aspects of this process. The purpose of this framework is to distinguish key dimensions to help order the diversity of papers. It is not an explanatory model of any sort; it visualises the many different sides from which learning from accidents can be investigated. But it is one possible step towards a more integrated approach.

In order to understand learning from accidents from the widest possible angle in a particular industry, in a specific country, one would need to consider together many different actors and many different steps, and would also need to combine various disciplinary scientific backgrounds. No research has ever been done on such a scale. The amount of qualitative and quantitative empirical data, the number of researchers and the cooperation implied across disciplinary backgrounds, but also the methodological issues involved, would be great challenges for the implementation of such a program. A comparison between countries and industries with this scope seems beyond any current research funding.

If one wished to understand the political aspects of learning from experience, the range of investigation would also be immense, across industries, steps and events, etc. Does this mean that it is not possible to obtain such a perspective? If one still wants to step back, what is a good strategy to follow? Would a thorough review of what is available be the answer? A thorough review of the field would consist in identifying and selecting a very large number of papers and books and mapping them onto the framework in order to indicate, for instance, areas (industry, steps, disciplines, actors, etc.) which appear well covered and, conversely, those which appear less well covered. What we know and what remains to be known could then be more visible, although there would always be the risk of missing available studies. Table 2 illustrates this possibility, using some of the papers introduced in Section 2.1. Such a review would be one strategy.

Within the limits of this paper, the approach is more modest. I propose to associate several works that are positioned differently within the suggested framework (Fig. 1) but that, once associated, contribute to a bigger picture in an informative way. As indicated, a single empirical study on learning from accidents covering and combining all steps, actors, scientific disciplines and intensities of events for an industry in a specific nation would be a challenging task, but one can try to artificially create a broader picture than an independent and local study by combining isolated and scattered pieces of research. This is what is now attempted, on a still limited scale but hopefully convincingly enough to illustrate the relevance of this strategy.

The selection of these papers is based on two principles. First, a paper must contribute empirical and descriptive data from a specific disciplinary angle (e.g. psychology, sociology). This first requirement is likely, though not guaranteed, to restrict the empirical studies to a specific field (e.g. aviation, nuclear, etc.). This is deliberately favoured here, as qualitative research brings a level of detail in the description that is necessary for the purpose of this paper. It will also introduce different concepts from different disciplines and show the diversity of possible approaches to the topic of learning from accidents. Second, studies focusing on specific steps of learning, i.e. reporting, selection, investigation, dissemination and prevention, are favoured. The quality of description is likely to be more interesting when a study focuses on one particular step. This does not rule out the importance of conducting quantitative and multi-step studies across high-risk industries, but the emphasis here is on qualitative studies.


Fig. 1. A broad framework. [Figure: the six dimensions of the framework, arranged around 'learning from accidents':
– Steps: reporting, selection, investigation, dissemination, prevention.
– Actors: state (boards, control authorities), professional associations (i.e. investigators' associations), companies (corporate, sites), scholars, consultants, media.
– Disciplines: engineering sciences, cognitive sciences, ergonomics, sociology, management science, political science.
– Intensity of event: weak signal, near miss, incident, accident, disaster.
– Industry: transport (aviation, railway, road, maritime, pipelines), aerospace, process industry (chemical, oil, etc.), nuclear industry, medical field.
– Countries/nations: Asia (China, Japan, etc.), Europe (France, UK, etc.), North America (Canada, US), South America (Argentina, Brazil, etc.), etc.]

Table 2. Mapping articles and books within the framework.

Paper | Actors targeted | Step(s) studied | Industry | Intensity | Discipline | Nation/country
Birkland (2009) | Mainly state related actors (politicians, groups of interests) | More oriented towards prevention | Different examples of disasters (or crises) from diverse industries | Disaster | Political science | US
Lundberg et al. (2009) | State related actors, professional investigators | More interested in investigation | Rail, nuclear | Incident or accident requiring investigation | (Safety) management | Sweden
Okstad et al. (2011) | Both private and state | All steps considered, but main focus on investigation | Petroleum | Mainly accidents | (Safety) management | Norway
Rollenhagen (2011) | Investigators of companies | Focused on investigation | Nuclear | Incidents | (Safety) management | Sweden
Catino (2008) | Scientists, organisation members | Focused on investigating | Not specified | Not specified | Sociology | Not specified
Tamuz (2001) | Private companies and state agencies | Reporting | Aviation | Near miss | Political science | US
Macrae (2007, 2009) | Risk managers of private aviation company | More concerned with selection and dissemination | Aviation | Near miss, weak signals | Psychology, social psychology | UK/Australia
Watson (2004) | Investigators | Investigation, prevention | Railway | Incidents and accidents | (Safety) management | UK
Chikudate (2009) | Train drivers and crew | Reporting | Railway | Near misses, incidents | Anthropo-sociology | Japan
Boin and Schulman (2009) | Columbia accident investigation board members | Investigation | Aerospace | Disaster | Sociology, political science | US
Etc. | / | / | / | / | / | /
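To make the mapping strategy concrete, the framework of Fig. 1 and the mapping of Table 2 can be read as a small data model. The sketch below is purely illustrative and not from the paper: it assumes Python, the field names are invented, and only two of the studies above are encoded, to show how a larger review could mechanically expose which areas of the framework remain uncovered.

```python
# Hypothetical sketch (not from the paper): encoding the Fig. 1 dimensions
# as a small data model, in the spirit of Table 2. Field names and the two
# encoded studies are illustrative; a full review would add many more entries.
from dataclasses import dataclass
from typing import List

STEPS = ["reporting", "selection", "investigation", "dissemination", "prevention"]

@dataclass
class Study:
    paper: str
    actors: str
    steps: List[str]   # subset of STEPS
    industry: str
    intensity: str     # weak signal, near miss, incident, accident, disaster
    discipline: str
    country: str

studies = [
    Study("Tamuz (2001)", "private companies and state agencies",
          ["reporting"], "aviation", "near miss", "political science", "US"),
    Study("Macrae (2007, 2009)", "risk managers of a private airline",
          ["selection", "dissemination"], "aviation", "near miss",
          "psychology", "UK/Australia"),
]

# A mechanical way to expose coverage gaps: which steps of the learning
# process have no study in this (toy) sample?
covered = {step for s in studies for step in s.steps}
print("Uncovered steps:", [s for s in STEPS if s not in covered])
```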



In each of Sections 3–6, an introduction to the theme will be provided before introducing the study of interest. Each section will end with a link to a recent and related study, in order to show the present relevance of the theme. This approach will help to put 'flesh' on Fressoz's otherwise representative and interesting but generic macro-statement that 'we don't learn from accidents'. For this purpose, psychological, sociological and political views on learning from accidents are introduced in turn in the next four sections. Based on the key messages contained in these recent contributions from these three disciplines, and supported by other current studies introduced below, they provide interesting insights into the field of learning from accidents for different steps (reporting, selection, investigation, prevention). In total, four 'lessons about learning' are extracted. They are defined as 'lessons about learning' in this paper because they provide empirical data to describe how learning operates at different steps and levels.

3. Political insight into reporting

Any learning system relies first on the ability to gather data about its dynamic status. Since Turner's model (1978), based on the hypothesis that accidents are always preceded by signals during an 'incubation period', much credit has been attributed to this assumption (most notably by Vaughan, 1996). Aviation is one domain in which many resources have been devoted to dedicated 'reporting systems' (Pidgeon and O'Leary, 2000). Apart from aviation, the fact that almost every organisation in different high risk industries, as well as each regulatory agency, implements such reporting systems (with different levels of maturity) demonstrates the importance granted to making a practical attempt at detecting early warnings of potentially oncoming disasters. Such systems are intended to provide information on pending issues lying dormant within operating high risk socio-technical systems. Yet the process of reporting has proved to be a task in which at least two different interests meet: regulatory enforcement by state agencies and safety improvement by the industry and companies operating high risk technologies (Pidgeon and O'Leary, 2000; Tamuz, 2001).

Although these conflicts of interest constitute one central problem to be understood and overcome, Pidgeon and O'Leary (2000, p. 24) observed that 'The question of how a reporting or monitoring system can be successfully embedded within the local social and political contexts (sometimes both organisational and national) where it will be expected to operate is invariably not posed.' Providing an initial answer to this lack of studies, these authors indicated how, in the UK, British Airways reported through three different systems. What were then described as important features for the practice of reporting were a certain number of prerequisite principles, including 'establishing and maintaining a level of confidence and trust between reporters and evaluators' (Pidgeon and O'Leary, 2000, p. 26). But that was not all.

They also stressed the importance of 'separation of the primary goal of organisational learning from the use of collected information for instigating sanctions against individuals' (Pidgeon and O'Leary, 2000, p. 26), referring to the work of Tamuz (1994). Tamuz indeed showed that between 1968 and 1971, when immunity was offered to pilots for reporting breaches of the rules to the federal authority in the US, a very significant increase in declared infringements was observed. This natural experiment demonstrated quite clearly how fear of prosecution constituted a barrier to the reporting of events by pilots, and thus to learning about their extent. Separation between the two goals, learning and sanctioning, is therefore required. This concern with attempting to create the proper conditions for learning, balancing on the one hand the need to learn without fear of sanction (or criminalisation) and on the other hand the need to keep the possibility of condemning improper behaviour in order to ensure safety, was later coined as creating a 'just culture' (Dekker, 2007).

In the opinion of Tamuz (2001), who studied how US aviation was organised to report near misses, one therefore needs to consider and distinguish distinct levels of learning capabilities, including the federal, industry and company levels. He describes four different reporting channels in the US, comprising:

– two at federal level (FAA, Federal Aviation Administration): NMAC (Near Mid Air Collision reporting) and ATC (Air Traffic Control) Center Monitoring;
– one at industry level: ASRS (Aviation Safety Reporting System);
– one at company level: ASAP (Air Safety Action Partnership).

He first observes, supporting Pidgeon and O'Leary's claim, that one purpose of these separate reporting layers is to 'maintain separate safety reporting systems – to ensure regulatory enforcement and others to promote organisational learning' (Tamuz, 2001, p. 298). He then adds a second key feature that this nested design of channels is expected to offer: the ability to 'support diverse, multiple systems for monitoring safety'. What is particularly emphasised are the many tradeoffs these systems imply and that justify their diversity. While each of these channels has its virtues, none of them can satisfy several needs at the same time. For instance, 'Because ASAP operates within an airline, it has the capacity to take corrective action while maintaining pilot confidentiality. Yet, this advantage also simultaneously limits the ASAP. Unlike the national ASRS system described earlier, the airline-based ASAP does not formally exchange information with other airlines.' (Tamuz, 2001, p. 298). A variety of reporting systems would be required for a high risk domain to offer an appropriate level of learning.
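The trade-off Tamuz describes can be made concrete with a small data model. This is a purely illustrative sketch, not from the paper or from Tamuz (2001): the Python class, its field names and the capability values assigned to each channel are assumptions chosen to express the quoted point that no single channel can both act correctively inside an airline and formally share information across airlines.

```python
# Hypothetical sketch: modelling the nested US aviation reporting channels
# and the capability trade-off quoted above. All attribute values below are
# illustrative assumptions, not sourced characterisations of these systems.
from dataclasses import dataclass

@dataclass
class ReportingChannel:
    name: str
    level: str                   # "federal", "industry" or "company"
    purpose: str                 # regulatory enforcement vs organisational learning
    pilot_confidentiality: bool
    corrective_action: bool      # can act directly within a single airline
    cross_airline_sharing: bool  # formally exchanges information between airlines

channels = [
    ReportingChannel("NMAC", "federal", "regulatory enforcement", False, False, True),
    ReportingChannel("ATC Center Monitoring", "federal", "regulatory enforcement", False, False, True),
    ReportingChannel("ASRS", "industry", "organisational learning", True, False, True),
    ReportingChannel("ASAP", "company", "organisational learning", True, True, False),
]

# The quoted trade-off in miniature: no channel combines in-airline corrective
# action with formal cross-airline exchange, hence the variety of systems.
assert not any(c.corrective_action and c.cross_airline_sharing for c in channels)
```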

First selected lesson about learning from accidents

The first lesson is that reporting needs to be understood as a practice embedded in a socio-legal-political context, requiring strategies to cope with conflicts of interest between different actors, including state-related agencies, industry associations, private companies or the judicial system. A diversity of reporting systems can result from this context, but this diversity can also increase the ability to learn through the variety of channels created.

4. A psychological/cognitive view of the selection step

In the history of learning from accidents, psychological contributions are very much associated with the study of 'errors'. Many accidents are linked with 'errors' of front line operators in high risk systems, which rely increasingly on automation and human–machine interfaces (pilots in aviation, control room operators in the nuclear or process industry, drivers in railways, etc.). The importance of providing elaborate views of cognition in the context of designing interfaces, training operators but also investigating accidents has therefore been recognised as an important area for research, and has been developed over many years since the early contributions of the 1980s (i.e. Reason and Mycielska, 1982; Rasmussen, 1986; Norman, 1988). More recently, accident investigation books or manuals have been dedicated to principles based on the latest cognitive theories of 'errors' (Dekker, 2002b; Strauch, 2003). For Dekker, one of the key messages is that pilots, control room operators, ship captains, etc. interact very actively with their environment. Knowing this, it is only from this angle that a retrospective analysis of 'errors' is relevant. In the process of interacting with their environment, individuals play a constructive role, using much of their intellectual resources and past experiences to enact the world they live in. Weick (1969) was an early promoter of this constructivist (or pragmatist) idea, later extended and labelled into a sense-making view of cognition (Weick, 1995). This author has also promoted the study of normal operation, and not only in the context of accidents.

Such a research standpoint on normal operations has also been taken recently by Macrae (2007, 2009), in a psychologically oriented analysis of learning from accidents in the aviation industry. Yet despite the psychological angle, this work is not directed towards an understanding of the 'errors' of pilots (or other types of operators at the 'sharp end') in support of incident investigation, but rather towards the selection of events (near misses or incidents) to be further investigated by airlines. Once reported and recorded, these data need to be analysed, compared and prioritised for further action (such as investigation). The likelihood of capturing a signal indicating a coming disaster partly depends on the quality of this selection process. However, vast amounts of data can be collected, and it is a major problem for high risk companies to ensure in foresight that the right signals will be picked up among this large quantity of data. As indicated previously, in aviation, data are available thanks to reporting systems into which pilots or other employees of the airlines (maintenance engineers, cabin and ground crew) log any near miss or problem met during operation.

Although certainly a matter of proportion between the number of people dedicated to this task and the number of events reported, the quality of this process also relies on the ability to discriminate signals. This is more than a quantitative problem: 'Identifying risk was largely an interpretive rather than a statistical process' (Macrae, 2009, p. 106). Matrices combining severity and probability help risk managers to locate and appreciate events in this respect. Nevertheless, these matrices do not really provide information on the cognitive processes behind selection. Although an important tool, they represent the formal side of selection. To go further, one needs to observe the practices of the employees performing this task. For Macrae, 'Studying risk management practices highlights the intuitive, generative, creative – and fallible – sociocognitive processes that seem essential to interpreting and making sense of ambiguous risk events in organisations' (Macrae, 2009, p. 106).
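For readers unfamiliar with such matrices, the sketch below shows the 'formal side' of selection in miniature. It is purely illustrative and not taken from Macrae's study: the 1–5 scales, the multiplicative score and the thresholds are assumptions of ours. Its very simplicity illustrates the paper's point, since the severity and probability ratings fed into it are themselves products of the interpretive work described next.

```python
# Hypothetical severity/probability matrix (assumed 1-5 scales and thresholds):
# the formal tool risk managers use to locate reported events.
def matrix_priority(severity: int, probability: int) -> str:
    """Map severity and probability ratings (1-5 each) to a review priority."""
    if not (1 <= severity <= 5 and 1 <= probability <= 5):
        raise ValueError("ratings must be between 1 and 5")
    score = severity * probability
    if score >= 15:
        return "investigate"
    if score >= 6:
        return "monitor"
    return "record only"

# A near miss rated severity 4 but probability 2 is merely monitored; the
# judgement embedded in those two numbers is where the interpretive work lies.
print(matrix_priority(4, 2))  # -> "monitor"
```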

Thanks to a qualitative approach based on observations and interviews, this author indicates that there are at least four selective cognitive strategies. 'Making patterns' consists in identifying events that, once combined, define a trend to be investigated and potentially corrected. 'Drawing connections' consists in linking specific events with similar past major air accidents. 'Recognising novelty' implies a subtle attention to problems not experienced before, indicating a lack of awareness about specific types of operational failure. 'Sensing discrepancy' describes the analogical process of comparing reported events with expectations about normal operations, about the way 'things should work'. How intuition, experience and imagination shape this process is stressed by the author: 'as previously indicated, warnings are not so much there for the taking but must be actively constructed and uncovered through vigilant interpretive work' (Macrae, 2009, p. 115).

It is only when one reaches this level of description of the cognitive processes involved that one realises the very active nature of selection in learning from accidents. One can easily imagine how similar mental activities take place in many other industries, whether or not as many resources as in aviation are dedicated to them. Yet one can also better understand how fallible this selective process can be, as 'the risk of an event was not an inherent quality of the event itself, but rather was dependent on the assumptions, information, knowledge and, ultimately, imagination of the risk managers who were analysing it.' (Macrae, 2009, p. 106). Of course, the next stage is for the selected events to be prioritised and solved within the organisation, steps requiring individuals to elaborate criteria, to disseminate data within the organisation, to convince and then to join together other actors of the organisation in problem solving networks. This also implies a 'practical ability to frame and pose appropriate questions to shape the enquiry of others' (Macrae, 2009, p. 114).3

Second selected lesson about learning from accidents

Beyond reporting, selecting signals about potential accidents relies on cognitive processes involving different strategies, including 'making patterns', 'drawing connections', 'recognising novelty' and 'sensing discrepancy'. One should not underestimate the level of expertise required for this task, including often unacknowledged subtle creative and constructive features such as intuition and imagination, as well as social skills. This helps to provide a more accurate view of the process of learning from accidents than formal presentations often do.

5. Sociological insight into the investigation step

Investigating accidents nowadays relies increasingly on professional skills. A few decades ago, methods and models supporting the investigation of accidents were not debated as much as they are today. One finds only a few authors in the 1960s with a broad interest in accident theory, challenging the prospect of a 'science of accident research' (Haddon et al., 1964). Later, in the 1970s, Benner, a professional investigator at the US National Transportation Safety Board (NTSB), reviewed investigation models and methods (Benner, 1975, 1977). For this author, 'the most persuasive argument for developing an accident theory for SASI4 members is that assumptions, principles and rules of procedure are nowhere systematically organised, and that generally accepted rules of procedure for analysing, predicting or explaining the accident phenomenon are not available to the accident investigator' (Benner, 1977, p. 18). Looking into history, he noted the previous contributions of Heinrich, Haddon but also Surry, and concluded that 'the critical point is that an accident is a process involving interacting elements and certain necessary or sufficient conditions' (Benner, 1977, p. 19). It is in the 1970s that several other authors also contributed decisively to the development of structuring thoughts in this area, in management with MORT (Management Oversight and Risk Tree, Johnson, 1973) and in the sociology of organisations with the 'man-made disaster' model (Turner, 1978).

Much has been written since on this topic, and a wide selection of models is now available to investigators depending on their scope (micro, meso, macro) and purpose, whether normative or descriptive (Le Coze, 2008). The issue is certainly one of selection according to the type of accident under investigation (its intensity, novelty, complexity, etc.) and the resources available. It is in this spirit that some have argued against the ‘hegemony’ or misuse of Reason’s model (Shorrock et al., 2004). Reason’s model has been a ‘success story’ thanks to its ability to visually incorporate principles of defence in depth coupled with human and organisational factors. A history of the evolution of the model has been provided, partly as an answer to criticism (Reason et al., 2006).

3 The Columbia accident is now the classic example of this challenge of moving from an identified problem to a collectively acknowledged and accordingly treated one. Many interpretations of this case are now available, from the CAIB (2003) report to further analysis in Starbuck and Farjoun (2005).

4 ‘SASI’ stands for Society of Air Safety Investigators.

One of the main comments by Shorrock et al. (2004) was that many accidents nevertheless involve ‘errors’ at the sharp end that need to be thoroughly studied and understood, as they represent areas for improvement (for example in design or training) but also for further research, for instance into the interplay of emotion and cognition. In this respect, the problem for these authors is that Reason’s model directs findings towards latent failures and away from active failures, in a move aimed at avoiding the bias of reducing accidents to ‘human errors’. While this orientation may be worthwhile for major accidents and as a generic principle, many incidents nevertheless require analysis focused on sharp end behaviour. According to the authors, many professional investigators (in national investigation boards within the transport domain, such as aviation or railway) focus their efforts on finding out about latent failures at management levels. For the authors this is clearly to be associated with the influence of the ‘Swiss Cheese’ philosophy, leaving active failures (sharp end) insufficiently explained in the background: ‘the point is that the inquiry attempted to force this accident into the Reason model when it was probably inappropriate given the evidence’ (Shorrock et al., 2004, p. 142).

The limits of models, such as Reason’s model, are well understood. Models are only as good as their users. In this line of thinking, it might even be a mistake to systematically distinguish models from their users: a model does not have any value if its full implications are not understood. While Reason’s model is very good at identifying a ‘systemic’ nature of accidents, indicating the importance of considering several ‘layers’ of issues, it remains underspecified. Investigators are left with their expertise to qualify the ‘holes’ in the slices. As the model can at times cover several dimensions of an entire system, it requires knowledge from many different disciplines to interpret data gathered at several strata of complex socio-technical systems. This requirement is more likely to be met when several specialists interact on the same accident investigation (Svenson et al., 1999). As indicated, Shorrock et al. (2004) regret that in some accidents investigated with Reason’s model the importance of cognitive issues at the sharp end was overlooked in favour of managerial and organisational (blunt end) issues. It is argued that this complaint can also be partly interpreted as an alert against the misuse of (accident) models in general.

A similar situation has been pinpointed recently by Boin and colleagues (Boin and Schulman, 2009; Boin and Fishbacher-Smith, 2011), involving this time the Columbia Accident Investigation Board report (CAIB, 2003) and its sociological interpretation of NASA’s ‘broken safety culture’. For these authors, the use of high reliability theory by the CAIB was inappropriate, as ‘high reliability theory thus stands not as a theory of causation regarding high reliability but rather as a careful description of a special set of organisations’ (Boin and Schulman, 2009, p. 1053). In this respect, drawing upon a high reliability ‘model’ to look back into an accident can mean falling into the trap of the all too well known ‘hindsight bias’. The authors recall that the debate between normal accident theory (Perrow, 1984/1999) and high reliability organisations (Roberts, 1993) has never really been settled. ‘If organisation theorists agree on anything, it is that “complete prevention [of organisational disasters] is impossible”. But that is where the agreement ends’ (Boin and Fishbacher-Smith, 2011, p. 81).

Considering NASA as an organisation failing to meet high reliability ‘expectations’ in the light of the loss of Columbia is an inappropriate extension of this field of research into the making of history. In particular, Boin and colleagues disagree with matching NASA against the organisations which were analysed and then qualified as highly reliable: ‘A research and development organisation such as NASA cannot develop HRO characteristics because of the political environment in which it exists’ (Boin and Schulman, 2009, p. 1056). As a consequence, applying this ‘label’ fails to highlight the fact that normal accident theory would apply as well. The problem is that these investigative practices and associated reports are not neutral, and the argument of the authors is that ‘using unproven theories to arrive at absolute verdicts may not be without adverse consequences. In a politicised environment, official assessments of crisis management performance – delivered by prominent inquiries – carry great weight. Their conclusions and prescriptions become milestones for progress. Whether they are sound, feasible and without unintended consequences usually does not concern public inquiry committees and the politicians that heed their advice’ (Boin and Fishbacher-Smith, 2011, p. 86).

Here again, one finds a warning, this time at the socio-political level of interpretation of major accidents, with direct implications for the analysis of smaller scale events. It is not the use of Reason’s model that is scrutinised this time but the importation of social science research and debates into the making of history, without identifying the limits of the translation process from one case to another. This point is extremely important and can be associated, for any investigation, with the sociological bias of ‘forcing fit’ (Vaughan, 2004). This bias is the temptation to force data into available frameworks (it can also be described as the syndrome of the ‘data that should fit the model’, as introduced by Glaser and Strauss, 1967), while failing to remain aware of the diversity and specificity of real life situations. This ‘forcing fit’ tendency is more likely to influence reports when time constraints are strong, but also when expertise in the use of social science methodologies and models is low. These considerations have strong practical value, as some have already started to look into the practices of investigators in relation to the models they use (Rollenhagen, 2011).

The interest of comparing the criticisms of these two models is, first, that they are very popular and have been highly influential in the field. Second, it stresses that investigating accidents will always be influenced by the available theories provided by scientists from different disciplines (e.g. psychology, management, sociology). Although this second point might sound obvious to some readers, the problem today is that improvements require investigations to probe ‘organisational factors’, a much-needed approach that requires a social science background. If well-resourced professional agencies, as well as in-depth investigative commissions following major events, fail to use models adequately, as indicated by the selected authors, the same pattern is likely to be observed in private companies (as observed in fieldwork). Models or research traditions are often used in ways that stray from the intention of the scientists, as stressed by Bourrier about the high reliability organisation research tradition: ‘The HRO literature has continued to grow, evolving from a research topic to a powerful marketing label (...) This was never the intention of the Berkeley researchers’ (Bourrier, 2011, p. 12). The field of investigation does not escape this. The challenge for investigations today is probably to include social science insights in a sound manner.

Third selected lesson about learning from accidents

Investigating accidents relies on the use of models which require an appropriate level of expertise in order to be applied adequately. More specifically, at the sociological level of interpretation, one should not underestimate the impact of unduly extending or translating high reliability organisation studies, or any other social science models, into investigations. Recommendations derived from this translation could indeed run counter to their intended purposes. This shows the challenge of adequately investigating accidents and offers a cautious perspective on the possibility of in-depth analysis.


6. A political view of prevention

Professional investigation agencies, such as the US National Transportation Safety Board (NTSB), have long known about the importance of implementing recommendations in the process of learning from accidents. ‘Investigating accidents or conducting studies of transportation safety problems, in themselves, do not improve safety or prevent accidents from recurring, nor the issuance of safety recommendations (...) it is only by the implementation of the recommendation, that real change takes place’ (Sweedler, 1995, p. 306).

Most of their recommendations are directed towards state agencies. ‘Over the years, the NTSB has directed recommendations to more than 1250 addressees. The number one recipient of its almost 9000 recommendations, as would be expected, is the US Department of Transportation (DOT), and its modal administrations, such as the Federal Aviation Administration (FAA), the Federal Railroad Administration, the Federal Highway Administration or the Coast Guard’ (Sweedler, 1995, p. 296).

Risk regulation regimes (Hood et al., 1999) contribute to the level of safety of high risk industries, and ‘risk regulation’ has in recent years become a growing field of empirical research (Hutter, 2006; Borraz, 2009). For example, Lindøe et al. (2011) have shown that the quality of responses to accidents differs across industrial sectors, depending mainly on the regulatory regimes in place. These authors indicate that ‘enforced regulations and a capacity for regulators to implement sanctions, and the presence of well-organised and competent industries as counterparts to the regulators, make a substantial contribution to the reduction of incidents and accidents’ (Lindøe et al., 2011, p. 96).

This macro-perspective on learning from accidents, linking regulatory and institutional contexts with the quality of accident prevention, was earlier exploited by Chourlaton (2001), who contrasted aviation and oil spill prevention learning systems. It is also consistent with Fressoz’s historical view of the impact of regulatory regimes on gas networks and plants in France and England in the 19th century: the more regulated French network created a safer system than the English one (Fressoz, 2012). It therefore comes as no surprise that recommendations after accidents target changes in public policy, as confirmed by the NTSB’s experience, and studies have demonstrated the importance of this learning configuration.

The specific independent status of the NTSB within its institutional context is presented as playing a key role in the effectiveness of learning from accidents. ‘To guarantee its independence, the NTSB was directed to report to the Congress annually (...) Congress concluded that formation of this independent body would also eliminate distracting, partisan or propriety influences that are often present when accident causes must be determined by the same body that is responsible for operations, rulemaking, surveillance or regulations’ (Baxter, 1995, p. 272).

It is by combining investigative capabilities and independence that the NTSB has contributed to enhancing safety and is recognised today as a landmark agency in transport safety (Stoop and Roed-Larsen, 2011). Some descriptions of limits or weaknesses in the system are nevertheless found in the literature: ‘The fruits of many NTSB investigations of airplane crashes, including precursors to ValuJet 592, were largely ignored for years by the Federal Aviation Administration’ (in Birkland, 2009, p. 149). The implementation of real changes or improvements following incidents, accidents or disasters is in fact not a straightforward process, even when an appropriate institutional design is in place.

Political scientists have shown interest in more fully understanding how accidents, disasters (and more generally any kind of crisis) translate (or not) into ‘lessons learnt’ (namely adequate changes) at political and policy levels, where a ‘double loop’ type of learning could occur (cf. Birkland, 1998; Busenberg, 2001; De Vries, 2004; Boin et al., 2009; Birkland, 2009). Birkland has specialised in what he calls ‘focusing events’. He is interested in the way in which they participate in shaping political agendas and changes in public policies.

This idea can be illustrated simply when he states that ‘many observers considered the 1979 Three Mile Island (TMI) nuclear accident to be very serious. However, its influence on the agenda may not be as great as might be expected, because its harms – if any – were unclear and relatively hard to detect and understand. Almost 20 years after TMI, debate persists over the extent of any harms done to people from possible radiation leaks’ (Birkland, 1998, p. 55). The value of this quote is to show that the ‘mechanisms’ behind effective change towards greater safety, through more appropriate state intervention and public policies, are far from automatic following major events, or even investigations. Showing this is precisely the interest of these political studies.

It always takes certain types of interactions between ‘interest groups, government leaders, policy entrepreneurs, the news media or members of the public to identify new problems, or to pay greater attention to existing but dormant problems, potentially leading to a search for solutions in the wake of apparent policy failure’ (Birkland, 1998, p. 55). This dynamic is closer to the ‘garbage can’ model of decision making (March and Olsen, 1972) than to a clear relationship between cause (an investigation) and effect (implemented recommendations). It is rather a social construction (e.g. Gephart, 1984) involving the potential destabilisation of networks of power, which tend to resist. Birkland considers in that respect that learning at the policy level after a major accident (or focusing event) depends on whether or not there is increased media attention, group mobilisation, discussion of ideas and adoption of new policies (Birkland, 2009, Fig. 1). A classification of five patterns is identified, linking in various ways an event, its investigation and real, effective changes:

1. An event happens, and then change happens with little or no effort devoted to learning from the event (...).
2. An event happens, and an investigation is undertaken that is agency serving, is incomplete, or states the obvious, without any evidence of a serious attempt to learn (...).
3. An event happens, and an investigation is initiated, which leads to policy change, but that policy change cannot be linked to the investigation, or to policy changes recommended in the post-event investigation (...).
4. An event happens, and a thorough and careful investigation is initiated, but policy change does not result. This may be because of cost, bureaucratic delay, political opposition, or any of the usual reasons for political and policy stasis (...).
5. An event happens, and a thorough and careful investigation is initiated, which leads to policy change as a result of careful investigation, assessment and policy design.

Birkland is rather pessimistic about the likelihood of the fifth pattern; the four previous ones, which represent what he calls ‘fantasy learning’, are the most likely to happen. The fifth is rare. Birkland considers the Columbia investigation an example of proper learning, although, given what has been said in the previous section, one is now in a position to challenge even this appreciation. Birkland’s claim certainly deserves to be tested for high risk systems. How much really happens after reports are issued on major accidents, and on much smaller scale incidents? This is what Hovden et al. (2011), without specific reference to Birkland’s work, have started to look into empirically: how, in different transport domains in Norway (aviation, shipping, rail), have accident investigations concretely contributed (or not) to making systems safer, through appropriate changes at different strata of socio-technical systems, based on investigation outcomes? They believe it is too early to conclude, given the data collected so far, but their assessment of the ability of investigative reports to contribute to changes following major accidents is rather positive.

Fourth selected lesson about learning from accidents

Learning does not take place systematically after an event is investigated, whether an incident or even a major accident. It is only when recommendations are implemented that results are obtained, and the quality of this process is mostly related to the type of regulatory regime in place and its ability to adapt and transform public policies. The independence of specialised investigative bodies seems to be a sound design feature for learning, as it means they are not a ‘first degree player’, whether operating high risk systems or regulating them. However, empirical analysis also demonstrates that learning depends on the different combinations of interest groups, government leaders or media before and after events, with a risk of ‘fantasy learning’.

7. Discussion

This paper started with a question: ‘What do we know today about the limits of this activity that could help to shed light on these recent disasters?’ It recalled Sagan’s opinion that one should not expect too much from learning (Sagan, 1993), and noted that one is now faced with a wide range of approaches, interests and outcomes on the topic of learning from accidents, and that no tentative overview of this diversity could be found, with the exception of a few, so far limited, attempts (Lindberg et al., 2010) that this article wished to pursue. Fig. 1 illustrates this patchy nature by indicating the many dimensions to be considered (actors, scientific disciplines, industries, nations, intensity of events) beyond a classic presentation of learning steps (reporting, selection, investigation, diffusion, prevention).

This makes it difficult to step back and consider the process as a whole. In the aftermath of a series of technological disasters in the past 10 years in different high risk industries, this situation does not help in understanding the many challenges faced in terms of learning from experience. The studies selected in the previous sections have offered empirical and theoretical insights into the drawbacks, limits and challenges of learning from accidents. This attempt to put together different inputs, mainly from psychology, sociology and political science, certainly remains limited but does help to:

– stand back to produce a bigger picture based on scattered empirical studies,
– provide elements to sensitise the interpretation of disasters from a learning from accidents perspective.

One of the aims of this paper was to produce a picture from fragmented studies in order to group different insights. In selecting works from diverse authors with backgrounds in psychology, sociology or political science, the idea was to overcome the lack of integrated empirical studies on learning from experience. The outcome remains rather artificial but does clearly indicate some of the many challenges faced in terms of learning from experience. The diversity of studies selected is complementary and covers a wide range of issues, for different steps. It demonstrates that reporting is part of a socio-legal-political system that constrains this activity, that selecting signals relies on complex cognitive processes by actors within high-risk organisations, that investigating in depth requires the mastery of social science models, and that prevention depends on the possibility of challenging networks of power to provide sustained effects. Of course, this does not bring ‘the’ bigger picture; it helps to bring ‘a’ bigger picture.

In the aftermath of a series of technological disasters in diverse industries, one can see how the psychological, social and political insights associated above and translated into lessons can guide inquiries and understanding. A series of questions can be derived from them to sensitise the analysis of disasters from a learning from accidents viewpoint. For instance, did conflicts of interest between different actors in the industry lead to difficulties in reporting near misses or incidents? Were signals reported but not selected because of the cognitive processes involved, or were they reported and selected but not investigated properly because of a failure to adequately mobilise models? Was there an independent agency in a position to investigate and able to challenge both industry and the state? Were previous investigations before this accident examples of ‘fantasy learning’? Were the dynamics between the media, interest groups, government leaders or industry members unable to destabilise networks of power in order to change the current situation? Clearly, the answers to these questions will vary according to the systems in place in the high risk industries concerned. One does not expect the exact same problems in learning in the aviation, nuclear and offshore industries, as these industries do not have the same socio-technical background to learning, or the same types of accidents.

8. Conclusion

The new wave of technological disasters of the past 10 years in many different high risk industries requires a moment of reflexivity. This paper provided this reflexivity by first acknowledging the currently scattered status of learning from accidents, second identifying the diversity of studies in the field and organising it with the help of a framework, third selecting descriptive studies for different steps of learning from accidents, and finally grouping relevant lessons from these different studies to provide a bigger picture. One question, as indicated in the introduction, remains: is there any chance, or any need – in the near or more distant future – for the field of ‘learning from accidents’ to exist as a separate area of inquiry, as a ‘discipline’, or will it remain studied from many different angles without a centralised ‘paradigm’ structuring the multidimensional nature of this topic?

This paper, by selecting studies offering specific points of view on different steps, might have overcome some of the problems met in a still fragmented field of research. Although this attempt clearly remains limited, it demonstrates the feasibility of a wider perspective, justified by the context of several recent technological disasters. This paper should also stimulate ideas. Future research in this field could consider a range of empirical and theoretical strategies to help build a more integrated field. Although an ambitious perspective, one strategy could be to design cross-disciplinary studies of the different stages involved in learning from accidents, combining psychological, sociological, managerial and political specialists in empirical cases of varying scope, including various actors. Obviously, one would also like to see more empirical research for each individual step of learning, from different disciplines and in many industries. Another, more theoretical, option could be, as suggested, to provide a thorough review of the existing literature on the basis of Fig. 1, or part of it. These are some proposals that could represent stepping stones on the way towards a more structured and integrated field of learning from accidents.


References

Argyris, C., 1993. Knowledge for Action: A Guide to Overcoming Barriers to Organisation Change. Jossey Bass Publishers, San Francisco.
Baxter, T., 1995. Independent investigation of transportation accidents. Safety Science 19, 271–278.
Beck, U., 1992. Risk Society: Towards a New Modernity. Sage, New Delhi (translated from the German Risikogesellschaft, published in 1986).
Beck, U., 2011. C’est le Mythe du Progrès et de la Sécurité Qui est en Train de S’effondrer. Le Monde. <http://www.lemonde.fr/idees/article/2011/03/25/la-societe-du-risque-mondialise_1497769_3232.html> (It is the myth of progress and safety that is collapsing).
Benner, L., 1975. Accident Theory and Accident Investigation. Society of Air Safety Investigators Annual Seminar, Ottawa, Canada, 7–9 October.
Benner, L., 1977. Accident Theory and Accident Investigators. Hazard Prevention, March/April.
Birkland, T.A., 1998. Focusing events, mobilization, and agenda setting. Journal of Public Policy 18 (1), 53–74.
Birkland, T.A., 2009. Disasters, lessons learned, and fantasy documents. Journal of Contingencies and Crisis Management 17 (3).
Boin, A., Fishbacher-Smith, D., 2011. The importance of failure theories in assessing crisis management: the Columbia space shuttle disaster revisited. Policy and Society 30, 77–87.
Boin, A., Schulman, P., 2009. Assessing NASA’s safety culture: the limits and possibilities of high-reliability theory. Public Administration Review (11/12), 1050–1062.
Boin, A., ‘t Hart, P., McConnell, A., 2009. Crisis exploitation: political and policy impacts of framing contests. Journal of European Public Policy 16 (1), 81–106.
Borraz, O., 2009. Les Politiques du Risque. Presses de Sciences Po, Paris (The politics of risk).
Bourrier, M., 2011. The legacy of the high reliability organization project. Journal of Contingencies and Crisis Management 19 (1), 9–13.
Busenberg, G.J., 2001. Learning in organizations and public policy. Journal of Public Policy 21 (2), 173–189.
Catino, M., 2008. A review of literature: individual blame vs organizational function logics in accident analysis. Journal of Contingencies and Crisis Management 16 (1).
Chikudate, N., 2009. If human errors are assumed as crimes in a safety culture: a lifeworld analysis of a rail crash. Human Relations 62 (9), 1267–1287.
Chourlaton, R., 2001. Complex learning: organizational learning from disasters. Safety Science 39, 61–70.
De Vries, M.S., 2004. Framing crises: response patterns to explosions in fireworks factories. Administration and Society 36 (5), 594–614.
Dekker, S.W.A., 2002a. Reconstructing human contributions to accidents: the new view on error and performance. Journal of Safety Research 33, 371–385.
Dekker, S., 2002b. The Field Guide to Human Error Investigation. Ashgate, Aldershot, UK.
Dekker, S., 2007. Just Culture: Balancing Accountability and Safety. Ashgate.
Evan, M.W., Manion, M., 2002. Minding the Machines: Preventing Technological Disasters. Prentice Hall.
Fressoz, J.B., 2011. The Lessons of Disasters: A Historical Critique of Postmodern Optimism. <http://www.booksandideas.net/The-Lessons-of-Disasters.html>.
Fressoz, J.B., 2012. L’apocalypse Joyeuse: Une Histoire du Risque Technologique. Seuil.
Gephart, R.P., 1984. Making sense of organizationally based environmental disasters. Journal of Management 10, 205–225.
Glaser, B.G., Strauss, A.L., 1967. The Discovery of Grounded Theory: Strategies for Qualitative Research. Aldine de Gruyter.
Haddon Jr., W., Klein, D., Suchman, E., 1964. Toward a science of accident research. In: Haddon, W., Jr., Klein, D., Suchman, E. (Eds.), Accident Research: Methods and Approaches. Harper and Row, New York.
Hood, C., Rothstein, H., Baldwin, R., Rees, J., Spackman, M., 1999. Where risk society meets the regulatory state: exploring variations in risk regulation regimes. Risk Management 1 (1), 21–34.
Hopkins, A. (Ed.), 2009. Why study Australia’s air traffic control agency? In: Learning from High Reliability Organisations. CCH.
Hovden, J., Storseth, F., Tinmannsvik, R.K., 2011. Multilevel learning from accidents – case studies in transport. Safety Science 49, 98–105.
Hutter, B., 2006. Risk, regulation and management. In: Taylor-Gooby, P., Zinn, J.O. (Eds.), Risk in Social Science. Oxford University Press.
Johnson, W.G., 1973. The Management Oversight and Risk Tree – MORT, including systems developed by the Idaho Operations Office and Aerojet Nuclear Company. Available at <www.nri.eu.com>, the website of the Noordwijk Risk Initiative.
Jorgensen, K., 2011. A tool for safety officers investigating ‘simple’ accidents. Safety Science 49, 32–38.
Kuhn, T., 1962. The Structure of Scientific Revolutions. University of Chicago Press.
Latour, B., 1987. La Science en Action. La Découverte (Science in Action).
Latour, B., 2011. Cogitamus: Les Humanités Scientifiques. La Découverte (The Scientific Humanities).
Lawton, R., Ward, J.N., 2005. A system analysis of the Ladbroke Grove rail crash. Accident Analysis and Prevention 37, 235–244.
Le Coze, J.C., 2008. Organisations and disasters: from lessons learnt to theorising. Safety Science 46, 132–149.
Le Coze, J.C., 2010. Accident in a French dynamite factory: an example of organisational investigation. Safety Science 48, 80–90.
Le Coze, J.C., 2012. Towards a constructivist program in safety. Safety Science 50 (9), 1873–1887.
Lindberg, A.K., Hansson, S.O., Rollenhagen, C., 2010. Learning from accidents – what more do we need to know? Safety Science 48, 714–721.
Lindøe, P.H., Engen, O.A., Olsen, O.E., 2011. Responses to accidents in different industrial sectors. Safety Science 49, 90–97.
Lundberg, J., Rollenhagen, C., Hollnagel, E., 2009. What-you-look-for-is-what-you-find – the consequences of underlying accident models in eight accident investigation manuals. Safety Science 47, 1297–1311.
Macrae, C., 2007. Analyzing near miss events: risk management in incident reporting and investigation systems. Discussion Paper 47. Centre for Analysis of Risk and Regulation, London School of Economics and Political Science, London.
Macrae, C., 2009. From risk to resilience: assessing flight safety incidents in airlines. In: Hopkins, A. (Ed.), Learning from High Reliability Organisations. CCH.
March, J.G., Olsen, J.P., 1972. A garbage can model of organizational choice. Administrative Science Quarterly 17 (1), 1–25.
Miller, G.A., 1986. Dismembering cognition. In: Hulse, S.H., Green, B.F. (Eds.), One Hundred Years of Psychological Research in America. Johns Hopkins University Press, Baltimore.
Moray, N., 1994. Error reduction as a systems problem. In: Bogner, M. (Ed.), Human Error in Medicine. Lawrence Erlbaum Associates, Hillsdale, New Jersey, pp. 67–91.
Norman, D., 1988. The Design of Everyday Things. MIT Press.
Okstad, E., Jersin, E., Tinnmannsvik, R.K., 2011. Accident investigation in the Norwegian petroleum industry – common features and future challenges. Safety Science.
Perrow, C., 1984. Normal Accidents, first ed. Princeton University Press, Princeton.
Perrow, C., 1999. Normal Accidents, second ed. Princeton University Press, Princeton.
Pidgeon, N., O’Leary, M., 2000. Man-made disasters: why technology and organizations (sometimes) fail. Safety Science 34, 15–30.
Popper, K., 1936. The Logic of Scientific Discovery. Routledge Classics.
Prigogine, I., Stengers, I., 1978. La Nouvelle Alliance. Seuil (Order out of Chaos).
Rasmussen, J., 1986. Information Processing and Human–Machine Interaction. North-Holland, Amsterdam.
Rasmussen, J., 1995. A research program: risk management and decision making in a dynamic society. Swedish Center for Risk Research and Education, Karlstad.
Rasmussen, J., 1997. Risk management in a dynamic society: a modelling problem. Safety Science 27 (2/3), 183–213.
Rasmussen, J., Batstone, R., 1989. Why do complex organizational systems fail? The World Bank Policy Planning and Research Staff, Environment Working Paper No. 20.
Reason, J., Mycielska, K., 1982. Absent-Minded? The Psychology of Mental Lapses and Everyday Errors. Prentice Hall, Englewood Cliffs, NJ.
Reason, J., Hollnagel, E., Paries, J., 2006. Revisiting the Swiss Cheese Model of Accidents. EEC Note No. 13/06, Project Safbuild. Eurocontrol.
Roberts, K. (Ed.), 1993. New Challenges in Understanding Organisations. Macmillan, New York.
Rollenhagen, C., 2011. Event investigations at nuclear power plants in Sweden: reflections about a method and some associated practices. Safety Science 49, 21–26.
Sagan, S.D., 1993. The Limits of Safety. Princeton University Press, Princeton.
Senge, P.M., 1990. The Fifth Discipline: The Art and Practice of the Learning Organization. Doubleday Currency, New York.
Shorrock, S., Young, M., Faulkner, J., 2004. Who moved my (Swiss) cheese? The (r)evolution of human factors in transport safety investigation. In: ISASI 2004 Proceedings.
Sklet, S., 2004. Comparison of some selected methods for accident investigation. Journal of Hazardous Materials 111 (1–3), 29–37.
Starbuck, W.H., 2009. Cognitive reactions to rare events: perception, uncertainty, and learning. Organization Science 20, 925–937.
Starbuck, W.H., Farjoun, M., 2005. Organization at the Limit: Lessons from the Columbia Disaster. Blackwell Publishing.
Stengers, I., 1993. L’invention des Sciences Modernes. La Découverte (The Invention of Modern Sciences).
Stengers, I., 2009. Au Temps des Catastrophes: Résister à la Barbarie qui Vient. La Découverte (In Times of Catastrophes: Resisting the Coming Barbarism).
Stengers, I., 2011. Comment N’avaient-ils Pas Prévu ? Le Monde. <http://www.lemonde.fr/idees/article/2011/03/25/comment-n-avaient-ils-pas-prevu_1498403_3232.html> (How could they not have foreseen it?).
Stoop, J., Roed-Larsen, S., 2011. Public safety investigations – a new evolutionary step in safety enhancement? Reliability Engineering and System Safety 94, 1471–1479.
Strauch, B., 2003. Investigating Human Error: Incidents, Accidents, and Complex Systems. Ashgate.
Strauch, B., 2010. Can cultural differences lead to accidents? Team cultural differences and sociotechnical system operations. Human Factors 52, 246–263.
Svenson, O., Lekberg, A., Johansen, A.E., 1999. On perspective, expertise and differences in accident analyses: argument for a multidisciplinary integrated approach. Ergonomics 42 (11), 1561–1571.
Sweedler, B.M., 1995. Safety recommendations – the engine that drives change. Safety Science 19, 295–307.
Tamuz, M., 1994. Developing organizational safety information systems for monitoring potential dangers. In: Apostolakis, G.E., Wu, J.S. (Eds.), Proceedings of PSAM II, vol. 2. University of California, Los Angeles, pp. 71:7–12.
Tamuz, M., 2001. Learning disabilities for regulators: the perils of organizational learning in the air transportation industry. Administration and Society 33 (3), 276–302.
Turner, B.A., 1978. Man-Made Disasters. Wykeham Publications, London.
Vaughan, D., 1996. The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA. University of Chicago Press, Chicago.
Vaughan, D., 2004. Theorizing disaster: analogy, historical ethnography, and the Challenger accident. Ethnography 5 (3), 313–345.
Watson, S., 2004. Training rail accident investigators in the UK. Journal of Hazardous Materials 111, 123–129.
Weick, K., 1969. The Social Psychology of Organizing, second ed. McGraw Hill.
Weick, K., 1995. Sensemaking in Organizations. Sage.

