THE GLOBAL PRACTICE OF AFTER ACTION REVIEW

A SYSTEMATIC REVIEW OF LITERATURE


WHO/WHE/CPI/2019.9

© WORLD HEALTH ORGANIZATION 2019

Some rights reserved. This work is available under the Creative Commons Attribution-NonCommercial-Share-Alike 3.0 IGO licence (CC BY-NC-SA 3.0 IGO; https://creativecommons.org/licenses/by-nc-sa/3.0/igo).

Under the terms of this licence, you may copy, redistribute and adapt the work for non-commercial purposes, provided the work is appropriately cited, as indicated below. In any use of this work, there should be no suggestion that WHO endorses any specific organization, products or services. The use of the WHO logo is not permitted. If you adapt the work, then you must license your work under the same or equivalent Creative Commons licence. If you create a translation of this work, you should add the following disclaimer along with the suggested citation: “This translation was not created by the World Health Organization (WHO). WHO is not responsible for the content or accuracy of this translation. The original English edition shall be the binding and authentic edition”.

Any mediation relating to disputes arising under the licence shall be conducted in accordance with the mediation rules of the World Intellectual Property Organization.

SUGGESTED CITATION. The global practice of After Action Review: A Systematic Review of Literature. Geneva, Switzerland: World Health Organization; 2019 (WHO/WHE/CPI/2019.9). Licence: CC BY-NC-SA 3.0 IGO.

CATALOGUING-IN-PUBLICATION (CIP) DATA. CIP data are available at http://apps.who.int/iris.

SALES, RIGHTS AND LICENSING. To purchase WHO publications, see http://apps.who.int/bookorders. To submit requests for commercial use and queries on rights and licensing, see http://www.who.int/about/licensing.

THIRD-PARTY MATERIALS. If you wish to reuse material from this work that is attributed to a third party, such as tables, figures or images, it is your responsibility to determine whether permission is needed for that reuse and to obtain permission from the copyright holder. The risk of claims resulting from infringement of any third-party-owned component in the work rests solely with the user.

GENERAL DISCLAIMERS. The designations employed and the presentation of the material in this publication do not imply the expression of any opinion whatsoever on the part of WHO concerning the legal status of any country, territory, city or area or of its authorities, or concerning the delimitation of its frontiers or boundaries. Dotted and dashed lines on maps represent approximate border lines for which there may not yet be full agreement.

The mention of specific companies or of certain manufacturers’ products does not imply that they are endorsed or recommended by WHO in preference to others of a similar nature that are not mentioned. Errors and omissions excepted, the names of proprietary products are distinguished by initial capital letters.

All reasonable precautions have been taken by WHO to verify the information contained in this publication. However, the published material is being distributed without warranty of any kind, either expressed or implied. The responsibility for the interpretation and use of the material lies with the reader. In no event shall WHO be liable for damages arising from its use.

CONTENTS

1 INTRODUCTION AND BACKGROUND
   1.1 RATIONALE OF THE REVIEW
   1.2 OBJECTIVE OF THE REVIEW
   1.3 STRUCTURE OF THE REVIEW
2 METHODOLOGY
   2.1 REVIEW PROTOCOL
   2.2 SEARCH STRATEGY, DATA SOURCES AND PUBLICATION TYPES
        2.2.1 SEARCH TERMS
   2.3 EXCLUSION AND INCLUSION CRITERIA
   2.4 INFORMATION MANAGEMENT AND QUALITY ASSESSMENT
3 RESULTS
   3.1 LIST OF SOURCES AND KEY COMPONENTS
   3.2 QUALITY ASSESSMENT ANALYSIS
4 SCOPE OF AARs
   4.1 DEFINING AARs IN POST-EMERGENCY SETTINGS
        4.1.1 PURPOSE OF AARs
        4.1.2 BENEFITS: WHAT AARs CAN DELIVER
        4.1.3 CHARACTERISTICS OF AARs
        4.1.4 ANALYTICAL FRAMEWORK OF AARs
        4.1.5 FORMS OF AARs
        4.1.6 LIMITS OF AARs IN POST-EMERGENCY REVIEWS
   4.2 ROLE OF AARs IN THE SPECTRUM OF POST-EMERGENCY REVIEWS
        4.2.1 LIMITS OF THE AAR AS A TOOL FOR EVALUATION
        4.2.2 AARs AS LEARNING TOOLS
        4.2.3 AARs AND OTHER RELEVANT KNOWLEDGE MANAGEMENT TECHNIQUES
5 CONDUCTING AN AAR
   5.1 AAR PROCESS
   5.2 AAR FORMAT
   5.3 AAR FACILITATOR AND PARTICIPANTS: ROLES AND RESPONSIBILITIES
   5.4 WHEN TO CONDUCT AARs
   5.5 AARs: TIPS AND TRAPS
6 THE OUTCOMES OF AN AAR
   6.1 AFTER ACTION REPORTS
   6.2 A DATABASE OF LESSONS LEARNED AND BEST PRACTICES
CONCLUSIONS
REFERENCES
ANNEX. LIST OF REVIEW QUESTIONS


ACKNOWLEDGEMENTS

This document was prepared by Landry Ndriko Mayigane, Nicolas Isla, Emanuele Bruni, Frederik Copper and Stella Chungong.

The World Health Organization (WHO) wishes to acknowledge the contribution of the following individuals: Michael Stoto (Georgetown University and the Harvard T.H. Chan School of Public Health) and Elena Savoia (Harvard T.H. Chan School of Public Health), for their inspirational support during the reviews; Marcus Abrahamsson (Lund University), for providing direction; and Naouar Labidi (World Food Programme) and Cindy Chiu (Tohoku University), for their invaluable inputs. Thanks are also due to the following WHO staff members: Tomas Allen and Jose Garnica Carreno, for their support in outlining a search strategy; Paul Cox, for initial advice on sources of information on after action reviews; and Yahaya Ali Ahmed, Denis Charles, David Cuenca, Yingxin Pei, Mary Stephen, Candice Vente and all the members of the former WHO Department of Country Health Emergency Preparedness and International Health Regulations (CPI), for their valuable inputs.


ABBREVIATIONS AND ACRONYMS

AAR – After action review
ADB – Asian Development Bank
BAR – Before action review
ALNAP – Active Learning Network for Accountability and Performance in Humanitarian Action
CDC – Centers for Disease Control and Prevention (United States)
CHS – Common Humanitarian Standard
CLIC – Cumbria Learning and Improvement Collaborative
CPI – WHO Country Health Emergency Preparedness and International Health Regulations
DAC – Development Assistance Committee
ECDC – European Centre for Disease Prevention and Control
FAO – Food and Agriculture Organization of the United Nations
FEMA – Federal Emergency Management Agency (United States)
HSPH – Harvard School of Public Health
ICRC – International Committee of the Red Cross
IFRC – International Federation of Red Cross and Red Crescent
IHR – International Health Regulations (2005)
IHR MEF – International Health Regulations Monitoring and Evaluation Framework
IOM – International Organization for Migration
ISO – International Organization for Standardization
IZA – Institute for the Study of Labour
JEE – Joint external evaluation
LLIS – Lessons Learned Information Sharing
MSF – Médecins Sans Frontières
NASA – National Aeronautics and Space Administration
NHS – National Health Service (United Kingdom)
OECD – Organisation for Economic Co-operation and Development
PHAC – Public Health Agency of Canada
PHE – Public Health England
RTE – Real-time evaluation
SSH – Society for Simulation in Healthcare
UNDP – United Nations Development Programme
UNHCR – United Nations High Commissioner for Refugees
UNICEF – United Nations Children’s Fund
UNISDR – United Nations International Strategy for Disaster Risk Reduction
UNOCHA – United Nations Office for the Coordination of Humanitarian Affairs
UNOPS – United Nations Office for Project Services
US – United States
USAID – United States Agency for International Development
WFP – World Food Programme
WHO – World Health Organization


EXECUTIVE SUMMARY

Identifying lessons following an emergency response is an important part of any emergency management procedure. The purpose of these exercises is to ensure quality improvement and the strengthening of preparedness and response systems, based on learning that emerges from previous actions in responding to an emergency or event. Systematic post-event learning contributes to a culture of continuous improvement and can be a means of sharing innovative solutions for tackling emerging public health risks. There are different forms of evaluation and learning following an emergency, and the World Health Organization (WHO) recommends that Member States conduct after action reviews (AARs) as part of the International Health Regulations (IHR) Monitoring and Evaluation Framework (IHR MEF), in order to assess the functionality of core capacities and to contribute to the development of comprehensive national action plans for health security.

An AAR is a qualitative review of actions taken in response to an emergency as a means of identifying best practices, gaps and lessons learned. The AAR offers a structured approach for individuals and partners to reflect on their experiences and perceptions of the response, in order to systematically and collectively identify what worked and what did not, why, and how to improve. AARs can range from quick informal debriefing sessions with team members to larger workshops with broad, multisectoral participation led by facilitators. Importantly, AARs are not external evaluations of an individual’s or a team’s performance. They do not seek to measure performance against benchmarks or key performance standards; rather, they are a constructive, collective learning opportunity, in which the relevant stakeholders involved in the preparedness for, and response to, the public health event under review can find common ground on how to improve preparedness and response capability.

This literature review was undertaken to identify and build understanding around the principal characteristics of AARs, including their methodologies, formats, planning and roles. It also looked at AARs in relation to forms of evaluation and different types of learning. The findings of this review ultimately aimed to capture the current methodologies for conducting AARs and to contribute to the design of the WHO Guidance for AAR, published on the WHO website in 2019.

One hundred and twenty-two documents were examined from two categories: 66 “methodological” reports or documents describing AAR processes or other knowledge management techniques for post-event review, and 56 AAR reports or other forms of post-event review, which describe the methodologies used for specific events or emergencies. A substantial amount of information related to AARs is grey literature found on websites; this was considered in the review but was not used as a basis for any findings.

The results of the literature review indicate that there is general consensus around the principal character-istics of an AAR.

Although it is difficult to clearly delineate the scope of AARs, particularly vis-à-vis other post-event learning and evaluation methods, the following features essentially define the AAR. An AAR:

• contributes to a culture of continuous personal, collective and institutional learning aimed at the gathering of lessons learned and best practices;

• is relatively quick and cheap to conduct, and is conducted shortly after the action or event being evaluated;

• relies on open, honest and equal interaction between participants, and on the collective sharing of experience and perceptions of the event, project or issue being reviewed; and

• comprises a basic, consistent analytical structure applying four main research questions in order to better understand what should have happened during the management of an event, what actually happened, what went well and what did not and why, and what can be improved and how.

Applying the principles outlined above, an AAR can usefully support a component of a larger emergency evaluation, but its focus remains learning, which is one of the primary purposes of conducting an AAR. AARs have too many methodological limitations to supply the quantitative outcomes required by the other types of evaluation conducted for the purpose of accountability.

Normally, an AAR is a quick, facilitated, safe, flexible and open process that is time bound to the event under review. The methodology is pedagogical and interactive, and generally comprises a mix of several knowledge management and learning techniques. More specifically, AARs can make use of prior data collection (based on secondary data review and surveys), followed by a facilitated workshop held shortly after an emergency or event that brings together the different actors who participated in the response. Most importantly, AARs should produce outcomes that can be transformed into recommendations for decision-makers when recorded in an after action report. AARs are an opportunity to translate participants’ experiences during the response into actionable roadmaps or plans, which can then be incorporated into national planning cycles.

Many of the above key elements have been incorporated into the recently published WHO Guidance for AAR, which takes Member States through the whole AAR process, from planning, preparing and conducting an AAR, to reporting AAR findings and following up on proposed actions. Accompanying this guidance, WHO has also provided AAR toolkits for different AAR formats, with templates, checklists, facilitators’ manuals, PowerPoint presentations and a database of trigger questions. These ready-to-use tools both simplify and standardize the way AARs are conducted.

Finally, one of the critical outcomes of an AAR is the creation of a repository of key challenges, best practices and resulting recommendations. This literature review outlined the importance of ensuring that lessons are not lost and that a mechanism for accessing them is available through a “lessons learned database”, which helps to build institutional memory and provides a resource for emergency preparedness and response stakeholders. The WHO Guidance for AAR addresses this by providing a final AAR report template for the documentation of lessons learned and best practices. With a template provided, this is an important step forward in standardizing data collection, which in turn can contribute to the creation of a global “lessons learned database”.


An after action review (AAR) is a qualitative review of actions taken to respond to an emergency,a as a means of identifying best practices, gaps, lessons learned and challenges emerging during the response. AARs generally follow a specific analytical framework that seeks to identify what practices worked well, and how these can be institutionalized and shared with relevant stakeholders; and what practices did not work and require corrective action. AARs can be used as a functional evaluation of the national response capacities and processes in place; thus, an AAR can identify the actions needed to improve the response capacity and plans for future events. Such reviews are important in a culture of continuous learning and improvement.

In 2015, the Report of the Review Committee on Second Extensions for Establishing National Public Health Capacities and on IHR Implementation (2) recommended that “States Parties should urgently … ii) implement in-depth reviews of significant disease outbreaks and public health events. This should promote a more science or evidence-based approach to assessing effective core capacities under ‘real-life’ situations”. Following this recommendation, AAR was included as one of the four components through which Member States should monitor, evaluate and report on the implementation of core capacities under the International Health Regulations (IHR).

There are four components of the IHR Monitoring and Evaluation Framework (IHR MEF):

• annual reporting to the World Health Assembly by States Parties;

• voluntary external evaluation using the joint external evaluation (JEE) tool;

• AARs; and

• simulation exercises.

The annual reporting to the World Health Assembly is obligatory, whereas the other three components are voluntary.

1 INTRODUCTION AND BACKGROUND

a An “event” is defined as an emergency incident or occurrence. The terms “event” and “incident” are often used interchangeably. An event may be insignificant or could be a significant occurrence, either planned or unplanned (e.g. extreme weather event or mass gathering), that may affect the safety and security of communities. Under Article 1 of the IHR, an event is defined as “a manifestation of disease, or an occurrence that creates a potential for disease” (with particular reference to public health events of international concern). Source: (1)

1.1 RATIONALE OF THE REVIEW


Implementation of the IHR MEF provides a comprehensive, multisectoral picture of a country’s capacities and functionalities for preventing, detecting, notifying and responding to public health emergencies. Each of the four components contributes in a different way to the identification of strengths, gaps and priorities, which are then addressed in multisectoral national action plans for health security.

This literature review was conducted to guide and provide an evidence base for developing a process for conducting AARs as part of the IHR MEF, and also as a management tool to improve the performance of organizations and programmes through continuous learning.

1.2 OBJECTIVE OF THE REVIEW

This review surveyed the current body of research and practice on AARs, with the specific objectives of:

• defining the scope of AARs – definition, typologies and role in the spectrum of post-event learning; and comparison of AARs with other knowledge management techniques;

• delineating a general methodology for AARs – trademarks and key aspects, format and processes, and data-collection techniques; and

• outlining the main operational issues – preconditions for a successful AAR; roles and responsibilities in conducting AARs; and suggestions for how to share results, outputs and outcomes.

The findings of this review ultimately contributed to the design of the WHO Guidance for AAR (3) and the accompanying comprehensive toolkits (4) to assist Member States in planning, preparing and conducting AARs for collective learning and operational improvement after a public health response.

This paper is addressed to health, humanitarian and development professionals, and to bodies and organizations active in the applied use of knowledge management techniques in monitoring and evaluation of performance, country planning and operations, in the context of health emergency preparedness and response.

1.3 STRUCTURE OF THE REVIEW

The review begins with a description of the methodology adopted for the literature search, followed by the findings. The results of the review cover topics ranging from the basics and origins of AAR theories, to the benefits and purposes, evaluation systems, knowledge management theories and methodologies, lessons learned and implications from the real-world application of AARs since their introduction in the 1970s by the United States (US) Army (5).


2 METHODOLOGY

2.1 REVIEW PROTOCOL

The review protocol consisted of:

• elaborating a search strategy in collaboration with WHO library staff at WHO headquarters, Geneva, Switzerland;

• undertaking consultations with the Country Preparedness and International Health Regulations (CPI) team at WHO headquarters, other sections of WHO headquarters and WHO regional offices, the World Food Programme (WFP), the Harvard T.H. Chan School of Public Health (HSPH), the European Centre for Disease Prevention and Control (ECDC), the University of Lund (Sweden) and the University of Monash (Australia);

• collating data and information in a synoptic Excel matrix;

• reading the materials; and

• writing a final report.

2.2 SEARCH STRATEGY, DATA SOURCES AND PUBLICATION TYPES

The screening methodology involved selection by title, abstract and keywords. The search was conducted in several interconnected stages. The first step involved a top-level search using interdisciplinary databases such as PubMed, Medline and SAGE Journals Online.

The second phase involved:

• searching for original peer-reviewed articles; review articles; standards; PowerPoint presentations; guidelines; government, health-care, humanitarian and industry reports of AARs; and other types of grey literature through general Internet searches; and

• targeted and narrative searches of the websites of various organizations:

   • WHO;

   • various United Nations organizations – the UN Children’s Fund (UNICEF), the Food and Agriculture Organization of the UN (FAO), WFP, the UN High Commissioner for Refugees (UNHCR), the UN Office for the Coordination of Humanitarian Affairs (UNOCHA), the UN Strategy for Disaster Risk Reduction (UNISDR) and the UN Office for Project Services (UNOPS);

   • other international organizations – the International Organization for Standardization (ISO), the International Organization for Migration (IOM), the International Committee of the Red Cross (ICRC), the International Federation of Red Cross and Red Crescent Societies (IFRC), https://www.humanitarianresponse.info/, ReliefWeb, Médecins Sans Frontières (MSF), the Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP), the Common Humanitarian Standard (CHS), the World Bank e-library, the Profiling and Assessment Resource Kit database, ACAPS, Save the Children, CARE International, Click4it and Better Evaluation; and

   • national or regional organizations – Public Health England (PHE), the Public Health Agency of Canada (PHAC), the US Centers for Disease Control and Prevention (CDC), the European Centre for Disease Prevention and Control (ECDC), HSPH, the National Health Service (NHS, United Kingdom), the National Aeronautics and Space Administration (NASA), the US Army, the British Army and several fire brigades from around the world.

Many documents accessed were obtained because they were referenced in other documents consulted previously. There were no restrictions on country or study type, to ensure that the review incorporated a comprehensive range of contexts.

A final phase involved consulting with experts from different institutions (ECDC, HSPH, University of Lund, University of Monash and WFP) to identify additional references.

Searches combined free text, acronyms and medical subject headings (MeSH). The searches were optimized to be both effective and efficient in capturing the most relevant literature in the time available. The terms used for search purposes included those listed in Box 1.

2.2.1 SEARCH TERMS

BOX 1. TERMS RELATED TO AARs USED FOR SEARCH STRATEGY

After Action Review[Mesh] OR After Action Review[TIAB] OR After Action Review[All Fields] OR After Action Report[Mesh] OR After Action Report[All Fields] OR AAR[Mesh] OR AAR[TIAB] OR AAR[All Fields] OR Lesson learn[Mesh] OR Lesson learn[All Fields] OR After Action Assessment[Mesh] OR After Action Assessment[All Fields] OR Post Disaster evaluation[Mesh] OR Post disaster evaluation[TIAB] OR Post disaster evaluation[All Fields] OR Operational Review[Mesh] OR Operational Review[All Fields] OR Best Practices[Mesh] OR Best Practices[All Fields]

Refugees[Mesh] OR Refugees[TIAB] OR War[Mesh] OR War[TIAB] OR armed conflict[All Fields] OR Ethnic Conflict[Mesh] OR Ethnic Conflict[All Fields] OR post conflict[All Fields] OR Disasters[Mesh] OR Disaster settings[TIAB] OR emergency shelter[Mesh] OR Relief Work[Mesh] OR Relief Work[All Fields] OR protracted crisis*[All Fields] OR forced migration[All Fields] OR forced displacement[All Fields] OR internally displaced[All Fields] OR humanitarian crises[All Fields] OR humanitarian crisis[All Fields] OR humanitarian setting[All Fields] OR humanitarian context[TIAB] OR humanitarian needs[TIAB] OR fragile state[All Fields] OR conflict affected[All Fields] OR Disease outbreaks[Mesh] OR Disease outbreak[TIAB] OR asylum[All Fields] OR complex emergency[All Fields] OR Earthquakes[Mesh] OR Tsunamis[Mesh] OR Cyclonic Storms[Mesh] OR Food Supply[Mesh] OR Volcanic Eruptions[Mesh] OR Geological Processes[Mesh]

Ebola[Mesh] OR Ebola[TIAB] OR Ebola[All Fields]

H1N1[Mesh] OR H1N1[TIAB] OR H1N1[All Fields]

Cholera[Mesh] OR Cholera[TIAB] OR Cholera[All Fields]

Guidance[Mesh] OR Guidance[TIAB] OR Guidance[All Fields] OR Tool[Mesh] OR Tool[All Fields] OR Methodology[Mesh] OR Methodology[TIAB] OR Methodology[All Fields]


2.3 EXCLUSION AND INCLUSION CRITERIA

Sources were included for assessment if they met the following inclusion criteria:

• they were related to at least one of the review questions (Annex 1);

• they were produced between 1982 and 2017 (to cover the entire history of AARs);

• they were written in English, French, Italian or Spanish; and

• the full text was available (accessed through WHO if necessary).

Sources were excluded if they were unrelated to any of the review questions.

2.4 INFORMATION MANAGEMENT AND QUALITY ASSESSMENT

A literature review matrix (Annex) was generated in Microsoft Excel to gather sources, key information and metadata. Tables and boxes presented in this document were extrapolated from the matrix.

The information and studies collected were evaluated as “relevant”, “fairly relevant” or “non-relevant”. The evaluation was based on the material’s relevance to the objectives and questions of the review; its reliability and potential for bias; the potential for conceptual overlap; and the clarity of the results and conclusions presented.
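For illustration only, the inclusion criteria above can be expressed as a simple screening filter over records of the kind held in the review matrix. This is a sketch, not part of the review protocol: the field names, helper function and sample records below are hypothetical.

```python
# Illustrative sketch: applying the review's four inclusion criteria as a
# predicate over hypothetical document records. Field names are invented
# for this example and do not come from the actual Excel matrix.
ACCEPTED_LANGUAGES = {"English", "French", "Italian", "Spanish"}

def meets_inclusion_criteria(doc: dict) -> bool:
    """Return True only if a document record passes all four criteria."""
    return bool(
        doc.get("related_review_questions")        # related to >= 1 review question
        and 1982 <= doc.get("year", 0) <= 2017     # covers the entire history of AARs
        and doc.get("language") in ACCEPTED_LANGUAGES
        and doc.get("full_text_available", False)  # full text accessible
    )

# Example usage: screening two hypothetical records
docs = [
    {"related_review_questions": ["Q1"], "year": 2015,
     "language": "English", "full_text_available": True},
    {"related_review_questions": [], "year": 2015,
     "language": "English", "full_text_available": True},
]
included = [d for d in docs if meets_inclusion_criteria(d)]  # keeps only the first record
```

In practice this screening was done manually by title, abstract and keywords; the sketch simply makes the conjunction of criteria explicit.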


3 RESULTS

3.1 LIST OF SOURCES AND KEY COMPONENTS

The literature review allowed the collection of:

• 122 documents, of which 66 were classified as “methodological” – that is, technical guidelines about AARs, about other knowledge management techniques or about other post-event review methods; and

• 56 reports of AARs or of other forms of post-event review.

3.2 QUALITY ASSESSMENT ANALYSIS

Of the 66 “methodological” pieces, two were considered “non-relevant” and five “fairly relevant”. Of the 56 reports of AARs or other forms of post-event review, 12 were considered “non-relevant” because they were not related to any of the review questions. Thus, 107 “relevant” documents were included in the review.

A significant amount of the information collected from online sources was not included in the matrix, because the material was not in “doc” or “pdf” format. Although this information was not inserted in the matrix and was not considered in the quality assessment analysis, it was crucial for the development of this report.


4 SCOPE OF AARs

In post-emergency settings, an AAR is a knowledge-sharing and learning-focused tool (6). The AAR brings together a team – at the end of an emergency response – to discuss the successes and failures encountered, in an open and honest fashion. The purpose is to learn from the experience, and then to use the lessons learned and best practices to improve performance, either in the next phase of the response or the next time a similar event occurs (7). Specifically, AARs provide team members with the opportunity to analyse their own behaviour, its contribution to the outcome, and any potential changes that may be needed (8).

4.1 DEFINING AARs IN POST-EMERGENCY SETTINGS

There is no standardized definition of an AAR as a post-event response assessment, particularly one that spans sectors and disciplines. Hence, as a starting point for this study, the definition considered was borrowed from the military, for which AARs are recognized as a specific “professional discussion of an event, focused on performance standards” (5). However, further research demonstrated that AARs can have a broader application for reviewing a range of issues, as well as different means of data collection, facilitation and analysis. In short, there is a divergence in the literature because AARs can be considered both a specific knowledge management evaluation tool with a definite methodology (9), and an approach for capturing lessons after an event that can accommodate a variety of methodologies, outcomes and depths of review (10).

In the WHO Guidance for AAR (3), an AAR is defined as the “qualitative review of actions taken to respond to an emergency as a means of identifying best practices, gaps and lessons learned.” Specifically, it describes a process that involves “a structured facilitated discussion or experience sharing to critically and systematically review what was in place before the response, what happened during the response, what went well, what went less well, why events occurred as they did, and how to improve.”

Although it is difficult to clearly set the scope of AARs, particularly vis-à-vis other post-event learning and evaluation methods, the following features essentially define the identity of the AAR. An AAR:


• contributes to a culture of continuous personal, collective and institutional learning aimed at the gathering of lessons learned and best practices;

• is relatively quick and cheap to conduct, and should be conducted shortly after the action or event being evaluated;

• relies on open, honest and equal interaction between participants, and on the collective sharing of experience and perceptions of the event, project or issue being reviewed; and

• comprises a basic, consistent analytical structure applying four main research questions (11) in order to better understand what should have happened during the management of an event; what actually happened; what went well and what did not, and why; and what can be improved and how (6).
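The four-question analytical structure above can be sketched as a minimal record for organizing the points raised during a facilitated session. This is an illustrative sketch only; the class and field names are hypothetical and are not drawn from the WHO guidance or its toolkits.

```python
# Illustrative sketch: a minimal container mirroring the AAR's four main
# research questions. Names are hypothetical, for illustration only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AARFindings:
    what_was_expected: List[str] = field(default_factory=list)  # what should have happened
    what_happened: List[str] = field(default_factory=list)      # what actually happened
    went_well_and_why: List[str] = field(default_factory=list)  # best practices identified
    to_improve_and_how: List[str] = field(default_factory=list) # gaps and corrective actions

# Example usage: capturing points raised during a session
session = AARFindings()
session.what_was_expected.append(
    "Activate the emergency operations centre within 24 hours")
session.to_improve_and_how.append(
    "Clarify alert escalation roles in the national response plan")
```

Structuring notes this way keeps each observation attached to the question that elicited it, which is what allows findings to flow into an after action report and, ultimately, a lessons learned database.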

In summary, AARs are a qualitative but systematic way to review planned action that allows the identification of strengths, weaknesses and lessons to improve organizational learning (12) and response capacity. AARs are applicable to almost any event, and although their emphasis is normally on learning after failures, AARs conducted following successful experiences can also provide interesting insights (9) into how systems can learn from best practices, producing results in a short time (13).

4.1.1 PURPOSE OF AARs

AARs use the sharing of experience to improve performance by preventing recurrent errors and reproducing success (14). The purpose of AARs is to capture and document experience in order to improve future practice by:

• providing an opportunity to collectively analyse the results achieved;

• facilitating joint lesson learning for the benefit of future processes;

• identifying key points and strategic issues for further improvement; and

• strengthening team dynamics.

These peculiarities make AARs an effective tool for: undertaking continuous assessment of organizational performance, looking at successes and failures (13), providing a space for staff to capture key learning at a critical juncture of an emergency response, generating lessons learned that can be shared across disciplines, and allowing the building of a set of recommendations for the management for improving emergency policy and practice (15). In particular, AARs can provide a forum for determining the root causes of successes and failures in performance, and for developing strategies for mitigating causal factors in the future (16). Conducting regular AARs can help to track progress, correct unintended impacts and ensure that planned outcomes are achieved (17).

The main benefits of an AAR come from applying its results to future situations. By applying learning, a team can improve and perform to higher standards. By the end of an AAR, participants must clearly under-stand what worked well and why, what did not go well and where improvements can take place (18).

AARs can yield many benefits when conducted in an environment of openness, honest discussion, clarity and commitment to identifying and recommending solutions.

In the context of post-emergency evaluations, AARs help to prevent oversimplified interpretations of the response narrative (by encouraging discussion of errors and “near misses”) (19), enhance operational sensitivity and resilience, and provide opportunities to acknowledge individual expertise – all hallmarks of high-reliability organizations (20). In this regard, the most relevant benefits of conducting an AAR are the highly structured and supported analytical framework it provides for identifying lessons and best practices, and the focus it places on learning (as a primary objective) and performance.

4.1.1PURPOSE OF AARs

4.1.2BENEFITS: WHAT AARs CAN DELIVER

Page 18: THE GLOBAL PRACTICE OF AFTER ACTION REVIEW

THE GLOBAL PRACTICE OF AFTER ACTION REVIEW

9

This is especially pertinent to health-care, humanitarian and emergency professionals, whose learning activities are under constant time pressure (21).

From a technical point of view, conducting AARs should help to identify what elements of the response have been successful and should be taken forward, as well as what has not worked. Where failures are identified, there will be valuable knowledge and learning that should be captured (22). This has several advantages for participating organizations (5):

l the feedback generated from the review compares the actual output of a process with the projected outcome;

l through the AAR process, participants can identify organizational and environmental strengths and weaknesses, and agree together on how to develop future performance; and

l the shared learning experience improves task proficiency and promotes good relationships among the organizational members and others committed to the same issue (23).

Other important benefits described in the WHO Guidance for AAR (3) include documentation of lessons learned, allowing advocacy for financial or technical support, and promoting ownership through obtaining consensus among participants regarding follow-up activities, to both prevent the next event and improve future response.

Furthermore, the information collected and lessons identified from AARs can be systematized and shared among peers or other stakeholders facing similar challenges or threats, for their benefit. Results can be recorded in registries so that they can be accessed next time there is a crisis (24). The building of these registries may also allow deeper analysis through retrospective and trend-based studies.
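As a minimal sketch of such a registry (the record fields and class names here are our own illustrative assumptions, not a WHO schema), each completed AAR could be filed as a structured record so that lessons are retrievable in the next similar crisis and recurring gaps can be counted across reviews for trend analysis:

```python
from dataclasses import dataclass, field

@dataclass
class AARRecord:
    """One completed review, as it might be filed in a lessons registry.
    Field names are illustrative assumptions, not a WHO schema."""
    event: str                      # e.g. "flood response, 2017"
    hazard_type: str                # key used to retrieve records in a similar crisis
    best_practices: list[str] = field(default_factory=list)
    gaps: list[str] = field(default_factory=list)
    recommendations: list[str] = field(default_factory=list)

class AARRegistry:
    """Supports the two uses named in the text: look-up during the
    next crisis, and retrospective, trend-based analysis."""

    def __init__(self) -> None:
        self._records: list[AARRecord] = []

    def add(self, record: AARRecord) -> None:
        self._records.append(record)

    def lessons_for(self, hazard_type: str) -> list[str]:
        # Retrieval "next time there is a crisis" of the same kind.
        return [lesson
                for r in self._records if r.hazard_type == hazard_type
                for lesson in r.best_practices + r.recommendations]

    def recurring_gaps(self) -> dict[str, int]:
        # Simple trend analysis: how often each gap recurs across reviews.
        counts: dict[str, int] = {}
        for r in self._records:
            for gap in r.gaps:
                counts[gap] = counts.get(gap, 0) + 1
        return counts
```

In practice such a registry would sit behind an institutional document store; the point of the sketch is only that structured, consistently keyed records are what make the retrospective and trend-based studies mentioned above possible.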

4.1.3 CHARACTERISTICS OF AARs

Guidance from the United States Agency for International Development (USAID) (23) summarizes some key features of AARs into a useful checklist for planning and conducting AARs. These key features are:

l effective leadership engagement;
l equal participation of team members;
l inclusion of stakeholders;
l positive environment for feedback; and
l generation of shared knowledge.

Starting from this model and shifting towards a conceptual perspective borrowed from an academic paper on AAR used as a training tool (25), the trademarks that underline the effectiveness and the success of AARs are:

l observational learning, intended as participative learning that occurs by observing the behaviour of others;

l collection and understanding of individual and group perceptions; and

l goal setting towards action to be taken in order to solve the identified issues.

From a technical point of view, the AAR approach involves basic aspects of social interaction such as learning and cognition, team dynamics and collection of feedback (26). Thus, although cognitive learning is a crucial part of the AAR process because it is focused on what happened during a response, feedback has to be considered as being at the heart of the AAR process, because it provides the individuals involved in the response with information about what needs to be improved in future tasks. In other words, AARs allow participants to have an active role in producing lessons learned, while also benefiting from these lessons. Also important are team dynamics, because the individuals in the team are interdependent and seen as a "working unit".

This means that an AAR can focus directly on the tasks and goals that were to be accomplished and discover why things happened, encouraging participants to highlight and discuss important lessons from the review (28).

Page 19: THE GLOBAL PRACTICE OF AFTER ACTION REVIEW

THE GLOBAL PRACTICE OF AFTER ACTION REVIEW

10

4.1.4 ANALYTICAL FRAMEWORK OF AARs

All literature sources reviewed confirm that AARs are always structured around one basic analytical methodology that aims to respond to four main questions developed from the original method, which was established by the US Army in the 1970s. This method was later adapted by international companies, fire brigades and the health-care sector for use in emergencies.

The questions are:
1. What was supposed to happen?
2. What actually happened?
3. Why was there a difference?
4. What can we learn from this?

This model (Fig. 1) is considered highly functional for its simplicity, and is normally adapted to the needs of the review and to the context. In fact, all the literature sources assessed agree that the strength of AARs is grounded in the simplicity of this model. However, it has to be taken into consideration that, as stated by the Cumbria Learning and Improvement Collaborative (CLIC), the most important process to be kept is the dichotomous relation between "agreed facts" (questions 1 and 2) and "shared opinions" (questions 3 and 4) (11).

The outcome of the AAR hinges on the assessment of process, or how the response performed vis-à-vis planned assumptions or processes, and why the system did not function in the way it was designed to function.

The WHO Guidance for AAR (3) adapted the above analytical framework into three key phases that are common to all AARs:

1. Objective observation – to establish how actions were implemented during the response and contrast them with what ideally should have happened according to existing plans and procedures.

2. Analysis of gaps, best practices and contributing factors – to identify the gaps between planning and practice; and to analyse what worked well and what worked less well, and why.

3. Identification of areas for improvement – to identify actions to strengthen or improve performance and determine how to follow up on them.

FIG. 1. VISUALIZATION OF AARs

Agreed facts:
1. What was supposed to happen?
2. What actually happened?

Shared opinions:
3. Why was there a difference?
4. What can we learn from this?

Source: (11)
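The four-question model, and its split between agreed facts (questions 1 and 2) and shared opinions (questions 3 and 4), can be captured in a small illustrative structure. This is only a sketch; the class and field names are our own, not taken from the guidance:

```python
from dataclasses import dataclass

@dataclass
class AARAnalysis:
    """The four core AAR questions. Questions 1-2 record agreed facts;
    questions 3-4 record shared opinions. Names are illustrative only."""
    planned: str        # 1. What was supposed to happen? (agreed fact)
    actual: str         # 2. What actually happened?      (agreed fact)
    why_different: str  # 3. Why was there a difference?  (shared opinion)
    lessons: str        # 4. What can we learn from this? (shared opinion)

    def agreed_facts(self) -> tuple[str, str]:
        return (self.planned, self.actual)

    def shared_opinions(self) -> tuple[str, str]:
        return (self.why_different, self.lessons)

    def gap_identified(self) -> bool:
        # Phase 2 of the WHO framework starts from the planning/practice gap.
        return self.planned != self.actual
```

The split matters in facilitation: the first two fields should be settled by evidence before the group moves on to the interpretive second pair.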


4.1.5 FORMS OF AARs

Generally, AARs take place as one of the following: a formal organized event hosted by a facilitator, an informal short review between a few individuals at any stage in a project (28), or a personal (one-on-one) conversation (Table 1). The type of AAR to be conducted may vary depending on the activity to be reviewed, the level of various team members' participation in the project, the scope of the project, and a general assessment of which approach might work best (29). Each type of AAR generates a rich reflection. However, whether the AAR is formal, informal or personal, documenting the results will capture the learning for potential use in future projects.

More specifically, a formal AAR is resource intensive and involves the planning, coordination and preparation of supporting training aids, the AAR site, support personnel, and time for facilitation and report preparation. A lead facilitator/interviewer guides the review discussion following an agenda, and notes are recorded with the help of a note taker. Following the AAR session, a formal report is presented. Although it may be desirable to have a neutral facilitator who was not involved directly in the response – thereby allowing all team members to participate equally without responsibility for the process as a whole – any member of a project team can serve as facilitator.

Informal AARs require less preparation and planning (5). Frequently, an informal AAR is carried out by those responsible for the activity and, if necessary, the discussion leader or facilitator can either be identified beforehand or chosen by the team itself. As with a formal AAR, the standard format and questions guide the discussion (18).

In short, formal evaluations are systematic and rigorous, whereas informal evaluations are more ad hoc (30). Informal AARs are usually conducted on site immediately following an event, activity or programme.

Finally, there are personal reviews. As the name implies, a personal review is a planning utility that concerns individual reflection on the course of action or activities of the immediate past (or even of longer term activities) (23).

TABLE 1. COMPARISON OF FORMAL, INFORMAL AND PERSONAL AARs

FORMAL AARs:
l Are facilitated by an objective outsider
l Take more time
l Are scheduled beforehand
l Use complex techniques and tools
l Are conducted in meeting or other formal settings

INFORMAL AARs (HOT WASH OR DEBRIEF):
l Are conducted by those closest to the activity
l Take less time
l Are conducted when needed
l Use simple review techniques and tools
l Are held at the event site

PERSONAL AARs (INTERVIEW BASED):
l Are conducted with individuals
l Take less time
l Are conducted when needed
l Use interview technique
l Are held at the event site


In the WHO Guidance for After Action Review (3), four AAR formats are proposed to offer collective learning and operational improvement after a public health response. These include:

1. debrief AAR – an informal facilitator-led discussion with no more than 20 participants focusing on a limited number of functional areas (3 or less), taking place over half a day with more focus on learning within a team (best for smaller responses);

2. working group AAR – an interactive format that consists of guided group work and plenary sessions with up to 50 participants focusing on more than 3 functional areas, taking place over 2-3 days (best for responses that involved multiple sectors);

3. key informant interview AAR – a longer and more in-depth review of an event that begins with a literature review and feedback surveys, followed by semistructured interviews and focus group discussions, taking place over several weeks (best for complex and larger responses where those involved can no longer be brought together, or where confidentiality and non-attribution are necessary for honest and open feedback);

4. mixed-method AAR – a blended approach combining the above three formats, consisting of a working group AAR supplemented with content from key informant interviews (best for responses with a large scope where the majority, but not all, of those involved can be brought together).

These AAR formats proposed in the WHO guidance can be adapted to accommodate institutional culture, practice and needs.
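Read literally, the selection criteria above amount to a simple decision rule. The following is a rough sketch only: the thresholds (no more than 20 participants, 3 functional areas, up to 50 people for a working group) come from the text, but the function, its argument names and the decision order are our own illustrative assumptions, not part of the WHO guidance:

```python
def choose_aar_format(participants: int,
                      functional_areas: int,
                      all_can_convene: bool,
                      needs_non_attribution: bool) -> str:
    """Suggest one of the four WHO AAR formats from the criteria above."""
    if needs_non_attribution or not all_can_convene:
        # Weeks-long, interview-driven review for complex/large responses.
        return "key informant interview AAR"
    if participants <= 20 and functional_areas <= 3:
        # Half-day, single-team learning focus; best for smaller responses.
        return "debrief AAR"
    if participants <= 50:
        # 2-3 days of guided group work and plenary sessions.
        return "working group AAR"
    # Large scope: convene the majority, interview those who cannot attend.
    return "mixed-method AAR"
```

For example, a multisectoral response with 40 participants across 5 functional areas would map to a working group AAR under these assumptions.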

AARs can also be classified according to:
l the scope of the object of the review; or
l the number of partners who are leading it (single versus joint).

In the first case, the object of the review may be training (25), an emergency (31) or a specific response function. The US Federal Emergency Management Agency (FEMA) (32) distinguishes two types of evaluations according to the scope of the object in relation to the scale of the event: small-scale events (hot washes and critiques), and major incidents, which are followed by after action reports.

In the case of major incidents, an AAR can vary according to the number of partners leading it. Although such AARs are normally conducted by one institution (internal or external), in the humanitarian world there are examples of joint AARs, where different partners (e.g. international NGOs and UN agencies) share their resources and data to undertake the review (33).

4.1.6 LIMITS OF AARs IN POST-EMERGENCY REVIEWS

In some cases, an AAR may not be the most appropriate tool or may not result in the desired outcomes. Although this may be related to contextual or methodological problems (e.g. weak facilitation or preparation), AARs are not an appropriate tool for impact assessment. Furthermore, they should not be used to evaluate an individual's performance, and could create difficulties if adopted in simulations involving "emergent" learning objectives (21).

4.2 ROLE OF AARs IN THE SPECTRUM OF POST-EMERGENCY REVIEWS

One of the main research questions of this study was to determine the specific role of AARs in the spectrum of post-emergency reviews. As a result, the investigation acknowledges the lack of a general and validated consensus, not only in relation to the role of AARs, but also to all the modalities of post-emergency reviews in general. However, research on the distinct typologies of knowledge management and evaluation techniques available in studies of the marketing, health-care, emergency management, development and humanitarian sectors has allowed some comparisons, which are outlined in the sections below.


4.2.1 LIMITS OF THE AAR AS A TOOL FOR EVALUATION

The first question was aimed at identifying specific boundaries for AARs in the spectrum of evaluation typologies in health-care and post-event emergency settings. This question led directly to a second relevant query on how AARs could contribute to undertaking an evaluation. Hence, a general overview of available evaluation models was conducted. It identified three evaluation models frequently used in the evaluation of public health projects and interventions, as well as in the development and humanitarian sectors, as outlined below.

OECD MODEL
In the model from the Organisation for Economic Co-operation and Development (OECD) (34), most types of evaluation can be described as a response to the following questions (35):

l Unit of analysis – what is being evaluated (policy, country programme, meta, theme, cluster, sector or project)?

l Timing – when is the evaluation occurring (formative mid-term, summative or ex-post)?

l Approach – what kind of methodology should be adopted (impact, participatory, process or mixed)?

l Relationship to the subject – what is the nature of the evaluator vis-à-vis the subject of the evaluation (independent, self-evaluation, internal, external, peer or mixed)?

CDC MODEL
The model from the US Centers for Disease Control and Prevention (CDC) (36) recognizes that several different types of evaluation could be conducted, depending on when they are used and their intended objective. The different types of evaluation could be summarized as:

l formative evaluation – which ensures that a programme or programme activity is feasible, appropriate and acceptable before it is fully implemented;

l process/implementation evaluation – which determines whether programme activities have been implemented as intended;

l outcome/effectiveness evaluation – which measures programme effects in the target population by assessing the progress in the outcomes or outcome objectives that the programme is to achieve; and

l impact evaluation – which assesses programme effectiveness in achieving its ultimate goals.

IZA MODEL
The third model is from the Institute for the Study of Labour (IZA), Bonn (Germany) (37). According to the IZA, evaluations in the humanitarian sector use monitoring[a] data, outcome and perception studies or real-time evaluations (RTEs). On the one hand, there are "high-quality outcome and perception studies" that use well-collected quantitative data, allowing measurement of the changes in conditions of beneficiaries in the target area before and after a programme. Although these studies are important, they do not indicate whether the change in status would have been the same without humanitarian assistance, or whether the change was caused by the intervention or by some other programme. This is because such studies do not offer a reply to the question "Why?", but offer only a quantitative description of whether indicators were met or not. Conversely, there are outcome studies that employ qualitative data based on small and unrepresentative samples, including surveys limited to the humanitarian community (e.g. donors, officials and NGOs) and other evaluation tools (e.g. key stakeholder interviews). These methods bring added value to impact evaluations with the collection of feedback and perceptions; however, there are frequent challenges in interpretation because these methods are insufficient to produce robust analyses of attributable impact (38). This classification is summarized in Box 2 (37).

[a] Monitoring is defined by the OECD Development Assistance Committee (DAC) as a continuing function that uses systematic collection of data on specified indicators to provide management and the main stakeholders of an ongoing development intervention with indications of the extent of progress and achievement of objectives and progress in the use of allocated funds (OECD-DAC – Glossary of key terms in evaluation and results based management, 2010).


BOX 2. COMPARISON OF PERCEPTION STUDIES, REAL-TIME EVALUATIONS AND THEORY-BASED IMPACT EVALUATIONS

PERCEPTION STUDIES:
l Usually qualitative data
l Sometimes do not use representative sample
l Short term
l Give a good idea of uptake by beneficiary population

REAL-TIME EVALUATIONS:
l Use qualitative and quantitative data
l Frequently do not use representative samples
l Evaluate processes
l Focus on development and implementation of the programme and assess whether implementation outputs have been achieved

THEORY-BASED IMPACT EVALUATIONS:
l Use qualitative and quantitative data
l Include real-time evaluations and perception studies to inform theories of change and implementation of programmes
l May be used to measure short-, medium- and long-term effects
l Long-term policy implications
l Based on a theory of change
l Are able to measure attribution and contribution of the programme to overall outcomes

Source: (37)

These three models make it possible to examine and identify the role of AARs in emergency response evaluation. Starting with the OECD model, an AAR could be intended as a participative approach that could be used at various periods during an event (e.g. soon after or ex-post) and that could be applied in the evaluation of different units of analysis. For example, it could be used for post-event evaluation, specific functions of the response (e.g. surveillance, laboratory for case management), or cross-cutting themes such as coordination, logistics or resource availability.

It becomes clear that the simple structure of an AAR may be useful to support a process/implementation evaluation, with a focus on learning, as in the CDC model. However, AARs present too many methodological limits to supply the quantitative outcomes required by the other typologies of evaluation reported by the CDC (formative, outcome and impact), or to support the robust impact evaluations described by the IZA.

AARs could nevertheless provide team members with the opportunity to analyse their own behaviour, its contribution to the outcome, and any potential changes that may need to occur (8). At the same time, AARs could act as an integrative support tool to enrich other typologies of evaluation, adding a systematic but qualitative touch to other kinds of assessment.

In conclusion, AAR is a tool that is relatively simple, qualitative, flexible, easy to use and focused, and can thus play an important role in supporting other types of evaluation.

4.2.2 AARs AS LEARNING TOOLS

It is useful to define and situate AARs in various learning models. A master's thesis focused on AARs intended as a military learning or training tool (25) outlined several parallel but distinct learning or training approaches – behavioural modelling training, reflective learning and action learning – and compared them to AARs.

Behavioural modelling is a training technique that draws on the concept of observational learning (39) and incorporates observed positive or negative behaviour (40). As a component of social learning theory, it could be summarized as the act of showing employees how to do something and guiding them through the process of imitating a modelled behaviour. However, even if the use of a model is crucial during an AAR (e.g. when answering the question of what was supposed to happen), the lack of an "expert model" is the key difference between an AAR and standard behavioural modelling. In behavioural modelling training, the model is an expert person who stimulates learning and effectively models the desired (or possibly undesired) behaviour. In contrast, in AARs, the participants themselves are the behavioural models, and the modelled behaviour is not created before the training but during the AAR itself, or borrowed from the recent past, making it possible to apply it to the actual context or conditions.

There are also differences between AARs and reflective learning, which asks an individual learner to reflect upon prior experiences to maximize the learning gained from those experiences. Although the AAR uses a reflective perspective when applied retrospectively, this is always conducted in a collective environment to capitalize not only on an individual's experiences, but also on the experiences of others (participants and facilitator).

In action learning, there is generally a regular consultation (41) to discuss work, question existing knowledge and share difficulties (42). This meeting of peers, developed by physicians in the 1920s, allowed medical colleagues to learn from one another's experiences based upon their well-developed and previously learned knowledge base. AARs differ from action learning because AARs aim to create a discussion among actors who bring different types and levels of knowledge to share "fresh" experiences.

Another key difference is that, while learning models often focus on the individual's learning outcome, collective learning is a key outcome for AARs. Several sources in the literature point to collective learning as one of the defining features and benefits of AARs (43). Collective learning can tend towards institutional or organizational learning if the scope of the review is broad.

In conclusion, a comparison of AARs and different learning models can help to clarify the mechanics of AARs, and their benefits and limitations. As a collaborative or collective learning tool, an AAR can have a strong impact on how teams perform and interact with other parts of a system.

4.2.3 AARs AND OTHER RELEVANT KNOWLEDGE MANAGEMENT TECHNIQUES

To complete the general overall picture of AARs in emergency settings, other knowledge management techniques were explored. While they tend to employ different terminology, these techniques have several traits in common with AARs. For the sake of establishing a useful order, these knowledge management techniques are outlined according to their position in terms of timing with respect to the response.

BEFORE ACTION REVIEW
A before action review (BAR) comprises a short disciplined conversation that efficiently provides foundations for rigorous, meaningful learning by the same people engaged in the action. The BAR is about being prepared before launching into action, preparing to achieve the desired results together and preparing to learn together effectively (44). In short, a BAR is a preview session that considers what issues might arise during a piece of work (45).

LEARNING REVIEW
Similar in methodology to the BAR, but different in terms of phase of execution, is the learning review. This is a technique used by a project team to aid team and individual learning during the work process (46).

HOT WASHES, CRITIQUES AND DEBRIEFINGS
Soon after an event, other kinds of evaluation or review are conducted. These are called "hot washes" or "critiques". According to FEMA (32), hot washes and critiques occur at the company level (although multiple companies may be involved), and typically evaluate small incidents, reviewing the functioning of specific tactics and identifying changes that might induce better results. They are conducted on a case-by-case basis, mostly for training purposes and for the overall improvement of operations. Critiques are normally conducted internally to an organization or team.

Similar to hot washes, "debriefings" (47) are defined as moderated discussions focused on gaining understanding and insight about a specific operation or exercise. Debriefings are predominantly conducted at the tactical or operational level, and include only those people specifically involved in a given operation or exercise, under the lead of a person of authority or a subject matter expert. Debriefings typically last somewhere between a few minutes and a few hours. The primary goal of a hot wash or a debriefing is to identify issues that require attention, and to record the facts about what happened before memories fade. Although some hot washes identify "strengths and weaknesses" or "things that went well or not so well", it is usually not possible to systematically address "Why?" questions in the immediate aftermath of the incident.

PEER ASSESSMENT
The HSPH defines peer assessment in post-emergency reviews as "a process designed to analyse a public health system's response to an emergency, identify root causes of successes and failures, and highlight lessons that can be institutionalized by the responding public health system and others to improve future responses" (48). In this case, the focus of peer assessment is to support the understanding of how and why problems occurred, as a step towards identifying and addressing contributing factors that are likely to condition forthcoming events. Furthermore, the peer aspect "fosters communication and collaboration across jurisdictions", and can also serve to bring an external input to the assessment. According to USAID, peer assessments can be conducted after an incident, but also before (as a preparedness exercise) or during an event (18).

RETROSPECTIVE
As with the "look-back session" and the "reflective learning" mentioned in the previous section, a retrospective is about "learning after doing" (13) and is therefore run once the project or the event is finished. Given that there is no possibility of changing results, the focus is on developing recommendations to be implemented by people other than those making the recommendations.

SUMMARY OF AARs AND OTHER RELEVANT KNOWLEDGE MANAGEMENT TECHNIQUES
In conclusion, the method and tools used for identifying lessons and best practices in relation to an event can depend on when, in relation to the event, the review will be conducted, as well as the purpose or expected outcome. It is important that, after any of these approaches, best practices and knowledge come back not only to the review participants but also to senior management (49).

Maintaining the AAR terminology as a “brand” that aims to define all these approaches, the different methodologies have been summarized and listed in Table 2.


TABLE 2. APPROACHES TO ACTION REVIEWS

Before action review – When to use: beginning. Purpose and outcome: improving preparedness while providing a foundation for learning. Methodology: internal or external facilitation.

Learning review – When to use: during. Purpose and outcome: support team and individual learning during the work process. Methodology: facilitation.

Debriefing and hot wash – When to use: after. Purpose and outcome: gaining understanding and insight about a specific operation or exercise, involving those people who were personally involved. Methodology: internal discussion facilitation; must be quick and easy.

Peer assessment – When to use: during and after. Purpose and outcome: monitoring, capacity building and team building from an external point of view. Methodology: pre data collection (survey, interviews and data analysis); look-back session conducted by external peer facilitator.

Retrospective – When to use: after. Purpose and outcome: learn after doing. Methodology: internal or external facilitation.

Timing of the AAR in relation to the event under review will depend on the purpose and desired outcome. Essentially, there are three periods (50):

l Before the event – In this case, the purpose of the review is to review preparedness efforts for a particular event or project, to anticipate challenges and establish a general process for implementing the actions. This can also be run, for example, as a table-top exercise.

l During the event – In this case, the purpose is to undertake course correction during an event or project. It can be used to assess the level of performance vis-à-vis strategic and event-specific response plans for a single actor or across partners.

l After the event – In this case, the purpose is to identify lessons and best practices, and establish better ways of implementing actions for future events.

This timing is illustrated in a master's thesis on emergency response reviews (51), where AARs, intended as part of a specific emergency learning framework, can be broken down into six individual and self-explanatory categories (Box 3).

BOX 3. EMERGENCY LEARNING FRAMEWORK

l Intracrisis exercise – after action review
l Intercrisis debrief – after exercise review
l Intracrisis event – during event reflection
l Debrief event – after event review
l Ongoing learning and planning meetings
l Intercrisis (or prescriptive) learning

Source: (51)

Clearly, the concept of AARs cannot be reduced to a single method conducted at a specific moment in relation to the event or project being reviewed. In fact, AAR and the logic it proposes can be applied at different phases of the emergency response review process, and can satisfy different outcomes. The WHO Guidance for AAR (3), which focuses on collective learning at the end of an event, recommends that AARs be conducted as soon as possible (or within three months) following the official declaration of the end of the event by the ministry of health.


5. CONDUCTING AN AAR

5.1 AAR PROCESS

Although most of the sources found in the literature refer to the AAR only as a facilitated interactive session, the planning process will largely depend on the scope, objective and form of the AAR to be conducted. There may be some key steps in common, such as collecting and reviewing emergency-related documentation, contacting all participants and asking them about key events of the emergency, and organizing the setting of the AAR (52). In general, informal AARs will have fewer steps and require less time to plan.

To take an example from the Society for Simulation in Healthcare (SSH), the process of organizing a debriefing can be summarized in seven steps, shown in Box 4 (21).

BOX 4. SEVEN STEPS FOR AARs (DEBRIEF)

• Define the rules of the debriefing
• Explain the learning objectives
• Set benchmarks for performance
• Review what was supposed to happen
• Identify what actually happened
• Examine why things happened the way they did
• Formalize learning

Source: (21)

Another example of an AAR with a larger, more complex scope is that developed by the Asian Development Bank (ADB) (6). This AAR has four main steps – planning, preparing, conducting and following up – as shown in Box 5.


BOX 5. FOUR STEPS FOR AARs

STEP 1. PLANNING
• Identify an event or activity to be reviewed.
• Identify the primary point of contact for the review.
• Determine when the AAR will occur.
• Decide who will attend the AAR.
• Select when and where the AAR will take place; plan for no more than 90 minutes.
• Confirm who will support the AAR; for example, technical lead, champion, point of contact and minute taker.

STEP 2. PREPARING
• Select a facilitator.
• Confirm the venue and agenda.
• Obtain inputs from interested parties.
• Announce the AAR and compile the list of attendees.
• Make logistical arrangements and set up the venue.

STEP 3. CONDUCTING
• Seek maximum participation.
• Maintain focus on a positive and informative AAR.
• Ensure honest, candid and professional dialogue.
• Record key points.

STEP 4. FOLLOWING UP (USING AAR RESULTS)
• Determine actionable recommendations that will improve the process.
• Identify tasks requiring senior leadership decisions.
• Determine a follow-up schedule and point of contact for each follow-up action.
• Provide assistance and support as required.

Source: (6)

From a practical point of view, this structure makes the setting of ground rules before the evaluation an integral part of the AAR process. Setting these rules includes obtaining agreement from all taking part that the AAR takes place in an environment in which participants feel that discussions are confidential, open, honest and safe (28).

Once the AAR process is established, it should also be determined how long the process should last. The length of AARs varies, depending on the scope and objectives. In some cases – for example, military training evaluation or emergency nurse post-event evaluation – the AAR may last only 15 minutes to 2 hours. However, in a disaster and humanitarian setting, an AAR will be more complex and will thus take more time. For type 2 emergencies, CARE International and UNICEF describe the main event as typically comprising a two-day workshop. For a smaller type 1 emergency, a similar – though much lighter – process is encouraged (53). In other cases, the length of an AAR meeting can vary, depending on the magnitude of the event or project reviewed, the number of partners involved (54) and perceived learning opportunities.

As described earlier, four formats of AAR are outlined in the WHO Guidance for AAR (3), each with different timeframes, an optimal number of pillars (broad technical areas of the response) to be reviewed, and a number of participants. Member States can decide on the most appropriate format depending on factors such as the number of participants, cultural context, complexity of the event, location of the AAR, and resources available.

An AAR provides a prime opportunity to assess the functionality of IHR core capacities. Therefore, following the identification of the best practices and challenges of the response, the WHO guidance also recommends that participants use the qualitative ratings provided (3) to assess whether each of the selected IHR core capacities was performed without challenges, with some challenges, with major challenges, or was unable to be performed.
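The four-level qualitative scale described above can be captured in a simple data structure when recording ratings during the review. A minimal sketch, assuming illustrative class and field names (the code is not part of the WHO guidance; only the four rating labels are paraphrased from it):

```python
from dataclasses import dataclass
from enum import Enum

# Four qualitative rating levels paraphrased from the WHO guidance;
# the enum itself is an illustrative construct, not an official artefact.
class CapacityRating(Enum):
    NO_CHALLENGES = "performed without challenges"
    SOME_CHALLENGES = "performed with some challenges"
    MAJOR_CHALLENGES = "performed with major challenges"
    NOT_PERFORMED = "unable to be performed"

@dataclass
class CapacityAssessment:
    capacity: str          # IHR core capacity reviewed, e.g. "Surveillance"
    rating: CapacityRating
    notes: str = ""        # best practices or challenges identified

# Hypothetical record produced during an AAR
assessment = CapacityAssessment(
    capacity="Surveillance",
    rating=CapacityRating.SOME_CHALLENGES,
    notes="Reporting delays from district level",
)
print(assessment.rating.value)  # performed with some challenges
```

Recording ratings in a structured form of this kind makes it straightforward to aggregate results across pillars at the end of the review.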


5.2 AAR FORMAT

The techniques used in an AAR can vary, depending on contextual issues, and can include both written reports and face-to-face discussions (55). An interesting study of AARs on the response to pandemic influenza A (H1N1) 2009 in the US (56) showed that the techniques used may vary according to the context, the participants, the resources available and the team leading the AAR. However, most of the reviewed sources agree that the whole process should be kept as simple and as easy to remember as possible.

All AARs follow the same general format (exchange of ideas, feedback and observations, and a focus on improving), but different methodologies can be used to undertake them; for example, focus group discussions, interviews, workshops and participative appraisal. These methods fall into four categories (26):

• surveys or questionnaires;
• interviews;
• task analysis data-gathering processes; and
• observation.

Normally, every AAR workshop follows a similar structure, with the facilitator getting agreement on the ground rules at the outset, and ensuring that everyone is clear about the specific purpose, objective and scope of the AAR and the logic to be used (9). Once the facilitator is satisfied that all aspects of participants' expectations have been explored, and that each has contributed what actually happened from their own experience, the facilitator invites the AAR participants to discuss why they thought there was a difference between what was expected and what actually occurred.

When AARs are conducted by humanitarian stakeholders and international NGOs, the facilitator normally starts the process with scene setting (disaster timeline) and then uses group work to identify lessons learned and best practices (52). In post-emergency reviews, the approach proposed by the HSPH peer assessment method is to conduct one-to-one interviews, followed by group interviews and then a facilitated "look back" session, based on the root cause analysis method^a and the building of a timeline or story arc. This approach is similar to the one adopted in other settings, such as in the military sector and by USAID, as shown in Box 6 (5).

BOX 6. TECHNIQUES FOR AAR FORMAT

CHRONOLOGICAL ORDER OF EVENTS
This technique is logical, structured and easy to understand. It follows the flow of the activity from start to finish. By covering actions in the order they took place, participants are better able to recall what happened.

KEY EVENTS, THEMES OR ISSUES
A key events discussion focuses on critical events that directly support the objectives identified before the event began. Keeping a tight focus on these events prevents the discussion from becoming sidetracked by issues that do not relate to the desired objectives. This technique is particularly effective when time is limited.

OPTIONAL DISCUSSION GUIDE
When relevant or useful, the AAR facilitator can employ a blended discussion technique that draws from elements of a chronological or thematic review. In addition, it may be helpful to collect information by:

• drilling further into the process or resources behind an event or set of events;
• asking participants to identify unexpected results and discuss their impact on the review topic or topics;
• using "what-if" scenarios to assess how different actions might have altered outcomes; and
• collecting data through complementary or more detailed review methods (e.g. evaluations, studies and statistics).

Source: (18)

^a Root cause analysis is a qualitative, retrospective, quality improvement tool used to analyse adverse incidents and sentinel events (e.g. a preventable error leading to death, serious physical or psychological injury, or risk of such injury) at the lowest system level (57).


5.3 AAR FACILITATOR AND PARTICIPANTS: ROLES AND RESPONSIBILITIES

Generally, in terms of roles and responsibilities, all the sources assessed agreed that AARs should involve a facilitator (internal or external) and participants. To ensure neutral facilitation, it may be a good idea to identify an external facilitator. Even if the AAR is conducted internally, someone should be identified who can facilitate the session to ensure that the AAR keeps to time and remains focused, and that everyone contributes. The facilitator can ask questions to clarify the distinction between opinion and fact, and help participants to expand on comments to provide an in-depth picture of what happened, what worked and what did not, and why. The WHO Guidance for AAR (3) also emphasizes the importance of having an impartial facilitator or lead interviewer who was not directly involved in the response to lead the AAR. This impartial facilitator or interviewer may be staff from partner agencies, international experts, or WHO staff.

Determining who will facilitate the debriefing should be done during the planning phase (59). The AAR facilitator should make a concerted effort to draw in and include all participants in the AAR session (18). According to US Army guidelines for AAR (5), the facilitator should be someone who is both knowledgeable about the tactics, techniques and procedures, and capable of performing the tasks themselves. In a public health setting, this would equate to an individual with the appropriate clinical or emergency management skills who is also comfortable with facilitation and spurring discussion. Ultimately, there should not be a disproportionate emphasis on technical issues at the expense of managerial, administrative or coordination functions (10). The role of the facilitator is summarized in Box 7.

BOX 7. OUTLINE OF FACILITATOR'S ROLE IN AARs

• Keep the group on task and on time.
• Encourage participation by all.
• Create an environment that supports expression of new ideas, original thinking, and recommended changes or solutions.
• Introduce the way ahead.
• Reinforce the fact that it is permissible to disagree.
• Focus on learning.
• Encourage people to give honest opinions.
• Use open-ended questions to guide the discussion.
• Paraphrase, re-state and summarize key discussion points.
• Invite input from an activity or programme's leadership to establish context.
• Set discussion parameters (if any), and introduce or reinforce the way ahead.

Source: (18)

Unfortunately, it is not always possible to use a facilitator; for example, using a facilitator can be costly and inefficient when an AAR must be conducted during normal operations. It is therefore critical to understand how to train teams to conduct their own AARs in a way that results in effective learning. More recent work suggests that self-guided debriefs (i.e. AARs guided by the team itself using a detailed guide) can be effective (8), and there are cases in the literature where AARs have been conducted without a facilitator. However, a study on the role of the facilitator in AARs (7) found that using trained facilitators significantly improved the effectiveness of AARs compared with not using one, although it noted that few studies had been conducted without a facilitator.


5.4 WHEN TO CONDUCT AARs

There is no standard protocol about when to conduct AARs in emergency response settings; however, the literature review found two main schools of thought about timing. One is that an AAR should be performed after each identifiable event or milestone and become a live-learning process (60), or be considered a routine part of the life cycle of any response (54). The other, as stated by some humanitarian organizations (33), is that an AAR should take place within the first 3–4 months of a response to a type 2 emergency, to reveal what has been learned, reassess directions, and review both successes and challenges (61). This divergence was confirmed by a meta-study of reviews of the 2009 response to pandemic influenza A (H1N1) in the US (56) involving health-care practitioners. In the study, participants indicated that real-time data collection or evaluation for the AAR (as opposed to waiting until after the event is over) was or would have been beneficial, because it offers an opportunity for multiple cycles of improvement during the response, especially for extended events such as pandemic influenza A (H1N1). However, in the same study, several participants noted that real-time AAR activities were difficult to implement during real incidents, when practitioners are busy with the response.

All sources screened appeared to agree that AARs should be carried out shortly after the end of the event, while the team is still available and "memories are fresh" (13). This is because the flexible and easy structure of AARs makes it possible to capture the learning before a team splits up, or before people forget what happened and move on to something new (17). In line with this general agreement, the WHO Guidance for AAR (3) recommends conducting AARs as soon as possible after the declaration of the end of the event by the ministry of health (or within three months).

5.5 AARs: TIPS AND TRAPS

The materials found in the literature review revealed numerous recommendations about how to conduct AARs and what to avoid. Recent research suggests that three things are essential for an AAR to be an effective learning tool (62):

• The AAR should allow for feedback on the actions taken, and for information sharing. Sharing task-related information allows team members to quickly identify the correct approach.

• The AAR should provide a framework that allows team members to critically reflect on the event, challenge implicit assumptions, and understand why something is working or not working.

• The AAR should provide a framework for establishing common goals and future action plans, to prevent similar occurrences in the future (63) and to improve learning (8).

An additional strength of the AAR format is its flexibility. A chronological format can be used to structure the discussion; alternatively, the AAR can be organized around key events, themes or issues. Process or cross-cutting themes (e.g. logistics, management, administration and support) can be discussed separately or woven into the substantive discussion. Each technique will generate discussion, and will identify strengths and successes, weaknesses and areas for improvement, and concrete, actionable recommendations (18).

Certain practices could compromise the success of an AAR. As noted in the previous section, AARs are structured discussions with the aim of enabling learning; they are not a blaming process. Everyone who participated in the project should be given equal opportunity to share, regardless of their expertise or position, and all views should be respected and recorded (this may include partners and users, when relevant) (22).


In other words, during the AAR, hierarchy should be suspended and staff should feel free to express their experiences without fear of reprisal. Moreover, AARs are not performance reviews and should not be conducted in order to allocate blame (or credit), but rather to encourage honest reflection by practitioners on in-country processes, and on key challenges and results achieved (54). In line with this reasoning, the WHO Guidance for AAR (3) also emphasizes the importance of having a lead facilitator who is external to the response, and of avoiding having a senior manager as facilitator, to allow honest and open discussion. The WHO guidance underscores the importance of ensuring an optimal environment for collective learning.

A second issue is the use of statistics, which can become a double-edged sword. During an AAR, statistics can supply objective facts that reinforce observations of both strengths and weaknesses. The danger lies in statistics for their own sake: presenting chart after chart of ratios, bar graphs and tables quickly obscures any meaning and lends itself to a "grading" of unit performance, which stifles discussion and degrades the qualitative value of the AAR. Conversely, statistics and statistics-based charts can be valuable in identifying critical trends or issues, and can reinforce teaching points (5). Table 3 summarizes what to do and what not to do when conducting an AAR.

TABLE 3. “DOS” AND “DON’TS” FOR AN AAR

DO
• Schedule the AAR shortly after the completion of an activity.
• Make AARs routine.
• Use trained facilitators.
• Collect objective data whenever possible.
• Involve all participants in discussions.
• Probe for underlying cause-and-effect relationships.
• Identify activities to be sustained as well as errors to be avoided.
• Establish clear ground rules – encourage honesty and openness, focus on things that can be fixed, and keep all discussion confidential.
• Proceed systematically: What did we set out to do? What actually happened? Why did it happen? What are we going to do next time?

DON’T
• Conduct AARs without planning.
• Conduct AARs infrequently and irregularly.
• Allow dominating leaders to run AARs.
• Allow debates to become bogged down when establishing the facts.
• Allow senior managers or facilitators to dominate discussions.
• Criticize or fault individual behaviour or performance.
• Conclude without a list of learnings to be applied in the future.
• Base performance evaluations and promotions or demotions on mistakes admitted in AARs.
• Allow unstructured, meandering and disorganized discussions.


6. THE OUTCOMES OF AARs

AARs are considered a simple system of evaluation; nevertheless, they can produce rich results that can be used for gathering lessons learned and best practices, and for management and implementation (13). As stated by the HSPH, the lessons learned in AARs can be generalized to focus on:

• actions that might prevent similar weaknesses or build on strengths in future responses;

• alternative actions the department could have taken to create different outcomes;

• changes or capacity development that could be implemented before future events to change subsequent outcomes;

• solutions that are innovative and potentially generalizable to similar jurisdictions (although the response goals may be different – that is, in different circumstances, a particular response may have been appropriate) (48); and

• addressing recurring difficulties in learning lessons; these difficulties can occur in part because of the failure to produce generalized recommendations that strengthen the system and increase capacity (instead producing recommendations specific to the context of the management of the particular event being reviewed) (64).

6.1 AFTER ACTION REPORTS

The results of an AAR should be transformed into recommendations directed to the relevant stakeholders and gathered into after action reports.^a Several sources in the literature note that the completion of such reports (64) is part of the reporting process for responsible public safety agencies, emphasizing the improvement of emergency management and first-response operations at all levels, and offering a means for documenting system improvements and planning their implementation.

According to FEMA, after action reports should be completed within 120 days of every major or significant incident or event. The report should be shared with all interested public safety and emergency management organizations.

The literature review identified some suggestions for collecting and archiving the recommendations gathered in after action reports. These are summarized in Box 8.

^a As can be imagined, the closely related terminology (including two AAR acronyms) within the after action evaluation discourse can be confusing.
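The 120-day reporting window described above reduces to a simple deadline calculation; a minimal sketch (the function name and example dates are illustrative assumptions, not drawn from FEMA guidance):

```python
from datetime import date, timedelta

# FEMA: after action reports completed within 120 days of the incident.
REPORT_DEADLINE_DAYS = 120

def report_due_date(incident_end: date) -> date:
    """Return the date by which the after action report should be completed."""
    return incident_end + timedelta(days=REPORT_DEADLINE_DAYS)

# Hypothetical incident ending on 1 March 2019
due = report_due_date(date(2019, 3, 1))
print(due)  # 2019-06-29
```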


BOX 8. SUGGESTED RECOMMENDATIONS FOLLOWING AN AAR

• Archive the AAR record as a permanent and accessible document – both electronic and hard copy – to ensure that everyone has access to the document on an ongoing basis and, in particular, that decision-makers have been informed.

• Publicize the results via internal staff distribution lists, bulletin boards, discussion lists, staff notice boards, newsletters, websites and staff meetings.

• Disseminate AAR outcomes to external stakeholders who have an interest in the findings.

Source: (59)

Although a number of templates are available online, there is no standard format or length for an AAR. Despite a large amount of energy being invested in homogenizing formats, the reports produced from AARs found in the literature are almost as varied as the organizations that produced them. This diversity affects the management of the data derived from after action reports, because it makes it difficult to undertake individual and cross-case analysis. For this reason, the WHO Guidance for AAR (3) provides a final AAR report template to encourage standardized data collection, a positive step towards facilitating the compilation of lessons learned from AARs to promote global collective learning. The WHO guidance recommends that final AAR reports be shared as widely as possible, including between countries, because they can be helpful for other countries experiencing similar challenges and risks. However, respecting possible sensitivities, the guidance also clearly states that the decision on whether to publish the final AAR report ultimately resides with the health authority's leadership.

The difficulty will be in ensuring that the lessons identified have an actual impact on the response process or capacity, rather than simply being entered into a database of reports that are not applied.

6.2 A DATABASE OF LESSONS LEARNED AND BEST PRACTICES

Recent studies of AARs increasingly support the notion that information collected in after action reports could be gathered in a database of reports, to connect past experience to future improvements (33). This basic idea is borrowed from an academic consortium (24), and inspired by other systems, such as the Lessons Learned Information Sharing (LLIS) system developed by FEMA (32). Such a database could offer a way to identify and analyse rare events, and the responses to them, through the use of qualitative methodologies of analysis. This approach would make it possible to: understand the context of rare events and the mechanisms that drive successful and unsuccessful practices; identify and share best practices; drive individual and organizational improvement; and describe the nature and frequency of incidents (24).

The lack of a homogeneous system of reporting is an important issue that challenges the systematization of the information gathered (24). Moreover, whereas a typical improvement in a preparedness system requires systematic methods for learning from an individual organization's experience in unusual situations, in the case of public health emergencies this process is challenging because these are singular events, and responses may vary significantly from one incident to another. To cope with these issues, other sectors, such as aviation, have found ways to learn from one-of-a-kind events, and these may provide a model for a public health emergency preparedness system that would enable the identification and systematic analysis of such events (24). Methods used in other sectors include reporting mechanisms, incentives for reporting and data sharing.

Certainly, reporting mechanisms could be improved through the standardization of after action reports; effective work could also be done on incentives for reporting. For example, in the aviation industry, the critical incident registry's purpose is to facilitate organizational learning and improvement of systems, rather than accountability; this approach seems to be a key reason why airline personnel submit reports about safety deficiencies that would not otherwise be detected. As is the practice in current after action reports, calling attention to system responses that worked well, in addition to those that were problematic, could help to emphasize the registry's goal of driving improvement and learning.

The biggest challenge relates to data sharing. The literature review indicated that, although the principles of transparency and sharing should be at the base of humanitarian action, actors are not always willing to share challenges, perhaps because of the need to protect job security and avoid personal blame.

However, as demonstrated by studies of the aviation industry, some registries share only the facts, removing the origin of the reports and either making their contents available to the general public or keeping them confidential.

Finally, since the objective of a lessons learned database or critical incident registry is to facilitate learning from individual incidents and to apply the findings in other contexts, a public health emergency preparedness lessons learned database requires a framework and set of tools that can produce meaningful and actionable insights about performance drivers and outcomes. This requires, first of all, the development of consensus on a method for performing rigorous analyses of individual incidents and for evaluating emergency responses. The WHO Guidance for AAR (3) aims to address these concerns by creating a standardized approach to conducting AARs and by providing ready-to-use toolkits (4) with templates, checklists, facilitators' manuals, PowerPoint presentations, and a database of trigger questions to guide AARs from preparation through conduct to reporting.
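To make the database idea concrete, one record in such a lessons-learned store could be sketched as follows. This is a minimal illustration only: the table and field names are assumptions for this sketch, not a published registry schema.

```python
import sqlite3

# Illustrative schema for a lessons-learned database; the table and
# field names are assumptions, not part of any published registry design.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE lessons (
        id INTEGER PRIMARY KEY,
        event TEXT,            -- emergency or exercise reviewed
        pillar TEXT,           -- response area, e.g. surveillance
        finding TEXT,          -- best practice or challenge observed
        recommendation TEXT,   -- actionable follow-up
        status TEXT            -- e.g. open, in progress, closed
    )
""")

# Hypothetical record drawn from an after action report
conn.execute(
    "INSERT INTO lessons (event, pillar, finding, recommendation, status) "
    "VALUES (?, ?, ?, ?, ?)",
    ("Cholera outbreak 2018", "Surveillance",
     "Case reporting from districts was delayed",
     "Introduce weekly zero-reporting from all districts", "open"),
)

# Once reports are standardized, cross-case analysis becomes a simple query.
rows = conn.execute(
    "SELECT event, recommendation FROM lessons WHERE status = 'open'"
).fetchall()
print(rows)
```

The point of the sketch is that standardized fields, not the storage technology, are what make individual and cross-case analysis tractable.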


7. CONCLUSIONS

AARs in public health emergencies are a way to review actions taken by bringing together a team during or at the end of an emergency response. AARs are based on a specific analytical framework that seeks to identify what worked well and how these practices can be institutionalized and shared with relevant stakeholders, as well as what did not work and requires corrective action. AARs can be used as practical assessments of response capacities and processes in order to identify actions needed to improve capacity for future events, and should be considered part of a culture of continuous learning and improvement.

The main purpose of an AAR is to capture and document feedback and experiences in order to improve future practices, by means of an interactive analysis that is systematic and focuses on lessons learned. This approach can bring several benefits, such as an appraisal that does not apportion blame, documentation of strengths and weaknesses for future improvement, development of capacity and team building, improvements in thinking, a shared contextual awareness, and assistance with sustaining a competitive advantage and unlocking the creativity that enables decision-making.

Normally, an AAR is a quick, facilitated, safe and open process that is time bound to the event under review. The methodology is pedagogical, and comprises a mix of several knowledge management and learning techniques. More specifically, the format could make use of prior data collection (based on secondary data review and a survey), followed by a workshop (often conducted by a facilitator) that aims to bring together the different actors who participated in the response to an event. With the WHO Guidance for AAR (3), and the accompanying online training (65) and toolkits (4) published in 2019, Member States now have a clear step-by-step roadmap and ready-to-use toolkits to conduct AARs, with the technical support of WHO upon request.

AARs should produce outcomes that can be transformed into recommendations directed to decision-makers when recorded in after action reports. Recent academic and institutional studies tend to support the idea of gathering this important information in databases such as critical incident registries, to connect past experience to future improvement. The development of such registries is ongoing. In the WHO Guidance for AAR (3), a final AAR report template has been provided to assist with standardized data collection, which could facilitate the compilation of a global "lessons learned database". Ultimately, the development of this database could promote transparency and collective learning among peer countries and other stakeholders, strengthening global preparedness and response to public health events.


1. WHO. Framework for a public health emergency operations centre. Geneva: World Health Organization (WHO); 2015 (http://www.who.int/ihr/publications/9789241565134_eng/en/, accessed March 2020).

2. Implementation of the International Health Regulations (2005): Report of the Review Committee on Second Extensions for Establishing National Public Health Capacities and on IHR Implementation. Geneva: World Health Assembly; 2015 (http://apps.who.int/gb/ebwha/pdf_files/WHA68/A68_22Add1-en.pdf, accessed March 2020).

3. WHO. Guidance for after action review (AAR). Geneva: World Health Organization (WHO); 2019 (https://www.who.int/ihr/publications/WHO-WHE-CPI-2019.4/en/ , accessed March 2020).

4. WHO. After action review (AAR) toolkits. Geneva: World Health Organization (WHO); 2019 (https://www.who.int/ihr/procedures/AAR-Toolkit/en/, accessed March 2020)

5. US Army. A leader’s guide to after-action reviews. Training Circular. 1993:25–20 (https://www.acq.osd.mil/dpap/ccap/cc/jcchb/Files/Topical/Af-ter_Action_Report/resources/tc25-20.pdf, ac-cessed March 2020).

6. Serrat O. Conducting after-action reviews and retrospects, Knowledge Solutions, Asian De-

velopment Bank. 2008:823–825 (https://www.adb.org/sites/default/files/publication/27570/conduct ing-af ter-act ion-rev iews.pdf , accessed March 2020).

7. Tannenbaum SI, Cerasoli CP. Do team and in-dividual debriefs enhance performance? A me-ta-analysis. Human Factors. 2013;55(1):231–245 (http://journals.sagepub.com/doi/abs/10.1177/0018720812448394, accessed March 2020).

8. Reiter-Palmon R, Kennel V, Allen JA, Jones KJ, Skinner AM. Naturalistic decision mak-ing in after-action review meetings: The im-plementation of and learning from post-fall huddles. Journal of Occupational and Organizational Psychology. 2014;88(2):322–340 (http://dx.doi.org/10.1111/joop.12084, accessed March 2020).

9. Walker J, Andrews S, Grewcock D, Halligan A. Life in the slow lane: making hospitals safer, slowly but surely. Journal of the Royal Society of Medicine. 2012;105(7):283–287 (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3407393/, accessed March 2020).

10. NASA. After action review (pause and learn). National Aeronautics and Space Administra-tion (NASA);(https://km.nasa.gov/wp-con-tent/uploads/sites/3/2015/11/After-Ac-tion-Review.pdf, accessed March 2020).


11. CLIC. After action review (AAR) – evaluate all activities (CPS toolkit). Cumbria, United Kingdom: Cumbria Learning and Improvement Collaborative (CLIC) (http://www.theclic.org.uk/resources/toolkits/toolkit-2/summary/module-1/after-action-review-aar, accessed March 2020).

12. Ellis S, Davidi I. After-event reviews: drawing lessons from successful and failed experience. Journal of Applied Psychology. 2005;90(5):857 (http://psycnet.apa.org/record/2005-10696-003, accessed March 2020).

13. FAO. After action reviews and retrospects. Food and Agriculture Organization of the United Nations (FAO); 2006 (http://www.fao.org/elearning/course/FK/en/pdf/trainerresources/PG_AARRetrospects.pdf, accessed March 2020).

14. Degrosky M. Improving after action review (AAR) practice. Eighth International Wildland Fire Safety Summit, Missoula, MT: Fort Hays State University; 2005 (https://www.researchgate.net/publication/254216971_Improving_After_Action_Review_AAR_Practice, accessed March 2020).

15. Ellis S, Mendel R, Nir M. Learning from successful and failed experience: the moderating role of kind of after-event review. J Appl Psychol. 2006;91(3):669–680 (https://www.ncbi.nlm.nih.gov/pubmed/16737362, accessed March 2020).

16. Mission-Centred Solutions. The after action review. 2008 (http://www.reindervrielink.nl/Handout%20After%20Action%20Review.pdf, accessed March 2020).

17. UNDP. Knowledge management toolkit for crisis prevention and recovery practice area. United Nations Development Programme (UNDP); 2007 (http://reliefweb.int/sites/reliefweb.int/files/resources/p%26i%20to%20post.pdf, accessed March 2020).

18. USAID. After-action review: technical guidance. United States Agency for International Development (USAID); 2006 (http://pdf.usaid.gov/pdf_docs/PNADF360.pdf, accessed March 2020).

19. Allen JA, Baran BE, Scott CW. After-action reviews: a venue for the promotion of safety climate. Accident Analysis & Prevention. 2010;42(2):750–757 (https://www.sciencedirect.com/science/article/pii/S0001457509003054, accessed March 2020).

20. Weick KE, Sutcliffe KM. Managing the unexpected: resilient performance in an age of uncertainty. John Wiley & Sons. 2011 (http://high-reliability.org/Managing_the_Unexpected.pdf, accessed March 2020).

21. Sawyer TL, Deering S. Adaptation of the US Army’s after-action review for simulation debriefing in healthcare. Simulation in Healthcare. 2013;8(6):388–397 (https://journals.lww.com/simulationinhealthcare/Abstract/2013/12000/Adaptation_of_the_US_Army_s_After_Action_Review.6.aspx, accessed March 2020).

22. ALNAP. Humanitarian Innovation Fund after action review (AAR) guidelines. 2017 (http://www.alnap.org/resource/22344, accessed March 2020).

23. USAID. Pre-action review in Jordan. United States Agency for International Development (USAID); 2013 (https://jordankmportal.com/resources/ldlyl-l~-lmrj-at-lty-tsbq-tkhdh-ljrt, accessed March 2020).

24. Piltch-Loeb R, Kraemer JD, Nelson C, Stoto MA. A public health emergency preparedness critical incident registry. Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science. 2014;12(3):132–143 (http://online.liebertpub.com/doi/abs/10.1089/bsp.2014.0007, accessed March 2020).

25. Villado AJ. The after action review training approach: An integrative framework and empirical investigation (Master’s thesis). Texas A&M University. 2008 (http://oaktrust.library.tamu.edu/bitstream/handle/1969.1/ETD-TAMU-3059/VILLADO-DISSERTATION.pdf?sequence=1, accessed March 2020).

26. Mastiglio T, Wilkinson J, Jones PN, Bliss J, Barnett JS. Current practice and theoretical foundations of the after action review. 2011 (https://www.researchgate.net/publication/235169250_Current_Practice_and_Theoretical_Foundations_of_the_After_Action_Review, accessed March 2020).


27. Gratch J, Mao W. Automating after action review: attributing blame or credit in team training. Proceedings of the 12th Conference on Behavior Representation in Modeling and Simulation, University of Southern California. 2003 (http://people.ict.usc.edu/~gratch/brims03.pdf, accessed March 2020).

28. Bray K, Laker S, Ilott I, Gerrish K. After action review: an evaluation tool. Bridging the gap between knowledge and practice. 2013 (https://www.cebma.org/wp-content/uploads/Starter-for-10-No-9-Final-05-03-2013.pdf, accessed March 2020).

29. Hengeveld-Bidmon E. An after action review of a research project team: a teaching case. Berkeley, California: University of California, Berkeley; 2015 (http://mackcenter.berkeley.edu/sites/default/files/an_after_action_review_of_a_research_project_team.pdf, accessed March 2020).

30. Beerens RJJ, Tehler H. Scoping the field of disaster exercise evaluation – a literature overview and analysis. International Journal of Disaster Risk Reduction. 2016;19:413–446 (http://www.sciencedirect.com/science/article/pii/S2212420916302072, accessed March 2020).

31. Massachusetts Emergency Management Agency. After action report for the response to the 2013 Boston Marathon bombings. 2014 (http://archives.lib.state.ma.us/handle/2452/264302, accessed March 2020).

32. FEMA. Operational lessons learned in disaster response. Emmitsburg, Maryland: Federal Emergency Management Agency (FEMA); 2015 (https://www.usfa.fema.gov/downloads/pdf/publications/operational_lessons_learned_in_disaster_response.pdf, accessed March 2020).

33. CARE International, Catholic Relief Services, OXFAM GB, World Vision International. Joint after-action review of our humanitarian response to the tsunami crisis. 2005 (http://www.betterevaluation.org/en/resources/examples/after_action_review/Tsunami_response, accessed March 2020).

34. OECD, DAC. Glossary of key terms in evaluation and results based management. France: Organisation for Economic Co-operation and Development (OECD) Development Assistance Committee (DAC); 2010 (https://www.oecd.org/dac/evaluation/2754804.pdf, accessed March 2020).

35. DFID. International development evaluation policy. Department for International Development (DFID); 2013 (https://www.oecd.org/derec/unitedkingdom/DFID-Evaluation-Policy-2013.pdf, accessed March 2020).

36. US CDC. Types of evaluation. Atlanta, Georgia: Centers for Disease Control and Prevention (CDC) (https://www.cdc.gov/std/Program/pupestd/Types%20of%20Evaluation.pdf, accessed March 2020).

37. Puri J, Aladysheva A, Iversen V, Ghorpade Y, Brück T. What methods may be used in impact evaluations of humanitarian assistance? Institute of Labor Economics (IZA); 2014 (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2554883, accessed March 2020).

38. White H. Theory-based impact evaluation: principles and practice. Journal of Development Effectiveness. 2009;1(3):271–284 (http://www.tandfonline.com/doi/abs/10.1080/19439340903114628, accessed March 2020).

39. Bandura A. Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review. 1977;84(2):191 (http://psycnet.apa.org/record/1977-25733-001, accessed March 2020).

40. Taylor PJ, Russ-Eft DF, Chan DW. A meta-analytic review of behavior modeling training. Journal of Applied Psychology. 2005;90(4):692 (http://psycnet.apa.org/fulltext/2005-08269-007.html, accessed March 2020).

41. Revans RW. What is action learning? Journal of Management Development. 1982;1(3):64–75 (http://www.emeraldinsight.com/doi/abs/10.1108/eb051529, accessed March 2020).


42. Robinson M. It works, but is it action learning? Education + Training. 2001;43(2):64–71 (http://www.emeraldinsight.com/doi/abs/10.1108/EUM0000000005425, accessed March 2020).

43. PLAN International. Knowledge management tool: after action reviews. (http://www.plan-academy.org/pluginfile.php/20448/mod_data/content/2943/GLO-After-Action_Review-_Knowledge_Management_Tool-Final-IO-Eng-jul14.pdf, accessed March 2020).

44. Degrosky M, Parry C. Beyond the AAR: The action review cycle (ARC). Missoula, Montana, USA: International Association of Wildland Fire; 2002 (https://www.researchgate.net/publication/228452347_Beyond_the_AAR_The_Action_Review_Cycle_ARC, accessed March 2020).

45. NHS. Knowledge management postcards. United Kingdom: National Health Service (NHS); 2010 (http://content.digital.nhs.uk/media/15466/Knowledge-management-postcards/pdf/km-postcards.pdf, accessed March 2020).

46. Young R. Knowledge management tools and techniques manual. Asian Productivity Organization; 2010 (http://www.kmbestpractices.com/uploads/5/2/7/0/5270671/km_tools__techniques_manual.pdf, accessed March 2020).

47. Heal S. Debriefings and after action reviews. Justice Academy; 2009 (http://www.justiceacademy.org/iShare/Heal/Debriefings%20and%20After%20Action%20Reviews.pdf, accessed March 2020).

48. HSPH. Peer assessment of public health emergency response toolkit. Harvard School of Public Health (HSPH); 2014 (https://cdn1.sph.harvard.edu/wp-content/uploads/sites/1609/2016/07/Peer-Assessment-toolkit-2-28-14v2.pdf, accessed March 2020).

49. Darling MJ, Parry CS. After-action reviews: linking reflection and planning in a learning practice. Reflections. 2001;3(2):64–72 (https://www.homeworkmarket.com/sites/default/files/qx/16/04/04/11/after-action_reviews-_linking_reflection_and_planning_in_a_learning_practice.pdf, accessed March 2020).

50. Signet Research & Consulting. After action review and the action review cycle, fact sheet. 2007 (http://www.signetconsulting.com/downloads/AAR%20Fact%20Sheet.pdf, accessed March 2020).

51. Kaliner J. When will we ever learn? The after action review, lessons learned and the next steps in training and educating the homeland security enterprise for the 21st century. Monterey, California: Naval Postgraduate School; 2013 (https://calhoun.nps.edu/handle/10945/34683, accessed March 2020).

52. CARE International. Facilitation of after action review (AAR) for emergency response programme in North East Kenya and Dadaab. 2012 (http://www.alnap.org/resource/6327, accessed March 2020).

53. UNICEF. A quick guide to choosing a tool. United Nations Children’s Fund (UNICEF); 2007 (https://www.unicef.org/knowledge-exchange/files/entry_points_production.pdf, accessed March 2020).

54. UN CERF. After action reviews: CERF guidance. United Nations Central Emergency Response Fund (UN CERF); 2014 (https://www.unocha.org/cerf/sites/default/files/CERF/CERF%20After%20Action%20Guidance.pdf, accessed March 2020).

55. Tami G, Bruria A, Fabiana E, Tami C, Tali A, Limor A-D. An after-action review tool for EDs: Learning from mass casualty incidents. The American Journal of Emergency Medicine. 2013;31(5):798–802 (http://www.ajemjournal.com/article/S0735-6757(13)00042-9/abstract, accessed March 2020).

56. Stoto MA, Nelson C, Higdon MA, Kraemer J, Singleton C-M. Learning about after action reporting from the 2009 H1N1 pandemic: a workshop summary. Journal of Public Health Management and Practice. 2013;19(5):420–427 (https://journals.lww.com/jphmp/Abstract/2013/09000/Learning_About_After_Action_Reporting_From_the.5.aspx, accessed March 2020).


57. Wu AW, Lipshutz AM, Pronovost PJ. Effectiveness and efficiency of root cause analysis in medicine. JAMA. 2008;299(6):685–687 (http://dx.doi.org/10.1001/jama.299.6.685, accessed March 2020).

58. WHO. South Sudan emergency response: after action review. World Health Organization (WHO); 2014.

59. ALNAP. A comparative study of after action review (AAR) in the context of the Southern Africa crisis. United Kingdom: ALNAP; 2003 (https://reliefweb.int/report/lesotho/comparative-study-after-action-review-aar-context-southern-africa-crisis, accessed March 2020).

60. UNITAR. Organising an after action review. United Nations Institute for Training and Research (UNITAR); 2017 (http://www.click4it.org/images/d/d2/Organizing_an_After_Action_Review.pdf, accessed March 2020).

61. Collison C, Parcell G. Learning to fly: practical knowledge management from leading and learning organizations (2nd edition). John Wiley & Sons. 2007.

62. Eddy ER, Tannenbaum SI, Mathieu JE. Helping teams to help themselves: comparing two team-led debriefing methods. Personnel Psychology. 2013;66(4):975–1008 (http://onlinelibrary.wiley.com/doi/10.1111/peps.12041/full, accessed March 2020).

63. DeChurch LA, Haas CD. Examining team planning through an episodic lens: effects of deliberate, contingency, and reactive planning on team effectiveness. Small Group Research. 2008;39(5):542–568 (http://journals.sagepub.com/doi/abs/10.1177/1046496408320048, accessed March 2020).

64. Savoia E, Agboola F, Biddinger PD. Use of after action reports (AARs) to promote organizational and systems learning in emergency preparedness. International Journal of Environmental Research and Public Health. 2012;9(8):2949–2963 (http://www.mdpi.com/1660-4601/9/8/2949/htm, accessed March 2020).

65. Open WHO. Management and facilitation of an after action review (AAR), online training course. Geneva: World Health Organization (WHO); 2019 (https://openwho.org/courses/AAR-en, accessed March 2020).


ANNEX. LIST OF REVIEW QUESTIONS

• What is the definition of an AAR?
• What are the differences between an AAR, an operational review, and lessons learned?
• Where does an AAR sit on the spectrum of post-emergency review?
• What are the cross-cutting methodological best practices that define an AAR?
• What are the cross-cutting pitfalls that define an AAR and must be avoided?
• What other cross-cutting methodological features appear in AARs?
• How can culture be changed towards one of critical review of actions and continuous learning?
• When should an AAR be conducted?
• For what events are AARs most relevant?
• What terminology is used?
• What is the format of an AAR?
• How long should an AAR be?
• Who should be responsible for planning and conducting AARs?
• How many people should participate in an AAR?
• What methods of data collection can be used for an AAR, and what are the advantages and disadvantages of the different methods?
• How are the results used?
• How are the results shared?
