


Science of the Total Environment 463–464 (2013) 624–630

Contents lists available at ScienceDirect

Science of the Total Environment

journal homepage: www.elsevier.com/locate/scitotenv

Supporting non-experts in judging the credibility of risk assessments (CORA)☆

Peter M. Wiedemann a,⁎, Franziska Boerner a, Gregor Dürrenberger b, Jimmy Estenberg c, Shaiela Kandel d, Eric van Rongen e, Evi Vogel f

a Institute for Technology Assessment and Systems Analysis, Karlsruhe Institute of Technology, Science Forum EMF, Anna-Louisa-Karsch-Str. 2, 10178 Berlin, Germany
b Swiss Research Foundation on Mobile Communication, c/o ETH Zürich/Gebäude ETZ, IFH/K86, Gloriastrasse 35, CH-8092 Zürich, Switzerland
c Swedish Radiation Safety Authority, Strålsäkerhetsmyndigheten, SE-171 16 Stockholm, Sweden
d School of Public Policy, Hebrew University of Jerusalem, Mount Scopus, 91905 Jerusalem, Israel
e Health Council of the Netherlands, PO Box 16052, 2500 BB The Hague, The Netherlands
f Bavarian State Ministry of the Environment and Public Health, Rosenkavalierplatz 2, 81925 München, Germany

Highlights

• The CORA framework is designed to improve risk assessment reporting.
• CORA combines 18 criteria for making informed judgments on credibility of a risk assessment.
• The use of the framework requires some science literacy and critical thinking skills.

☆ This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial-No Derivative Works License, which permits non-commercial use, distribution, and reproduction in any medium, provided the original author and source are credited.

⁎ Corresponding author at: Institute for Technology Assessment and Systems Analysis, Karlsruhe Institute of Technology, Science Forum EMF, Anna-Louisa-Karsch-Str. 2, 10178 Berlin, Germany. Tel.: +49 72160822501; fax: +49 72160824806.

E-mail addresses: [email protected] (P.M. Wiedemann), [email protected] (F. Boerner), [email protected] (G. Dürrenberger), [email protected] (J. Estenberg), [email protected] (S. Kandel), [email protected] (E. van Rongen), [email protected] (E. Vogel).

0048-9697/$ – see front matter © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
http://dx.doi.org/10.1016/j.scitotenv.2013.06.034


Article history:

Received 24 September 2012
Received in revised form 6 June 2013
Accepted 9 June 2013
Available online 6 July 2013

Editor: Simon James Pollard

Keywords: Credibility; Risk assessment; Risk communication; EMF

Purpose: One of the crucial communication issues that have to be tackled by risk assessors is how to provide a comprehensible and informative characterization of their findings. The CORA framework (CORA stands for credibility of risk assessment) is designed to help non-experts judge the credibility of risk assessments. The CORA framework can be used by (1) stakeholders and policy makers, to make an educated judgment about the credibility of an assessment, and (2) the authors of a risk assessment, to improve the evaluability of their reports. The CORA framework consists of 18 criteria, leading to six main recommendations. The framework's application is not limited to the electromagnetic field (EMF) risk assessment for which it was originally developed, but extends to any area of risk or hazard assessment.

© 2013 The Authors. Published by Elsevier B.V. All rights reserved.

1. Introduction

Experts in risk assessment and in toxicology acknowledge the critical need to improve risk communication (Schreider et al., 2010; WHO, 2010). In this matter, risk assessors as well as risk managers can draw on a vast amount of literature on risk communication. For instance, there are numerous recommendations and guidelines on how to accurately and transparently report the findings of various types of studies (Altman et al., 2001) and for evaluating and reporting scientific evidence in reviews and in risk assessments (Repacholi and Cardis, 1997; Balshem et al., 2011; Guyatt et al., 2011).

Unfortunately, those recommendations do not usually take into account the limited risk literacy of non-experts and the constraints of intuitive toxicology (i.e. laypeople's peculiar understanding of the principles of risk assessments). Therefore, it is still an open question how to communicate a risk assessment to non-experts, such as policy makers and stakeholders. It remains particularly disputed which additional information should be given so that non-experts can make better judgments about the credibility of a risk assessment.



This issue deserves attention because non-experts – such as policy makers, stakeholders and the general public – often face a dilemma. They need to assess the validity of a risk assessment report, but due to limited knowledge and time constraints, they often find it difficult to do so properly. In order to come to some conclusion at all, they are then forced to use simplified evaluation strategies.

Along this line of thinking, the question arises which information might help non-experts to make a more educated judgment on the credibility of a risk assessment. To this end, we introduce a framework entitled credibility of risk assessments (CORA).

According to a recent human health framework from the EPA (2012), risk assessment consists of several steps. Planning and scoping determine the context of the assessment's intended application. Next, problem formulation defines the stressor(s), the exposed population, the injurious effect and its measurement, and the analytical methods to be used by the assessment. In this step, the available scientific evidence describes whether or not there is a risk, based on a synthesis of a variety of data from different studies, for example in vitro studies, animal research, epidemiology and studies with human volunteers. In the risk assessment phase, dose–response and exposure analyses characterize the relationship between dose and response and the exposure of people. The exposure value is then used to predict the effect that is likely to occur. In risk characterization, the conclusions of the assessment are described. Finally, in risk management, it should not simply be experts who engage in making decisions about the appropriate measures for protecting public health; stakeholders, the public and decision makers should also be involved.
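The sequence of steps above can be sketched as an ordered pipeline. The following Python fragment is our own illustration (the enum and its member names are paraphrases, not terminology taken from EPA, 2012), intended only to make the ordering of the process explicit.

```python
from enum import Enum


class RiskAssessmentStep(Enum):
    """Steps of a human health risk assessment, paraphrasing the text above."""

    PLANNING_AND_SCOPING = 1        # determine the context of the intended application
    PROBLEM_FORMULATION = 2         # define stressor(s), exposed population, effect, methods
    DOSE_RESPONSE_AND_EXPOSURE = 3  # characterize dose-response and people's exposure
    RISK_CHARACTERIZATION = 4       # describe the conclusions of the assessment
    RISK_MANAGEMENT = 5             # choose protective measures, involving stakeholders


# Enum members iterate in definition order, i.e. in the order of the process.
ORDERED_STEPS = list(RiskAssessmentStep)
```

Each step consumes the output of the previous one, which is why the framework in the following sections asks whether a report documents every stage, not only its final conclusions.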

In order to make informed decisions, these stakeholders should be able to make sense of the conclusions drawn by the risk assessors. However, risk perception research, especially studies on intuitive toxicology (Kraus et al., 1992), indicates that non-experts usually have difficulties understanding risk assessments in depth. In short, their numeracy is often not sufficient for understanding risk numbers, they interpret basic concepts of risk assessment (such as dose–response relations) differently than experts, and they usually cannot differentiate between properly and poorly conducted risk assessments. Naturally, this is not a surprise. The often-suggested remedy of focusing on an empowerment strategy, i.e. informing and educating people so that they are able to comprehend complex risk assessments, is an honorable aim. However, it may fail in the face of reality due to the bounded risk literacy described above. It seems unrealistic to expect everyone to be intimately familiar with the features of the risk assessment process. Hence, the question for risk communication concerns the kind of information to focus on in order to support non-experts' capability to make an informed judgment on the credibility of risk assessments.

2. An example: the RF EMF risk controversy

In modern societies, people are ubiquitously exposed to radio-frequency electromagnetic fields (RF EMF) from cell phones, base stations, broadcast towers and other EMF radiation sources. Hence, it does not come as a surprise that there are pronounced public concerns about the potential health risks of RF EMF exposure from cell phone towers and cell phones. These concerns extend from Europe (Eurobarometer, 2008, 2010) to Asia and Australia (Wiedemann et al., 2013). Consequently, in many countries the RF EMF case attained high priority on the political agenda, with several hundred million Euros spent on EMF risk research in the last two decades. This resulted in over 2500 published experimental and epidemiological studies (FEMU, 2013). Based on this vast amount of research, various organizations and groups have conducted risk evaluations regarding RF EMF. Currently, approximately 190 such risk assessments are available from various scientific groups across the world. Although many of them point in the same direction, others do not. For instance, the BioInitiative Group states that RF EMF is a hazard to health,

concluding, “The current standard for exposure to the emissions of cell phones and cordless phones is not safe considering studies reporting long-term brain tumor and acoustic neuroma risks” (BioInitiative Group, 2012, p. 10). In contrast, the International Commission on Non-Ionizing Radiation Protection (ICNIRP) is much more skeptical about the existence of RF EMF risk potentials. They write, “Although there remains some uncertainty, the trend in the accumulating evidence is increasingly against the hypothesis that mobile phone use can cause brain tumours in adults” (ICNIRP, 2011). In May 2011, the International Agency for Research on Cancer (IARC) classified RF EMF as “possibly carcinogenic to humans” (Baan et al., 2011). Immediately afterwards, the international WHO EMF project released a fact sheet, summarizing that “A large number of studies have been performed over the last two decades to assess whether mobile phones pose a potential health risk. To date, no adverse health effects have been established as being caused by mobile phone use.” Furthermore, the fact sheet underlines, “While an increased risk of brain tumors is not established, the increasing use of mobile phones and the lack of data for mobile phone use over time periods longer than 15 years warrant further research of mobile phone use and brain cancer risk.” Obviously, the WHO fact sheet does not contradict IARC's evaluation; however, the focus applied is different.

Those controversial conclusions create tremendous difficulties for non-experts: How should they decide who is right and who is wrong? It is even more difficult to come to an informed judgment when one takes into account the Internet publications of various activist groups. These groups try to influence public risk perception by delivering very opinionated and sometimes exaggerated views on RF EMF risk assessment. For instance, the International Electro-Magnetic Fields Alliance (IEMFA), an activist group, stated, “Ongoing developments in biomedical sciences increase worldwide consensus amongst leading life scientists that the multitude of cellular changes induced by non-ionizing electromagnetic fields may over several years bioaccumulate into a range of serious health problems, due to prolonged exposure at levels significantly below the current exposure guidelines. These fast-emerging long-term risks form a wide and major threat for public health” (IEMFA, 2013).

The presented viewpoints clearly document that the public is confronted with both warning and reassuring messages. Under those circumstances, the credibility of these various EMF risk assessors becomes especially important.

3. Theoretical background: insights of credibility research

According to source credibility theory (Hovland, 1959; Hovland and Weiss, 1951), competence and trustworthiness are the two main variables that influence credibility judgments. Competence refers to the capability to make valid assertions, and trustworthiness to the absence of vested interests, i.e. to the willingness to disclose valid assessments (McCracken, 1989).

The Elaboration Likelihood Theory (ELT), developed by Petty and Cacioppo (1986), is another interesting approach to assessing the credibility of a claim. The ELT posits a dual mode of thinking, differentiating between two routes of information processing that depend on the knowledge and the involvement of the person making a judgment. The receiver of a message – for instance, a message about the riskiness of RF EMF – will choose the “central” route of information processing if suitably motivated and in possession of the relevant knowledge to evaluate the validity of the claim. In this case, the receiver will scrutinize the quality of the arguments upon which the claim is based.

When people are less motivated or interested and/or when they do not possess the necessary knowledge for evaluating the validity of a claim, the so-called “peripheral route” will be activated. In such a situation, the receiver of a message relies on more sketchy features. For example, the receiver will take into account the appearance of the messenger, his or her social status, the preconceived value similarity



with the messenger, or the compatibility of the message with his or her own beliefs and attitudes. In effect, the receiver can only evaluate the credibility of the message, not its validity. Of course, a credible assessment does not necessarily mean that the assessment is true.

In line with this view, credibility can be considered a heuristic that subjects apply when they have neither the time nor the required expertise for judging the scientific basis, and thus the quality, of an assessment. Gigerenzer and Gaissmaier (2011) define a heuristic as “a strategy that ignores part of the information, with the goal of making decisions more quickly, frugally, and/or accurately than more complex methods” (p. 454).

Recent research by Earle et al. (2007) underlines two different evaluation strategies for judging credibility through the Trust Confidence Cooperation model (TCC model). First, credibility can be linked to the evaluation of past experience with an actor. Here, inferential heuristics, such as the familiarity heuristic, are applied in order to extrapolate from the past and apply it to the future. Second, credibility can be rooted in perceived value similarity. Here, choice heuristics are applied to judge credibility (Earle, 2010).

A further theoretical underpinning is Vygotsky's concept of the “zone of proximal development” (Vygotsky, 1978), which depicts the interval between what a person can do without any support and what he or she can do with support. The idea that some types of support are more helpful than others is essential. From this point of view, our question concerns which information about a hazard assessment report, and at what level of detail, should be provided in order to empower non-experts to move up in the zone of proximal development. The central idea here is to support people in choosing more appropriate heuristics for evaluating risk assessments that are outside their own field of expertise. In addition, the aim is also to provide an opportunity for risk assessors to self-evaluate the credibility of their assessments.

4. The development of CORA

The framework for supporting informed credibility judgments on risk assessments, which will now be discussed in detail, is called credibility of risk assessments (CORA).

CORA was developed by members of a working group on risk management that was part of the BM 0704 project. The BM 0704 project was funded by the European Cooperation in Science and Technology (COST),1 which supports cooperation among scientists and researchers across Europe. COST BM 0704 focused on emerging EMF technologies under the aspect of health risk management. The developers of CORA represent a range of scientific disciplines, including biology, engineering, health physics and social science, and have collaborated in several governmental risk assessment bodies (e.g. the Dutch Health Council and the German Radiation Protection Commission). As a first step in the development of CORA, we started with a synopsis of the social science literature on trust and credibility, as well as with a risk communication review regarding RF EMF. It led to the conclusion that non-experts might base their evaluation of credibility on either confidence or general social trust. Therefore, we focused on the question of which information will enable non-experts to link a hazard or risk assessment with confidence and trust.

A prototype version of the CORA framework was applied to various available EMF risk assessments. Among them were EMF risk assessments conducted by the International Commission on Non-Ionizing Radiation Protection (ICNIRP) and the Scientific Committee on Emerging and Newly Identified Health Risks (SCENIHR). The authors of these assessments read our evaluations and provided feedback to the developers of CORA, which was used to revise the CORA framework.

1 For more information, see the COST Web site: http://www.cost.eu.

Although the CORA framework was developed by using the EMF risk assessment example, it can be generalized to a broad range of other assessments, regardless of whether they focus on hazard or risk.

CORA can be used to evaluate any part of a risk assessment or the entire process. This broad applicability is possible because CORA focuses on more generic aspects of the assessment as well as on additional background information.

5. Structure of CORA

The CORA framework consists of 18 criteria, grouped into six sections. They pertain to the assessment itself (the content), the process of the assessment and the particular characteristics of the assessors (Fig. 1).

In each section below, an objective is given to help evaluate assessment reporting. Criteria for checking compliance with the objective are also given, followed by a rationale for the objective. Finally, points for attention are added.
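As an aid to applying the framework, the sections and criteria can be encoded as a simple checklist. The sketch below is our own Python illustration, not part of the CORA framework itself; the helper function and its scoring are assumptions, while the criterion labels follow Tables 1–5 (the Section 6 criteria on the structure of the conclusions are omitted here).

```python
# Illustrative sketch (not from the paper): the CORA checklist as a data
# structure, plus a helper that tallies per-section coverage of the criteria.
# Labels follow Tables 1-5; the Section 6 criteria are omitted.

CORA_CRITERIA = {
    "1. Overview of the report": [
        "1.1 Authorship",
        "1.2 Objectives & scope",
    ],
    "2. Accountability": [
        "2.1 Mandate",
        "2.2 Funding",
    ],
    "3. Expertise and impartiality": [
        "3.1 Criteria for selecting experts",
        "3.2 Composition of the expert group",
        "3.3 Assurance of impartiality",
    ],
    "4. Adherence to good scientific practices": [
        "4.1 Conceptual clarity",
        "4.2 Method of literature search",
        "4.3 Method of quality assessment",
        "4.4 Procedure for weighing evidence",
        "4.5 Consensus finding procedure",
    ],
    "5. Consultation and participation": [
        "5.1 Public consultation and stakeholder participation",
        "5.2 Special procedure for addressing controversies",
    ],
}


def section_coverage(answers):
    """Given {criterion: bool} judgments, return {section: (met, total)}."""
    return {
        section: (sum(bool(answers.get(c)) for c in criteria), len(criteria))
        for section, criteria in CORA_CRITERIA.items()
    }
```

For example, a reader who can only confirm that authorship is disclosed would obtain a coverage of (1, 2) for Section 1 and (0, n) for the remaining sections, making the gaps in a report's documentation immediately visible.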

5.1. Section 1: overview of the report

5.1.1. Objective
Obtain a clear picture of the fundamental characteristics of the report (Table 1).

5.1.2. Rationale
Anyone looking for information, e.g. on the Internet, will initially be guided by very basic features; these include the document's title, the authors' and publisher's names, and the year of publication (Leicht, 2011). The disclosure of this information seems to be a first step towards familiarity, which helps to build trust (Kramer, 1999). It is a simple heuristic that assigns trust to those who are familiar and distrust to those who are unfamiliar. Therefore, information that has a “human face” has a better chance of gaining trust. Of course, familiarity is not a sufficient condition for gaining trust. In addition, the expression of empathy and a willingness to care is required (Earle, 2010; Peters et al., 1997). Note that the authors' affiliations seem to be a first proxy for judging their expertise and impartiality. Furthermore, the objectives of the assessment report and the topic(s) covered by the report are essential information for the reader. These descriptions determine the relevance of the report for the reader and should be easy to find for a quick first check.

5.1.3. Point for attention
Be wary when detailed information about the scope of the report is absent.

5.2. Section 2: accountability

5.2.1. Objective
Search for information about to whom the assessors are accountable (Table 2).

5.2.2. Rationale
Information about conditions that may influence the agenda of the experts who conducted the risk assessment is essential. Especially important to know are the funding source of the report and to which organization the risk assessors are accountable. There is reliable evidence that pecuniary interests can influence the conduct, interpretation and presentation of research (Huss et al., 2007; Lesser et al., 2007). The mandate may also influence research conduct, and especially the interpretation of its results. MacCoun (2005) discusses various ideological biases, for instance allegiance to a certain outcome of an assessment, which may lead to an interpretation of research data in support of this a priori preferred position.


Fig. 1. Main sections of the CORA framework.


5.2.3. Point for attention
An open-minded approach takes into account both financial and ideological sources of research bias.

5.3. Section 3: expertise and impartiality

5.3.1. Objective
Assess whether information is given regarding how expertise and impartiality were established (Table 3).

5.3.2. Rationale
Various disciplines, such as risk research, marketing research and discourse theories, agree that perceived expertise is a crucial component of credibility (Peters et al., 1997; Eisend, 2006; Webler, 1995). However, with respect to expertise, two questions may be raised. First, does the expert have the right expertise? Second, will the expert use it in a proper way? To answer the first question, it is important to report what kinds of expertise were brought together and whether they cover the needed expertise. However, most people know that experts can disagree. Especially when there are public concerns and scientific controversy in the media, people will ask which experts were selected and whether they represent the full range of the scientific spectrum. Therefore, it should be described whether the selection process was unbiased. McComas (2008) demonstrated that trust in an expert is based on the belief that no vested interests have influenced the judgment of the expert. A further point to consider is how potential biases in the selection and interpretation of research were avoided. In discussing how to control the selective use and emphasis of evidence, MacCoun (2005) pointed out that a prerequisite for unbiased conclusions is to motivate the assessors to actively consider alternative and competing interpretations of the available evidence. Leicht (2011) conducted a series of qualitative interviews with educated members of the general public and medical practitioners about the background information that they would like to receive in order to evaluate the credibility of an assessment. It turned out that in both groups – albeit at different levels of detail – integrity and independence of the assessors were fundamental criteria.

Table 1
Criteria related to background information.

1.1 Authorship: Does the report disclose the names and affiliations of the experts who conducted the assessment?
1.2 Objectives & scope: Is the scope of the risk assessment described?

Table 2
Criteria related to accountability.

2.1 Mandate: Does the report include information on the mandate of the assessment group?
2.2 Funding: Is there information available about the funding of the assessment group?

5.3.3. Point for attention
Scientific expertise is indispensable for any sound scientific conclusion. However, impartiality is a necessary prerequisite for achieving unbiased conclusions. Both affect the quality of an assessment.

5.4. Section 4: adherence to good scientific practices

5.4.1. Objective
Search for information that depicts the quality of the assessment (Table 4).

5.4.2. Rationale
The issues being addressed in this section relate to the most critical parts of the risk assessment process, which require a substantial amount of science literacy. For instance, in the debate about EMF risk potentials, it is decisive whether a distinction has been made between biological and adverse health effects in the risk assessment. This issue emerges, for example, when electroencephalogram (EEG) studies are discussed. Changes in the EEG do not necessarily indicate adverse health effects. Risk assessments that do not adequately consider the above distinction between adverse and biological effects lead to wrong impressions and confusion. Furthermore, a clear description of the literature search strategy and the criteria used for study selection, as well as the procedure to weigh the strength of evidence, are of foremost importance (NHMRC, 2000a, 2000b). Finally, a transparent description of the judgment and decision-making process used in reaching the conclusions may help readers to evaluate how credible the conclusions are (Brouwers et al., 2010). Additionally, good scientific practice requires that quality control measures, i.e. the measures used to ensure the objectivity, reliability and validity of the assessment, are reported (NHMRC, 2000b).

Table 3
Criteria related to the composition of the assessment group.

3.1 Criteria for selecting experts: Are the criteria upon which the selection of the experts was based disclosed?
3.2 Composition of the expert group: Is the composition of the expert group explained, what kind of experts are included and is the spectrum of required expertise covered?
3.3 Assurance of impartiality: Are the procedures that are applied to get an impartial view, i.e. free of vested interests, described?

Table 4
Criteria related to the procedure of the risk assessment.

4.1 Conceptual clarity: Does the report have a sound conceptual base? For instance, does it distinguish between biological and adverse health effects?
4.2 Method of literature search: Is information available about the literature search strategy, and was the strategy unbiased?
4.3 Method of quality assessment: Does the report indicate which procedures were used to assure the quality of the assessment?
4.4 Procedure for weighing evidence: Is the process of weighing the various studies explained, as well as how they are combined for the conclusions of the assessment?
4.5 Consensus finding procedure: Is information available about the procedure of finding consensus among the experts?

Table 6
Criteria related to the conclusions.

5.4.3. Point for attention
Obtain an idea about the main sources of information used to compile the report. Is the report based on a comprehensive and up-to-date database? Be wary if only studies supporting one possible conclusion of a risk assessment are presented, without taking into account opposing or other views.

5.5. Section 5: consultation and participation

5.5.1. Objective
Examine whether there was a public consultation process in order to get the opinions of various stakeholders on the risk assessment report (Table 5).

5.5.2. Rationale
Consultation and stakeholder participation are useful for many reasons. For instance, they foster the discussion of controversial topics in order to obtain a broad range of views. Furthermore, they might improve the assessment. Finally, they are crucial in gaining understanding and acceptance by the public, communities and stakeholders (Kheifets et al., 2010; U.S. EPA, 2003). Studies on public participation in risk-related decision making suggest that stakeholder involvement improves the quality of both the process and the outcome (Beierle and Cayford, 2001, 2002) and is seen by the public as a potent conflict resolution tool (Wiedemann and Schütz, 2008). Including important stakeholders in risk assessment activities can promote trust in the resulting conclusions and in the acceptability of the connected risk management recommendations (Dietz and Stern, 2008).

Table 5
Criteria related to discourse and conflict resolution.

CORA criteria

5.1 Public consultation and stakeholder participation: Is there information about a procedure applied in order to receive comments and inputs from various stakeholders and the general public?

5.2 Special procedure for addressing controversies: Is there information about a special procedure that addresses scientific controversies?

5.5.3. Point for attention
Stakeholder participation cannot be a substitute for expertise, which is necessary for guaranteeing the quality of the risk assessment. However, it indicates that a range of societal values and concerns has been taken into account in the course of the assessment process, which might improve the credibility of the risk assessment report.

5.6. Section 6: structure of the conclusions

5.6.1. Objective
Check whether the conclusions in the report are balanced, evidence-based and transparent (Table 6).

5.6.2. Rationale
The conclusions should summarize the key findings as well as the strengths and weaknesses of the assessment in a transparent and understandable way. Furthermore, in order to give a balanced and unbiased representation of the findings, it is important that arguments that discount (adverse) effects and arguments that support them are both presented. In addition, uncertainties should be described in the report in a way that enables the reader to understand not only the strengths, but also the weaknesses of the available evidence (Ioannidis et al., 2004). The final conclusions of the risk assessment, and the assessor's confidence in them, should be described clearly and linked transparently to the presented body of evidence.

Wiedemann et al. (2011) showed that laypeople prefer to receive a comprehensive and balanced risk assessment report. They like to get information about the base of evidence (number of studies available), the main arguments that support a risk and those that speak against it (pro and con arguments), and information about the remaining uncertainty. However, other studies (Wiedemann et al., 2008; Han et al., 2010) have shown that the disclosure of uncertainties might lead to an increase in risk perceptions and a decrease in trust in the conclusions of the assessment. Therefore, uncertainty information should be accompanied by a clear description of the available evidence.

Usually, many readers do not study risk assessments in detail. Rather, they inspect the abstract, introduction and summary, and then perhaps screen selected chapters of the assessment report. Against that background, it is evident that the information given in abstracts, policy summaries and other concluding parts of reports is probably the most important with regard to opinion formation. This is especially true for persons without the specialized scientific background knowledge needed to follow in detail the evidence presented in a report. Considering the limited familiarity of general readers with scientific terminology (Glenton et al., 2010), it makes sense to complement a risk assessment report with a plain-language summary. Such a summary should be balanced, discussed in the context of other available risk assessments, and presented in a non-persuasive style (Zaza et al., 2000).

Table 6
Criteria related to the structure of the conclusions.

CORA criteria

6.1 Two-sided argumentation: Does the report provide a balanced discussion of the pros and cons for the conclusions? Are the strengths and weaknesses of the available evidence indicated?

6.2 Evidence-based conclusions: Are the conclusions explicitly linked to the evidence discussed in the body of the report?

6.3 Uncertainty reporting: Is there information available about the remaining uncertainties of the conclusions?

6.4 Transparent summary: Is there a plain language summary?


5.6.3. Point for attention
Be wary if no uncertainties are reported and the conclusions are framed as a definite judgment. Also be wary if the conclusions are not balanced in terms of discussing both the pros and cons of the risk arguments. Be wary if the information is presented in a sensational, emotive or alarmist way.

6. Discussion

Communication about risk and hazard assessments requires clear, comprehensive, transparent and reasonable messages (U.S. EPA, 2000). Regardless of how well these requirements are met, there are always some limitations due to the motivation, time constraints and limited knowledge of non-experts. These characteristics obstruct their understanding of complex risk messages. Therefore, credibility becomes the crucial issue. However, the same limitations that hamper the proper understanding of a risk assessment may bias the evaluation of its credibility. For instance, credibility judgments might be based on oversimplified and inappropriate information, such as whether the conclusions of the risk assessment are in line with one's own prior belief about whether there is a risk or not.

CORA provides a framework for making a more qualified judgment about the credibility of an assessment. It fills a gap, because other instruments, such as CONSORT (Moher et al., 2010) or AGREE (AGREE Collaboration, 2013), aim at improving the reporting of scientific studies and assessments for a scientific audience. That is, they attempt to improve assessment quality, not necessarily the credibility of the assessment. In addition, the CORA framework does not intend to persuade the reader that a certain assessment is credible; rather, it provides a guide for better judging the credibility of a risk assessment.

Although CORA was developed with respect to the EMF controversy, the framework can be used for other risk assessments as well. It is generic enough to provide orientation in any such case.

Nevertheless, there are open questions. The first concerns the usefulness of the framework in terms of its diagnostic utility. In other words, is CORA able to detect significant differences among various risk assessments? If all risk assessments are equally good or equally bad in the light of the CORA framework, then its diagnostic utility would be rather limited. Second, do various stakeholders perceive CORA as a valuable tool? Which of the items of the CORA framework are seen as most valuable? At present, no research is available that has rigorously tested the usability of CORA for various stakeholder groups. Can CORA keep its promises and indeed support qualified judgments about the credibility of a risk assessment? And if so, which qualifications in science are required for the proper use of CORA? It might be of great value to test CORA on different hazard and risk assessments and observe how the criteria are used in practice by various groups of people with different knowledge and demographic backgrounds.

References

AGREE Collaboration. Appraisal of Guidelines for Research & Evaluation (AGREE) Instrument. Available at: www.agreecollaboration.org, 2013.

Altman DG, Schulz KF, Moher D, Egger M, Davidoff F, Elbourne D, et al. The revised CONSORT statement for reporting randomized trials: explanation and elaboration. Ann Intern Med 2001;134(8):663–94.

Baan R, Grosse Y, Lauby-Secretan B, El Ghissassi F, Bouvard V, Benbrahim-Tallaa L, et al. Carcinogenicity of radiofrequency electromagnetic fields. Lancet Oncol 2011;12(7):624–6.

Balshem H, Helfand M, Schunemann HJ, Oxman AD, Kunz R, Brozek J, et al. GRADE guidelines 3: rating the quality of evidence—introduction. J Clin Epidemiol 2011;64(4):401–6.

Beierle TC, Cayford J. Evaluating dispute resolution as an approach to public participation. Discussion Paper 01–40. Washington: Resources for the Future; 2001.

Beierle TC, Cayford J. Democracy in Practice: Public Participation in Environmental Decisions. Washington: Resources for the Future; 2002.

BioInitiative group. BioInitiative 2012 Report Issues New Warnings on Wireless and EMF. Rensselaer, NY: University at Albany; 2012 [Available at: http://www.bioinitiative.org/media/press-releases/].

Brouwers M, Kho ME, Browman GP, Cluzeau F, Feder G, Fervers B, et al. on behalf of the AGREE Next Steps Consortium. AGREE II: advancing guideline development, reporting and evaluation in healthcare. Can Med Assoc J 2010;182(18):839–42.

Dietz Th, Stern PC. Public Participation in Environmental Assessment and Decision Making. Washington: National Academies Press; 2008.

Earle TC. Trust in risk management: a model-based review of empirical research. RiskAnal 2010;30(4):541–74.

Earle TC, Siegrist M, Gutscher H. Trust, Risk Perception and the TCC Model of Cooperation. In: Siegrist M, Earle TC, Gutscher H, editors. Trust in Cooperative Risk Management: Uncertainty and Skepticism in the Public Mind. London: Earthscan; 2007. p. 1–49.

Eisend M. Source credibility dimensions in marketing communication—a generalized solution. J Empir Gen Mark Sci 2006;10(2):1–33.

EPA US. Framework for human health assessment to inform decision making. Availableat: http://www.epa.gov/raf/frameworkhhra.htm, 2012.

European Commission. Standard Eurobarometer 68. Available at: http://ec.europa.eu/public_opinion/archives/eb/eb68/eb68_en.htm, 2008.

European Commission. Special Eurobarometer 347. Electromagnetic fields. Availableat: http://ec.europa.eu/public_opinion/archives/ebs/ebs_347_en.pdf, 2010.

FEMU. Forschungsbericht 2012. http://www.arbeitsmedizin.rwth-aachen.de/fileadmin/resources/FEMU/Forschungsberichte/femu_forschungsbericht_2012.pdf, 2013.

Gigerenzer G, Gaissmaier W. Heuristic decision making. Annu Rev Psychol 2011;62:451–82.

Glenton C, Santesso N, Rosenbaum S, Nilsen ES, Rader T, Ciapponi A, et al. Presenting the results of Cochrane Systematic Reviews to a consumer audience: a qualitative study. Med Decis Making 2010;30(5):566–77.

Guyatt GH, Oxman AD, Vist G, Kunz R, Brozek J, Alonso-Coello P, et al. GRADE guidelines 4: rating the quality of evidence—risk of bias. J Clin Epidemiol 2011;64(4):407–15.

Han S, Kang T, Salter S, Yoo YK. A cross-country study on the effects of national cultureon earnings discretion. J Int Bus Stud 2010;41:123–41.

Hovland CI. Reconciling conflicting results derived from experimental and survey studies of attitude change. Am Psychol 1959;14(1):8–17.

Hovland CI, Weiss W. The influence of source credibility on communication effectiveness. Public Opin Q 1951;15(4):635–50.

Huss A, Egger M, Hug K, Huwiler-Muntener K, Röösli M. Source of funding and results of studies of health effects of mobile phone use: systematic review of experimental studies. Environ Health Perspect 2007;115(1):1–4.

ICNIRP. Mobile phones, brain tumors, and the Interphone study: where are we now? Available at: http://www.icnirp.de/documents/SCIreview2011.pdf, 2011.

International Electro-Magnetic Fields Alliance (IEMFA). Global Health at Risk by Daily Electromagnetic Fields. Chronic EMF-associated diseases: everyone is at potential risk. Urgent need for a precautionary approach. Available at: http://www.elektrosmog.com/international-emf-alliance/, 2013.

Ioannidis JP, Evans SJ, Gotzsche PC, O'Neill RT, Altman DG, Schulz K, et al. Better reporting of harms in randomized trials: an extension of the CONSORT statement. Ann Intern Med 2004;141(10):781–8.

Kheifets L, Swanson J, Kandel S, Malloy TF. Risk governance for mobile phones, power lines, and other EMF technologies. Risk Anal 2010;30(10):1481–9.

Kramer RM. Trust and distrust in organizations: emerging perspectives, enduring ques-tions. Annu Rev Psychol 1999;50:569–98.

Kraus N, Malmfors T, Slovic P. Intuitive toxicology: expert and lay judgments of chemicalrisks. Risk Anal 1992;12:215–32.

Leicht M. Wer vertraut warum in welche Expertise? Ein Vergleich zwischen MedizinerInnen und Nicht-MedizinerInnen [Who trusts which expertise, and why? A comparison between physicians and non-physicians]. Diplomarbeit, Leopold-Franzens-Universität Innsbruck; 2011.

Lesser LI, Ebbeling CB, Goozner M, Wypij D, Ludwig DS. Relationship between funding source and conclusion among nutrition-related scientific articles. PLoS Med 2007;4(1):e5.

MacCoun RJ. Conflicts of interest in public policy research. In: Moore DA, Cain DM, Loewenstein G, Bazerman M, editors. Conflicts of Interest: Problems and Solutions from Law, Medicine and Organizational Settings. London: Cambridge University Press; 2005. p. 233–64.

McComas K. The role of trust in health communication and the effect of conflicts ofinterest among scientists. Proc Nutr Soc 2008;67:428–36.

McCracken G. Who is the celebrity endorser? Cultural foundations of the endorsementprocess. J Consum Res 1989;16:310–21.

Moher D, Hopewell S, Schulz KF, Montori V, Gøtzsche PC, Devereaux PJ, et al. For the CONSORT Group. CONSORT 2010 Explanation and Elaboration: updated guidelines for reporting parallel group randomised trials. Br Med J 2010;340:c869.

NHMRC. How to review the evidence: systematic identification and review of the scientific literature. Available at: http://www.nhmrc.gov.au/_files_nhmrc/publications/attachments/cp65.pdf, 2000a.

NHMRC. How to Use the Evidence: Assessment and Application of Scientific Evidence. Available at: http://www.nhmrc.gov.au/_files_nhmrc/publications/attachments/cp69.pdf?q=publications/synopses/_files/cp69.pdf, 2000b.

Peters RG, Covello V, McCallum DB. The determinants of trust and credibility in environmental risk communication: an empirical study. Risk Anal 1997;17(1):43–54.

Petty RE, Cacioppo JT. Communication and Persuasion: Central and Peripheral Routesto Attitude Change. New York: Springer-Verlag; 1986.

Repacholi MH, Cardis E. Criteria for EMF health risk assessment. Radiat Prot Dosimetry1997;72(3,4):305–12.

Schreider J, Barrow C, Birchfield N, Dearfield K, Devlin D, Henry S, et al. Enhancing the credibility of decisions based on scientific conclusions: transparency is imperative. Toxicol Sci 2010;116(1):5–7.

U.S. EPA. Risk Characterization Handbook. Washington: United States EnvironmentalProtection Agency; 2000.


U.S. EPA. Framework for implementing EPA's public involvement policy. Available at:http://www.epa.gov/publicinvolvement/policy2003/framework.pdf, 2003.

Vygotsky LS. Interaction between learning and development. In: Cole M, John-Steiner V, Scribner S, Souberman E, editors. Mind in Society: The Development of Higher Psychological Processes. Cambridge, MA: Harvard University Press; 1978. p. 79–91.

Webler Th. Right Discourse in citizen participation: an evaluative yardstick. In: Renn, Webler, Wiedemann, editors. Fairness and Competence in Citizen Participation. Dordrecht and Boston: Kluwer; 1995. p. 35–86.

WHO. WHO research agenda for radiofrequency fields. Available at: http://whqlibdoc.who.int/publications/2010/9789241599948_eng.pdf, 2010.

Wiedemann PM, Schütz H. Informing the public about information and participation strategies in the siting of mobile communication base stations: an experimental study. Health Risk Soc 2008;6:517–34.

Wiedemann PM, Schütz H, Thalmann A. Perception of uncertainty and communication about unclear risks. In: Wiedemann PM, Schütz H, editors. The Role of Evidence in Risk Characterization: Making Sense of Conflicting Data. Weinheim: Wiley-VCH; 2008. p. 161–83.

Wiedemann PM, Schütz H, Krug H, Spangenberg A. Evidence maps: communicating risk assessments in societal controversies: the case of engineered nanoparticles. Risk Anal 2011;31(11):1770–83.

Wiedemann PM, Schuetz H, Boerner F, Clauberg M, Croft R, Shukla R, et al. When precaution creates misunderstandings: the unintended effects of precautionary information on perceived risks. The EMF case. Risk Anal 2013. http://dx.doi.org/10.1111/risa.1203. [Article first published online: 28 MAR 2013].

Zaza S, Wright-De Aguero LK, Briss PA, Truman BI, Hopkins DP, Hennessy MH, et al. Data collection instrument and procedure for systematic reviews in the Guide to Community Preventive Services. Prev Med 2000;18:44–74.