

    This article was downloaded by: [200.42.178.74] on: 14 January 2013, at: 13:13. Publisher: Routledge. Informa Ltd, registered in England and Wales, Registered Number: 1072954. Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK.

    Journal of Development Effectiveness. Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/rjde20

    Why do we care about evidence synthesis? An introduction to the special issue on systematic reviews

    Howard White a and Hugh Waddington a

    a International Initiative for Impact Evaluation, 3ie, London, UK. Version of record first published: 18 Sep 2012.

    To cite this article: Howard White & Hugh Waddington (2012): Why do we care about evidence synthesis? An introduction to the special issue on systematic reviews, Journal of Development Effectiveness, 4:3, 351–358

    To link to this article: http://dx.doi.org/10.1080/19439342.2012.711343

    PLEASE SCROLL DOWN FOR ARTICLE

    For full terms and conditions of use, see http://www.tandfonline.com/page/terms-and-conditions, esp. Part II. Intellectual property and access and license types, 11. (c) Open Access Content.

    The use of Taylor & Francis Open articles and Taylor & Francis Open Select articles for commercial purposes is strictly prohibited.

    The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.


    Journal of Development Effectiveness, Vol. 4, No. 3, September 2012, 351–358

    Why do we care about evidence synthesis? An introduction to the special issue on systematic reviews

    Howard White and Hugh Waddington*

    International Initiative for Impact Evaluation, 3ie, London, UK

    Systematic reviews are currently in high demand in international development. At least 100 new reviews are ongoing or already completed on a range of topics across the board in international development, many of which were commissioned by policy-making agencies. These new reviews need to be based on answerable questions, using methods of analysis and reporting which are appropriate for social and economic development programmes and relevant to users. This introductory paper lays out why we believe systematic reviews should be an important component of evidence-informed development policy and practice. It concludes by introducing the papers collected in this issue, which aim to demonstrate how reviews can be made to live up to the promises generated around them.

    Keywords: systematic reviews; impact evaluation

    In the nineteenth century, as the public-health doctor Muir Gray has said, we made great advances through the provision of clean, clear water; in the twenty-first century we will make the same advances through clean, clear information. Systematic reviews are one of the great ideas of modern thought. They should be celebrated. (Goldacre, 2009, p. 98)

    1. Introduction

    The use of the systematic review methodology is comparatively new among social scientists in the international development field, but has grown rapidly in the last three years.

    In the medical field, the case for reviews has been based on two main rationales. First, the power of meta-analysis when pooling findings from individually inconclusive studies, as demonstrated in the case of the efficacy of corticosteroid treatment for early deliveries, captured in the Cochrane Collaboration logo (Greenhalgh 2001, p. 132). Second, reviews have overturned accepted wisdom, as in the recent review which demonstrated the limited efficacy (and harms) of Tamiflu medication (Jefferson et al. 2012).
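    The statistical logic behind pooling can be illustrated with a minimal fixed-effect (inverse-variance) meta-analysis. The sketch below uses invented effect sizes and standard errors, not data from any study discussed here: each study alone is statistically inconclusive, yet the pooled estimate is precise enough to reject the null.

```python
import math

# Hypothetical log-odds-ratio estimates and standard errors from three
# small studies; the numbers are invented for illustration only.
effects = [-0.35, -0.45, -0.40]
ses = [0.25, 0.30, 0.28]

# Fixed-effect (inverse-variance) pooling: weight each study by 1/SE^2,
# so more precise studies contribute more to the pooled estimate.
weights = [1.0 / se**2 for se in ses]
pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
pooled_se = 1.0 / math.sqrt(sum(weights))

# A result is "significant" at the 5% level if |estimate / SE| > 1.96.
individually_significant = [abs(y / se) > 1.96 for y, se in zip(effects, ses)]
pooled_significant = abs(pooled / pooled_se) > 1.96

print(f"pooled effect = {pooled:.3f} (SE {pooled_se:.3f})")
print(individually_significant, pooled_significant)
# No single study rejects the null, but the pooled estimate does.
```

    This is the simplest synthesis model; real reviews would also consider random-effects models and heterogeneity statistics, as discussed in Waddington et al. (2012) in this issue.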

    To date, there has not been a strong tradition of using rigorous evidence in international development. The evidence bar has been rather low, with many policies based on anecdote and cherry picking of favourable cases. In the last decade, thanks to the efforts of groups such as the Abdul Latif Jameel Poverty Action Lab (J-PAL), Innovations for Poverty Action (IPA), the World Bank's Development Impact Evaluation (DIME) initiative and Strategic Impact Evaluation Fund (SIEF), and the International Initiative for Impact Evaluation (3ie), there is growing recognition of the need for evidence from studies with

    *Corresponding author. Email: [email protected]

    ISSN 1943-9342 print / ISSN 1943-9407 online. © 2012 Howard White, Hugh Waddington. http://dx.doi.org/10.1080/19439342.2012.711343. http://www.tandfonline.com


    rigorous designs. But there are legitimate concerns about the ability to generalise from just one case, or even from a few cases. The arguments for systematic reviews are thus that they separate the wheat from the chaff, by excluding evidence unless it meets explicit quality criteria, and that they include all available quality evidence, rather than cherry picking. The examination of the quantity and quality of evidence allows reviews to assess the extent to which generalisable statements can be made.

    This special issue of the Journal of Development Effectiveness includes methodological papers which discuss how to conduct systematic reviews and meta-analyses, together with papers which exemplify experiences in conducting systematic reviews in the emerging field dedicated to reviews of relevance to international development. This introduction discusses the questions 'what is a systematic review of interventions?' and 'why do a review, and when?' in Sections 2 and 3, respectively. Section 4 then overviews the papers collected in this issue.

    2. What is a systematic review of interventions?

    Systematic reviewing is a rigorous methodological approach to evidence synthesis, usually oriented towards answering policy-relevant questions, such as 'does a specific development intervention work?', 'what affects take-up and adherence to a specific intervention?' or 'what are people's views about an intervention, including their willingness-to-pay?'. These different questions are answerable using different types of literature, usually in separate reviews.1 As examples, Masset et al. (2012) include quantitative causal evidence in their review of effects of agricultural interventions on nutrition, Munro et al. (2007) include qualitative evidence in their review of adherence to tuberculosis treatment, and Null et al. (2012) review quantitative evidence collected on willingness-to-pay for water purification.

    There is thus a parallel between debates as to how to conduct reviews and the debate around randomised control trials (RCTs) in designing impact evaluations. Just as RCTs can be the best approach to answer some questions, so a review constructed around a statistical meta-analysis of experimental and quasi-experimental evidence can be the best approach to a review. But it all depends on the question. Different questions require different approaches.

    Reviews should synthesise all relevant, high-quality evidence from the existing literature, reaching unbiased conclusions in a transparent and replicable manner. Thus, the Cochrane Collaboration defines a systematic review as a 'systematic, up-to-date summary of reliable evidence of the benefits and risks of [an intervention]'.2 The Campbell Collaboration states that reviews 'sum up the best available research on a specific question'.3

    The definitions in the previous paragraph do not in themselves accord any primacy to quantitative evidence, let alone reliance on evidence from RCTs. However, Cochrane and Campbell have traditionally excluded all but quantitative studies to answer the 'what works' question, focusing largely on evidence from RCTs. While Campbell Collaboration guidance explicitly recognises some quasi-experimental designs like regression discontinuity and interrupted time series (Shadish and Myers 2004), about half of Campbell reviews have excluded all but RCT and quasi-RCT designs4 (Konnerup and Kongsted 2012). This restriction, however, is changing in two ways. First, the Cochrane Collaboration is increasingly open to admitting a wider range of approaches to causal evidence.5 The Campbell Collaboration has recently launched an International Development Coordinating Group (IDCG), which encourages broader quasi-experimental evidence and aims to equip reviewers with the tools to incorporate it (IDCG 2012).


    Second, there is an increased emphasis on programme theory, that is, understanding the causal mechanisms by which particular interventions lead to outcomes. This necessitates reliance on factual information, including that collected using qualitative methods, for analysis of different types of research question, such as questions addressing the lower reaches of the causal chain: issues which arise in implementation, such as how well training is delivered, understood and implemented by the service providers responsible for it (Snilstveit 2012).6 Another example would be adherence or compliance, which can rely on factual analysis of qualitative data (formative evidence), rather than just the counterfactual analysis (summative evidence) needed for causal statements (White 2010). Indeed, the reasons for non-compliance are often best understood through qualitative inquiry (Bamberger et al. 2010). However, it would be right to say that most reviews completed to date have not drawn on the full range of available evidence. We hope and expect to see that changing in the coming year.

    Outside of international development, the demand for answers to a range of different questions has led to the development of different types of systematic review (Petticrew and Roberts 2006, Lavis 2009). The most common type is the effectiveness review, which includes studies addressing the impact question, that is, what difference the intervention made to outcomes ('what works?'). Table 1 lists reviews registered with the Campbell Collaboration International Development Group as of June 2012. Other types of review are appropriate according to the type of question(s) a review aims to answer (see Snilstveit et al. 2012 in this issue; see also the recent introduction to systematic reviews by the Evidence for Policy and Practice Information and Co-ordinating Centre (EPPI-Centre), Gough et al. 2012).7

    A systematic review, of whichever type, must have (1) a well-defined question for the review, (2) an explicit search strategy, (3) clear criteria for the inclusion or exclusion of studies, (4) systematic coding and critical appraisal of included studies, and (5) a systematic synthesis of study findings, including meta-analysis where appropriate. In addition, particular aspects of the review, such as inclusion decisions and data coding, should be conducted independently by more than one team member.
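    The requirement that inclusion decisions and coding be made independently by more than one team member can be checked quantitatively. One common measure is Cohen's kappa, which corrects raw agreement for the agreement expected by chance; the sketch below uses invented screening decisions (1 = include, 0 = exclude) purely for illustration.

```python
# Hypothetical independent include/exclude decisions by two reviewers
# on ten candidate studies (invented data for illustration only).
coder_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
coder_b = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]

n = len(coder_a)
# Observed agreement: proportion of studies both coders classified alike.
p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
# Chance agreement: from each coder's marginal include/exclude rates.
pa, pb = sum(coder_a) / n, sum(coder_b) / n
p_e = pa * pb + (1 - pa) * (1 - pb)
# Cohen's kappa: agreement beyond chance, 1 = perfect, 0 = chance level.
kappa = (p_o - p_e) / (1 - p_e)

print(f"observed agreement = {p_o:.2f}, kappa = {kappa:.2f}")
# → observed agreement = 0.80, kappa = 0.58
```

    Disagreements flagged this way would then typically be resolved by discussion or referral to a third reviewer before synthesis proceeds.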

    3. Why do a systematic review, and when?

    Sackett (2002) has argued that 'it is irresponsible to interfere in the lives of other people on the basis of theories unsupported by reliable empirical evidence' (cited in Chalmers 2005, p. 229). Indeed, one can extend Sackett's argument: it is also irresponsible to make general policy guidelines based on the results of one study alone, particularly when many such studies exist. Systematic reviews aim to make generalisations about interventions as a whole and are thus a vital part of broader evidence-informed decision-making.

    By filtering the existing evidence, systematic reviews provide a tool to help alleviate three information problems facing decision-makers: (1) knowing where to obtain relevant evidence, (2) determining which evidence is reliable, and (3) reaching transparent conclusions by synthesising multiple, possibly contradictory, sources of evidence. Reviews aim to avoid two major problems which undermine the credibility of the standard literature review. By collecting and appraising all evidence relevant to the review question, they avoid cherry picking to suit authors' and/or commissioners' prior beliefs, and by providing a means of evidence synthesis, reviews assist with the problem of information overload (Petticrew and Roberts 2006).

    By assessing the extent to which a review of all the available evidence suggests that a particular intervention is effective, a review of effects is in part an accountability exercise.


    Finding out whether particular policies may be beneficial, ineffective or even harmful necessitates collection of all relevant evidence, published and unpublished. If, having been presented with the evidence, a decision-maker does something else instead, then at the very least the review suggests they should explain why they do otherwise (which of course may be for legitimate reasons). The review is also partly an audit of the evidence base. Systematic reviews tell us what evidence exists on a particular issue, and through a critical appraisal of the included evidence help us to assess how reliable the evidence is, and whether we know what we think we know. Finally, and most importantly for decision-making, reviews of effects enable a focus on development effectiveness rather than aid effectiveness: the primary interest is in which interventions work, not who funded them (White 2005). In doing so, they are an important tool for lesson learning in decision-making, providing a rigorous assessment of what, if any, generalisable guidance is supported by the weight of evidence. In the context of international development in particular, this necessitates synthesis of evidence from multiple contexts and populations and an assessment of the extent to which generalisable conclusions can be made.

    Systematic reviews are currently in high demand among policy makers within the field of international development.8 However, it is important to note that in many cases a systematic review is not going to be the best use of resources (Petticrew and Roberts 2006). Due to the rigours of the approach, a systematic review is usually appropriate for answering a specific question or for testing a hypothesis relating to a limited range of interventions. Overviews of reviews, which draw together the evidence collected in individual reviews, can answer questions about the comparative effectiveness of many interventions. Unfortunately, as some of the reviews commissioned in international development so far have shown (see Mallett 2012), it is possible that the research question that funders want answers to is simply too broad, not appropriate or not answerable through a (single) systematic review.

    For many development interventions, the body of impact evaluation literature is still limited. As a result, some reviews might identify too few studies for there to be any clear, evidence-based conclusions to guide policy. While useful for highlighting evidence gaps and questioning the effectiveness of untested interventions, such reviews are unlikely to meet the expectations of the end users and/or commissioners. This highlights the importance of initial scoping of the literature to manage expectations upfront and to make an assessment of whether a systematic review is likely to meet the needs of the end user. If the evidence on effects is known to be very limited, a review of risk factors relating to a problem could still be useful (Snilstveit et al. 2012). In many areas, additional primary studies, that is, impact evaluations, are needed more than reviews. However, even under these circumstances, it is considered good practice to do at least an appraisal of the existing evidence, if not a full systematic review, prior to planning further primary research (Greenhalgh 2001, Petticrew and Roberts 2006). Doing so avoids unnecessary duplication of primary research, while indicating where replication of studies and evidence is required.

    Reviews can take a long time to complete, usually at least 12 months and often much longer, due to the rigour of data collection and the peer review process. They are often more costly than standard literature reviews.9 Systematic reviews are therefore better suited to being embedded within a larger, planned research endeavour, hence the coordination apparatus provided by the Cochrane and Campbell Collaborations. A review's contribution should be seen in the context of providing an up-to-date global public good (Chalmers 2005), with high fixed costs of the initial product, but low marginal costs of updates as new evidence becomes available. While process management does need to be improved in the reviews field, where urgency on the part of the users dictates that the review needs to be


    of the benefits of daycare programmes to child development. This is what we call a first-generation impact evaluation question: does it work? Second-generation questions are those of more use to policy makers, such as 'if any subsidy is justified, and if so, general or targeted, and if targeted, by age, region or income?', 'are there particular pedagogical approaches which are particularly effective in certain settings?' and so on. Unfortunately, the primary studies currently available do not allow for such a nuanced analysis.

    The good news is that these weaknesses are well recognised in the review community, and that concerted efforts are being made to conduct more policy-relevant reviews drawing on a fuller range of evidence. Readers are encouraged to consult future issues of the Journal of Development Effectiveness for examples of such studies.

    Notes

    1. There are examples of reviews which combine different questions within the same study; see Harden et al. (2009).
    2. http://www.cochrane.org/faq/general [Accessed 30 January 2012].
    3. http://www.campbellcollaboration.org/what_is_a_systematic_review/index.php [Accessed 30 January 2012].
    4. A quasi-RCT is a study which prospectively assigns participants to treatment using a non-random method of allocation, such as alternation, date of birth or letter of the alphabet.
    5. Less than 6 per cent of Cochrane reviews include evidence from non-randomised studies (Konnerup and Kongsted 2012). The current edition of the Cochrane Handbook recommends restricting inclusion for reviews of effects to RCTs where possible (Higgins and Green 2011), although it notes that experimental designs may not be appropriate for certain types of intervention (for example, where exposure would be unethical). In discussions which have involved 3ie, the Handbook chapter on non-randomised studies is currently being updated and will include broader designs including regression discontinuity and propensity score matching.
    6. See also Davies (2006).
    7. The EPPI-Centre's international development evidence library contains a range of different types of review. http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=3312.
    8. 3ie has managed or supported seven calls for proposals since 2008, involving both governmental and non-governmental organisations, including the UK Department for International Development (DFID), the Australian Agency for International Development (AusAID), the Canadian International Development Agency (CIDA), the Millennium Challenge Corporation (MCC), the Norwegian Aid Agency (NORAD), the United States Agency for International Development (USAID), Sightsavers and Population Services International.
    9. 3ie (2012) estimated the average cost of its reviews funded in 2011 as USD 75,000, although the costs vary depending on a range of factors including the scope of the review and the methods of synthesis employed.
    10. http://www.campbellcollaboration.org/news_/Random_notes_reviews_as_tools.php [Accessed 15 April 2012].

    References

    Bamberger, M., Rao, V., and Woolcock, M., 2010. Using mixed methods in monitoring and evaluation: experiences from international development. Washington, DC: World Bank, Policy Research Working Paper No. 5245.
    Chalmers, I., 2005. If evidence-informed policy works in practice, does it matter if it doesn't work in theory? Evidence & policy, 1 (2), 227–242.
    Davies, P., 2006. What is needed from research synthesis from a policy-making perspective? In: J. Popay, ed. Moving beyond effectiveness in evidence synthesis: methodological issues in the synthesis of diverse sources of evidence. London: National Institute for Health and Clinical Excellence.
    Duvendack, M., et al., 2012. Assessing 'what works' in international development: meta-analysis for sophisticated dummies. Journal of development effectiveness, 4 (3), 456–471.
    Goldacre, B., 2009. Bad science. London: Fourth Estate.
    Gough, D., Oliver, S., and Thomas, J., 2012. An introduction to systematic reviews. London: Sage.
    Greenhalgh, T., 2001. How to read a paper: the basics of evidence based medicine. London: BMJ Books.
    GSR, 2009. Rapid evidence assessments toolkit [online]. London: Government Social Research Unit. Available from: http://www.civilservice.gov.uk/networks/gsr/resources-and-guidance/rapid-evidence-assessment [Accessed 15 July 2012].
    Harden, A., et al., 2009. Teenage pregnancy and social disadvantage: systematic review integrating controlled trials and qualitative studies. British medical journal, 339, b4254.
    Higgins, J.P.T. and Green, S., eds., 2011. Cochrane handbook for systematic reviews of interventions [online]. Version 5.1.0 (updated March 2011). The Cochrane Collaboration. Available from: http://www.cochrane-handbook.org/ [Accessed 30 January 2012].
    IDCG (Campbell International Development Coordinating Group), 2012. Protocol and review guidelines [online]. The Campbell Collaboration. Available from: http://www.campbellcollaboration.org/artman2/uploads/1/Campbell_International_Development_Group_Protocol_and_Review_Guidelines_Mar2012.pdf [Accessed 1 April 2012].
    Jefferson, T., et al., 2012. Neuraminidase inhibitors for preventing and treating influenza in healthy adults and children. Cochrane database of systematic reviews, (1), CD008965. doi:10.1002/14651858.CD008965.pub3.
    Konnerup, M. and Kongsted, H., 2012. Do Cochrane reviews provide a good model for social science? The role of observational studies in systematic reviews. Evidence & policy, 8 (1), 79–96.
    Lavis, J., 2009. How can we support the use of systematic reviews in policy making? PLoS medicine, 6 (11), 1–6.
    Leroy, J.L., Gadsden, P., and Guijarro, M., 2012. The impact of daycare programs on child health, nutrition and development in developing countries: a systematic review. Journal of development effectiveness, 4 (3), 472–496.
    Mallett, R., et al., 2012. The benefits and challenges of using systematic reviews in international development research. Journal of development effectiveness, 4 (3), 445–455.
    Masset, E., et al., 2012. Effectiveness of agricultural interventions that aim to improve nutritional status of children: systematic review. British medical journal, 2012 (344), d8222.
    Munro, S.A., et al., 2007. Patient adherence to tuberculosis treatment: a systematic review of qualitative research. PLoS medicine, 4 (7), e238. doi:10.1371/journal.pmed.0040238.
    Null, C., et al., 2012. Willingness to pay for cleaner water: a systematic review of evidence. Systematic review 006. New Delhi: International Initiative for Impact Evaluation (3ie).
    Petticrew, M. and Roberts, H., 2006. Systematic reviews in the social sciences: a practical guide. Oxford: Blackwell Publishing.
    Sackett, D.L., 2002. The arrogance of preventive medicine. Canadian medical association journal, 167 (4), 363–364.
    Shadish, W. and Myers, D., 2004. Research design policy brief [online]. The Campbell Collaboration. Available from: http://www.campbellcollaboration.org/artman2/uploads/1/C2_Research_Design_Policy_Brief-2.pdf [Accessed 15 July 2012].
    Snilstveit, B., 2012. Systematic reviews: from bare bones reviews to greater policy relevance. Journal of development effectiveness, 4 (3), 388–408.
    Snilstveit, B., Oliver, S., and Vojtkova, M., 2012. Narrative approaches to systematic review and synthesis of evidence for international development policy and practice. Journal of development effectiveness, 4 (3), 409–429.
    Stewart, R., de Wet, T., and van Rooyen, C., 2012. Purity or pragmatism? Reflecting on the use of systematic review methodology in development. Journal of development effectiveness, 4 (3), 430–444.
    Waddington, H., et al., 2012. How to do a good systematic review and meta-analysis of effects in international development. Journal of development effectiveness, 4 (3), 359–387.
    White, H., 2005. Challenges in evaluating development effectiveness [online]. Brighton: Institute of Development Studies, IDS Working Paper No. 242. Available from: http://www.ids.ac.uk/files/Wp242.pdf [Accessed 30 January 2012].
    White, H., 2010. Theory-based impact evaluation: principles and practice. Journal of development effectiveness, 1 (3), 271–284.
