
ORIGINAL ARTICLE

Methodological pluralism in the age of evidence-informed practice and policy

ARON SHLONSKY1,2 & ROBYN MILDON2

1School of Health Sciences, University of Melbourne, Australia and 2Parenting Research Centre, Melbourne, Australia

Abstract
The use of evidence in practice and policy in public health and social services is a tricky endeavour. While virtually every practitioner, manager, or policy maker would agree that evidence should be used, there is disagreement about the nature of evidence and which evidence should be used, how, when, in what circumstances, and for whom. Within these disagreements, however, can be found some essential truths: (1) scientific knowledge evolves over time; (2) different types of evidence are needed for different purposes; (3) evidence has a range of quality; (4) synthesising multiple forms of evidence is difficult and inevitably includes some level of subjectivity; and (5) effective implementation of evidence is as important as the decision to use evidence in the first place. This paper will discuss the use of evidence in practice in what is arguably the most complex helping environment – social services – detailing the emergence and evolution of evidence-informed practice, dispelling some myths about its structure and application, and linking it to the broader origins and structure of the social and governmental systems in which it operates. Using this expanded view, the paper will then describe some useful approaches for incorporating these larger considerations into the use of evidence in practice.

Key Words: Evidence-based practice, evidence-informed practice, systematic reviews

Correspondence: Aron Shlonsky, Evidence Informed Practice, University of Melbourne, Melbourne, Australia. E-mail: [email protected]

Scandinavian Journal of Public Health, 2014; 42(Suppl 13): 18–27
© 2014 the Nordic Societies of Public Health. DOI: 10.1177/1403494813516716

The emergence and evolution of evidence-informed practice in social services

Evidence-informed practice (EIP) has its roots in health care's evidence-based medicine (EBM) movement, which began its formal usage of the term in the early 1990s [1]. EBM was later codified into a conceptual model and set of specific steps in the seminal work Evidence-based medicine: how to practice and teach EBM by Sackett et al. [2]. The model consists of three overlapping circles representing best evidence, practitioner expertise, and client values, and their intersection represents the ideal practice of EBM (Figure 1). The idea was then translated and transplanted from its origins in medicine to the helping professions by Eileen Gambrill [3] and later by Leonard Gibbs [4].

An important advance to this early model by Haynes et al. [5] was introduced into social services by Gambrill [6]. Rather than framing EBP itself as the intersection of the three components, this newer model frames clinical expertise as the optimal integration of current best evidence, client preferences and actions, and client clinical state and circumstances (Figure 2). That is, clinical expertise only occurs when these three constructs are optimally combined. In some cases, evidence may be strong and perhaps it is weighted accordingly when decisions are made. In other cases, there may be little or no valid evidence and strong client preferences, in which case preferences would figure more prominently in decisions [7]. This model is better described as evidence-informed practice (EIP) and, despite later definitions that use it differently, the first use of the term "evidence-informed" described EBM's application to an educational context [8]. Thus both terms originally referred to the same set of steps and processes for integrating current best evidence with client context. Given that the use of evidence is inexact and must reflect these client and environmental contexts even in the EBM model, the originators of the term EBM might instead have chosen EIP if they could go back in time.

At face value, the EIP decision-making model is easily accepted across a range of disciplines in the social services. It allows for different sources of input into clinical decision making and enshrines clinical expertise at the core of service provision. The difficulties arise when the specific steps associated with the application of the model are introduced, and these may be conveniently left out of descriptions of the model (see, for example, www.socialworkpolicy.org/research/evidence-based-practice-2.html#examples). As originally conceptualised, the underlying framework translates into a series of specific, practical steps to ensure that the model is systematically implemented (Table I). In EBM and EBP, individual clinicians are instructed to pose a question that can be answered using scholarly databases, query these databases to locate potentially relevant studies, appraise found studies for their quality, establish study applicability with respect to client context, and evaluate whether the intervention was helpful for a particular client or group of clients [2–4, 9].

Thus questions are driven by a formulation of the client's problem, and rigorous evidence is sought for its application in a given context. In general, there are five types of questions (a sketch of how such a question might be structured into searchable form follows the list):1

1. Effectiveness: Is one treatment or intervention more effective than another or no treatment/intervention at ameliorating or diminishing a problem or condition experienced by a client?

2. Prevention: Is one treatment or intervention aimed at preventing or stopping the initial occurrence of a problem or condition more effective than another or no treatment/intervention?

3. Risk/prognosis: What is the likelihood or probability that a client will experience undesirable consequences within a given interval of time?

4. Assessment: Does a client have a problem/condition/strength? Has a client benefited from a given treatment?

5. Descriptive: What are the dynamics of a given population (e.g. population trends)? What is the client experience of a given problem or treatment? What is the practitioner experience when providing a given treatment? What are the needs of a given community? What was the quality of implementation of a given intervention?
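To make the idea of an "answerable question" concrete, here is a minimal sketch in Python of how one of these question types might be converted into a structured, searchable form. The class, field names, and example values are invented for illustration and do not come from the sources above; a real search strategy would be far richer.

```python
from dataclasses import dataclass
from enum import Enum

class QuestionType(Enum):
    EFFECTIVENESS = "effectiveness"
    PREVENTION = "prevention"
    RISK_PROGNOSIS = "risk/prognosis"
    ASSESSMENT = "assessment"
    DESCRIPTIVE = "descriptive"

@dataclass
class AnswerableQuestion:
    """Hypothetical structure for converting an information need into a
    question that a scholarly database can answer (cf. step 2 of Table I)."""
    question_type: QuestionType
    client_group: str   # who the question is about
    intervention: str   # what is being considered
    comparison: str     # an alternative, or "no treatment/intervention"
    outcome: str        # what improvement or change would look like

    def search_terms(self) -> list[str]:
        # Naive keyword list for querying a database; a real strategy
        # would add synonyms, thesaurus terms, and study-design filters.
        return [self.client_group, self.intervention,
                self.comparison, self.outcome]

# Example: an effectiveness question, with invented content.
q = AnswerableQuestion(
    question_type=QuestionType.EFFECTIVENESS,
    client_group="adolescents at risk of re-offending",
    intervention="multisystemic therapy",
    comparison="usual probation services",
    outcome="reduced recidivism at 12 months",
)
print(q.question_type.value, "->", q.search_terms())
```

The point of such a structure is simply that it forces the practitioner to make the client, the intervention, the comparison, and the outcome explicit before searching, which is what distinguishes an answerable question from a vague information need.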

A frequent criticism of the EBP model is that it relies too heavily on evidence in the form of randomised controlled trials (RCTs) [10, 11], but it seems clear from the original EBM texts that many question types do not rely on RCTs. While most attention has been focused on questions of effectiveness and prevention, this does not mean that the model is narrow and exclusive of other forms of questions. Diagnostic questions are crucial for ascertaining whether clients have a particular condition,2 and prognostic questions are essential for conveying the known risks and benefits of potentially life-changing decisions.

Figure 1. The standard evidence-based medicine/evidence-based practice model.

Figure 2. Revised evidence-informed practice model.


Furthermore, the original EBM model included an entire section devoted to asking what Sackett et al. [2] referred to as background questions. That is, questions can be asked that generate crucial information about the course of a particular problem; its various manifestations; and its prevalence, incidence, and key characteristics.

Expanding on this idea, important information about the extent and nature of client problems and the interventions proposed to deal with them can be obtained using: observational studies of prevalence and incidence; interviews with clients, community members, and providers; and data from management information systems. That is, systematically gathered descriptive information is an essential part of understanding individual problems and their expression. Different forms of information can shed light on the way in which different treatment approaches are experienced by clients and providers, paving the way to better decision making with respect to which type(s) of services to provide. Most importantly, descriptive questions can help individual clinicians integrate client preferences and values into key decisions by understanding and describing the experiences of other clients who were in similar situations.

The role of evidence synthesis

Some of the issues raised by practitioners in the social services involve the perception that they do not have ample time to carry out the steps of EIP, that they are insufficiently versed in research methods and statistics to adequately understand the information found, that access to full-text articles is limited, that clinical implications of findings are not made sufficiently clear, and that there is often a dearth of studies upon which to inform their decisions [10–13]. These concerns have some merit, though the extent to which they undermine the model is less clear. These criticisms, and others like them, do little to obviate the need for some sort of evidence-informed approach. The alternative (i.e. not using evidence-informed approaches) is not easily defended, and there is a danger that practice will devolve into what amounts to a free-for-all (i.e. we do not really know what works, so anything goes), a reliance on intuition in the guise of expertise, or an authority-based approach [3], all of which are philosophically and morally untenable. That is, if evidence exists and we are in a position of helping clients, we are obliged to make our best effort to be aware of the evidence and to convey it to our clients so they can make informed choices about their treatment [14]. One promising pathway toward better decision making about what evidence to use, when, and for whom – one that fits with the model – is the continued development of systematic reviews of social interventions.

Systematic reviews provide helping professionals with transparent, rigorous, and informative syntheses of research in a given area [8]. While often focusing on effectiveness questions, systematic reviews are now being conducted that rigorously synthesise evidence from the broader range of questions described above. Rather than relying solely on seemingly authoritative review bodies that rate or classify the range of evidence in a given area using questionable criteria, systematic reviews attempt to decrease reporting and other forms of bias by transparently synthesising evidence for very specific questions. Where possible, advanced meta-analytic techniques are used to more objectively combine the results of studies into single measures of effectiveness. Readers are then asked to come to their own conclusions, which is more in line with a clinical decision-making model that takes client context into account. Systematic reviews are published in many fields and journals, but they vary widely in terms of their quality and credibility, and care should be taken that they are of high quality [15]. That is, even systematic reviews can have poor search strategies, can include biased studies, can use improper techniques for synthesising studies, and can overstate their findings.

Table I. Steps of evidence-based practice.

1. Become motivated to apply EBP
2. Convert information need (prevention, assessment, treatment, risk) into an answerable question
3. Track down the best evidence to answer the question
4. Critically appraise the evidence for its validity (closeness to the truth), impact (size of the effect), and applicability (usefulness in our practice)
5. Integrate critical appraisal with our practice experience, client's strengths, values, and circumstances
6. Evaluate effectiveness and efficiency in exercising steps 1–4 and seek ways to improve them next time
7. Teach others to follow the same process

Source: Gibbs LE. Evidence-based practice for the helping professions: a practical guide with integrated multimedia. Pacific Grove, CA: Brooks/Cole-Thomson Learning, 2003.


However, high-quality reviews, such as those found in the Campbell Collaboration (www.campbellcollaboration.org) and the Cochrane Collaboration (www.cochrane.org), can save a great deal of time and effort and, on balance, represent a substantial advance toward truly implementing EIP in practice and policy. Nonetheless, high-quality reviews, as found in Campbell and Cochrane, are necessary but insufficient tools for the types of decisions required in practice.
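To make the meta-analytic combination mentioned above concrete, the following is a minimal sketch assuming a fixed-effect, inverse-variance model (one standard approach among several); the trial names, effect sizes, and variances are invented for illustration and do not come from the paper or the reviews it cites.

```python
import math

# Hypothetical standardised mean differences (d) and their variances
# from three invented trials of a parenting intervention.
studies = [
    ("Trial A", 0.42, 0.031),
    ("Trial B", 0.15, 0.020),
    ("Trial C", 0.30, 0.048),
]

# Fixed-effect, inverse-variance pooling: each study is weighted by
# the reciprocal of its variance, so more precise studies count more.
weights = [1.0 / var for _, _, var in studies]
pooled = sum(w * d for (_, d, _), w in zip(studies, weights)) / sum(weights)

# The variance of the pooled estimate is the reciprocal of the summed
# weights; from it we can report a 95% confidence interval.
pooled_var = 1.0 / sum(weights)
se = math.sqrt(pooled_var)
ci_low, ci_high = pooled - 1.96 * se, pooled + 1.96 * se

print(f"Pooled effect d = {pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```

Even this toy example shows why synthesis "inevitably includes some level of subjectivity": the choice of model (fixed versus random effects), of which studies to include, and of how to handle heterogeneity all shape the single number a reader sees.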

An expanded view of evidence-informed practice

The EIP model and its associated questions are necessarily clinically focused on individual problems experienced by clients or patients. Systematic reviews focusing on effectiveness utilise studies involving individuals, and results from these studies are then combined, often using meta-analysis, to formulate effect sizes that can be useful for practitioners in decision making for and with their clients. However, individual clinical decisions do not occur in a vacuum. Rather, they occur in a vastly complex social context that is likely to have at least as much to do with overall client outcomes as clinical work. The context of clinical interactions is strongly influenced by the features of the place in which it occurs: its history, geography, population dynamics, and professional and cultural practices [7]. The uncomfortable fact is that these things make a difference to individual clients, practitioners, the agencies in which they work, and the social systems in which their agencies exist. Rather than being grounds for dismissing the EIP model, however, these influences can be integrated by expanding the model into a broader framework.

In an expanded EBP framework (Figure 3), current best evidence, client preferences and values, and client state and circumstances remain at the centre of practice but are surrounded by varying levels of context in which clinical decision making occurs [7]. This framework was first introduced by Regehr et al. [16] and highlights the professional, socio-historical, economic, and political contexts that often influence what is done to what effect. The professional context influences the training and supervision received by social service providers, which in turn influences the types of services providers are likely to recommend, refer to, or deliver. The economic context undoubtedly influences the set of resources available to providers and the agencies in which they work.

Figure 3. The expanded evidence-based practice model.
Source: Regehr C, Stern S and Shlonsky A. Operationalizing evidence-based practice: the development of an institute for evidence-based social work. Research on Social Work Practice 2007;17:408–16.


Thus, despite finding what might be a perfect intervention for a specific client, it may be that such services are unavailable or too costly. The political context no doubt influences the organisational mandate under which a provider delivers services. For example, an agency may be funded by children's services but may be dealing with women's shelters around issues of domestic violence, and these can sometimes be at odds with respect to such basics as identifying which family member is the client [17]. Finally, and perhaps most importantly, the socio-historical context influences the way communities and their members perceive a service, the provider, and the agency in which they work. For instance, the long and well-documented history of racism and harmful state interventions in the Aboriginal community with respect to child welfare policies and practices [18, 19] might undermine service provision from the very start.

Yet even this broad perspective can be further expanded by applying a systems framework to our understanding of the origins of specific social services and how they sit within the broader network of related services that comprise the welfare state. For example, Wulczyn and colleagues [20] use a systems approach to conceptualise the statutory child protection system in ways that consider its origins, dynamic and reinforcing properties, and relationship to other systems. Child protection systems are nested within a broader set of interrelated systems (e.g. health, youth justice, education, housing), which in turn operate within a national culture and context. According to these authors [20, p.3], systems for protecting children work best when their "normative context" (societal values and culture as expressed in laws and policies) aligns with the system's goals, structures, functions, and capacities.

Within this framework, the system for protecting children has been built, over time, by value-driven goals that are expressed as processes (laws, policies, rules, institutional procedures); these processes create or utilise governmental and nongovernmental structures that carry them out; and they generate a mix of services that influence an ever-changing set of outcomes for children and families (Figure 4). Furthermore, Wulczyn et al. [20] point out that difficult choices must be made within the child protection system (e.g. choosing to focus more closely on early intervention), but these choices depend upon, influence, and are influenced by the other systems that are arrayed around the lives of children and families.

Figure 4. Systems model of child protection.
Based on: Wulczyn F, Daro D, Fluke J, et al. Adapting a systems approach to child protection: key concepts and considerations. New York: UNICEF, 2010.


Outcomes are a function not just of the child protection system but also of the allied systems (health, education, mental health), making it important that one of the processes evaluated in a systemic context is the degree to which cross-system values, goals, processes, and structures are aligned with each other in the service of shared societal values.

While having an understanding of the origins and interactions of systems may not help an individual practitioner make a specific clinical decision for a given client (i.e. the range of available decisions in this framework is dictated by the larger context), the framework captures the complexity of preventative efforts across the primary, secondary, and tertiary levels of child protection and can be used to guide reform efforts at all levels of the system (direct clinical practice, management, and policy). But in order to undertake such an approach, a systems analysis is needed that extends beyond traditional notions of child protection systems and considers how they can be more effectively embedded within the broader network of systems. The move from the clinical interaction to the broader context necessitates different approaches to the conceptualisation, execution, and use of research in practice. Namely, traditional systematic reviews that focus narrowly on a particular outcome for a specific population are helpful but cannot possibly capture the variations present across contexts, and these may be crucial in terms of the implementation of potentially effective services delivered directly to clients.

An integrated approach to evidence-informed practice

Consideration of context is satisfying at one level in that it acknowledges what we all know to be true: social services are complex, and successful outcomes are a function of many related factors, some uncontrollable. At another level, accounting for and contending with this complexity can be overwhelming. Similar to statistical procedures that estimate parameters rather than arrive at exact solutions, one way forward is to develop processes that create and gather multiple forms of knowledge and, in a Herculean feat of content expertise, synthesise them in a way that can be understood, critiqued, adjusted, and acted upon in the real world.

Realist synthesis [21] has been proposed as a "third way" (i.e. neither a meta-analysis nor a narrative synthesis) to integrate different methods across a range of contexts [22]. Rather than simply testing whether something is effective, realist synthesis aims to test the theory or theories that underpin observed phenomena (e.g. what works for whom, how, and in what circumstances). In this respect, it is aligned with a post-positivist philosophy (i.e. theories must be presented and refuted, which means we ought to look at them!) and this fits nicely with the EIP approach. However, there are also challenges. In particular, the process is quite complex and is not particularly well suited to being described in specific terms within a step-by-step guide (though in all fairness, it has taken years for the Cochrane Handbook to be published and, given its page length, it is not a pocket guide). Of greater concern is that it is not at all clear that the process is reliable: i.e. two people doing a realist synthesis on the same topic may not come up with the same result(s). That said, the idea that theories should be sufficiently explicated so that they can be tested, that testing them can involve crossing content areas to establish their general success or failure, and that different methods can be used together is appealing, if daunting. The approach to synthesis might require more structure in order to meaningfully include different forms of evidence that answer different types of questions.

Practical applications of an expanded view of EIP

The range of contextual information needed for making key decisions can be wide, and with such breadth comes the potential for decision makers to be overwhelmed, incapacitated, and eventually dismissive of efforts to rigorously integrate different perspectives. Recently, several knowledge transfer/translation and exchange (KTE) and implementation models have been developed that fit with the EIP process articulated in this paper and offer step-by-step processes designed to structure the collection, synthesis, and use of evidence in the delivery of services. A full description of these developments is beyond the scope of this paper; they are described elsewhere [23, 24]. However, two of these models, one from health (Knowledge-to-Action Cycle) and one from social services (Getting to Outcomes), are particularly relevant since they integrate what is known with what is done in practice.

Knowledge-to-Action Cycle. The Knowledge-to-Action (KTA) cycle details a process for the gathering, use, and continued use of knowledge in healthcare practice (Figure 5). The KTA is used by the Canadian Institutes of Health Research for promoting the use of research and as a framework for knowledge translation [25].

The model is centred on knowledge creation, which is depicted as a funnel that begins with broad inquiries of "first-generation knowledge" (primary studies or information of variable quality).


Figure 5. The Knowledge-to-Action model.
Source: http://ktclearinghouse.ca/knowledgebase/knowledgetoaction. Reproduced with permission from John Wiley and Sons [25].

This information is then synthesised into "second-generation knowledge" through systematic reviews, meta-analyses, meta-syntheses, scoping reviews, and/or realist reviews, relying always on methods that are explicit, transparent, and reproducible to search, appraise, and synthesise evidence. Although inclusive of other forms of evidence, RCTs appear to be given greater weight than other forms of research, presumably for effectiveness and prevention questions. The funnel ends with a "third-generation knowledge" product consisting of user-friendly synopses that are organised into decision aids that are used in practice.

Arrayed around the knowledge funnel is an action cycle that both drives and is informed by the elements within the knowledge funnel. The action cycle consists of seven steps (Table II) that, in the end, transform knowledge into tools that can be effectively implemented in practice. Thus, the model uses both synthesis and implementation to formulate responses to specific practice situations. Such an approach models the complexity that is inherent in decisions involving patients, providers, systems, and even broader contexts.


Table II. Knowledge-to-Action Cycle and instructions.

1. Identify the gaps and needs
Identify an important problem, then return to the knowledge funnel to identify, review, and select the knowledge needed to address the problem. Look for information at multiple levels, including:
 a. Population
 b. Organisational/systems
 c. Provider
2. Adapt knowledge to local context
Assess the value, usefulness, and appropriateness of knowledge to the particular setting and circumstances, and customise accordingly. Guidelines are organised into three stages:
 a. Set-up
 b. Adaptation
 c. Finalisation
3. Assess barriers to knowledge use
Assess for barriers that may limit uptake of knowledge so that these barriers may be targeted by intervention strategies. Use the Clinical Practice Guidelines Framework for Improvement and multiple methods, including:
 a. Delphi procedure with a panel of experts
 b. Qualitative focus groups/interviews/questionnaires
 c. Statistical analysis of existing observational data and guideline implementation
4. Select, tailor, and implement interventions
Select, tailor, and execute interventions that facilitate and promote awareness and implementation of the knowledge.
5. Monitor knowledge use
Measure for desired changes in level of knowledge, understanding, and attitudes and/or changes in behaviour or practice.
6. Evaluate outcomes
Measure whether application of the knowledge has improved health, practitioner, or system outcomes.
7. Sustain knowledge use
Plan for and manage changes to implementation strategies in the face of evolving contexts/barriers by cycling back through the action cycle.

Source: http://ktclearinghouse.ca/knowledgebase/knowledgetoaction

It provides a level of flexibility in the gathering and use of data and, while not explicit about how to weight different forms of information of varying quality, the built-in evaluative and cyclical processes at least ensure that the selected interventions and associated tools/processes will be revisited and, in theory, improved or discontinued.
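As a purely illustrative reading of the cyclical structure just described, the following sketch renders the action cycle of Table II as a loop. The step names follow the table, but the control flow and stopping condition are assumptions made here for illustration, not part of the KTA model itself.

```python
# Hypothetical rendering of the KTA action cycle as an iterative loop.
ACTION_CYCLE = [
    "identify gaps and needs",
    "adapt knowledge to local context",
    "assess barriers to knowledge use",
    "select, tailor, and implement interventions",
    "monitor knowledge use",
    "evaluate outcomes",
]

def run_action_cycle(max_rounds: int) -> None:
    for round_no in range(1, max_rounds + 1):
        for step in ACTION_CYCLE:
            print(f"round {round_no}: {step}")
        # Step 7 (sustain knowledge use) is what closes the loop:
        # plans are revised as contexts and barriers evolve, and the
        # cycle re-enters rather than terminating after one pass.
        print(f"round {round_no}: sustain knowledge use -> re-enter cycle")

run_action_cycle(max_rounds=2)
```

The design point the loop is meant to convey is that evaluation and sustainment are not terminal steps; they feed the next round of gap identification, which is why the model is called a cycle rather than a pipeline.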

The Getting to Outcomes framework. On the social services end, the Getting to Outcomes (GTO) framework [26, 27] is a results-based accountability approach to change that has the advantage of having been successfully tested with substance abuse interventions using a longitudinal, quasi-experimental design. In this study, programmes using GTO tended to implement substance abuse interventions with greater model fidelity and showed increases in programme performance over comparison sites [28]. GTO takes both developing and mature programmes through the stages of planning, process and outcome evaluation, programme improvement, and sustainability using a 10-step procedure. Each step is articulated through a series of corresponding questions geared toward capacity building and achieving the successful implementation of interventions.

In a descriptive example, the GTO framework was applied in a public child welfare agency [29–31] to improve decision making through the selection, adoption, and implementation of a comprehensive practice model, solution-based casework (Table III).

Each of the steps of GTO is articulated as a question, and the answers to the questions indicate how the programme was implemented using the GTO framework. As indicated previously, public child welfare is a complex social intervention that can involve statutory (mandatory) services, multiple levels of intervention, disenfranchised clients, services spanning many sectors, and historical features within communities that can make engagement with clients difficult. Implementing any intervention, let alone a comprehensive practice framework, can be an arduous task. Yet structured approaches such as GTO lend themselves to moving through this complexity in ways that are systematic, transparent, and credible.

Conclusion

The use of evidence in practice is complicated, and there are no easy solutions. The EIP movement, while encouraging, still has a long way to go. Yet there is progress. The systematic synthesis of evidence holds great promise for helping us better compile and understand evidence and its application in the social services context. As the methodology of systematic reviews advances, so too will the scope of what can be included in such studies.


Table III. Getting to Outcomes (GTO): implementing solution-based casework (SBC).

1. What are the underlying needs and conditions that must be addressed by the casework practice model?
Typically, child welfare agencies identify needs that arise from data or reports – and sometimes lawsuits – that highlight some failure in their attempts to provide for the safety, permanency, or well-being of children that come into contact with their system. This provides the motivation to find more effective ways to practice.
2. What are the goals and objectives that, if realised, will address the needs and change the underlying conditions?
This planning step helps the agency to specifically identify how the future will be different if they are successful.
3. Which evidence-informed casework practice model can be used to reach agency goals?
At this step, consultants and university partners can help state agencies choose which casework practice model is best for the state and the workforce that the state can afford.
4. What actions need to be taken so that the selected programme, practice, or set of interventions fits our child welfare agency?
Many aspects of fit are at play during this step, including leadership support, the need for infrastructure changes to facilitate the new practice, and training of providers.
5. What organisational capacities are needed to implement the programme?
Assessing organisational capacity for change occurs in two areas: the human capacity (finding champions for the change and assessing staff clinical skills); and the organisational facilitators of and barriers to change.
6. What is the plan for this programme?
At this step, the SBC implementation team developed a plan for training the practice model across the system, as well as a plan for the infrastructure changes needed to support the new practice (e.g. financial and personnel resources, relevant policies and procedures, computer system upgrades, changes in the continuous quality improvement/quality assessment tool, workload changes).
7. How will the quality of programme implementation be assessed?
This question refers to the need to ensure that a process evaluation takes place while the practice model is piloted and implemented across the system. A process evaluation, among other questions, will help to answer whether the practice model is being implemented as intended and with fidelity, as well as pointing out who adheres to the practice model and who does not.
8. How well did the programme work?
This question refers to an outcome evaluation that can help to answer whether or not the practice model created significant positive changes for children, youth, and families in the child welfare system.
9. How will continuous quality improvement strategies be incorporated?
Continuous quality improvement activities, consisting of focus groups, case reviews, and others, enable the agency to examine how adherence to the practice model can be improved.
10. If the programme is successful, how will it be sustained?
In an environment where child welfare leaders typically stay in their jobs for an average of only 2 years, there need to be short-, medium-, and long-term plans for institutionalising the practice model.

As summarised in: Mildon R, Dickinson N and Shlonsky A. Using implementation science to improve service and practice in child welfare: actions and essential elements. In: Shlonsky A and Benbenishty R (eds) From evidence to outcomes in child welfare: an international reader. New York: Oxford University Press, 2014.

Frameworks that guide the use of evidence at different points in the decision-making process are beginning to be evaluated, and some are proving successful in terms of implementation fidelity and individual outcomes. As EIP takes firmer hold in the field, specific questions will prompt methods that can answer them, and these will be woven together to tell a story that is, ultimately, still unfolding.

Conflict of interest

The authors declare that there is no conflict of interest.

Funding

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

Notes

1. Modelling the evolution of EBP, these questions were adapted from Gibbs LE. Evidence-based practice for the helping professions: a practical guide with integrated multimedia. Pacific Grove, CA: Brooks/Cole-Thomson Learning, 2003. Gibbs, in turn, adapted his questions from Sackett et al. (1997). Building on these, the descriptive question has been added to considerably.

2. It should be noted that RCTs can be used to evaluate diagnostic test accuracy.


References

[1] Guyatt GH. Evidence-based medicine. ACP Journal Club 1991;114:A-16.

[2] Sackett DL, Richardson WS, Rosenberg W, et al. Evidence-based medicine: how to practice and teach EBM. New York: Churchill Livingstone, 1997.


[3] Gambrill E. Evidence-based practice: an alternative to authority-based practice. Families in Society 1999;80:341–50.

[4] Gibbs LE. Evidence-based practice for the helping professions: a practical guide with integrated multimedia. Pacific Grove, CA: Brooks/Cole-Thomson Learning, 2003.

[5] Haynes RB, Devereaux PJ and Guyatt GH. Clinical expertise in the era of evidence-based medicine and patient choice. Evidence Based Medicine 2002;7:36–8.

[6] Gambrill E. Evidence-based practice and policy: choices ahead. Research on Social Work Practice 2006;16:338–57.

[7] Shlonsky A and Benbenishty R. From evidence to outcomes in child welfare. In: Shlonsky A and Benbenishty R (eds) From evidence to outcomes in child welfare: an international reader. New York: Oxford University Press, 2014, pp. 3–23.

[8] Shlonsky A, Noonan E, Littell JH, et al. The role of systematic reviews and the Campbell Collaboration in the realization of evidence-informed practice. Clinical Social Work Journal 2011;39:362–8.

[9] Shlonsky A and Gibbs L. Will the real evidence-based practice please stand up? Teaching the process of evidence-based practice to the helping professions. Brief Treatment and Crisis Intervention 2004;4:137.

[10] Gibbs L and Gambrill E. Evidence-based practice: counterarguments to objections. Research on Social Work Practice 2002;12:452–76.

[11] Webb SA. Some considerations on the validity of evidence-based practice in social work. British Journal of Social Work 2001;31:57–79.

[12] Mullen EJ, Shlonsky A, Bledsoe SE, et al. From concept to implementation: challenges facing evidence-based social work. Journal of Evidence and Policy 2005;1:61–84.

[13] Rosen A and Proctor EK. Developing practice guidelines for social work intervention: issues, methods, and research agenda. New York: Columbia University Press, 2003.

[14] Gambrill E. Critical thinking in clinical practice: improving the quality of judgments and decisions. Hoboken, NJ: John Wiley & Sons, 2012.

[15] Littell JH and Shlonsky A. Making sense of meta-analysis: a critique of "effectiveness of long-term psychodynamic psychotherapy". Clinical Social Work Journal 2010;39:340–6.

[16] Regehr C, Stern S and Shlonsky A. Operationalizing evidence-based practice: the development of an institute for evidence-based social work. Research on Social Work Practice 2007;17:408–16.

[17] Friend C, Shlonsky A and Lambert L. From evolving discourses to new practice approaches in domestic violence and child protective services. Children and Youth Services Review 2008;30:689–98.

[18] Australian Human Rights Commission. Bringing them home: the stolen children report 1997. Sydney: AHRC. Available at: www.hreoc.gov.au/social_justice/bth_report/index.html (1997, accessed October 2013).

[19] Singh V, Trocme N, Fallon B, et al. Kiskisik Awasisak: remember the children. Understanding the overrepresentation of First Nations children in the child welfare system. Ontario: Assembly of First Nations, 2011.

[20] Wulczyn F, Daro D, Fluke J, et al. Adapting a systems approach to child protection: key concepts and considerations. New York: UNICEF, 2010.

[21] Pawson R. Evidence-based policy: the promise of "realist synthesis". Evaluation 2002;8:340–58.

[22] Pawson R and Bellamy JL. Realist synthesis: an explanatory focus for systematic review. In: Popay J (ed.) Moving beyond effectiveness in evidence synthesis: methodological issues in the synthesis of diverse sources of evidence. London: National Institute for Health and Clinical Excellence, 2006, pp. 83–94.

[23] Meyers DC, Durlak JA and Wandersman A. The Quality Implementation Framework: a synthesis of critical steps in the implementation process. American Journal of Community Psychology 2012;50:462–80.

[24] Mildon R, Dickinson N and Shlonsky A. Using implementation science to improve service and practice in child welfare: actions and essential elements. In: Shlonsky A and Benbenishty R (eds) From evidence to outcomes in child welfare: an international reader. New York: Oxford University Press, 2014, pp. 83–103.

[25] Straus SE, Tetroe J and Graham ID. Knowledge translation in health care: moving from evidence to practice. Chichester, UK; Hoboken, NJ: Wiley-Blackwell/BMJ, 2009.

[26] Wandersman A, Imm P, Chinman M, et al. Getting To Outcomes: a results-based approach to accountability. Evaluation and Program Planning 2000;23:389–95.

[27] Wandersman A. Four keys to success (theory, implementation, evaluation, and resource/system support): high hopes and challenges in participation. American Journal of Community Psychology 2009;43:3–21.

[28] Chinman M, Hunter SB, Ebner P, et al. The Getting To Outcomes demonstration and evaluation: an illustration of the prevention support system. American Journal of Community Psychology 2008;41:206–24.

[29] Antle BF, Barbee AP, Christensen DN, et al. The prevention of child maltreatment recidivism through the Solution-Based Casework model of child welfare practice. Children and Youth Services Review 2009;31:1346–51.

[30] Antle BF, Barbee AP, Sullivan DJ, et al. The effects of training reinforcement on training transfer in child welfare. Child Welfare 2009;88:5–26.

[31] Barbee AP, Christensen D, Antle B, et al. Successful adoption and implementation of a comprehensive casework practice model in a public child welfare agency: application of the Getting to Outcomes (GTO) model. Children and Youth Services Review 2011;33:622–33.