



What We Know About…Evaluation Planning

What We Know About… reports are quick summaries of new health communication research and trends of interest to CDC and its partners. They are intended to keep health communication and marketing professionals up to date on new findings and their implications for public health communication.

Brought to you by the Marketing and Communication Strategy Branch in the Division of Health Communication and Marketing, Centers for Disease Control and Prevention (CDC).

Evaluation Planning: What is it and how do you do it?

Imagine that you or your research team has just completed a communication intervention designed to reduce smoking among adolescents. Wouldn’t you want to know whether the intervention worked? That is where evaluation comes in. In this case, you would conduct a summative evaluation (after the intervention) to answer questions such as: (1) Did rates of smoking among adolescents decrease? (2) Did the radio ads reach enough teens to provide adequate statistical power? (3) Did the ads affect norms about smoking in that age group? (4) Was the cigarette tax increase during the evaluation period the real reason that smoking decreased? (5) Did the ads “boomerang” by making teens think that smoking is more prevalent than it actually is in their age group? If the research team had also conducted a formative evaluation (before and during the communication intervention), it could have made any necessary changes, such as editing the radio ads or adjusting when they were played, before continuing. If you’re still feeling confused, don’t worry; the purpose of this introductory section is to provide you with some useful background information on evaluation planning.

What is evaluation?

Evaluations are, in a broad sense, concerned with the effectiveness of programs. While common-sense evaluation has a very long history, evaluation research, which relies on scientific methods, is a young discipline that has grown considerably in recent years (Spiel, 2001). Evaluation is a systematic process for understanding what a program does and how well it does it. Evaluation results can be used to maintain or improve program quality and to ensure that future planning is more evidence-based. Evaluation is part of an ongoing cycle of program planning, implementation, and improvement (Patton, 1987).


Make evaluation part of your health communication program from the beginning; don’t tack it on at the end! The evaluation experience is likely to be more positive, and its results are likely to be more useful, if you build evaluation in from the start and make it an ongoing activity. This includes planning the summative evaluation before the intervention begins, as part of the planning process, which helps to clarify program goals and reasonable outcomes.

Before you begin, you and your research team should know why the evaluation is being undertaken (i.e., for performance measurement or for program improvement) and what type of evidence would be sufficient for your program and stakeholders. By evidence, we generally mean information helpful in forming a conclusion or judgment. In other words, evidence is information bearing on whether a belief or proposition is true or false, valid or invalid, warranted or unsupported (Schwandt, 2009). The term evidence has recently caused some confusion in evaluation because it is often taken to be synonymous with the term evidence-based. Evidence-based, however, has two shortcomings: (1) it is narrowly interpreted to mean that only a specific kind of scientific finding, namely evidence of causal efficacy, counts as evidence; and (2) the idea of an evidence base suggests that evidence is the literal foundation for action because it provides secure knowledge (Upshur, 2002).

What type of evaluation should I conduct?

Evaluation falls into one of two broad categories: formative and summative. Formative evaluations are conducted during program development and implementation and are useful if you want direction on how best to achieve your goals or improve your program. Summative evaluations should be conducted once your program is well established; they tell you to what extent the program is achieving its goals.

Table 1—The types of evaluation within formative and summative evaluation:

Formative

• Needs Assessment: Determines who needs the communication program/intervention, how great the need is, and what can be done to best meet the need. Involves audience research and informs audience segmentation and marketing mix (4 P’s) strategies.

• Process Evaluation: Measures effort and the direct outputs of programs/interventions – what and how much was accomplished (e.g., exposure, reach, knowledge, attitudes). Examines the process of implementing the communication program/intervention and determines whether it is operating as planned. It can be done continuously or as a one-time assessment. Results are used to improve the program/intervention.

Summative

• Outcome Evaluation: Measures the effects and changes that result from the campaign. Investigates to what extent the communication program/intervention is achieving its outcomes in the target populations. These outcomes are the short-term and medium-term changes in program participants that result directly from the program, such as new knowledge and awareness, attitude change, beliefs, social norms, and behavior change. Also measures policy changes.

• Impact Evaluation: Measures community-level change or longer-term results (e.g., changes in disease risk status, morbidity, and mortality) that have occurred as a result of the communication program/intervention. These impacts are the net effects, typically on the entire school, community, organization, society, or environment.


Table 2—Which of these evaluations is most appropriate depends on the stage of your program.

If you are not clear on what you want to evaluate, consider doing a “best practices” review of your program before proceeding with your evaluation. A best practices review determines the most efficient (least amount of effort) and effective (best results) way of accomplishing a task, based on repeatable procedures that have proven themselves over time for large numbers of people. This review is likely to identify program strengths and weaknesses, giving you important insight into what to focus your evaluation on.

How do I conduct an evaluation?

The following six steps are a starting point for tailoring an evaluation to a particular public health effort at a particular time. In addition, the steps represent an ongoing cycle, rather than a linear sequence, and addressing each of the steps is an iterative process. For additional guidance, consult the Evaluation Planning Worksheet in the Appendix of this document.

As each step is discussed, examples from the “Violence Against Women” campaign implemented in Western Australia are included. The “Violence Against Women” campaign was the first of its kind to target violent and potentially violent men. The campaign taught men that domestic violence is a problem with negative effects on children and that specific help is available. Program coordinators decided not to apply the traditional interventions often used in domestic violence cases because these have not been successful at changing behavior. Instead, the communication intervention included: publications, including self-help booklets providing tips on how to control violence and how to contact service providers; mass media advertising; public relations activities with stakeholders, including women’s groups, police, counseling professionals, and other government departments; and posters and mailings to worksites (Hausman & Becker, 2000).


1. Engage stakeholders—The first step is to identify and engage stakeholders: the individuals and organizations with a vested interest in the evaluation and its results.

• Find out what they want to know and how they will use the information.

• Involve them in designing and/or conducting the evaluation.

• For less involved stakeholders, keep them informed about activities through meetings, reports and other means of communication (CDC, 1999, 2008; McDonald et al., 2001).

EX: The program planners of the Violence Against Women campaign included internal and external partners as stakeholders. Internal partners were the Director of the Domestic Violence Prevention Unit and the Family and Domestic Violence Task Force. External partners were experts in the fields of social marketing/behavior change, health promotion, communication, and women’s issues; the Department of Family and Children’s Services; service providers, including trained counselors, therapists, and social workers; and the police. The program planners kept in touch with stakeholders and got input from them throughout the campaign (Turning Point Social Marketing Collaborative, Centers for Disease Control and Prevention, and Academy for Educational Development, 2005).

2. Identify program elements to monitor—In this step, you and/or the team decide which elements of the program are worth monitoring.

• To decide which components of the program to oversee, ask yourself who will use the information and how, what resources are available, and whether the data can be collected in a technically sound and ethical manner.

• Monitoring, also called process evaluation, is an ongoing effort that tracks variables such as funding received, products and services delivered, payments made, other resources contributed to and expended by the program, program activities, and adherence to timelines.

• Monitoring during program implementation will let you know whether the program is being implemented as planned and how well the program is reaching your target audience.

• If staff and representative participants see problems, you are able to make mid-course program corrections (CDC, 1999, 2008).

EX: A needs assessment was conducted using focus groups of general population males and perpetrators. It identified the need for a prevention focus targeting both violent and potentially violent men. The messages would need to avoid an accusatory or blaming tone because that would cause the target audiences to reject the information. Process evaluation would be implemented to monitor the campaign’s reach, the messages’ effectiveness, the audiences’ awareness of the Men’s Domestic Violence Helpline, and changes in attitudes toward domestic violence (Turning Point Social Marketing Collaborative et al., 2005).
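If it helps to picture what these tracking variables can look like in practice, here is a minimal, hypothetical monitoring record written in Python. The field names (ad airings, booklets distributed, Helpline calls, budget spent) are illustrative assumptions drawn loosely from the examples in this report, not the actual tracking system used in the Violence Against Women campaign.

```python
# Hypothetical process-monitoring record. The fields are illustrative assumptions
# based loosely on the monitoring variables discussed above, not the actual
# tracking system used in the Violence Against Women campaign.
from dataclasses import dataclass
from datetime import date

@dataclass
class MonitoringRecord:
    period_start: date
    period_end: date
    ad_airings: int = 0            # media activity delivered this period
    booklets_distributed: int = 0  # products/services delivered
    helpline_calls: int = 0        # direct output of the intervention
    budget_spent: float = 0.0      # resources expended
    on_schedule: bool = True       # adherence to timelines
    notes: str = ""                # e.g., mid-course corrections made

# Example usage with made-up numbers.
records = [
    MonitoringRecord(date(2024, 1, 1), date(2024, 1, 31),
                     ad_airings=120, booklets_distributed=500,
                     helpline_calls=85, budget_spent=15000.0),
]
print("Helpline calls logged to date:", sum(r.helpline_calls for r in records))
```

Reviewing a simple running log like this with staff and stakeholders each period is one way to spot implementation problems early enough to make mid-course corrections.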

3. Select the key evaluation questions—Basic evaluation questions, which should be adapted to your program content, include:

• What will be evaluated? (i.e., What is the program and in what context does it exist?)

• Was fidelity to the intervention plan maintained?

• Were exposure levels adequate to make a measurable difference?

• What aspects of the program will be considered when judging performance?

• What standards (type or level of performance) must be reached for the program to be considered successful?

• What evidence will be used to indicate how the program has performed?

• How will the lessons learned from the inquiry be used to improve public health effectiveness? (CDC, 1999, 2008).

EX: The evaluation measured the following: (1) General awareness of, attitudes towards, and professed behaviors relating to domestic violence; (2) awareness of how to get help, such as knowledge about available support services and where to telephone for help; (3) inclination to advise others to telephone the Helpline; and (4) advertising reach and impact, message take-away, attitudes toward the campaign, calls to the Helpline, and acceptance of referrals to counseling (Turning Point Social Marketing Collaborative et al., 2005).


4. Determine how the information will be gathered—In this step, you and/or the team must decide how to gather the information.

• Decide which information sources and data collection methods will be used.

• Develop the right research design for the situation at hand. Although there are many options, typical choices include: (1) experimental designs (random assignment is used to create intervention and control groups, the intervention is administered to only one group, and the groups are then compared on some measure of interest to see whether the intervention had an effect); (2) quasi-experimental designs (the same as experimental designs, but without necessarily assigning participants to groups at random); (3) surveys (a quick, cross-sectional snapshot of an individual or group on some measure, collected by telephone, Internet, face-to-face interview, etc.); and (4) case study designs (an individual or situation is investigated in depth and treated as substantially unique). A brief sketch of the experimental comparison follows this list.

• The choice of design will determine what will count as evidence, how that evidence will be gathered and processed, and what kinds of claims can be made on the basis of the evidence (CDC, 1999, 2008; Yin, 2003).
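To make the experimental-design logic concrete, the sketch below (in Python) simulates random assignment to intervention and control groups and then compares the two groups on a yes/no outcome. All numbers here, the group sizes, the outcome rates, and the outcome itself, are invented for illustration and are not drawn from any campaign described in this report.

```python
# Illustrative only: simulated randomized experiment with invented group sizes
# and outcome rates; not data from any campaign described in this report.
import random

random.seed(42)

def simulate_trial(n_per_group=400, control_rate=0.20, intervention_rate=0.15):
    """Randomly assign participants to two groups and compare a yes/no outcome."""
    participants = list(range(2 * n_per_group))
    random.shuffle(participants)                 # random assignment
    intervention = participants[:n_per_group]
    control = participants[n_per_group:]

    # Simulate the outcome (e.g., "smoked in the past 30 days") for each person.
    outcome = {pid: random.random() < intervention_rate for pid in intervention}
    outcome.update({pid: random.random() < control_rate for pid in control})

    p_intervention = sum(outcome[pid] for pid in intervention) / n_per_group
    p_control = sum(outcome[pid] for pid in control) / n_per_group
    return p_intervention, p_control

p_int, p_ctl = simulate_trial()
print(f"intervention: {p_int:.3f}  control: {p_ctl:.3f}  difference: {p_ctl - p_int:.3f}")
```

A quasi-experimental design would follow the same comparison logic but without the random-assignment step, for example when pre-existing regions or schools receive the intervention.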

EX: In the first seven months of the campaign, a three-wave statewide random telephone survey was conducted. In each wave, approximately 400 males aged 18-40 who were in a heterosexual relationship were interviewed. The three surveys took place (1) prior to the campaign, to serve as a baseline; (2) four weeks into the campaign, to assess initial impact, including advertising reach, so that any deficiencies could be detected and corrected; and (3) seven months into the campaign, to identify any significant changes in awareness of sources of assistance, particularly the Men’s Domestic Violence Helpline, as well as any early changes in beliefs and attitudes (Turning Point Social Marketing Collaborative et al., 2005).

5. Develop a data analysis and reporting plan—During this step, you and/or the team will determine how the data will be analyzed and how the results will be summarized, interpreted, disseminated, and used to improve program implementation (CDC, 1999, 2008).

EX: Standard research techniques were used to analyze the data and develop a report on the findings. The report was disseminated to the program managers as well as to all partners/stakeholders. Feedback was collected from stakeholders and, as appropriate, used to modify the strategies, messages and interventions. For example, findings from evaluating the first two sets of commercials were used to identify the timing of a third set of ads and their messages. The evaluation results also were used in developing Phase 2 of the campaign (Turning Point Social Marketing Collaborative et al., 2005).

6. Ensure use and share lessons learned—Effective evaluation requires time, effort, and resources.

• Given these investments, it is critical that the evaluation findings be disseminated appropriately and used to inform decision making and action.

• Once again, key stakeholders can provide critical information about the form, function, and distribution of evaluation findings to maximize their use (CDC, 1999, 2008).

EX: Awareness of the Men’s Domestic Violence Helpline increased significantly from none before the campaign to 53% in Wave 2. The research also showed that a number of positive belief and attitude effects began to emerge: By Wave 2, 21% of respondents exposed to the campaign stated that the campaign had “changed the way they thought about domestic violence” and 58% of all respondents agreed that “domestic violence affects the whole family” rather than just the children of the female victim. These results and their implications provided guidance for revising future activities. Phase 2 utilized lessons learned from the first phase and was designed to establish additional distribution channels for counseling services such as Employee Assistance Programs and rural/remote areas (Turning Point Social Marketing Collaborative et al., 2005).
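As a rough illustration of how to judge the precision of such survey percentages, the sketch below computes an approximate 95% confidence interval for the reported Wave 2 awareness figure (53%), assuming roughly 400 respondents per wave as described in Step 4. The normal-approximation calculation is a generic illustration, not the analysis actually performed by the campaign evaluators.

```python
# Illustration only: approximate 95% confidence interval for a survey proportion
# using the normal (Wald) approximation. The 53% figure and ~400 respondents per
# wave come from the campaign description above; the calculation is not from the
# original evaluation report.
import math

def wald_ci(p_hat, n, z=1.96):
    """Approximate 95% confidence interval for a proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

p_wave2, n_per_wave = 0.53, 400
low, high = wald_ci(p_wave2, n_per_wave)
print(f"Wave 2 Helpline awareness: {p_wave2:.0%} (approx. 95% CI {low:.0%} to {high:.0%})")
# With about 400 respondents the estimate is precise to within roughly +/- 5
# percentage points, so a rise from essentially 0% at baseline to 53% is far
# larger than what sampling error alone could produce.
```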


Bottom Line: Why should I conduct an evaluation? Experts stress that evaluation can:

1. Improve program design and implementation—It is important to periodically assess and adapt your activities to ensure they are as effective as they can be. Evaluation can help you identify areas for improvement and ultimately help you realize your goals more efficiently (Hornik, 2002; Noar, 2006).

2. Demonstrate program impact—Evaluation enables you to demonstrate your program’s success or progress. The information you collect allows you to better communicate your program’s impact to others, which is critical for staff morale as well as attracting and retaining support from current and potential funders (Hornik & Yanovitzky, 2003).

References

Centers for Disease Control and Prevention. (1999). Framework for program evaluation in public health. Morbidity & Mortality Weekly Report, 48, 1-40.

Centers for Disease Control and Prevention. (2008). Introduction to Process Evaluation in Tobacco Use Prevention and Control. Atlanta, GA: U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion, Office on Smoking and Health.

Hausman, A. J., & Becker, J. (2000). Using participatory research to plan evaluation in violence prevention. Health Promotion Practice, 1(4), 331-340.

Hornik, R. C. (2002). Epilogue: Evaluation design for public health communication programs. In R. C. Hornik (Ed.), Public Health Communication: Evidence for Behavior Change. Mahwah, NJ: Lawrence Erlbaum Associates.

Hornik, R. C., & Yanovitzky, I. (2003). Using theory to design evaluations of communication campaigns: The case of the National Youth Anti-Drug Media Campaign. Communication Theory, 13(2), 204-224.

McDonald et al. (2001). Chapter 1: Engage stakeholders. In Introduction to Program Evaluation for Comprehensive Tobacco Control. Retrieved February 25, 2009, from http://www.cdc.gov/tobacco/evaluation_manual/ch1.html.

Noar, S. M. (2006). A 10-year retrospective of research in health mass media campaigns: Where do we go from here? Journal of Health Communication, 11, 21-42.

Norland, E. (2004, September). From education theory to conservation practice. Presented at the Annual Meeting of the International Association for Fish & Wildlife Agencies, Atlantic City, NJ.

Pancer, S. M., & Westhues, A. (1989). A developmental stage approach to program planning and evaluation. Evaluation Review, 13, 56-77.

Patton, M. Q. (1987). Qualitative Research Evaluation Methods. Thousand Oaks, CA: Sage Publications.

Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A Systematic Approach. Thousand Oaks, CA: Sage Publications.

Schwandt, T. A. (2009). Toward a practical theory of evidence for evaluation. In S. I. Donaldson, C. A. Christie, & M. M. Mark (Eds.), What Counts as Credible Evidence in Applied Research and Evaluation Practice? Thousand Oaks, CA: Sage Publications.

Spiel, C. (2001). Program evaluation. In N. J. Smelser & P. B. Baltes (Eds.), International Encyclopedia of the Social & Behavioral Sciences. Oxford: Elsevier Science.

Turning Point Social Marketing Collaborative, Centers for Disease Control and Prevention, & Academy for Educational Development. (2005). CDCynergy: Social Marketing Edition, Version 2.0 [CD-ROM]. Atlanta, GA: CDC, Office of Communication.

Upshur, R. E. G. (2002). If not evidence, then what? Or does medicine really need an evidence base? Journal of Evaluation in Clinical Practice, 8(2), 113-119.

Yin, R. (2003). Case Study Research: Design and Methods. Thousand Oaks, CA: Sage Publications.


Appendix: Evaluation Plan Worksheet

Title:
Date:
Prepared by:

Step 1: Identify and engage stakeholders.
a. Guiding questions:
• Who can we identify as stakeholders?
• How do we engage stakeholders?
b. Outcome of this step:
• List of stakeholders

Step 2: Identify program elements to monitor.
a. Guiding questions:
• Which program elements will you monitor?
• What is the justification for monitoring these elements?
b. Outcome of this step:
• List of program elements to monitor

Step 3: Select the key evaluation questions.
a. Guiding questions:
• What evaluation questions will you address?
b. Outcome of this step:
• List of evaluation questions

Step 4: Determine how the information will be gathered.
a. Guiding questions:
• What information sources and data collection methods will you use for monitoring and evaluation?
• What evaluation research design will be used?
b. Outcome of this step:
• Description of information sources, data collection methods, and research design

Step 5: Develop a data analysis and reporting plan.
a. Guiding questions:
• How will the data for each monitoring and evaluation question be coded, summarized, and analyzed?
• How will conclusions be justified?
• How will stakeholders both inside and outside the agency be kept informed about the monitoring and evaluation activities?


• When will the monitoring and evaluation activities be implemented, and how will they be timed in relation to program implementation?
• How will the costs of monitoring and evaluation be presented?
• How will the monitoring and evaluation data be reported?
• What are your monitoring and evaluation timelines and budgets?
b. Outcome of this step:
• A data analysis and reporting plan

Step 6: Ensure use and share lessons learned.
a. Guiding questions:
• What feedback was received concerning the intervention/program?
• What is the evaluation implementation summary?
• How can we use this information to revise the intervention/program?
• How will this information affect internal and external communication plans?
• What are the lessons learned?
b. Outcome of this step:
• Final summary report that is circulated among the evaluation workgroup and stakeholders
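For teams that want to keep the worksheet in a machine-readable form alongside other planning documents, here is one possible rendering of the same six steps as a plain Python data structure. The field names and grouping are illustrative assumptions, not an official CDC format.

```python
# Illustrative sketch: the evaluation plan worksheet above expressed as a plain
# Python dictionary. Field names and structure are assumptions for convenience,
# not an official CDC format.
evaluation_plan = {
    "title": "",
    "date": "",
    "prepared_by": "",
    "step_1_stakeholders": {
        "stakeholders": [],            # who has a vested interest in the evaluation
        "engagement_methods": [],      # meetings, reports, workgroups, etc.
    },
    "step_2_elements_to_monitor": {
        "elements": [],                # e.g., reach, outputs, adherence to timelines
        "justification": "",
    },
    "step_3_evaluation_questions": [], # key questions the evaluation will answer
    "step_4_information_gathering": {
        "information_sources": [],
        "data_collection_methods": [], # surveys, interviews, records review, etc.
        "research_design": "",         # experimental, quasi-experimental, survey, case study
    },
    "step_5_analysis_and_reporting": {
        "analysis_plan": "",
        "reporting_plan": "",
        "timelines_and_budgets": "",
    },
    "step_6_use_and_lessons_learned": {
        "dissemination_plan": "",
        "lessons_learned": [],
    },
}
```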


http://www.cdc.gov/healthmarketing/resources.htm