guidebook evaluation stigma patrick carrigan
TRANSCRIPT
-
7/28/2019 Guidebook Evaluation Stigma Patrick Carrigan
1/66
Ten Steps 1
A User-Friendly
GUIDEBOOK
for the Ten Steps to Evaluate Programs that Erase the Stigma of Mental Illness
Patrick Corrigan
Illinois Institute of Technology
Appendix 1
This work was made possible by grants MH62198-01 for the Chicago Consortium of Stigma Research, plus MH66059-01, AA014842-01, and MH08598-01 with P. Corrigan, P.I. All the materials herein solely represent the research and subsequent opinion of the P.I.
Table of Contents

Page
1. Introduction ................................................................. 3
2. The Role of Research and Program Evaluation .................................. 4
3. The Anti-Stigma Program Evaluation Plan (ASPEP-10): Ten Steps ................ 7
4. Ten Step Plan Summaries ..................................................... 18
5. An Example of an Anti-Stigma Evaluation Plan (ASPEP-10) ..................... 19
6. Cross Group Differences ..................................................... 31
Chapter 1
Introduction
This Guidebook provides materials that will help most advocates and other stakeholders to carry out a solid and meaningful evaluation of anti-stigma programs. The Guidebook is written for the person who is unfamiliar with research and evaluation strategies and hence is written to be user-friendly. It is a companion to Corrigan, Roe, and Tsang (in review)1, a book that summarizes effective anti-stigma programs. In addition, many of the instruments used as examples in the Guidebook are provided in A Toolkit of Measures Meant to Evaluate Programs that Erase the Stigma of Mental Illness (hereafter referred to as the Toolkit). In this Guidebook, we summarize the Ten Steps of an Anti-Stigma Evaluation Plan (ASPEP-10). Central to these research endeavors is Community-Based Participatory Research, the inclusion of stakeholders in all stages of evaluation, including leadership of the evaluation plan. A worksheet representing the plan is provided in Appendix 2A. The text of the Guidebook carefully summarizes the Evaluation Plan in a step-by-step process. In addition to the Guidebook are fidelity and satisfaction instruments that parse out and identify the active and effective ingredients of an anti-stigma program.
1 Corrigan, P.W., Roe, D., & Tsang, H. (in review). Challenging the Stigma of Mental Illness: Lessons for Advocates and Therapists. London: Wiley. Available from Amazon.com and other online book stores.
Chapter 2
The Role of Research
and Program Evaluation
In many ways, the steps of the ASPEP-10 parallel the elements of more esoteric research: define the question and hypothesis of the study, specify the measure(s) meant to test the hypothesis, complete statistical analyses, and make inferences based on these analyses. This last point is a sit-down moment for most readers; advocates from varied stakeholder groups sit down and use the data to decide whether the intervention is good, indicating resources should be sought to continue it, or whether it should be discarded for another or amended in various ways. We have put together a worksheet in Appendix 2 that is a guide to this approach; it may seem to be a simple one for sophisticated readers, namely those with some experience in social science. We agree that consultants with mastery of this arena should be recruited for the evaluation, assuming inclusion of the expert does not exhaust evaluation resources. We believe, however, that funds to hire an expert are beyond the resources of most advocacy programs. Hence, we developed the user-friendly ASPEP-10, which can be completed by most eager stakeholders.
The ASPEP-10 is developed as a simple, step-by-step program which most advocates can understand and implement in the absence of a research methods expert. Why complete an evaluation of anti-stigma programs? Given the era of evidence-based practices, advocates will want to use the most effective stigma change approaches. The ASPEP-10 not only helps to identify these approaches, but also to unpack and revise effective program components.
Community-Based Participatory Research Team (CBPR team). CBPR represents principles and practices under which researchers partner with consumers and other stakeholders to conduct good program evaluations. The CBPR team and its members address all the steps in good research: making sense of hypotheses, selecting effective designs, analyzing the findings, making sense of the findings, and, perhaps most importantly, using these findings to enhance the program. Stakeholder is a diverse term that might potentially include many groups:

• People with mental illness, though this is not a simple group, including people currently using treatments, those who see themselves as ex-patients (no longer consuming services), or survivors (not just surviving the illness, but also the treatment for the disorder).

• Family members. Often family members and people with the illness have opposing agendas. Families should be included in the CBPR team in cases where they have a relevant interest. Family is also a diverse idea. Often parents of adults with mental illness are recruited, but other relatives may also have important roles, including grandparents, siblings, spouses, and children.

• Service providers often have a relevant interest. Provider is also a diverse term with varying views across disciplines including psychiatry, psychology, social work, and
psychiatric nursing. Also relevant here may be administrators (who control the purse strings of the anti-stigma program) and government authorities, such as representatives of the state mental health authority, or even legislators.
o Inclusion of providers, in particular, highlights a concern of some stakeholders. Psychiatrists and other mental health providers are often seen as part of the problem, not of the solution. Hence, CBPR team members need to decide whether to include people from these groups. In making this decision, we are reminded of the adage: it is better to have an adversary in the tent looking out than out of the tent peering in!
• There is one last broad group of stakeholders to consider. In many places we encourage targeted anti-stigma programs, such as trying to change the prejudice and discrimination of powerful groups like employers, landlords, health care providers, and members of faith communities. It is wise to include people from these spheres on the CBPR team. Anti-stigma projects seeking to change employer attitudes are significantly more successful when they include employers in the development and implementation of the program. Similarly, program evaluation is enhanced with employers on the CBPR team.
As mentioned previously, stakeholder is a demographically diverse idea. Factors such as ethnicity, faith-based community, gender, and sexual orientation have all been shown to be relevant to understanding the stigma of mental illness and programs meant to challenge this stigma. Which among these factors should be included in the CBPR team? The team needs to identify at start-up which demographics are relevant and important. For example, the West Side of Chicago is mostly African American; hence, African Americans need to have a prominent role in the evaluation.
Two principles illustrate the significance of CBPR in this evaluation plan: perspective and politic. Dissimilar stakeholder groups vary in comprehension of stigma and stigma change, and in research experiences used to test these PERSPECTIVES. For example, research suggests that interests and goals of people with Western European roots tend to be individualistic when compared to East Asian cultures, where individuals with mental illness are understood in terms of a collective, usually the person's family. Perspectives from these diverse groups need to be included in the evaluation plan.
POLITIC: Advocates are the group most likely to consume research findings in order to actually try to erase the stigma. They are most likely to have a sense of key policy issues in local and regional mental health authorities and to use new information about stigma change to affect corresponding legislative activity (e.g., passage of budget and other mental health bills that promote a recovery-oriented system of care) and administrative efforts (e.g., actual, day-to-day directives that make the vision of a recovery-based system a reality). CBPR team members have a history of interest in and authority with politicians who are likely to respond to constituent efforts: in the case here, a mental health agenda that is undermined by stigma.
What exactly do we mean when we say stakeholders are to be real partners in evaluating stigma change programs? At least one, frequently a service consumer, is selected as co-Principal Investigator and directs all aspects of the project with another co-PI who may have a strong research background. Some people wonder whether this is political correctness,
questioning whether the consumer co-PI is just a token. Some CBPR teams provide training and practical information about research methods and the decisions needed to better inform team members. Fundamental to evaluation are hypotheses recognizing the priorities and possibilities that define real-world stigma change. Consumer or family stakeholders are often more familiar with this arena than the researcher members of the team.
Chapter 3
The Anti-Stigma Program Evaluation
Plan (ASPEP-10): Ten Steps
The ASPEP-10 is a step-wise, user-friendly approach to evaluating anti-stigma programs. It comprises TEN steps meant to guide the reader through tasks that yield meaningful information aimed at the improvement of these anti-stigma programs. The best way to read this section is to print out the e-file or photocopy the paper version of the ASPEP-10 in Appendix 2A and then follow along. ASPEP's ten steps are identified by Roman numerals along the right margin of the worksheet. The corresponding text discussion is organized by the same Roman numerals. The section also includes worksheets for Fidelity Assessment and Program Satisfaction. A step-by-step example is provided in the next chapter.
I. What is the Anti-Stigma Program?

Anti-stigma programs may address public stigma or self-stigma; hence, indicating the type of stigma is first on the form. The evaluation then focuses on one essential question: does the anti-stigma program of interest have a positive impact on participants? Hence, the first text box instructs the reader to write in the name of this anti-stigma program. This can be a new program developed for this evaluation or one with more of a history, taken off the shelf as it were. Programs are likely to be more successful when they have a manual that specifies the behavioral and interactive steps basic to them. The manual name, which may often mirror that of the program, should be listed on the available line. Among their many benefits, manuals often lead to fidelity ratings, assessing whether individual components of the program were in fact completed. That item is marked yes when a Fidelity Checklist already exists. In its absence, a form will have to be developed. Also related to fidelity is participants' satisfaction or dissatisfaction with the components on the fidelity form.
II. Who Will be the Target of the Program?

Depending on program goals, the target of a public stigma change program may be as broad a group as the general population or as local as encouraging employers to hire people with mental illness. Programs meant to decrease self-stigma typically target people with serious mental illness. Where will the program be held? Most effective for both the intervention and evaluation are sites convenient and comfortable for research participants. Civic club lunches, for example, are excellent venues for employers. When is the evaluation? The question is asking for the timeline of the overall evaluation, not just the anti-stigma program. Keep in mind that the stigma change program is embedded in the larger research enterprise. Several elements may affect evaluation dates, including whether the anti-stigma program has components over several days, the time between the intervention and the follow-up, how many subjects are sought for the study, and how many trials are needed to get data for all the subjects.
What exactly is meant by follow-up? The post-test is usually collected immediately after the program is over, most likely in the room in which the program was offered. Follow-up attempts to determine whether any benefits of the anti-stigma program are still present some time later. This suggests the anti-stigma program may be evoking real change, showing that changes found immediately after the program ended did not quickly return to baseline.
III. Who is the CBPR Team?

An additional who comprises step III; namely, who is responsible for conducting each stage of the evaluation project. This consideration begins with a list of CBPR team members; such a list reminds us that diverse stakeholder ownership must occur from the beginning of the evaluation. Assignments of the remaining ASPEP-10 steps are listed here. The person with overall authority is, in the science world, called the principal investigator, who acts as General to the troops, making sure all the elements of the evaluation are completed in proper order. In continuing the military metaphor, good Generals guide the team through all decisions and activities related to the Evaluation Plan; they are neither unilateral nor dictatorial. Also from the team is the anti-stigma program facilitator, typically someone who has met some credential for conducting the program. Data collection may fall to a person who enjoys being obsessively careful in handing out and collecting data. A similar virtue is needed for entering data into an appropriate computer program. The person charged with handing out and collecting the data should not be the program facilitator; subtle biases occur when the person vested in the program is collecting data. Someone else is charged with collecting fidelity and satisfaction data; it might fall within the purview of the person collecting the outcome data.
Someone needs to analyze the data, and we have greatly simplified the analysis component of the study. The ASPEP-10 was developed so that people who have completed high school algebra can arrive at reasonably valid conclusions about the anti-stigma program.
The last task of the CBPR team is making sense of the data. The kind of to-do list suggested here is the ultimate goal of the evaluation. What needs to be done to improve the anti-stigma program? In some ways, this task returns the evaluation to the team as a whole. Especially important, however, are stakeholders with administrative responsibility over the anti-stigma program, the person or persons in the role of keeping the program relevant to participants.
IV. Questions

Questions and hypotheses are fundamental to research and evaluation activities. Perhaps the most common question is impact: does the anti-stigma program benefit people who participate in it? This question obviously varies across public or self-stigma. Are people from the general public moved by the anti-stigma approach? Are people with serious mental illness who participate in the stigma change program more likely to endorse personal empowerment? Box IV also includes questions of singular interest to the specific CBPR team. One area is difference in anti-stigma program effects by cluster: gender, ethnicity and spiritual heritage, sexual orientation, SES, or other demographics. For example, programs meant to discredit public stigma
might examine how program effects vary by ethnicity. Do people of South American descent, for example, show less change in prejudice and discrimination compared to those from Western Europe? In terms of self-stigma, do Muslims with mental illness report more empowerment than Christians as a result of participating in the stigma change program? A more complete example of evaluations for group differences is provided in Chapter 5.
Another set of questions might examine program effects across special populations: people with mental illness who are also homeless, soldiers, ex-felons, or people with substance abuse problems. Both sets of questions are especially fertile ground for informing the ongoing development of anti-stigma programs. How must a program be enhanced to meet the needs of any cluster not currently addressed well?
V. Good Measures and Design

Evaluation research needs good instruments, thermometers, as it were, that are sensitive to change brought about by the anti-stigma program (see Corrigan, Roe, & Tsang, in press, for a comprehensive discussion of measurements and science related to stigma change). We have identified five domains for measuring stigma change: attitudes, behaviors, penetration, knowledge, and information processing (Corrigan, in review)2. We restrict the discussion here to attitudes and behaviors; there are several measures of attitudes and behavioral intentions. Instruments sensitive to public and self-stigma are addressed here. More complete discussion of our measures can be obtained from the Toolkit of Measures.
The CBPR team may opt to use a repeated measures research design. In its most common form, measures are collected before the anti-stigma program (pre) and after the program (post). In this guide, positive differences, representing subtraction of post from pre, lead to inferences about positive impact; the evaluation supports the idea that the anti-stigma program in fact leads to beneficial change. Typically, pre and post assessments are given immediately before and after the anti-stigma program, when research participants are at hand and do not need to be sought out at a separate time and place. Some research plans include follow-up, repeating post-test measures at a later date. Follow-up addresses the important question of whether positive benefits shown between pre and post endure to a later point. Do benefits of the anti-stigma program disappear at some later time? Conclusions are stronger when they do not. One week is often used as a follow-up time. Less than one week is too recent; up to three months is possible, though beyond that it is unreasonable to think a 60-minute program might still yield an impact.
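The pre-minus-post subtraction described above takes only a few lines to work through. Here is a minimal sketch with made-up scores for five participants; the scores and variable names are ours for illustration and are not part of the ASPEP-10 worksheet:

```python
# Hypothetical stigma ratings for five participants (higher = more stigma).
pre_scores = [7, 6, 8, 5, 7]    # collected just before the program
post_scores = [5, 5, 6, 4, 6]   # same participants immediately after

pre_mean = sum(pre_scores) / len(pre_scores)     # average of pre-test scores
post_mean = sum(post_scores) / len(post_scores)  # average of post-test scores

# The subtraction follows the order in the text: pre minus post.
# A positive difference suggests stigma went down after the program.
difference = pre_mean - post_mean
print(round(difference, 1))  # prints 1.4
```

The same subtraction, with follow-up scores in place of post-test scores, answers the endurance question raised above.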
Follow-ups are often difficult because research participants do not necessarily want to connect with the evaluation assistant. Sometimes, data can be obtained via the U.S. mail or by phone. Unfortunately, many research participants do not respond to these kinds of later contacts. Alternatively, an e-mail message might do the trick. There is a web-based program called Survey Monkey (surveymonkey.com) with which you can type out the survey and e-mail it to participants. A basic account can be set up on the website for free. Survey Monkey instructions can include one or two additional probes at which time the research participant is
2 Corrigan, P.W. (in review). Measuring the impact of change programs for mental illness stigma.
reminded about the follow-up test. Survey Monkey includes straightforward training and corresponding FAQs for interested members of the CBPR team.
Measures of public stigma. As a reminder, public stigma is the phenomenon in which the general population agrees with the prejudice of mental illness and discriminates against people as a consequence. Attitude measures include assessments of stereotypes, emotional reactions to those stereotypes, and behavioral intentions. The Toolkit has several measures that assess public stigma; we believe the 9-item Attribution Questionnaire (AQ-9) has multiple characteristics that commend it here. It is reproduced in the Appendix as a sheet that can be disseminated to research participants as a pencil-and-paper measure. That does not mean that other measures in the Toolkit or from the broader realm of relevant research might not do a better job in assessing change, only that the AQ-9 provides a nice example of assessing stigma change. The AQ-9 is also reproduced in Table 1. It is a reliable and valid short form of the longer 27-item Attribution Questionnaire (which is also available in the Toolkit). The nine items of the AQ-9 represent the nine concepts that comprise our model of stigma. Briefly, those who view people with mental illness as responsible or to blame for their disorder are more
___________________________________________________________
Table 1. Items that comprise the Attribution Questionnaire
Harry is a 30-year-old single man with schizophrenia. Sometimes he hears voices and becomes upset. He lives alone in an apartment and works as a clerk at a large law firm. He has been hospitalized six times because of his illness. Below are nine statements about Harry, rated on a nine-point scale where 9 is very much. Write down how much you agree with each item.
1. I would feel pity for Harry.
2. How dangerous would you feel Harry is?
3. How scared of Harry would you feel?
4. I would think that it was Harry's own fault that he is in the present condition.
5. I think it would be best for Harry's community if he were put away in a psychiatric hospital.
6. How angry would you feel at Harry?
7. How likely is it that you would not help Harry?
8. I would try to stay away from Harry.
9. How much do you agree that Harry should be forced into treatment with his doctor even if he does not want to?
likely to be angry with them, which subsequently undermines their desire to help those with mental illness. Conversely, those who do not blame people for their disorder, who actually view people with mental illness as victimized by it, react with pity, which enhances helping responses. People with mental illness may also be viewed as dangerous. This leads to fear,
which results in social avoidance: "I do not want to be near people with mental illness," or, "I do not want to work by them." Fear also affects prominent themes about mental health care: segregation (people with mental illness need to be sent away to hospitals or custodial community programs to protect the public) and coercion (treatment decisions need to be made by authorities so people with mental illness do not harm the public).

All the underlined constructs in this paragraph directly correspond with the items of the AQ-9. The constructs sort nicely into the three components of stigmatizing attitudes: stereotypes (blame and dangerousness), emotional reaction (anger, pity, and fear), and behavioral intention (help, avoidance, segregation, and coercion). Nine individual scores, component scores, or a single overall score may be used as the impact factor.
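The three component scores can be computed as simple averages of their items. The sketch below uses hypothetical AQ-9 responses; the grouping follows the paragraph above, and the construct names and averaging are ours for illustration, not an official scoring key:

```python
# Hypothetical AQ-9 responses, one construct per item, each rated 1-9.
responses = {
    "pity": 6, "dangerousness": 4, "fear": 3,
    "blame": 2, "segregation": 5, "anger": 2,
    "help": 4, "avoidance": 3, "coercion": 5,
}

# Grouping of the nine constructs into the three attitude components.
components = {
    "stereotypes": ["blame", "dangerousness"],
    "emotional_reaction": ["anger", "pity", "fear"],
    "behavioral_intention": ["help", "avoidance", "segregation", "coercion"],
}

# Component score = average of its items; overall score = average of all nine.
component_scores = {
    name: sum(responses[item] for item in items) / len(items)
    for name, items in components.items()
}
overall = sum(responses.values()) / len(responses)
```

Any of the nine individual responses, the three component scores, or the single overall score could then serve as the impact measure entered into the ASPEP-10 table.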
Measures of self-stigma. Self-stigma occurs when people with mental illness internalize the prejudice of stigma, leading to diminished self-esteem. Personal empowerment is the opposite of self-stigma. Hence, the Rogers et al. (1997) test called the Empowerment Scale-5 (ES-5) is a useful tool for assessing self-stigma (reproduced in Appendix 2B). It is also summarized in Table 2, where it is labeled the Making Decisions Scale, consistent with Rogers et al. The ES-5 yields scores that correspond with five factors: self-esteem/self-efficacy, power/powerlessness, community activism/autonomy, optimism/control, and righteous anger.
___________________________________________________________
Table 2. Items that comprise the Making Decisions Scale-5
Below are several statements relating to one's perspective on life and having to make decisions. Please write the number that is closest to how you feel about the statement. Indicate how you feel now. First impressions are usually best. Do not spend a lot of time on any one question. Please be honest with yourself so that your answers reflect your true feelings.
1. I can pretty much determine what will happen in my life.
2. I generally accomplish what I set out to do.
3. People have the right to make their own decisions, even if they are bad ones.
4. People have no right to get angry just because they don't like something.
5. I rarely feel powerless.
Platforms and other operational decisions. The measures may be administered in several different ways. They can be self-administered as pencil-and-paper tests: the measures are handed out to research participants with instructions to complete them using a pen or pencil. This is an efficient way for participants and experimenters to collect data. However, some people with mental illness may be unable to complete this kind of test alone because of cognitive difficulties. In that case, a research assistant sits down one-to-one with the research participant
and reads the test as an interview. These kinds of interviews can be completed by telephone, a strategy that allows research contacts for those who are less able to travel to the research site. Online services like Survey Monkey, mentioned earlier, also accomplish this task, though this approach requires the research participant to have access to a personal online account.
Tests reviewed here and included in the Toolkit yield several different factors that can be used as outcomes. Typically, only a subset of items is used for the evaluation. Using too many may dissuade the research participant from completing the scale. Moreover, making sense of the data is a bit more difficult with too many measures. For this reason, we recommend no more than three items be included in the assessment. How might these be chosen? The CBPR team should look at a collection of measures and outcomes like those provided in Tables 1 and 2 and identify those that seem relevant to the issues of interest at the time of the evaluation.
VI. Comparison Group

The heart of research and evaluation is comparison; i.e., fundamental differences (subtracting one score from the other) in two scores, as outlined in Box VI of the ASPEP-10. It can be assessed over time (is there a difference between pretest and posttest?) or across groups (does the anti-stigma group show more stigma change than another group?). Note that the CBPR team must choose either time or cross-group comparisons. The experienced reader might note that a combination of time and group provides a useful way for analyzing data, but it exceeds the limited goals of the ASPEP-10. Comparing pretest and posttest indicates whether the stigma change program has reduced stigma. Comparing pretest and follow-up indicates whether positive benefits due to the anti-stigma program were evident some time later. Over-time decisions include the number of days to follow-up and how to obtain it. Research participants are instructed to return for the follow-up, though many participants may find such a request to be onerous. More user-friendly approaches are also possible, like phone interview, regular mail, or Survey Monkey. The CBPR team needs to plan follow-ups before beginning an evaluation because in most instances, use of these strategies requires additional data gathering at post-test. Phone numbers, street addresses, or email addresses are needed for these follow-ups.
Alternatively, group comparisons require specification of a group that will be compared with the anti-stigma program. Perhaps the simplest would be a group that receives no intervention, called the no-intervention control group. Alternatively, another intervention might be selected as the foil to the indexed anti-stigma program. For example, research may seek to understand the impact of a contact program by comparing it to an education strategy.
How do research participants end up in one or the other group? Experts would say random assignment is essential. One way to do random assignment is to take two possible research participants (Mr. A and Ms. B), flip a coin, and assign A to the anti-stigma group if heads or to the control group if tails. B moves the opposite way: to the control group for heads and the anti-stigma group for tails. Random assignment is difficult to do because of many constraints. Employers at a Rotary club meeting, for example, may not be willing to use time to be mixed up for the research. In such a case, there are two considerations to group assignment that the CBPR team should mull over. First, do not assign to a group by demographic; e.g., all men go to the anti-stigma group, women to the control, or all Europeans go to the control group, Africans to
the anti-stigma group. Second, make sure both approaches are used at each research spot or meeting. Do not, for example, use the anti-stigma intervention for Rotarians on Monday and the control approach for those at Tuesday's Chamber of Commerce meeting. This point may be clearer in the example later in the chapter. Finally, how many research participants are needed in a group? Twenty-four research participants should be enough for comparisons of pretest, posttest, or follow-up. Twenty-four participants are needed PER anti-stigma intervention and control group.
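The coin-flip assignment described above can be sketched in a few lines. This is only an illustration of the pairwise coin-flip idea, with hypothetical participant names; Python's random module stands in for a physical coin:

```python
import random

def assign_pair(person_a, person_b, rng=random):
    """Flip one coin for a pair: heads sends A to the anti-stigma group
    and B to control; tails does the reverse."""
    if rng.random() < 0.5:  # heads
        return {person_a: "anti-stigma", person_b: "control"}
    return {person_a: "control", person_b: "anti-stigma"}

# Assign hypothetical participants two at a time, one coin flip per pair.
participants = ["Mr. A", "Ms. B", "Mr. C", "Ms. D"]
assignments = {}
for i in range(0, len(participants), 2):
    assignments.update(assign_pair(participants[i], participants[i + 1]))
```

Because each flip splits a pair between conditions, the two groups always end up the same size, which also helps reach the twenty-four participants needed per group.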
VII. Table

The Table and Graph are the ASPEP-10 steps that yield the most trepidation for readers. The analyses are therefore laid out in sections VIIa through VIIc in a straightforward manner using a specific example. First is a grid to enter all the data. The Table is organized into three columns labeled M1, M2, and M3 (Measurement variables 1 through 3). Enter each research participant's responses to each of the three variables in the Attitudes Sheet (reproduced in Appendix 2B) in their respective spaces. The next two rows in the grid represent groups or time. Three columns are provided for group (Group 1, Group 2, and, if used in the study, Group 3). Group 1 is always the anti-stigma group of interest. When focusing on group comparisons, at least one other group needs to be listed. Write that group name on the appropriate line. Alternatively, the evaluation might be over time: pre, post, and perhaps also follow-up. Research participant data do not need to be entered in any specific order. The last row is the average of all scores in the corresponding column.
Graphs are used to gain an overall sense of the data and determine if there is a true difference. Space to build each graph is provided in VIIa through VIIc of the ASPEP-10. Note that each graph corresponds solely to one of the three measures. These are provided so individual graphs can be completed for up to three measures; measure labels are those listed in the Table and entered in the corresponding space. The figure on the left is used for time comparisons (i.e., pre, post, and follow-up), the one on the right for group comparisons. The horizontal axis and vertical axis need to be completed before entering the bars. The horizontal axis (the x-axis) lays out comparisons into three possible conditions. One bar is needed for each condition (e.g., three if pre, post, and follow-up were assessed in the evaluation; two if the indexed anti-stigma group is compared to a no-intervention control).
The vertical or y-axis is calibrated next. Enter a value slightly larger than the highest value in the Table for each measure in the Hi space of the graph. Then divide that number by five. The results are the units for the y-axis. For example, suppose the data in a table indicate the highest score for Measure 1 is ten. Ten divided by five is two. Hence, each point on the scale is a multiple of two: 0, 2, 4, 6, 8, 10. Lastly, the averages of each variable for each condition are entered into the graph as vertical bars. An example of a bar is very lightly colored in the graph for the pretest condition for Measure 1. This is only meant as an example; the bar graphs you generate reflect averages from the corresponding columns of the tables based on your data.
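The divide-by-five calibration rule above can be expressed as a small helper; the function name is ours, and the rule is exactly the one in the text (Hi value divided by five gives the axis units):

```python
def y_axis_ticks(highest_value):
    """Return the six y-axis tick values (0 through Hi) for a graph,
    using the rule from the text: units are the Hi value divided by five."""
    unit = highest_value / 5
    return [round(unit * step, 2) for step in range(6)]

# Example from the text: a highest score of ten gives units of two.
print(y_axis_ticks(10))  # prints [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]
```

The same helper works for any Hi value; a highest score of nine, for instance, yields units of 1.8.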
How does one know that a difference in heights between two conditions is significant and not just some error of the sample? This is the basic question of statistics, and it is answered in the table on difference, significance, and meaningfulness. The expert may note that the steps in these tables challenge some of the assumptions of statistics. We decided on the rules outlined here as
a way to make a reasonably sound evaluation that is accessible to CBPR team members without statistical expertise. The column representing group or time is checked depending on the research design. Differences are then determined for important combinations; for time, those would be pre-test minus post-test and pre-test minus follow-up. It is very important that the direction of the subtraction follows the order in the table (e.g., it is pre-test minus post-test and not the other way around). Group differences may also be tested, and the group differences column is used in this situation. Once again, the correct order of subtraction is essential: Group 1 minus Group 2, or Group 1 minus Group 3. These difference scores are the numerator (the top) of the ratio in the third column. The denominator (bottom) of the ratio is two.
Cases in which the ratio is larger than one are significant and starred (*) in the far right column of the difference table. A positive number is good and supports the assumption that participants in the anti-stigma program showed less stigma after participating in that program. Cases in which the ratio is lower than negative one (-1) are also significant and should be marked with a pound sign (#). Cases that yield a ratio below negative one actually show the anti-stigma program makes stigma WORSE. We especially encourage CBPR team members to heed and carefully consider findings showing negative effects.
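The whole rule condenses to a short function. This is a sketch in Python under our own naming: it takes the two averages in the prescribed table order, divides the difference by two, and attaches the star or pound mark (rounding to two decimals, as the values are reported in the tables):

```python
def significance_mark(first, second):
    """ASPEP-10 rule: ratio = (first - second) / 2, with the subtraction
    in table order (pre minus post, Group 1 minus Group 2, and so on).
    Ratios above 1 are starred (*); ratios below -1 earn a pound
    sign (#), flagging a case where stigma got WORSE."""
    ratio = round((first - second) / 2, 2)
    if ratio > 1.0:
        mark = "*"
    elif ratio < -1.0:
        mark = "#"
    else:
        mark = ""
    return ratio, mark

# Pre-test average 7.60 versus post-test average 3.40:
print(significance_mark(7.60, 3.40))  # (2.1, '*')
```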
VIII. Making Sense of the Data.

Here is where the CBPR team makes sense of the data gathered from the ASPEP-10
evaluation effort. A tick is entered in each of the appropriate places in Section VIII: positives (*) or negatives (#). Note that three sets of rows are provided in Section VIII, corresponding with up to three measures in the study. Lines are then provided for time and group under each measure. Checkmarks are only entered into rows that correspond with the type of comparison. A zero (0) is entered in the space marked none when neither positive (*) nor negative (#) significant differences were found. How many checks are expected in the Making Sense of Data box? Studies that tested only a pre-post design or only two groups (e.g., anti-stigma condition versus non-intervention control group) will yield only one tick per measure (three total) in the box. Studies with a more complex design will result in more checks. Consider a study that examined pre versus post, pre versus follow-up, and post versus follow-up. This means three ticks per measure (or nine total possible). Similar permutations are evident in group designs. All ticks in each column are then summed and entered into the Total boxes. What does this mean?
If the sum of positive ticks is greater than the sum of negatives or the sum of nones, then the evaluation project suggests the anti-stigma program works; specifically, that it has positive effects on program participants. The CBPR team needs to consider what is effective in the program and continue to highlight it as a key to productive stigma change.

Situations where none is greater than positives suggest the anti-stigma program may not be as effective as desired. This suggests the CBPR team needs to carefully consider how to strengthen the program.

Situations where negatives are higher than the other two sums should raise alarms. Not only is the anti-stigma program not effective, it actually seems to be doing harm. This
situation calls for a "must": namely, the CBPR team must make changes. The program cannot remain in its current state.
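The decision rules above can be condensed into one small function. This is a Python sketch under our own naming; in particular, the tie-breaking order when sums are equal is our assumption, since the Guidebook does not specify one:

```python
def interpret_totals(positives, negatives, nones):
    """Apply the Section VIII decision rules to the summed ticks."""
    if negatives > positives and negatives > nones:
        return "harmful: the CBPR team must make changes"
    if positives > negatives and positives > nones:
        return "works: positive effects on participants"
    if nones > positives:
        return "weak: consider how to strengthen the program"
    return "mixed: review findings measure by measure"

# Totals in the style of the Chapter 5 example: 3 positives, 1 negative, 2 nones.
print(interpret_totals(3, 1, 2))  # works: positive effects on participants
```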
How does the team decide what to change in the existing program? Two processes are highlighted in the ASPEP-10: modify the program (and its manual), or teach and supervise staff to administer it correctly. Which aspects to modify or teach is indicated by the Fidelity and Satisfaction assessments.
Assessing fidelity. Fidelity is whether group facilitators conducting the anti-stigma program do so in a manner consistent with the guidelines laid out in the manual; essentially, that facilitators are being faithful to the steps of the program. Fidelity is assessed using the Fidelity Checklist in Appendix 2C. To complete the checklist, a research assistant (RA) sits unobtrusively in the back of the room and checks off behaviors as the facilitators exhibit them. In essence, the RA is answering a series of yes/no questions. Yes or no: did the facilitator show a specific component of the program? For example, did the facilitator state the purpose of the anti-stigma program during the introduction?
There are separate checklists for programs representing contact-based approaches versus education-based strategies. The Fidelity Checklist includes generic components of an anti-stigma program and components that are specific to the indexed program. Components are grouped in terms of purpose. Introduction in the generic list for education programs is meant to orient participants to the facilitator and program. Teaching facts is the core of an education program; it seeks to increase knowledge about mental illness, specifically in four areas: illness and symptoms, hope, effective biological treatments, and effective psychosocial treatments.

Rubrics in the shaded rows are only meant as organizing concepts for the Fidelity Checklist and are not considered in the summary. It is the indented components on which the RA should focus. One of the organizing concepts in the generic column for education is label avoidance. RAs only examine components listed under it. Yes or no, did the facilitator:
Explain the low use of services even when people might benefit from them?;
Explain how people attribute low service use to avoiding stigma?; and
Identify specific stigma that leads to label avoidance?
The CBPR team should keep only those components in the Fidelity Checklist that correspond with their actual anti-stigma program. The generic column of the education and contact fidelity checklists has more than 30 possible components, meant to be a comprehensive list of behaviors the research assistant might check. However, many of these may not be relevant for the program developed by CBPR advocates. In that case, components unrelated to the program are deleted from the list by using a black marker to strike those items. The RA scratches out components prior to the program session, with feedback from the program facilitator.
In addition, a program may have components specific to it, such as facilitator behaviors that make the program unique and different. For example, a spiritually focused anti-stigma
program might incorporate ideas and ceremonies from a specific rite. Facilitators of a contact program for employers might focus their stories on work life. Ample spaces are provided for idiosyncratic components, but the CBPR team should not feel compelled to fill them all. The RA, then, checks all the generic components, as well as the ones specific to the anti-stigma program of interest, during the program. Ratios are then determined for components under the various concepts of the programs. The ratio is the number of observed component behaviors divided by the total for that section. For example, there are five components under teaching myths on the Education Fidelity Checklist. The ratio is the number of these myths discussed during the program divided by the total possible (five). Totals corresponding with each of the generic concepts are already printed in the denominator of ratios for the generic list. Ratios are reported as percents, so the division in the table is multiplied by 100. For example, three components out of a total of five are reported as a percent:
3 / 5 x 100 = 60%

The denominator decreases in instances when the CBPR team has blacked out individual components. Hence, if the dangerousness myth is removed from the fidelity sheet, then the denominator reduces to four; in the example, that means 3 / 4 x 100 = 75%. We used fairly conservative ratios for identifying high- and low-fidelity items. Components with ratios higher than 80% suggest well-used program components and are circled in the Table. Those under 33% imply targets of ongoing program development and facilitator training, and are highlighted. These components are not being implemented to the degree expected in the program.

Ratios work the same way for specific components. The CBPR team defines
the number of components under each concept. For example, they may decide to add two myths to the list under education: moral repugnance and physical disgust. A total of two components now occur under teaches myths, defining the denominator of that ratio. The resulting ratio for a facilitator observed to demonstrate only one of these two components is 1/2 x 100 = 50%. Once again, concept areas should be highlighted where ratios are below 33%.
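Both the generic and the program-specific ratios follow the same arithmetic, which can be sketched as follows (Python; the function and argument names are ours):

```python
def fidelity_ratio(observed, total, struck=0):
    """Percent of a section's component behaviors the RA observed.
    Components blacked out by the CBPR team shrink the denominator."""
    denominator = total - struck
    return round(observed / denominator * 100)

# Three of five teaching-myths components observed:
print(fidelity_ratio(3, 5))            # 60
# Same observations after one myth is struck from the sheet:
print(fidelity_ratio(3, 5, struck=1))  # 75
```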
Fidelity checklists are easier if the anti-stigma program has a manual prescribing the program components. Development of this kind of manual is often beneficial in its own right. It requires program facilitators to take stock of what they will do to help research participants diminish stigmatizing attitudes and discrimination. Manuals require facilitators to identify discrete behaviors that comprise a well-working stigma change effort. Even in situations where program facilitators are uninterested in manual development, advancing a fidelity instrument helps the CBPR team develop a broader picture of what the program is supposed to do.
Assessing satisfaction. One of the difficulties of fidelity assessment is the requirement of a research assistant to monitor program components as they are used in the session. As a result, the CBPR team requires an individual to collect these data, sometimes an unavailable resource. An alternative approach to evaluating the program per se is to assess participant satisfaction with program components, believing that items rated relatively high in satisfaction are more effective, and those rated low have less impact. There are many items on which satisfaction can be determined. No more than ten should be included, because research participants are unlikely to complete a long satisfaction form. A blank Satisfaction with Program form is included in Appendix 2D. Individual items for the form should be selected by the CBPR team from the
fidelity checklist. The form should comprise the items the team believes to be most important. Research participants are instructed to rate satisfaction with items on a 7-point scale, where seven equals very satisfactory.
The research assistant then tallies the collected responses. The Satisfaction with Program Tally Sheet copies verbatim the ten items in the Satisfaction with Program form. The Tally Sheet has cells to check components for which a research participant rated an individual item greater than 5 (satisfactory) or less than 3 (unsatisfactory). Ratios are then determined by dividing the number of checks in each box by the number of research participants in the evaluation. Ratios are circled if they are greater than 66%, signaling research participants were satisfied with the individual component. Ratios are highlighted when the ratio of dissatisfactions to total is higher than 66%, suggesting an unsatisfactory component. Highlights and circles are used to fill out the bottom half of Table IX.
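The tally arithmetic for a single item looks like this (a Python sketch; the names are ours, and we use the Tally Sheet cutoffs of above 5 for satisfied and below 3 for dissatisfied):

```python
def satisfaction_ratios(ratings, n_participants):
    """Count ratings above 5 (satisfied) and below 3 (dissatisfied),
    then divide each count by the number of research participants."""
    satisfied = sum(1 for r in ratings if r > 5)
    dissatisfied = sum(1 for r in ratings if r < 3)
    return satisfied / n_participants, dissatisfied / n_participants

# Hypothetical ratings for one item from five participants:
print(satisfaction_ratios([7, 6, 4, 2, 1], 5))  # (0.4, 0.4)
```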
IX. Making Sense of Fidelity and Satisfaction Data

Information from the Fidelity Checklist and the Satisfaction with Program Sheet is entered in Section IX of the ASPEP-10. Program components with the three largest ratios are entered first in the Fidelity Checklist Summary. The three with the lowest scores are then entered into the box. Similarly, the three most extreme satisfaction and dissatisfaction ratios are entered into IX. Fidelity and satisfaction add the meat to the evaluation process. In cases where some adjustment of the anti-stigma program may be warranted, these indices suggest specific components that are currently strong in the program and need to be nurtured, versus those which are weak and may need to be the focus of subsequent program development or facilitator education.
X. To Do List
The To Do list begins with a bold consideration. Are there so many negative findings from the data that the program should be discarded? This is meant to be provocative, among other things, but is rarely implemented. It is the last two pieces of business in Section X that are important here. What tasks are needed to enhance the anti-stigma program? We have framed these options in two ways. What program components need to be modified to enhance the program's overall impact? What components should facilitators be taught to enhance their impact? Enter the program components that were viewed as absent or least satisfactory in the Fidelity Checklist or Satisfaction with Program form. CBPR members then decide how to move ahead with these findings.
Chapter 4
Ten Step Plan Summaries
The ASPEP-10 and supporting documents (i.e., Fidelity Checklists and Satisfaction with Program forms) are fairly complex and thus may dissuade readers from using them. For that reason, we provide a succinct summary of the ASPEP-10 user's guide in Appendix 2F, which can be easily copied and used alongside the three sections of the Guidebook:
the ASPEP-10
the Fidelity Checklist
the Satisfaction with Program
These steps are laid out in the same numbering system as the ASPEP-10 instructions, paralleling the way they are described herein.
Chapter 5
An Example of an
Anti-Stigma Evaluation Plan
(ASPEP-10)
We provide an example of the evaluation plan to illustrate the various parts of the ASPEP-10 and related forms. The example is presented here in the same Roman numeral outline found on the ASPEP-10.
I. What is the anti-stigma program? This example investigated a public stigma program called First Person Stories, Inc., a contact program. The program had both a manual and a fidelity checklist.
I
Type of Stigma (check one)   PUBLIC _X_   SELF ___
What is the stigma?  public _X_   self ___
What is the anti-stigma program?  First Person Stories, Inc. -- contact
Is there a manual for the program?  Yes _X_  No ___   Name of manual: About Us
Does it have a fidelity measure?  Yes _X_  No ___   (If no, one will need to be developed.)

II
Who is the target of the program?
- for public stigma, possible targets are the general public, high school students, employers:  school teachers
- for self-stigma, targets are usually people with mental illness:  _________________
How many targets will participate in the study (at least 25 per group)?  _________________
Where will the program be provided?  at faculty dining area at school
III. Six people comprised the CBPR team with the five tasks of the team split among them.
IV. The question guiding the evaluation program was whether the contact program affected participant attitudes.
V. Three measurement instruments were selected for the study. Dangerousness was included because it is a primary stereotype of mental illness. Dangerousness leads to fear, and people who are afraid of those with mental illness avoid them.
III -- WHO
Is the CBPR team?  Pat Corrigan, Jane Miller, Beverly Mills, Bob Mangley, George Williamson, Fran Olsen
...is responsible for the overall evaluation by defining the questions and hypotheses?  Jane
...is going to conduct the anti-stigma program(s)?  George
...is going to collect the outcome data and enter it into a computer file?
IV -- Question(s) Examining change due to anti-stigma program
How does First Person Stories, Inc. affect stigmatizing attitudes of participants immediately after the program and two weeks later?
V -- Good Measures and Design
Name of instrument(s) to examine impact of anti-stigma program:
M1: dangerousness
M2: fear
M3: avoidance
V. This was a time series design with pre, post, and follow-up.
           Measure 1: dangerousness     Measure 2: fear             Measure 3: avoidance
           Pre    Post   F-up           Pre    Post   F-up          Pre    Post   F-up
 1           9      4      5              7      5      7             5      8      5
 2           8      5      3              9      3      4             6      8      6
 3           7      4      3              8      2      5             7      9      5
 4           7      2      4              7      6      7             7      8      6
 5           9      4      3              9      3      9             5      9      5
 6           8      6      1              9      4      8             4      9      7
 7           7      3      5              7      2      7             5      9      8
 8           6      4      4              6      5      9             7      8      7
 9           7      2      5              8      1      9             7      9      5
10           9      5      4              7      2      7             7      9      5
11           8      1      3              9      3      6             8      8      6
12           7      2      4              7      6      8             7      7      5
13           9      3      6              5      3      7             8      9      6
14           9      6      3              8      2      9             6      9      5
15           7      3      4              7      4      7             7      9      7
16           6      2      2              8      3      5             4      8      8
17           8      4      4              8      4      8             5      8      7
18           7      3      3              7      4      7             5      7      5
19           9      4      3              9      3      8             6      8      6
20           7      4      2              8      2      8             5      9      5
21           5      3      4              7      2      7             6      9      6
22           8      2      6              9      3      9             5      8      5
23           7      2      1              4      4      8             7      7      7
24           8      4      1              6      2      7             8      8      8
25           8      3      2              8      3      7             7      8      7
Average   7.60   3.40   3.40           7.48   3.24   7.32          6.16   8.32   6.08
VI -- Comparison Group
OVER TIME: yes? _X_    _X_ pre   _X_ post   _X_ follow-up
Number of days from post to follow-up:  10 days
ACROSS GROUPS: yes? ___   Is this a wait-list control group? ___
Name of other comparison group(s): _______________
Note that for across-group data, measures are collected once, at post-test.
VI. The study was conducted over time at three measurement points: pre, post, and follow-up. Follow-up data were to be collected 10 days after post-test. Section VII discusses completion of the table.

VII. Data were entered into columns for pre-test, post-test, and follow-up. These three columns fall under the three measures chosen for this study: dangerousness, fear, and avoidance. Data from 25 research participants are provided in the Table. Averages per column are in the last row of the Table.
VIIa. A bar graph is then completed for pre, post, and follow-up data. This is for data over time and hence TIME is circled. Note that the graph for group comparisons is crossed out. Graphs correspond with dangerousness, fear, and avoidance respectively; the graph in VIIa represents averages for dangerousness. Before entering individual bars of the graph, the y (vertical) axis needs to be calibrated. We chose 10.0 as the cap because 8.32 was the highest score. Zero was chosen as the bottom because it was the lowest conceivable response.

The average pre-test score for dangerousness was 7.60, hence a bar was entered to this high point. Post-test scores averaged 3.40, as did follow-up scores, so bars of the same height were entered in the graph for post-test and follow-up. What does examination of the graph suggest? Post-test seems a lot lower than pre-test, suggesting dangerousness stigma decreased during the course of participation in First Person Stories, Inc. No change was evident from post-test to follow-up, suggesting beneficial effects remain over time: research participants endorsed fewer dangerousness beliefs, and the improvement held.
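Those column averages come straight from the 25 dangerousness scores in the Table; a quick check (a Python sketch with the data re-keyed from the Table):

```python
# Dangerousness (Measure 1) scores for the 25 participants, from the Table.
pre  = [9, 8, 7, 7, 9, 8, 7, 6, 7, 9, 8, 7, 9, 9, 7, 6, 8, 7, 9, 7, 5, 8, 7, 8, 8]
post = [4, 5, 4, 2, 4, 6, 3, 4, 2, 5, 1, 2, 3, 6, 3, 2, 4, 3, 4, 4, 3, 2, 2, 4, 3]
fup  = [5, 3, 3, 4, 3, 1, 5, 4, 5, 4, 3, 4, 6, 3, 4, 2, 4, 3, 3, 2, 4, 6, 1, 1, 2]

# Average each column, rounded as reported in the Table's last row.
for label, column in (("pre", pre), ("post", post), ("f-up", fup)):
    print(label, round(sum(column) / len(column), 2))
# pre 7.6
# post 3.4
# f-up 3.4
```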
[Bar graph VIIa: Measure 1, dangerousness. COMPARISON IS TIME (the group graph is crossed out). Y-axis calibrated 0 to 10 (Hi = 10) in units of 2; x-axis conditions Pre, Post, F-up. Bars: Pre = 7.60, Post = 3.40, F-up = 3.40.]
In the subsequent textbox, change is defined as subtraction; for example, pre-test minus post-test. Differences vary for many reasons not worth discussing in this Guidebook; a correction for this occurs by dividing the difference scores by two. The difference between pre-test and post-test divided by two is 2.1. Last to complete in the table is the central goal of the question. The ratio is considered significant if it is greater than 1.0, which is found for both the pre to post-test ratio and the pre to follow-up ratio. The anti-stigma program had positive effects on dangerousness.
Is this difference significant and meaningful?
VIIb. This section shows a graph and table for fear, the second measure in the study. The y-axis is calibrated like the graph in VIIa. The bar for pre-test goes as high as 7.5. Post-test is lower, at 3.2. The bar for follow-up is back up at 7.3. The graph suggests a big decrease was found after research participants completed the First Person Stories, Inc. program. However, the follow-up score went back up to almost the pre-test level. This kind of finding suggests that the benefit occurring immediately after the anti-stigma strategy returns to baseline; these data show the benefits do not have a long-term impact.
[Bar graph VIIb: Measure 2, fear. COMPARISON IS TIME. Y-axis 0 to 10 (Hi = 10); x-axis conditions Pre, Post, F-up. Bars: Pre = 7.48, Post = 3.24, F-up = 7.32.]
The apparent difference between bars in the graph is supported by the Table below. Namely, subtracting post from pre gives 4.24 which, after dividing by 2, yields a ratio of 2.12; being higher than 1.0, it is a significant finding. Also note that the ratio of the difference in row two is lower than 1.0 and is therefore not starred (*).
Difference table for Measure 1 (dangerousness):

Group ___ differences    Time _X_ differences
ratio = difference / 2;  if > 1.0, significant (*);  if < -1.0, significant (#)

Grp 1 - Grp 2 = ____    Pre - Post = 4.2    ratio = 2.1    *
Grp 1 - Grp 3 = ____    Pre - F-up = 4.2    ratio = 2.1    *
Grp 2 - Grp 3 = ____
Difference table for Measure 2 (fear):

Group ___ differences    Time _X_ differences
ratio = difference / 2;  if > 1.0, significant (*);  if < -1.0, significant (#)

Grp 1 - Grp 2 = ____    Pre - Post = 4.24    ratio = 2.12    *
Grp 1 - Grp 3 = ____    Pre - F-up = 0.16    ratio = 0.08
Grp 2 - Grp 3 = ____
VIIc. Finally, the graph in VIIc illustrates an example of harmful effects. Entering averages from the last three columns in the graph yields results of concern. Namely, reports of avoidance went up from pre to post-test, indicating avoidance got worse. The difference score was -2.16 which, after dividing by 2, is less than -1.0 and earns a #. Let us stop to consider what this means. Something about the program provided by First Person Stories, Inc. actually harms research participants. CBPR team members need to critically re-examine anti-stigma programs that result in negative effects.
[Bar graph VIIc: Measure 3, avoidance. COMPARISON IS TIME. Y-axis 0 to 10 (Hi = 10); x-axis conditions Pre, Post, F-up. Bars: Pre = 6.16, Post = 8.32, F-up = 6.08.]
Is this difference significant and meaningful?
VIII. Findings from the graphs and tables are all collapsed into one place, summarized in thetextbox on the next page.
Difference table for Measure 3 (avoidance):

Group ___ differences    Time _X_ differences
ratio = difference / 2;  if > 1.0, significant (*);  if < -1.0, significant (#)

Grp 1 - Grp 2 = ____    Pre - Post = -2.16    ratio = -1.08    #
Grp 1 - Grp 3 = ____    Pre - F-up = 0.08    ratio = 0.04
Grp 2 - Grp 3 = ____
Information from the Making Sense of the Data textbox is meant to inform the To Do list. Also of importance is information from the Fidelity and Satisfaction findings, found in Section IX on the next page. The Fidelity Checklist on the next page was completed by a research assistant sitting quietly at the back of the room and checking off generic and strategy-specific component behaviors as they appeared during the program. Note that several of the generic
FIDELITY CHECKLIST -- CONTACT: First Person Stories, Inc. (name of program)
VIII -- Making Sense of the Data:

For Measure 1: dangerousness
Anti-stigma program showed significant change (CHECK ONE)
  pre to post    pos (*) _X_   neg (#) ___   none ___
  pre to f-up    pos (*) _X_   neg (#) ___   none ___
  add all for a subtotal        _2_           _0_          _0_
Anti-stigma program showed significant change
  Grp 1 to Grp 2   pos (*) ___   neg (#) ___   none ___
  Grp 1 to Grp 3   pos (*) ___   neg (#) ___   none ___
  Grp 2 to Grp 3   pos (*) ___   neg (#) ___   none ___
  add all for a subtotal        ___           ___          ___

For Measure 2: fear
Anti-stigma program showed significant change (CHECK ONE)
  pre to post    pos (*) _X_   neg (#) ___   none ___
  pre to f-up    pos (*) ___   neg (#) ___   none _X_
  add all for a subtotal        _1_           _0_          _1_
Anti-stigma program showed significant change
  Grp 1 to Grp 2   pos (*) ___   neg (#) ___   none ___
  Grp 1 to Grp 3   pos (*) ___   neg (#) ___   none ___
  Grp 2 to Grp 3   pos (*) ___   neg (#) ___   none ___
  add all for a subtotal        ___           ___          ___

For Measure 3: avoidance
Anti-stigma program showed significant change (CHECK ONE)
  pre to post    pos (*) ___   neg (#) _X_   none ___
  pre to f-up    pos (*) ___   neg (#) ___   none _X_
  add all for a subtotal        _0_           _1_          _1_
Anti-stigma program showed significant change
  Grp 1 to Grp 2   pos (*) ___   neg (#) ___   none ___
  Grp 1 to Grp 3   pos (*) ___   neg (#) ___   none ___
  Grp 2 to Grp 3   pos (*) ___   neg (#) ___   none ___
  add all for a subtotal        ___           ___          ___
Note that all spaces for group differences are struck from the Table. Difference scores for pre to post-test and pre to follow-up remain. Positive effects were found for dangerousness. Stigma lessened from beginning to immediately after the program and remained improved 10 days later at follow-up. A mixed package emerged for fear. Improvement was noted from pre to post-test, but no change was found at follow-up. Findings for avoidance were sobering. Avoidance actually worsened from pre-test to directly after completion of the program. No difference, however, was found between pre and follow-up, suggesting the negative effect corrected itself during the 10 days following. At the bottom of the Table are the totals. What do the six sets of findings show? Half supported positive effects. A third showed neither good nor bad effects. One finding yielded worse effects. As discussed earlier, negative findings are especially of concern.
Totals:   pos (*) _3_   neg (#) _1_   none _2_
(Check X if the component is observed.)

GENERIC COMPONENTS

XXXX Introductions
  X   Name of facilitators and program
  X   Purpose of meeting
  X   Personal goals
      RATIO = x/3 = 1.00

XXXX Evaluation
  --  Explain the need for pre-test measure (struck)
  X   Obtain permission to participate
  --  Administer pre-test before program begins
      RATIO = x/2 = .50

XXXX Stories of facilitator 1
  X   On the way down stories
  X   On the way up stories
  X   Stories of hope
  X   Stories of recovery
  X   Stories of good treatments
      RATIO = x/5 = 1.00

XXXX Stories of facilitator 2
  X   On the way down stories
  --  On the way up stories
  --  Stories of hope
  --  Stories of recovery
  --  Stories of good treatments (struck)
      RATIO = x/4 = .25

XXXX Stories of facilitator 3 (all components struck)
  --  On the way down stories
  --  On the way up stories
  --  Stories of hope
  --  Stories of recovery
  --  Stories of good treatments
      RATIO = XXXX

XXXX Discussion
  --  Invites comments from program participants
  --  Asks questions to stimulate conversation
  X   Reflects back comments
  X   Refer to facilitator stories to illustrate issues
      RATIO = x/4 = .50

XXXX Follow-up and homework
  --  Assign some kind of self-monitoring task
  --  Inform participant of time and place where homework will be discussed/reviewed
  --  Obtain information to seek out participant for follow-up
      RATIO = x/3 = .00

XXXX Conclusion
  --  Summarize key points of program
      RATIO = x/1 = XXXX

XXXX Post-test
  --  hand out posttest

COMPONENTS SPECIFIC TO THIS ANTI-STIGMA PROGRAM

XXXX Introductions PLUS
  X   introduce people in the audience
      RATIO = x/1 = 1.00

XXXX Evaluation PLUS
      RATIO = x/? = .00

XXXX Stories of facilitator 1 PLUS
  X   stress experiences in county jail
  X   review homeless history
      RATIO = x/2 = 1.00

XXXX Stories of facilitator 2 PLUS
  --  discuss bad experiences with treatment
      RATIO = x/1 = .00

XXXX Stories of facilitator 3 PLUS
      RATIO = XXXX

XXXX Discussion PLUS
  X   randomly ask questions of participants
      RATIO = x/1 = 1.00

XXXX Follow-up and homework PLUS
      RATIO = XXXX

XXXX Conclusion PLUS
      RATIO = XXXX

XXXX Post-test PLUS
components were omitted from the fidelity analysis, including explanation of the need for pre-test measurement (because the group had participated in other anti-stigma program evaluations in the past), stories of good treatments from facilitator 2 (in fact, facilitator 2 was concerned about bad aspects of recent treatment), and all the components related to stories from facilitator 3 (because facilitator 3 decided he did not want to participate in First Person Stories, Inc. at the time of the study). A few specific program components were added to the checklist, including facilitator 1 stressing his experiences in the county jail and his homeless history, facilitator 2 reviewing bad experiences in treatment, and, overall, asking questions of participants.

In reviewing marks on the Fidelity Checklist, note that facilitator 1 showed all the components of the program assigned to her. Facilitator 2, however, missed many of the expected components, only recounting on the way down stories. None of the follow-up and homework components were observed during the program presentation. Information from the
Specific items should be identified from the Fidelity Checklist by the CBPR team, including those which seem most important or most likely to indicate especially important components for program development. The team should also consider the value of including generic versus program-specific components, or some mix thereof. The ten items are reformatted to fit a self-administered test like the one on the next page. Research participants are instructed to answer each item in the list using the seven-point satisfaction scale. For example, research participants who were mostly dissatisfied with on the way down stories by facilitator 1 might give that item a 2 from the scale. The same person rated asks questions to stimulate conversation a 6.
IX -- Best and Worst from Fidelity Checklist
Best:
  1. stories of facilitator 1
  2. discussion plus
  3. introductions
Worst:
  1. evaluations
checklist is then used to complete Table IX. Components with the lowest fidelity ratios (highlighted in the Fidelity Checklist) may be excellent candidates for things to change in the program. Those with the highest ratios may be especially important to continue in future uses of the program.

Another way to determine good versus not-so-good components of the intervention is completion of the Satisfaction with Program and related forms (starting on the next page). The list of components from the Fidelity Checklist is reviewed to identify ten items for the Satisfaction with Program form.
SATISFACTION WITH PROGRAM form
Name or ID Number:  3675
Using the satisfaction scale, rate your satisfaction (how pleased you were) with the following components of the program.

  Very unsatisfactory   1   2   3   4   5   6   7   Very satisfactory
Completed program satisfaction ratings are then summarized in the tally sheet. The sheet ticks items in the columns labeled SATISFIED or DISSATISFIED for individual components. The CBPR team member tallying these ratings checks an item as satisfied if the research participant rated a component greater than 5, and marks it as dissatisfied if the score was less than 3. So, for example, the CBPR team member would check the tally "asks
Tally Sheet

Satisfaction Rating | Enter specific items here
2 -- On the way down stories
6 -- On the way up stories
3 -- Stories of hope
2 -- Stories of recovery
5 -- Invites comments from program participants
6 -- Asks questions to stimulate conversation
5 -- Assign some kind of self-monitoring task
3 -- Stress experiences in county jail
2 -- Review homeless history
1 -- Discuss bad experiences with treatment
| Enter specific items here | SATISFIED: one tick for each research participant who rated the item higher than 5 | RATIO: divide satisfied by total N (25); circle if greater than 0.66 | DISSATISFIED: one tick for each research participant who rated the item less than 3 | RATIO: divide dissatisfied by total N (25); highlight if greater than 0.66 |
| On the way down stories | ///// | .20 | ////// | .24 |
| On the way up stories | //////////// | .48 | /// | .12 |
| Stories of hope | ////////////////// | .72 (circled) | // | .08 |
| Stories of recovery | ////////////////// | .72 (circled) | / | .04 |
| Invites comments from program participants | ///// | .20 | //////////// | .48 |
| Asks questions to stimulate conversation | ///////////////// | .68 (circled) | /// | .12 |
| Assign some kind of self-monitoring task | /// | .12 | /////////////////// | .75 (highlighted) |
| Stress experiences in county jail | /////// | .28 | //////// | .32 |
| Review homeless history | (none) | .00 | ////////////////// | .82 (highlighted) |
| Discuss bad experiences with treatment | //////// | .32 | //////// | .32 |
questions to stimulate conversation" as satisfied for the person filling out the Satisfaction Rating sheet above. Ratios are determined by the number of checks in each box divided by the number of research participants in the evaluation. In the sample tally sheet, 18 research participants rated satisfaction with "stories of hope" greater than 5, which, given that there are 25 research participants, yields a ratio of .72. Ratios are circled if they are greater than 0.66, signaling that research participants were satisfied with the individual component. Ratios are highlighted when the ratio of dissatisfactions to total is higher than 0.66, suggesting an unsatisfactory component. Highlights and circles are used to fill out the bottom half of Table IX.
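For CBPR teams that enter ratings into a computer file rather than ticking by hand, the tally arithmetic just described can be sketched in a few lines of Python. The cutoffs (ratings above 5 count as satisfied, below 3 as dissatisfied, ratios above 0.66 flagged) come from this chapter; the function name and the example ratings list are our own illustration, not part of the Toolkit.

```python
# Tally satisfaction ratings for one program component on the 7-point scale
# (1 = very unsatisfactory, 7 = very satisfactory). A rating above 5 counts
# as satisfied; a rating below 3 counts as dissatisfied.

def component_ratios(ratings):
    n = len(ratings)
    satisfied = sum(1 for r in ratings if r > 5)
    dissatisfied = sum(1 for r in ratings if r < 3)
    return satisfied / n, dissatisfied / n

# Example: 25 research participants rating "stories of hope";
# 18 ratings above 5 yields a satisfied ratio of .72, which is circled
# because it exceeds 0.66.
ratings = [6] * 18 + [4] * 5 + [2] * 2
sat, dis = component_ratios(ratings)
print(f"satisfied ratio: {sat:.2f}")     # 0.72 -- circle, since > 0.66
print(f"dissatisfied ratio: {dis:.2f}")  # 0.08 -- not highlighted
```

The same function can be run once per component to fill in both RATIO columns of the tally sheet.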
Satisfactory-Unsatisfactory Program Components
Satisfactory
_1__asks questions to stimulate conversation
_2__stories of hope___________________
_3__stories of recovery_______________
Unsatisfactory
_1__assign some kind of self-monitoring task
_2__review homeless history
IX

Three components were rated as highly satisfactory (ratio greater than 0.66) and hence included in Section IX as such. Two components were rated as highly unsatisfactory and included in the textbox. The information in Section IX is used to complete the To Do list in Section X. Note that the To Do list is not meant just to focus on what is wrong with the program, but also on what works well. The latter components are the firm base on which program development occurs.

Four possible issues might be relevant to modifying the program. Introducing each other and follow-up on homework were identified on the Fidelity Checklist as least often used in the program. Consider the significance of this point. It might suggest that facilitators need to specifically keep track of introductions and homework. Alternatively, these findings might suggest introductions and homework are unimportant and can be discarded from the program with no harm. Two other issues might be important for program modification: inviting comments from participants and assigning self-monitoring homework. These were ranked as least satisfactory, so the CBPR team should decide whether to alter them so they become more appealing to program participants.

Two issues emerge regarding teachable concerns. An agent of the CBPR team may wish to consult facilitator 2 regarding her story about homelessness. Other teachable foci are use of self-monitoring tasks and the role of follow-up and homework. We reiterate the point made earlier in the Guidebook: the To Do list is solely meant to provide suggestions. Regardless of the findings, facilitators and others involved in the anti-stigma program should be approached by the CBPR team as mutually respected peers. The anti-stigma program has a history based on the efforts of others. Components of the program should not be set aside in a cavalier manner. Healthy discussion among the CBPR team and facilitators is an important step toward further understanding the program and directions for change.

TO DO LIST:
Replace anti-stigma program: check if yes _______ (review for alternative programs)
MODIFY program based on fidelity and satisfactory/unsatisfactory findings
_Introduce program participants to each other. Ask questions to stimulate conversation._
_Examine specific components of discussion; review role of evaluation._
TEACH facilitators based on fidelity and satisfactory/unsatisfactory findings
_Consult with facilitator 2 regarding her stories about homelessness._
X
Chapter 6
Cross Group Differences
Another important goal of program evaluation is to determine how anti-stigma programs differ across key groups. When we say key groups, we generally mean demographics including gender, ethnicity, age, marital status, education, annual household income, and work status. Appendix 2E has a sheet labeled "Information About You," which might be administered as a way to obtain demographic information. Note that we were purposefully over-inclusive to provide opportunities for research participants to identify the broadest range of relevant group differences. Depending on the situation, the CBPR team might decide to omit a few items or include even more, such as military service, physical health problems, and police arrest history. Examples of group comparisons that are especially important are gender, male to female, or ethnicity, such as European American to African American. We use differences between African and European Americans here as an example, though we recognize these are only two of many possible ethnic group comparisons. For example, group differences between European Americans and Latino Americans may be especially important in some Southwestern United States communities.
In this example, we retain many of the evaluation decisions from the previous section (I-III) on First Person Stories, Inc., targeting teachers in the faculty dining room. We assume the same CBPR team (e.g., Jane responsible for the overall evaluation). It is essential here to make sure the CBPR group is diverse. Clearly this means stakeholders from different backgrounds and ethnicities, such as African and European Americans. But even within the idea of stakeholder are some interesting possibilities. For example, perhaps the CBPR team will specifically decide to recruit ministers and other members of faith communities from the African American community.

Noticeably different in the evaluation plan would be the answers to Question IV, highlighted in the textbox. Ethnic differences are the primary focus of the approach: do Blacks and Whites differ in their reactions to the anti-stigma program? More useful, however, are answers to the second question: what characteristics account for the group differences? Answers to that question yield specific directions for revising the approach.
We use the same measures in this group difference evaluation as those in the previous chapter: dangerousness, fear, and avoidance. Defining the comparison group is perhaps the key decision of the evaluation examining African and European American participants (see Section V on the next page). In more traditional program evaluations, "comparison" refers to the anti-
Question(s) Examining change due to anti-stigma program
How does First Person Stories, Inc. affect African Americans versus European Americans? What are those differences?
IV
stigma program versus a control or some other group. Differences between ethnic groups are the questions of interest here; hence, the SAME anti-stigma program is used for both ethnic groups. This point is highlighted in Section V. Once again, the experienced researcher might argue that these data should be collected at pre AND post-test. Although this may be important in the most rigorous of designs, use of post-test data only may also yield important findings.

Unclear here is whether the individual anti-stigma program under evaluation would be provided to a mixed group of participants (African American AND European American) or a relatively homogeneous group (all African American or all European American). There are pros and cons of mixed versus single groups which the CBPR team might wish to consider. Mixed groups may parallel real-world situations, though recruiting single-ethnicity groups is made more difficult by the need to identify people of similar ethnic backgrounds. Presenting to single ethnic groups, however, may increase the race-related positive effects of the program. For example, participants may be more forthright about stigma and stigma change when their group is populated solely by people of their ethnicity. In our example here, groups receiving First Person Stories, Inc. were mixed.

The ethnic group comparison design changes the appearance of the data table in Section VII (see the next page). The blacked-out columns reduce possible comparisons from three per measure to two per measure: African versus European Americans. As in the previous example, the last row of the table lists the average of responses for the 25 research participants in each group.
Comparison Group
OVER TIME: yes? _______        ACROSS GROUPS: yes? __yes__
_____ pre          Is this a wait-list control group? yes? __no__
_____ post         Name of comparison group(s): ____African American____
_____ follow-up                                 ____European American____
_____ number of days from post to follow-up
Note that for across-group data, measures are collected once, at post-test.
V
Means are then entered into the bar graphs (shown for dangerousness on the next page). It seems that the African American response to dangerousness after the anti-stigma condition is higher than for the European American group. The table that accompanies the graph addresses this question more rigorously. Group differences are circled in the table consistent with the type of design in the study. The difference between group means (1.76; see Section VIIa) is positive and greater than one, which suggests European Americans benefited from the intervention more than African Americans; more specifically, they agreed with the idea of dangerousness less than their African American counterparts. Though not provided here, the analysis of the difference between the two groups for fear was not significant. The difference for avoidance was greater than one and negative, suggesting that the African American audience endorses ideas of avoidance less than the European American group.
Measure 1: dangerousness / Measure 2: fear / Measure 3: avoidance, each with Grp 1 (African American) and Grp 2 (European American); the Grp 3 columns are blacked out in this design.

| # | M1 Afr.Am. | M1 Eur.Am. | M2 Afr.Am. | M2 Eur.Am. | M3 Afr.Am. | M3 Eur.Am. |
| 1 | 5 | 3 | 5 | 4 | 4 | 6 |
| 2 | 6 | 4 | 3 | 6 | 3 | 8 |
| 3 | 7 | 9 | 7 | 5 | 6 | 7 |
| 4 | 7 | 2 | 5 | 2 | 5 | 8 |
| 5 | 5 | 6 | 7 | 8 | 8 | 5 |
| 6 | 4 | 2 | 2 | 7 | 2 | 7 |
| 7 | 5 | 3 | 4 | 4 | 3 | 6 |
| 8 | 7 | 4 | 5 | 6 | 5 | 9 |
| 9 | 7 | 3 | 3 | 3 | 3 | 8 |
| 10 | 7 | 8 | 4 | 4 | 6 | 7 |
| 11 | 8 | 5 | 6 | 7 | 1 | 6 |
| 12 | 7 | 7 | 5 | 5 | 1 | 5 |
| 13 | 8 | 2 | 4 | 6 | 3 | 4 |
| 14 | 6 | 4 | 5 | 5 | 5 | 7 |
| 15 | 7 | 3 | 7 | 1 | 2 | 8 |
| 16 | 4 | 4 | 3 | 3 | 4 | 6 |
| 17 | 5 | 3 | 2 | 7 | 3 | 7 |
| 18 | 5 | 3 | 6 | 6 | 5 | 8 |
| 19 | 6 | 4 | 9 | 4 | 5 | 7 |
| 20 | 5 | 3 | 4 | 5 | 3 | 5 |
| 21 | 6 | 4 | 8 | 6 | 4 | 6 |
| 22 | 5 | 5 | 6 | 3 | 2 | 7 |
| 23 | 7 | 7 | 5 | 8 | 8 | 6 |
| 24 | 8 | 9 | 4 | 4 | 8 | 6 |
| 25 | 7 | 3 | 5 | 7 | 4 | 8 |
| Average | 6.16 | 4.40 | 4.96 | 5.04 | 4.12 | 6.68 |
VII
Measure 1: __dangerousness__   COMPARISON IS TIME OR IS GROUP
[Bar graph worksheet: the y-axis ("MEASURE #") runs from 0 to Hi (10); bars are drawn for Grp 1 (African American) and Grp 2 (European American); the time axis (Pre, Post, F-Up) is unused in this across-groups design.]
Is this difference significant and meaningful?
Making Sense of the Data (in Section VIII on the next page) is markedly different when examining differences between ethnic groups rather than between intervention and control groups; specific group differences are assessed. Note that in the textbox for Section VIII, differences between African American (Afr.Am) and European American (Eur.Am) participants are listed by measure. For dangerousness, the difference between groups was positive, so an asterisk (*) is entered, suggesting African Americans endorsed dangerousness more than European Americans. No significant difference was found for fear, so a zero (0) is entered. European Americans endorsed avoidance significantly more than African Americans, so a pound sign (#) is entered. Responses are totaled at the bottom of the box. Note that European Americans showed better outcomes than African Americans on one attitude (dangerousness) and African Americans better than European Americans on another (avoidance). This leads to the next question: what is it about the anti-stigma program that leads to these differences?
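For readers who prefer to check the Section VIII worksheet by computer, here is a minimal Python sketch of the marker logic, using the group means from Table VII. The plus-or-minus 1.0 cutoff restates the worksheet rule; the function name is our own.

```python
# Compare group means on one measure and assign the Section VIII marker:
# '*' when the African American mean exceeds the European American mean by
# more than 1.0, '#' when the reverse holds, and '0' when the difference
# is not meaningful under the worksheet's +/-1.0 rule.

def group_marker(mean_afr_am, mean_eur_am, cutoff=1.0):
    diff = mean_afr_am - mean_eur_am
    if diff > cutoff:
        return "*"
    if diff < -cutoff:
        return "#"
    return "0"

# Means from Table VII for the 25 participants per group.
print(group_marker(6.16, 4.40))  # '*' -- dangerousness: Afr.Am endorsed more
print(group_marker(4.96, 5.04))  # '0' -- fear: no meaningful difference
print(group_marker(4.12, 6.68))  # '#' -- avoidance: Eur.Am endorsed more
```

Running the function once per measure reproduces the one asterisk, one zero, and one pound sign entered in the Section VIII textbox.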
One way we answer questions about "what component is important to change?" is by examining fidelity: which components facilitators used in the actual presentation of the program. This kind of fidelity analysis does not work in a study with mixed groups (both African American and European American). Another way to determine components relevant to specific
MEASURE # _1_
Group __X__ differences        Time ____ differences
ratio = difference
If > 1.0: significant (*)
If < -1.0: significant (#)
Grp 1 - Grp 2 = / Pre - Post = __1.76__ *
Grp 1 - Grp 3 = / Pre - F-up = _______
Grp 2 - Grp 3 = _______
VIIa
groups, however, is completion of the Satisfaction with Program form and other related forms (starting on the next page). The list of components from the Fidelity Checklist is
reviewed to omit those believed not to be necessary to conduct the program (this process is summarized in more detail earlier in the Guidebook). In an earlier example, the Satisfaction with Program form included ten items, which we repeat here as an example. Research participants are again instructed to rate each of the items on a 7-point satisfaction scale.
SATISFACTION WITH PROGRAM form

Satisfaction Rating | Enter specific items here
4 -- On the way down stories
3 -- On the way up stories
2 -- Stories of hope
3 -- Stories of recovery
1 -- Invites comments from program participants
7 -- Asks questions to stimulate conversation
5 -- Assign some kind of self-monitoring task
4 -- Stress experiences in county jail
2 -- Review homeless history
3 -- Discuss bad experiences with treatment
Making Sense of the Data:
For Measure 1: __dangerousness__
Afr.Am > Eur.Am  pos (*)  __*__
Eur.Am > Afr.Am  neg (#)  _____
Eur.Am = Afr.Am  none     _____
For Measure 2: __fear__
Afr.Am > Eur.Am  pos (*)  _____
Eur.Am > Afr.Am  neg (#)  _____
Eur.Am = Afr.Am  none     __0__
For Measure 3: __avoidance__
Afr.Am > Eur.Am  pos (*)  _____
Eur.Am > Afr.Am  neg (#)  __#__
Eur.Am = Afr.Am  none     _____
Total
Afr.Am > Eur.Am  ___1___
Eur.Am > Afr.Am  ___1___
VIII
Completed program satisfaction ratings are summarized in the tally sheet, but this sheet is set up differently. The sheet tracks items rated satisfactorily (greater than 5) separately for research participants who were African American versus European American. Differences between the two ethnic groups (Column II minus Column I) are entered into the table, followed by a ratio (the difference divided by the total number of research participants). The ratio should be circled in cases where the absolute value of the ratio is greater than 0.25.
| Enter specific items here | Column I: SATISFIED -- one tick for each African American research participant who rated the item higher than 5 | Column II: SATISFIED -- one tick for each European American research participant who rated the item higher than 5 | Subtract: Column II minus Column I | Ratio: difference divided by N (50); circle if absolute value of ratio > .25 |
| On the way down stories | /////// | ///////// | 2 | 0.04 |
| On the way up stories | ///////// | /// | -6 | -0.12 |
| Stories of hope | ///////////// | / | -12 | -0.24 |
| Stories of recovery | //////////////// | / | -15 | -0.30 (circled) |
| Invites comments from program participants | /// | //////////// | 9 | 0.18 |
| Asks questions to stimulate conversation | ////////////// | /// | -11 | -0.22 |
| Assign some kind of self-monitoring task | // | ////////////////// | 18 | 0.36 (circled) |
| Stress experiences in county jail | ////// | //////// | 2 | 0.04 |
| Review homeless history | (none) | ////////////////// | 18 | 0.36 (circled) |
| Discuss bad experiences with treatment | //////// | //////// | 0 | 0.00 |
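For teams tallying by computer, the subtraction-and-ratio step in this tally sheet can be sketched in Python as follows. The counts and the 0.25 cutoff come from the example above; the function name is our own.

```python
# For one component, take the "satisfied" tick counts (ratings above 5) for
# each ethnic group, subtract Column I (African American) from Column II
# (European American), and divide by the total N across both groups. Flag
# the component for the To Do list when the absolute value of the ratio
# exceeds 0.25.

def cross_group_ratio(ticks_col1, ticks_col2, total_n=50):
    ratio = (ticks_col2 - ticks_col1) / total_n
    flagged = abs(ratio) > 0.25
    return ratio, flagged

# "Review homeless history": 0 satisfied African American participants,
# 18 satisfied European American participants, N = 50.
ratio, flagged = cross_group_ratio(0, 18)
print(ratio, flagged)  # 0.36 True -- circled, so it goes on the To Do list

# "On the way down stories": 7 versus 9 ticks; too small to flag.
print(cross_group_ratio(7, 9))  # (0.04, False)
```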
Circled items are finally entered into the To Do list. Items in the To Do list direct the CBPR team toward components that may need to be modified (or facilitators trained) in order for those components to better reflect the interests of one of the ethnic groups. Items in the To Do list in no way suggest the corresponding component is especially troublesome for an ethnic group, only that the component should be considered by the CBPR team when deciding how to modify the program or train facilitators.
TO DO LIST:
Components that differ across groups: African Americans versus European Americans.
Stories of recovery.
Assign self-monitoring task.
Review homeless history.
X
Appendices A through F
for the
User-Friendly GUIDEBOOK of the Ten Steps to Evaluate Programs that Erase the Stigma of Mental Illness
A. Ten Steps for Anti-Stigma Program Evaluation
Plan
B. Attitudes Scale (Instrument and Scoring Keys)
Public Stigma
Self-Stigma
C. Fidelity Assessments
Contact
Education
D. Satisfaction with Program
E. Information About You
Appendix continued
Ten Steps for
Anti-Stigma Program Evaluation Plan
(ASPEP-10)
Type of Stigma (check one):  PUBLIC ____  SELF ____
______________________________________________________________________________________
WHO in the CBPR team:
...is responsible for the overall evaluation by defining the questions and hypotheses? _________________
...is going to conduct the anti-stigma program(s)? ______________________________________________
...is going to collect the outcome data and enter it into a computer file? ______________________________
...is going to collect the Fidelity and Satisfaction data? ___________________________________________
...is going to analyze the data? _______________________________________________________________
...is going to make sense of the analyses? ______________________________________________________
What is the stigma?  public ____  self ____
What is the anti-stigma program? ______________________________________________________________
Is there a manual for the program? Yes ____ No ____  Name of manual: _________________
Does it have a fidelity measure? Yes ____ No ____  (If no, one will need to be developed; a Satisfaction with Program form will also be needed.)
I
WHO IS THE TARGET OF THE PROGRAM?
- For public stigma, possible targets are the general public, high school students, employers: _______________
- For self-stigma, targets are usually people with mental illness: __________