Running Head: EVALUATION DESIGN
Evaluation Design
ECS*4000 F15 (01) Program Development and Evaluation
Danielle Benrubi
0764674
Professor Alice Balter
Monday November 11th, 2015
Introduction
Purpose of Program and Description of Organization/Program
“The Children’s Breakfast Club” is a non-profit charitable organization that provides children with a nutritious breakfast (The Children's Breakfast Club, n.d.). The organization believes that all children are entitled to a nutritious breakfast (The Children's Breakfast Club, n.d.). With over 20 clubs serving around 4,000 meals per week, it aims to meet community and residential needs through a breakfast program (The Children's Breakfast Club, n.d.). The habits that children develop in their earliest years carry into their later years, so the program additionally strives to deliver an educational experience. The Breakfast Club's main focus is the early establishment of healthy eating habits, table manners, and an understanding of balanced, nutritious meals and their health benefits (The Children's Breakfast Club, n.d.).
Organization and Program Objectives
This program strives to achieve the following objectives in its delivery:
1. Provide children with nutritious, well-balanced and appetizing breakfast meals prepared in accordance with Canada’s Food Guide.
2. Encourage children to develop healthy nutritional, behavioural and personal hygiene habits.
3. Provide children with emotional support.
4. Encourage and develop the skills of staff, volunteers and other community members.
5. Provide social and intellectual stimulation.
(The Children's Breakfast Club, n.d.)
Program Outcomes
The “Children’s Breakfast Club” draws on research-based findings and statistics to support the program's potential to produce positive outcomes (The Children's Breakfast Club, n.d.).
Furthermore, the Breakfast Club is an outcome-focused program, which places its interest on making a lasting impact on the individuals in the program (The Children's Breakfast Club, n.d.). The outcomes reflected in the children's learning, knowledge and behaviour are the primary interests of this program.
Two Program Outcomes:
1. The children have developed an increased understanding of nutrition and its importance.
2. The children have applied what they learned in the program to their everyday lives, with the motivation to maintain a healthy outlook on balanced meals.
(The Children's Breakfast Club, n.d.).
Type of Evaluation
Description of Summative Evaluation
In this evaluative process, a summative evaluation will be conducted, as it is the most suitable for the Children’s Breakfast Club program. “Summative evaluation looks at
the impact of an intervention on the target group” (Evaluation Toolbox, n.d.). In this case,
summative evaluation will be used to examine the impact that this Breakfast Club (the
intervention) has on the individual participants within the program (children being the
targeted population). This type of evaluation is implemented to determine what the project or program achieved (Evaluation Toolbox, n.d.). Based on the details, process
and execution of this program, a summative evaluative approach is the most suitable for
various reasons.
Justification
The Children’s Breakfast Club is already underway and in the process of being
implemented in over 20 clubs across the GTA (The Children's Breakfast Club, n.d.).
With summative evaluations being used during current project implementation, this method presents itself as an applicable approach (Balter, 2015). This program is already up and running (Balter, 2015). As
previously stated, this program aims to address the needs of residential and community
members by making changes (The Children's Breakfast Club, n.d.). This leads us to
understand that this program is striving to make a difference in this community.
Summative evaluations seek to help better identify and understand the process of change,
as well as find out what works and what doesn’t (Evaluation Toolbox, n.d.). Based on one of the five objectives listed on “The Children’s Breakfast Club’s” website, this program
“encourages children to develop healthy nutritional, behavioural and personal hygiene
habits” (The Children's Breakfast Club, n.d.). Summative evaluations intentionally seek
to examine and highlight intended effects that have changed children’s attitudes,
knowledge and behaviour (Balter, 2015). This is shown through participants' involvement throughout the program.
Summative evaluations are characterized as being outcome-focused (Evaluation
Toolbox, n.d.). As inferred from the program's objectives, this program is more concerned with the children's outcomes after participating in the program than with periodic, interim outcomes (The Children's Breakfast Club, n.d.). Summative evaluations are additionally implemented to find out whether projects or programs have met the goals and objectives specific to their organization (Balter, 2015).
Evaluation Method/Design Description
In an effort to evaluate “The Children’s Breakfast Club” organization/program, a non-experimental pre-test and post-test design will be the chosen evaluative method.
“In pre-test and post-test design, evaluators survey the intervention group before and after
the intervention” (Types of Evaluation Designs, n.d.). Rather than comparing a child's performance or results to a norm (the other children in the program), this evaluative method measures the change that has resulted from the program's implementation (Evaluation Toolbox, n.d.). By comparing the pre-test results to the post-test results, evaluators should be able to determine whether the goals and objectives for the program were met.
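The comparison described above can be sketched in code. This is only an illustrative sketch: the child identifiers and scores are invented for the example and do not come from the actual program. Each child's post-test score is compared against that same child's pre-test baseline, and the average change gives a rough indicator of program impact.

```python
# Illustrative sketch of a pre-test/post-test comparison.
# Child IDs and scores below are hypothetical, not real program data.

def change_scores(pre, post):
    """Return each child's post-minus-pre change score."""
    return {child: post[child] - pre[child] for child in pre}

# Hypothetical nutrition-knowledge scores (out of 10) before and after the program.
pre_test = {"child_A": 4, "child_B": 6, "child_C": 5}
post_test = {"child_A": 7, "child_B": 8, "child_C": 5}

changes = change_scores(pre_test, post_test)          # per-child change
mean_change = sum(changes.values()) / len(changes)    # average change across participants

print(changes)      # {'child_A': 3, 'child_B': 2, 'child_C': 0}
print(mean_change)
```

A positive mean change would suggest the intervention had an effect; a change near zero would suggest the opposite. A real evaluation would also apply a significance test (for example, a paired t-test) before drawing conclusions.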
Justification
There are many reasons why this is an ideal design method to be used for this
program. A pre-test and post-test design supports The Children’s Breakfast Club’s intention of measuring the program's impact on participants through the education it provides (The Children's Breakfast Club, n.d.). This is shown through the program's listed objectives, which include its intention to make an educational impact on the participants (The Children's Breakfast Club, n.d.). With the organization providing the children with the intervention (which is designed to make an impact), this evaluative design is useful, as it will measure the presence or absence of that impact (Types of Evaluation Designs, n.d.). In order for the evaluators to determine the intervention's impact, they will need to obtain a baseline from each child. Pre-test and
post-test is useful for this program's evaluation, as it provides the program with this necessary baseline. This in turn provides data collectors with something to compare the final results against.
This gives the program the ability to measure prior knowledge against newly gained knowledge, using the pre- and post-test results (Types of Evaluation Designs, n.d.). This is a useful design method, as it helps determine the intended and unintended impacts on the participants within the program.
The Children’s Breakfast Club is specifically interested in how their program is
individually impacting the participants within it (The Children's Breakfast Club, n.d.). With individual impact being the program's focus, comparing individual participants to a norm (all children in the program) would not be useful (Types of Evaluation Designs, n.d.). In this case, comparing the change in each child's knowledge, behaviour and attitude is the more suitable approach. A pre-test and post-test design measures these individual changes for each participant of this program (Types of Evaluation Designs, n.d.).
Realistic – Can This Actually Work?
Pre-test and post-test is a realistic design method for many reasons. Non-experimental designs are typically seen as disadvantaged, as they lack a comparison or control group (Types of Evaluation Designs, n.d.). This disadvantage leads non-experimental designs to be viewed as the weakest study designs (Types of Evaluation Designs, n.d.). Pre-test and post-test design is categorized as non-experimental; however, in this case that is more of an advantage than a disadvantage, because The Children’s Breakfast Club is looking to compare each child's pre-test results to that same child's post-test results (Types of Evaluation Designs, n.d.).
This design uses an intervention group, meaning that it measures the group of people involved in the intervention or program (Types of Evaluation Designs, n.d.). The use of an intervention group supports the design's realistic use, as determining the impact on individual participants is the ultimate goal (The Children's Breakfast Club, n.d.). Pre-test and post-test design does not compare test results to a norm group (Types of Evaluation Designs, n.d.); instead, it provides the evaluators with a realistic overall picture of each child before and after the intervention is implemented (Types of Evaluation Designs, n.d.).
Evaluative Data Collection Method
Questionnaires, Interviews, and Observations as methods of collecting data,
provide evaluators with a well-rounded picture of the participants and the program itself.
Questionnaires
Questionnaires are a standardized tool for gathering information, commonly in the form of written questions that require corresponding written responses (Planning Evaluation: Summary of Common Evaluation Methods, n.d.). As previously stated, “The Children’s Breakfast Club” serves over 20 clubs across the GTA and around 4,000 meals per week. With such a large number of people actively participating in these programs, questionnaires become useful: a questionnaire is a helpful tool for obtaining information from a large number of people (Planning Evaluation: Summary of Common Evaluation Methods, n.d.). With so many participants across these breakfast clubs, questionnaires also provide a time-saving method, as participants can complete them on their own time (Planning Evaluation: Summary of Common Evaluation Methods, n.d.).
This method of data collection additionally provides participants with the opportunity to stay anonymous (Planning Evaluation: Summary of Common Evaluation Methods, n.d.). Anonymity allows participants to respond more freely and comfortably than they might in an interview (Planning Evaluation: Summary of Common Evaluation Methods, n.d.). Interviews may lead participants to say less, since their names are attached to their responses and not everyone will answer candidly (Planning Evaluation: Summary of Common Evaluation Methods, n.d.). In conclusion, a questionnaire is one of three useful tools that could be used in the evaluation of The Children’s Breakfast Club.
Interviews
Interviews are a process of asking participants within the program structured or
unstructured questions (Planning Evaluation: Summary of Common Evaluation Methods,
n.d.). Interviews are a useful method of collecting data in this evaluative process because they allow interviewers to ask open-ended questions (Planning Evaluation: Summary of Common Evaluation Methods, n.d.). Open-ended questions provide interviewers with more in-depth answers regarding participants' experiences, beliefs, attitudes and perceptions (Data Collection Methods for Program Evaluation: Interview, 2008). With interviews, evaluators aim to get an idea of participants' intentions as they participate in the program (Data Collection Methods for Program Evaluation: Interview, 2008). Evaluators will also be able to gain an understanding of each child's level of dedication to the program through an interview (Data Collection Methods for Program Evaluation: Interview, 2008).
Observations
Observations “involve direct observations of events, processes, relationships, and behaviours” (Planning Evaluation: Summary of Common Evaluation Methods, n.d.). The children will be observed in the program's environment to get an idea of their participation in the activities. “Observations are a useful way of understanding behaviour in its natural setting” (Planning Evaluation: Summary of Common Evaluation Methods, n.d.). “Participant observation can include whether participants seem attentive, or ask questions and engage in discussion” (Evaluation Toolbox, n.d.). Interest, or the lack thereof, will be readily apparent through visual observation. Observations will be useful during this specific evaluation because they can be conducted without the participants knowing they are being evaluated (Data Collection Methods for Program Evaluation: Observation, 2008). Evaluating participants in a program without their knowledge increases the likelihood that they will behave naturally, leading to more reliable and valid results (Data Collection Methods for Program Evaluation: Observation, 2008).
Who I Will Be Collecting Data From
Description of stakeholders
“A stakeholder is any person or group who has an interest in the program being evaluated or in the results of the evaluation” (Program Evaluation Toolkit, n.d.). In this evaluative process, the program participants (children), their parents, and program staff will be the stakeholders involved in the data collection process.
Justification
Whether directly or indirectly involved, anyone connected to the Children’s Breakfast Club is an important stakeholder, as they have witnessed the program's implementation. Children are important stakeholders, as they are the main focus of this program and its evaluation. It is important that the evaluators gain a realistic, overall picture of each child in the program in order to assess the program's impact. Parents are important stakeholders because they are often the primary caregivers and are present in their children's lives outside of the program environment. This gives them an outside perspective on their child, whereas evaluators have only the perspective that comes from observing the child's interactions within the program. Parents will be able to inform evaluators about their child's knowledge, character, personality, and so on. This information can be compared against the results received from the post-tests, after the program has been implemented.

Staff members are equally important stakeholders, as they administer the program and provide the children with the educational material. They are important to the data collection process because they understand the program best. In addition, they have first-hand experience working with and observing the children in the program.
Realistic
Using these three stakeholder groups in the data collection process is realistic, as the goal is to obtain an overall understanding of everyone involved in the program. Observations, interviews and questionnaires administered to these three groups are essential for evaluators to get the most beneficial and realistic results. In order to get a clear picture and understanding of the program's impact, it is useful to have input and perspectives from all angles so that nothing is left out or missed. Interviews will be administered to all stakeholders, observations will cover staff members and children, and questionnaires will be administered only to the children participating in the program.
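The collection plan above can be summarized as a simple mapping from stakeholder group to planned methods. This is only a sketch for illustration; the group and method labels are names chosen for the example, not terminology from the program itself.

```python
# Sketch of the data collection plan described above.
# Group and method labels are illustrative, not official program terms.
COLLECTION_PLAN = {
    "children": {"interview", "observation", "questionnaire"},
    "parents": {"interview"},
    "staff": {"interview", "observation"},
}

def methods_for(group):
    """Return the set of data collection methods planned for a stakeholder group."""
    return COLLECTION_PLAN[group]

# Every stakeholder group is interviewed; only children complete questionnaires.
assert all("interview" in methods for methods in COLLECTION_PLAN.values())
print(sorted(methods_for("children")))  # ['interview', 'observation', 'questionnaire']
```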
Evaluation Result Predictions
After this evaluation process has been implemented and executed, the final step is
to assess the test results. It is possible that evaluation results may provide evaluators with
positive or negative information. After the pre-tests and post-tests have been conducted, the results may indicate the presence or absence of an impact from the intervention. These final test results may inform evaluators of a possible link between the intervention program and changes in participants' post-test results.
With this being a prediction, it is very important that both positive and negative results are considered. It is possible that the evaluation results will indicate a negative outcome, meaning that the program did not prove effective in impacting its intervention group. This could be because the children were not motivated to participate, or were not interested in learning about nutrition. Everyone has their own personal priorities and their own way of ranking them, and health and nutrition may not have been a high priority for some children. In contrast to the above prediction, it is also very possible that the evaluation results could indicate a positive outcome, which would tell us that the program influenced the intervention group. This could be because the children were engaged by the topics being discussed or taught, or because they were interested in improving their lifestyle in terms of eating healthily.
A negative effect could be shown through observed poor attitudes as well as a lack of participation in the program. On the other hand, positive effects could be shown through children changing their eating habits at home, as well as an eagerness to participate in program activities. In addition to revealing positive or negative outcomes, the evaluation results can also inform us of various unintended impacts on program participants, meaning that additional learning may have come from this program's implementation that program staff did not account for. The implementation of this intervention could be a factor in the children's presence or lack of knowledge, attitude, behaviour and nutritional motivation.
Finally, if this program makes a meaningful impact, it could then be useful as a component of additional child and youth programs. The results of this evaluation could demonstrate whether incorporating educational and informational programming into organizations for children is beneficial, depending on whether those results are positive or negative. If the results are negative, changes and alterations to the program and its implementation method would need to be made to strengthen it. One way to begin that change would be to assess and review the objectives and outcomes of the program. If those objectives and outcomes are not measurable, this could be an indicator of why the program did not make the impact it was initially intended to make.
In conclusion, with background knowledge and experience in program planning
and evaluation, an effective evaluative program can be developed.
Reference List
Balter, A. (2015). Introduction to evaluation [PowerPoint slides]. Retrieved from https://courselink.uoguelph.ca/d2l/le/content/358139/viewContent/1248549/View

Data Collection Methods for Program Evaluation: Interview. (2008). Retrieved from http://www.cdc.gov/healthyyouth/evaluation/pdf/brief17.pdf

Data Collection Methods for Program Evaluation: Observation. (2008). Retrieved from http://www.cdc.gov/healthyyouth/evaluation/pdf/brief16.pdf

Evaluation Toolbox. (2010). Retrieved from http://evaluationtoolbox.net.au

Planning Evaluation: Summary of Common Evaluation Methods. (n.d.). Retrieved from http://www.excellenceforchildandyouth.ca/sites/default/files/docs/PEtoolkit2013/Pg15_CommonEvalMethods(Example).pdf

Program Evaluation Toolkit. (n.d.). Retrieved from http://www.excellenceforchildandyouth.ca/sites/default/files/docs/program-evaluation-toolkit.pdf

The Children's Breakfast Club. (2015). Retrieved from http://breakfastclubs.ca

Types of Evaluation Designs. (n.d.). Retrieved from https://www.urbanreproductivehealth.org/toolkits/measuring-success/types-evaluation-designs