7/31/2019 EvaluationProfile
http://slidepdf.com/reader/full/evaluationprofile 1/22
Evaluation Profile
Eric Jacobs, Michael May, Jason Poole, Jessica Robinson
James Madison University
Fall 2011
The Key Terms
When we hear the term “evaluation,” the concept behind it can be difficult to pin down. One may think of evaluation as a form of review: a way of taking a product or concept and dissecting and analyzing both its positive and negative aspects. Educators are constantly being evaluated, not only on their own performance but also on how well their students are performing. Are the methods of instruction being delivered in a manner that will truly benefit the students? All of this can be fully realized by defining the concept of evaluation and developing a better understanding of just what evaluation is.
Webster’s dictionary defines evaluation as “determining the significance, worth, or condition of, usually by careful appraisal and study” (Evaluation). In practice, however, an evaluator’s main goals are to monitor progress, identify successes, and look for areas of growth.
Evaluation is not a stand-alone concept; there is also self-evaluation. The Evaluation Trust defines self-evaluation as “evaluation which is owned, controlled, and often carried out by the project‘s participants, primarily for their own use, as an integral part of the organization’s life. It is a learning process which actively involves participants in reflecting critically on their organization, and the issues to which it is responding” (“The Evaluation Trust”). Self-evaluation means looking at one’s own ideas and products and finding areas of strength as well as areas for improvement. In addition to locating those areas, self-evaluation also includes suggestions for improvement.
When a product or service is evaluated, one cannot simply gather data all at once. Evaluation is a process: what one finds on Monday will not be the same on Wednesday. It is therefore important to monitor. Monitoring is a way of tracking data, collecting information day by day and making it routine. The main purpose of monitoring is to stay connected and to provide constant feedback regarding an organization’s ideas and programs.
It has been said that the most generic goal of evaluation is to provide feedback that is beneficial to the stakeholder. There are two general types of evaluation, formative and summative. Trochim (2006) states that formative evaluation strengthens or improves a program, while summative evaluation examines its outcomes. We often use these terms in our classrooms and workplaces. Formative assessments, like formative evaluations, are the day-to-day observations, records, and data we collect while assessing. Summative evaluations come toward the end of the process, acting more as a summary of what has happened. Formative evaluations take place often and allow the stakeholder time to implement the resulting suggestions. Throughout both formative and summative assessment, evaluators and stakeholders are constantly recording, maintaining, and collecting artifacts. Reeves defines artifacts as items of interest: important collections of data that must be reviewed for the stakeholder and evaluator to succeed.
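The formative/summative distinction can be illustrated with a small sketch (the data, field names, and scoring scale below are hypothetical, not drawn from the sources cited here): formative records accumulate throughout the program, while the summative step happens once, at the end, as a roll-up of everything recorded.

```python
# Hypothetical sketch: day-by-day formative records rolled up into a
# single end-of-program (summative) summary. All values are illustrative.
from statistics import mean

# Formative data: observation scores recorded week by week during the program.
formative_scores = {
    "week1": [3, 4, 2],
    "week2": [4, 4, 3],
    "week3": [5, 4, 4],
}

def summative_summary(records):
    """Collapse ongoing formative records into one end-of-program summary."""
    weeks = list(records.values())
    all_scores = [score for week in weeks for score in week]
    return {
        "observations": len(all_scores),
        "mean_score": round(mean(all_scores), 2),
        "trend": mean(weeks[-1]) - mean(weeks[0]),  # change, first to last week
    }
```

The point of the sketch is only that the summative summary is derived from, not separate from, the formative records collected along the way.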
There are also several key terms related to the functions of evaluation. The first is conducting a needs assessment, which takes place during the analysis phase; in other words, determining the necessary pieces that the stakeholder plans to implement. Second, formative evaluations are conducted: as previously mentioned, these are quick checks that keep the stakeholders and evaluators communicating and on task. The last three functions of evaluation are effectiveness evaluation, impact evaluation, and maintenance evaluation. These three determine the benefits, worth, and necessary changes for making the evaluated program a success (Reeves & Hedberg, 2003, pp. 57-58). Other key terms related to evaluation that do not necessarily need definition include accountability, objectives, indicators, and outcomes (Trochim, 2006).
The Nature of Evaluation
Evaluation at its essence is a systematic approach to assessing the value or worth of a particular method, with the ultimate goal of informed decision making. Beyond assessing value, evaluation can add accountability by ensuring appropriate practices are taking place, while also creating a means for improving an ongoing process. The importance of evaluation in educational processes should not be overlooked, as at some point all educational programs should and will need to be evaluated for merit or worth (Cook, 2010). In education, evaluation can be applied to varied aspects of learning processes and environments to make informed decisions about their value to the stakeholder. These evaluations can take many forms and span all levels of the educational system. The broad nature of educational evaluation means that evaluation takes on varied modes, including student assessment (i.e., tests, quizzes, exams), instructional evaluation, learning software and systems assessment, curriculum assessment, and program assessment (Dylan, 2011). The goal of each of these evaluation processes is to assess the data related to the subject of interest and determine its value to the stakeholder. This value is the linchpin that keeps effective learning on course and ensures positive gains for the learning stakeholders.
While there is an obvious benefit to performing evaluation, evaluations can also present drawbacks that negatively impact a project. For example, the evaluation process can add time and extend the timeline for completing a project, which in turn translates to a monetary burden. Internal evaluation can also present drawbacks if not approached correctly. The integrity of internal evaluation is of utmost importance; to prevent the negative impact of bias, the evaluation must either be performed by an external evaluator, or care must be taken to ensure that an internal evaluator follows a non-biased evaluation process (West, 2011).
To create an effective environment for evaluation, the process should be applied to all steps of the program being evaluated. In the case of educational projects, the areas to address include conceptualization, design, development, implementation, and institutionalization (Reeves & Hedberg, 2003). Multiple evaluation frameworks exist that allow the evaluator to take an iterative approach, ensuring that projects are evaluated at all necessary points. By addressing evaluation in this iterative manner, value can be added at the appropriate stage in a project’s cycle.
When discussing the nature of evaluation, it is important to also look at its similarities to and differences from research. While seemingly similar on the surface, evaluation and research carry distinct purposes in the nature of their inquiry. Research is focused primarily on the discovery and investigation of a particular phenomenon, either to satisfy human curiosity or to solve a particular problem (Phillips, McNaught, Kennedy, 2010). Evaluation, in contrast, as stated by Clarke and Dawson (1999), does not share research’s primary purpose; rather, its goal is to analyze effectiveness, applying existing knowledge in pursuit of a specific purpose.
To sum up the nature of evaluation: evaluation is the process of analyzing a product or process to ascertain its value to the stakeholders. In much the same way that research validates a particular scientifically based phenomenon, evaluation validates the value added by undertaking or developing an artificial phenomenon.
Approaches to eLearning Evaluation and Related Models
There are a variety of approaches to the evaluation of eLearning. However, “...eLearning evaluation-research approaches need to be appropriate for studying designed artefacts, and adapted to a cyclical, continual-improvement, development process” (Phillips, McNaught, Kennedy, 2012, p. 87). This evaluation theory is important because it frames learning as a process and not merely as an activity (Phillips, McNaught, Kennedy, 2012, p. 25). For the purposes of this article,
the theory of evaluation can be framed by the learning environment, learning processes, and learning outcomes (LEPO). The LEPO framework comprises these three components, which together model how learning should be designed in the eLearning environment (Phillips, McNaught, Kennedy, 2012, p. 27). Overall, this framework stresses learning as an iterative process. Within the scope of this framework, one can see that design and development are cyclical in nature and require iteration. In turn, the evaluation approach needs
also to be cyclical and to become a dynamic force for change. As learning environments change, so do the needs and expectations of the learners. Therefore, eLearning evaluation must work in the same way, cycling through both the developmental stage and the end stage. This iterative process operates simultaneously with the eLearning development process, and its cyclical nature helps the eLearning environment improve constantly through evaluation feedback.
Two paradigms of particular interest for eLearning evaluation are the interpretivist and pragmatic paradigms. While both are concerned with evaluation, it is the pragmatic paradigm that holds the most value for eLearning in particular. The interpretivist paradigm is rooted in constructivist theory and approaches evaluation as a means of interpreting how users construct meaning (Phillips, McNaught, Kennedy, 2010). This approach evaluates through qualitative data and the interpretation of social constructs, and it tends to focus more on describing what is happening than on assessing the value or merit of the system in use. An example in which the interpretivist paradigm might be at an advantage would be an evaluation that aims to explain how a particular generation of students interacts with a technology used in the learning environment.
In contrast to the interpretivist paradigm, the pragmatic paradigm focuses on assessing a process or product for value by creating an evaluation model that relies on collecting and interpreting a mixture of qualitative and quantitative data. The pragmatic paradigm also allows for the inclusion of aspects of other paradigms, permitting a broader understanding of practical problems (Phillips, McNaught, Kennedy, 2010). Applied to the previous example, the pragmatic approach would be best suited to evaluating the effectiveness of the technology on the students’ learning outcomes. The selection of an evaluation paradigm is an important step in the evaluation process and in turn affects the selection of an evaluation model.
In addition to these approaches, there are also several evaluation models that are specific and prescriptive in nature. Reeves and Hedberg (2003) classify evaluation models in two ways, descriptive or prescriptive (pp. 36-37). Prescriptive models are the more common and the more specific. One such model is that of Ralph W. Tyler, which focused solely on objective achievement. The authors describe Tyler’s approach as “deceptively simple”: first establish goals, form objectives from the goals, format instruction to match the objectives, then gather data and determine effectiveness. Critics have found fault in Tyler’s model, noting that one often cannot depend on initial objectives; oftentimes the unintended outcomes prove more valuable than the initial objectives (Reeves & Hedberg, 2003, p. 37).
A second model is that of Michael Quinn Patton. Patton’s model differs from Tyler’s in that it focuses more on formative observations and on evaluator involvement: the more the evaluator sees, the better understanding he or she will have, and the more feedback he or she can provide. Patton also stresses the importance of evaluators being prepared and trained (Reeves & Hedberg, 2003, p. 40).
All of Patton’s research and writings stress the importance of finding strategies and
interventions to increase the effectiveness of the program.
Reeves states in his text that both models require the evaluator to be prepared and trained. However, Patton’s model focuses more on evaluator-stakeholder cohesion: the more involved the evaluator can be, the better. Patton’s model emphasizes the end results and the approaches necessary for a successful program, whereas Tyler’s model has the evaluator set goals and objectives and then determine whether they are met in the end. The major difference between the two models is thus the degree of evaluator involvement throughout the program’s process.
In addition to paradigms and models, there are also different types of evaluation: program evaluation, project evaluation, and product evaluation. These types are largely self-explanatory; they determine the category under which the evaluation takes place. One difference among the three is that project evaluation requires a step-by-step process of collecting and analyzing data. Reeves quotes Mark and Shotland, authors of Multiple Methods in Program Evaluation, as saying that “multiple methods are only appropriate when they are chosen for a particular purpose.” Multiple methods can be used during the evaluation process, but only when specific purposes are determined and guidelines are established (Reeves & Hedberg, 2003, p. 45).
The Role of eLearning Evaluators
The role of the evaluator is diverse in nature. The stakeholder informs the evaluator of exactly what type of evaluation is needed for the current situation. This does not mean that the stakeholder chooses the method or style, but rather that the stakeholder specifies exactly what item, tool, person, program, etc. is to be evaluated. The plan is made with the stakeholder’s input to determine the overall scope of the project. An initial analysis of the problem begins not with research but with the appointment of a quality project manager; too often a good plan and hard work come to nothing without someone to hold it all together (Ertmer, Quinn, 2007, p. 230). The evaluators then move on to documenting the problem to gain a firm understanding of the situation. In this process the evaluators must step back and see how the eLearning item fits into the overall picture of the organizational culture and operational environment. The evaluator uses this information to identify the goal of the study, then decides on the research paradigm and the appropriate methodology.
The evaluator must begin the process by drawing on the organization’s past successes and failures in the area. This allows the team to begin forming the questions and inquiries that need to be answered, which can be accomplished through many methods, including but not limited to interviews, questionnaires, reviews, and observations, as well as surveys and focus groups. An interesting aspect
of evaluating many types of eLearning items is the ability to glean information from a
server about usage and time engaged with the item in question. Information can be
gathered from any source associated with the company. Stakeholders are also likely to
be participants in the evaluation process. “The participants in an evaluation study are
those who actually provide the data” (Phillips, McNaught, Kennedy, 2012, p. 108). They
are the ones with firsthand knowledge of the situation or process being evaluated.
The final step is to create the report for the clients, along with its presentation. The report presentation is the culmination of all the work accomplished up to that point, and it can make or break an evaluation. Being clear, concise, and well founded are the major priorities (Reeves & Hedberg, 2003, p. 249).
Rationale for Conducting eLearning Evaluations
The evaluation of eLearning is an integral part of providing a learning experience that is lasting and effective. In conducting evaluations of eLearning artefacts, stakeholders benefit from the valuable information and feedback an evaluation can provide. Artefacts are defined as the outcomes of a designed activity (Phillips, McNaught, Kennedy, 2012, p. 5). It is important to evaluate any eLearning artefact thoroughly, because decisions about what the artefact will become, and how and why, can greatly affect the outcome of the learning process for the student. Reeves states, “Decisions informed by sound evaluation are better than those [based] on habit, ignorance, intuition, prejudice, or guesswork” (Reeves & Hedberg, 2003, p. 4). In other words, an exhaustive evaluation of eLearning can be highly informative and ultimately improve the learning process.
The importance of process is illustrated by the fact that adhering to established principles and guidelines can improve the information and feedback an evaluation yields. Clark and Mayer (2008) describe eight such principles as general rules to follow when evaluating learning artefacts: multimedia, contiguity, modality, redundancy, coherence, personalization, segmenting, and pre-training. These principles are research-based and provide working examples of how to maximize learning. Within these guidelines, one may see how a positive eLearning experience can be achieved through good design choices.
A large part of eLearning, however, should include proper evaluation practices. Formal practices are valuable, though other elements of the evaluation process may rely instead on more informal feedback. Without such evaluative procedures, and without information and feedback from users and other stakeholders, the quality of an eLearning system, and with it the outcome of the learning process, can suffer. Without evaluation, an artefact is deprived of the feedback and information that could improve it. The evaluation component of eLearning is valuable insofar as it provides the evaluators avenues to improve the artefact for learners. It is therefore important to consider the outcome for the beneficiary, the learner. With a careful analysis of the eLearning evaluation, the unique characteristics of the beneficiary stakeholder can be identified to help improve and tailor the experience (Ionaşcu & Dorel, 2009). An effective evaluation helps the evaluator identify the phenomena and characteristics of the learner and cater to that understanding of audience and purpose.
Because eLearning is fast gaining acceptance as a viable teaching tool, instructors, students, and administrators all expect the eLearning experience to exhibit a certain quality and standard. “Palloff and Pratt (1999) urge practitioners to look more closely at the eLearning environment and what it demands in order to create successful learning outcomes” (as cited in Thompson & MacDonald, 2005). Learners expect a quality, educative learning experience; without that quality, the eLearning environment becomes static and loses its importance to learning.
Implementing an evaluative process at the appropriate stages of the eLearning artefact’s developmental cycle is also crucial. Specifically, during the development and evaluative stages, the process of evaluation can help answer questions about the intended learning outcomes and goals of the instructors and designers of the eLearning system (Ellis, Jarkey, Mahony, Peat, Sheely, 2007). With this input, decision makers can be well informed throughout the developmental process. The feedback from an evaluation can show the effectiveness and weaknesses of the eLearning artefact so that it can be modified for improvement. This shows how evaluation benefits decision makers in the process of designing instruction into quality education.
Even in higher education, quality must be part of every stage of the educational process, a strategy that should be infused into the formative process of developing the eLearning artefact (Díaz, Questier, de Jesús Gallardo López, & Libotton, 2011). If this is true, then the evaluation of eLearning becomes especially important, as the value and quality of an education are closely tied to the quality of the instruction. Thus, to reiterate, the evaluative process can greatly enhance the quality of any eLearning instruction, in part through its assessment of the effectiveness and extent of learners’ learning outcomes (Phillips, McNaught, Kennedy, 2012, p. 7).
Evaluation quality is often described as a function of the interpretivist paradigm. Research from an interpretivist perspective examines the qualitative elements of eLearning environments and artefacts, revealing what is important to and for learners. That quality matters because it serves the most important elements of the eLearning experience. ‘Quality’ is the key word here: without a viable and effective evaluation process, the eLearning environment would simply lack the information, feedback, and assessment needed to sustain a valuable, formative, iterative process of improving the instruction itself.
Examples of eLearning Evaluation in Practice
One example of eLearning evaluation in practice is a case study of the implementation of Google Apps in a secondary classroom (Jacobs, 2010). The primary stakeholders in this pilot program were the students in the instructor’s class, the instructor, the instructional technology resource technician (ITRT), and the lead network administrator. The students were an even mixture of male and female students in the ninth and tenth grades; about 30 students in total evaluated the software through trial use. The instructor handled the implementation, piloting, and training of the software in the classroom. The ITRT played a supporting role, training students, managing accounts, and acting as intermediary between the building and the lead network administrator. The network administrator provided technical support and the access needed to bring the program online for the building.
Google Apps is a widely used suite of productivity programs; thousands of public schools and higher-education institutions use it as an alternative to Microsoft’s Office suite. Google Apps is entirely web-based software that
requires no download or installation for the user base. The collection of programs includes Docs (word processing, presentations, and spreadsheets), Sites (websites with wiki functions), Gmail (email), Calendar, and Groups (discussions). All of these tools offer robust productivity features for students along with deep sharing capabilities. Students are provided free accounts and access to these web-based tools, with the assumption of Internet access at home.
This was a formal evaluation of Google Apps in two respects. First, the student users were also evaluators: the class engaged in weekly discussions of how the program was working in their academic lives, provided constant verbal feedback to the instructor throughout the four-month testing phase, and participated in a formal review via a survey at the end of the course. Second, the ITRT and the instructor evaluated the training and functionality of the software on a daily basis. Both macro and micro analyses were applied to the software suite as a whole.
The pilot program and testing phases took place at the high school and in the local-area homes where students reside. Because no computers were provided in the classroom, testing of the software took place in the school’s computer labs; however, it is informally estimated that over 70% of productive time in Google Apps took place in students’ homes. The computer labs at the school were used primarily for training, direct instruction, and local collaboration.
The teacher and ITRT tested the Google Apps training and functionality during school hours. Typically, students used the applications after school for homework and for online collaboration with peers and the instructor.
As formerly described, Google Apps is a robust, web-based productivity suite that offers many collaborative features. The need for an online alternative to Microsoft’s Office suite arose as demand for computer lab time was increasing in classrooms throughout the building. Students were not able to use, work on, and save their documents anywhere but the school computer lab. Thus, the instructor and ITRT wanted to evaluate a highly collaborative, web-based software suite and survey how students incorporated it into their own academics. After the testing phase,
the final evaluation proved highly successful. Over 86% of students believed the software to be useful and beneficial to their academic careers, and over 93% of students surveyed stated they would use, or might use, the program throughout the rest of their academic careers (Jacobs, 2010). The instructor wanted to capture user feedback on how students saw the program functioning within their own academic studies. He also wanted to evaluate not just how the students responded to the program but also how they actually used it, by means of the back-end analytics tool. The results showed that over 90% of the students were accessing the applications at least three times a week, that communication with the instructor increased through the email application, and that students were using the applications for other classes as well. The program proved largely successful, and the evaluative process for assessing Google Apps was mostly formative, with a couple of summative evaluations at the end of the courses.
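Percentages like those above can be reproduced from raw survey counts. The tallies below are hypothetical stand-ins, since the case study reports percentages rather than raw numbers:

```python
# Hypothetical survey tallies; the pilot had ~30 students, but the raw
# counts here are illustrative, not taken from Jacobs (2010).
def pct(yes, total):
    """Percentage of affirmative responses, rounded to one decimal place."""
    return round(100 * yes / total, 1)

total_students = 30        # assumption: all 30 pilot students responded
found_useful = 26          # hypothetical count
would_use_again = 28       # hypothetical count

print(pct(found_useful, total_students))     # → 86.7
print(pct(would_use_again, total_students))  # → 93.3
```

With counts in this range, the reported thresholds (over 86% and over 93%) fall out directly.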
The eLearning evaluation functioned on various levels throughout the semester.
The students provided weekly feedback through discussions in the classroom on how
the program was working, what was confusing, and any problems encountered. This
provided the instructor and the ITRT with various approaches to solving problems at a micro level. Students also completed a summative feedback survey at the end of the course, rating the software as a whole and in parts, and finally shared their
provided a student-user perspective as to how they saw the program working in their
academic lives.
The other evaluative process was conducted from the perspective of the instructor and implementer. Student verbal feedback helped the teacher implement the software into the curriculum effectively. The Google analytics data tool helped the evaluators understand the amount and nature of student usage of the system. The quantitative data were used to report how often students logged into the service, how often they used certain programs, and how much time they spent using the services.
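A usage report of this kind could be assembled from raw access logs along the following lines. This is a sketch with made-up log fields, not Google's actual analytics API:

```python
# Hypothetical access-log records: (student_id, app, minutes, iso_week).
# Field names and values are illustrative only.
from collections import defaultdict

log = [
    ("s01", "Docs", 25, 40), ("s01", "Gmail", 5, 40), ("s01", "Docs", 30, 40),
    ("s02", "Docs", 40, 40), ("s02", "Sites", 15, 40),
]

def weekly_logins(records):
    """Count sessions per student per week (how often students log in)."""
    counts = defaultdict(int)
    for student, _app, _minutes, week in records:
        counts[(student, week)] += 1
    return dict(counts)

def minutes_by_app(records):
    """Total engaged time per application (which programs are used most)."""
    totals = defaultdict(int)
    for _student, app, minutes, _week in records:
        totals[app] += minutes
    return dict(totals)
```

The two aggregations correspond to the two quantities reported above: login frequency per student, and time spent per application.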
Another example of the evaluation process took place in the Computer Aided Design department of the Wythe County Technology Center. The evaluation centered on the Virginia Department of Education’s (VDOE) implementation of required industry certification in the Career and Technical Education program. With the introduction of the requirements, each department was tasked with evaluating potential certification options and determining the certification that held the greatest value for its students.
The process began with the VDOE giving each program an approved list of certifications to evaluate. The Computer Aided Design department began its evaluation by analyzing the funds available for testing, along with enrollment numbers, to narrow the list of certifications to those monetarily feasible given the department’s budget. Once the list had been narrowed, a short survey was designed and distributed to professionals in the field to gauge how each potential credential was viewed by the profession. The sampling was based on convenience, due both to time constraints and to the need to survey professionals within the potential hiring circle of the students.
The survey of professionals determined that software-specific certification was most desired, and moreover that a particular software package was preferred. The most cost-effective and desirable certification for students was determined to be the Autodesk Academic Certification in AutoCAD. The evaluation used both quantitative data (budget and cost) and qualitative data (expert opinion) to create a pragmatic evaluation that led to the best possible solution for both the students and the school system.
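The two-stage selection described, a budget filter followed by a tally of professional preferences, can be sketched as follows. All dollar figures, vote counts, and certification names other than the AutoCAD credential are hypothetical:

```python
# Stage 1: drop certifications the department budget cannot cover.
budget_per_student = 50                      # hypothetical dollars per test
options = {                                  # hypothetical per-test costs
    "Autodesk Academic (AutoCAD)": 45,
    "Cert B": 90,
    "Cert C": 40,
}
affordable = {name: cost for name, cost in options.items()
              if cost <= budget_per_student}

# Stage 2: tally which affordable option surveyed professionals prefer.
survey_responses = ["Autodesk Academic (AutoCAD)", "Cert C",
                    "Autodesk Academic (AutoCAD)", "Autodesk Academic (AutoCAD)"]
votes = {name: survey_responses.count(name) for name in affordable}
best = max(votes, key=votes.get)
print(best)  # the most-preferred certification among affordable options
```

Filtering on cost first keeps the survey stage focused only on options the department could actually fund, which mirrors the sequence the department followed.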
References
Clark, R. C., & Mayer, R. E. (2008). E-learning and the science of instruction: Proven guidelines for consumers and designers of multimedia learning. San Francisco, CA: Pfeiffer.
Clarke, A. (2009). Learning online with games, simulations, and virtual worlds: Strategies for online instruction. San Francisco, CA: Jossey-Bass.
Clarke, A., & Dawson, R. (1999). Evaluation research: An introduction to principles, methods and practice. London: SAGE.
Cook, D. A. (2010). Twelve tips for evaluating educational programs. Medical Teacher, 32, 296-301. doi:10.3109/01421590903480121
Díaz, W., Questier, F., de Jesús Gallardo López, T., & Libotton, A. (2011). Improving eLearning quality through evaluation: The case of the Cuban university. Proceedings of the International Conference on e-Learning, 449-457. Retrieved from EBSCOhost.
Dylan, W. (2011). What is assessment for learning? Studies in Educational Evaluation, 37(1), 3-14. doi:10.1016/j.stueduc.2011.03.001
The Evaluation Trust. (n.d.). Definitions. Retrieved from http://www.evaluationtrust.org/evaluation/definitions
Thompson, T., & MacDonald, C. J. (2005). Community Building, Emergent Design and
Expecting the Unexpected: Creating a Quality eLearning Experience. Internet
and Higher Education, 8(3), 233-249. Retrieved from EBSCOhost.
Trochim, W. (2006, October 20). Research methods knowledge base. Retrieved from
http://www.socialresearchmethods.net/kb/intreval.htm
West, R. (2011, September). James Madison evaluation presentation [Video podcast]. Retrieved from http://voicethread.com/