Assessing Learning in Instructional Design


Prepared by Leesha Roberts, Instructor II

University of Trinidad and Tobago – Valsayn Campus

Evaluating Learner Success and the Instructional Design

Rationale for Evaluating the Learner in Instructional Design

Overview

What is Assessment and Evaluation? How do they differ?
What role do they play in the ID process?
When should learner performance be assessed?
How can assessment be made reliable and valid?
Matching assessment to objectives
How does an instructional designer determine when a learner evaluation has been successful?

What is Assessment and Evaluation?

What is Assessment?
Procedures or techniques used to obtain data about a learner or a product.

What is Evaluation?
The process of determining the success level of an individual or a product on the basis of data.

How are they different?
Assessment is the collection of information, while evaluation is the analysis of that information.

What is Assessment and Evaluation? (Cont’d)

Measurement – refers to the data collected, which are typically expressed quantitatively (i.e., as numbers).

Instruments – the physical devices used to collect the data (e.g., rating scales, observation sheets, checklists, objective tests).

What role does Assessment and Evaluation play in the ID Process?

Assessment serves a pedagogical function: measuring, diagnosing, and instructing.

Information from assessment can also be used, as a secondary function, in evaluation.

Developing Performance Measurements

Instructional designers should be capable of developing:
Tests
Written questionnaires
Interviews
Other methods of measuring performance

Approaches to Assessment

Cognitive Assessment
Affective Assessment
Psychomotor Assessment

Basic Principles of Measurement

Tests that measure what a person has learned to do are called achievement tests. There are two types of achievement tests:

Criterion-referenced tests (CRTs), also known as minimum competency or mastery tests. These let everyone know exactly where students stand relative to a standard.

Norm-referenced tests (NRTs). These tests are designed to “reliably” select the best performers. (A minimal scoring sketch contrasting the two follows.)
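To make the distinction concrete, here is a minimal scoring sketch; the learner names, scores, and the 80% mastery cutoff are assumptions invented for illustration. A CRT reports each learner against a fixed standard, while an NRT reports each learner’s standing relative to the group.

```python
# Illustration only: criterion-referenced vs. norm-referenced scoring.
# The cutoff (80) and the scores below are invented for this example.

scores = {"Ana": 92, "Ben": 78, "Cai": 85, "Dee": 61}

# Criterion-referenced: compare each learner to a fixed mastery standard.
MASTERY_CUTOFF = 80
crt_results = {name: ("mastery" if s >= MASTERY_CUTOFF else "non-mastery")
               for name, s in scores.items()}

# Norm-referenced: rank learners against one another
# (percentile = share of peers who scored below).
def percentile_rank(score, all_scores):
    below = sum(1 for s in all_scores if s < score)
    return 100 * below / len(all_scores)

nrt_results = {name: percentile_rank(s, list(scores.values()))
               for name, s in scores.items()}

print(crt_results)   # e.g. {'Ana': 'mastery', 'Ben': 'non-mastery', ...}
print(nrt_results)   # e.g. {'Ana': 75.0, 'Ben': 25.0, ...}
```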

Reliability and Validity

What is Reliability?
A learner evaluation is reliable when it provides similar results each time it is conducted.

What is Validity?
A learner evaluation is valid when it determines whether learners have achieved the intended outcomes of the instruction.

Characteristics of Reliability in Tests

Reliable tests have:
Consistency
Temporal dependency

Consistency
To increase the consistency of an NRT, developers simply increase the number of items on the test. To increase the consistency of a CRT, assess each competency the test covers. (The formula below illustrates the effect of adding items.)
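The slides make this point qualitatively. One standard way to quantify it, added here as an assumption since the deck does not name it, is the Spearman-Brown prophecy formula, which predicts the reliability of a test whose number of (parallel) items is multiplied by a factor k:

```latex
% Spearman-Brown prophecy formula (added illustration, not from the slides).
% rho    : reliability of the current test
% k      : factor by which the number of parallel items is multiplied
% rho_k  : predicted reliability of the lengthened test
\[
  \rho_{k} \;=\; \frac{k\,\rho}{1 + (k - 1)\,\rho}
\]
% Example: a test with reliability 0.60 that is doubled in length (k = 2)
% is predicted to reach 2(0.60) / (1 + 0.60) = 0.75.
```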

Characteristics of Reliability in Tests (Cont’d)

The following factors affect the number of items developed when ensuring consistency on a test:

1. Consequences of misclassification
2. Specificity of the competency
3. Resources available for testing

Characteristics of Reliability in Tests (Cont’d)

Temporal Dependency – each time a test is administered, it should produce similar results.
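Temporal dependency is usually checked with a test-retest comparison. The following is a minimal sketch under assumed data (both score lists are invented): it estimates stability as the Pearson correlation between two administrations of the same test, where values near 1.0 indicate similar results over time.

```python
# Illustration: test-retest (temporal) reliability as the Pearson correlation
# between two administrations of the same test. Scores are invented.
from statistics import correlation  # available in Python 3.10+

first_administration  = [85, 72, 90, 64, 78, 88]
second_administration = [83, 75, 92, 60, 80, 86]

r = correlation(first_administration, second_administration)
print(f"test-retest reliability estimate: r = {r:.2f}")  # close to 1.0 -> stable results
```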

Characteristics of Validity in Tests

There can be no validity without reliability. The performance on a CRT must be exactly the same as the performance specified by the objective.

Achieving validity is not always straightforward.

Matching Assessment to Objectives

Instructional objectives are a key element in the development of effective learner assessment.

A direct relationship must exist between the instructional objectives and the learner assessment.

How can you determine whether the intended outcome of an instructional objective is a change in knowledge, skill or attitude?

Matching Assessment to Objectives – Example

Read the following sentence and identify the action:
“The learner will be able to list the three major warning signs of a heart attack.”
The action in the instructional objective is to list – more specifically, to list the three major warning signs of a heart attack.

Cognitive Tests

Measure the acquisition of knowledge: paper-and-pencil tests, recitation.

The six types of test items that apply to cognitive tasks:
Multiple-choice
True-false
Fill-in
Matching
Short answer
Essay

Performance Tests

Measure a student’s ability to do something. There are five types of performance assessment:

Performance (examination of actions or behaviours that can be directly observed)

Process (learning ways of doing things, such as problem solving and discussing)

Product (the outcome of a procedure is the product, which is evaluated against a standard)

Portfolios (provide the basis for a product and process review)

Projects (a product assessment which is an in-depth investigation of a topic worth learning more about, according to Katz (1994))


Authentic Assessment

Focus is on “real” tasks. Achieves validity and reliability by emphasizing and standardizing the appropriate criteria for scoring such (varied) products.

“Test validity” depends on whether the test simulates real-world tests of ability.

Involves “ill-structured” challenges and roles that help students rehearse for the complex ambiguities of the “game” of adult and professional life.

Attitudinal Tests

Appropriateness of Items

How do you go about writing valid criterion items? The taxonomies of objectives include sample test items that can be used as models.

Appropriateness of Items

An assessment procedure may be more appropriate for some learning outcomes than others.

There are several bases on which the logical consistency between assessment and other aspects of design can be determined.

Appropriateness of Items

These are:
• Matching the objectives to the criterion
• Matching the type of assessment to the type of learning (see the sketch after this list)
• Matching the data collection method to the purpose of the assessment
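As one way to picture the second check, the sketch below maps each type of intended outcome (knowledge, skill, attitude) to an assessment approach drawn from the categories named earlier in the deck; the exact pairing is an assumption for illustration, not a rule stated in the slides.

```python
# Illustration: matching the type of assessment to the type of learning.
# The mapping reflects the approaches named earlier in the slides (cognitive,
# performance, attitudinal); the exact pairing here is an assumption.

ASSESSMENT_FOR_OUTCOME = {
    "knowledge": "cognitive test (e.g. multiple-choice, short answer, essay)",
    "skill":     "performance assessment (e.g. process, product, portfolio)",
    "attitude":  "attitudinal test (e.g. survey or observation of dispositions)",
}

def suggest_assessment(outcome_type: str) -> str:
    """Return a candidate assessment approach for an objective's intended outcome."""
    return ASSESSMENT_FOR_OUTCOME.get(outcome_type, "unrecognised outcome type")

# Example: "The learner will be able to list the three major warning signs
# of a heart attack" targets a change in knowledge.
print(suggest_assessment("knowledge"))
```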

Determination of the Success of Learner Evaluation

A successful learner evaluation provides:
Data for instructional intervention
Data as to whether the learner has met the instructional objectives
Recommendations based on the data gathered
