Student assessment
AH Mehrparvar, MD. Occupational Medicine Department, Yazd University of Medical Sciences
“Students read, not to learn but to pass the examination. They pass the examination but they do not learn.”
Huxley
What is an assessment?
Any systematic method of obtaining evidence (from tests, examinations, questionnaires, surveys and collateral sources) to draw inferences about students' competence for a specific purpose
Evaluation
Evaluation is a judgment regarding the quality or worth of the assessment results
This judgment is based upon multiple sources of assessment information
Qualitative and quantitative measurement of student behaviour
+ Value judgment
= Evaluation
Critical questions in assessment
1. WHY are we doing the assessment?
2. WHAT are we assessing?
3. HOW are we assessing it?
4. HOW WELL is the assessment working?
Purpose of assessment
• To determine whether learning objectives are met
• Support of student learning
• Certification and judgment of competency
• Development and assessment of the teaching program
• Understanding the learning process
• Predicting future performance
Purpose of assessment
DEFINE THE MINIMUM ACCEPTED LEVEL OF COMPETENCE
Prove: that he/she is a competent doctor
Improve: provide feedback regarding shortcomings
Purpose of assessment
• For selection of a few students from a large number of students
• Pre-assessment of the needs of a learner
• For continued monitoring of learning activities and giving feedback
• For certifying competence on completion of a course
WHAT are we testing? Elements of competence
Knowledge: factual; applied (clinical reasoning)
Skills: communication; clinical
Attitudes: professional behaviour
How are we doing the assessment?
• Essays
• Short answer questions
• Simulated patient management problems
• Practical examinations
• Clinical examinations
• OSPE
• OSCE
• Rating scales
• Checklists
• Questionnaires
• Diary and logbook
What to assess?

Domain: Cognitive (knowledge)
Methods: written tests; oral
Instruments: open-ended or essay questions; structured essay or MEQ; short answer questions; objective MCQs; simulated patient management problems; assignment questions

Domain: Psychomotor (skills)
Method: observation
Instruments: practical examinations with actual and model clinical cases; objective structured clinical/practical examination (OSCE/OSPE)

Domain: Affective (attitude)
Method: observation
Instruments: rating scales; checklists; questionnaires; logbook; daily evaluation sheets
What can we assess?
Knows
Knows how
Shows how
Does
Knows: factual tests (SBAs, essay, SAQ)
Knows how: clinical context-based tests (SBAs, EMQs, SAQ)
Shows how: performance assessment in vitro (OSCEs, OSPE)
Does: performance assessment in vivo (mini-CEX, DOPS)
HOW WELL is the assessment working?
Evaluation of assessment systems
• Is it valid?
• Is it reliable?
• Is it doing what it is supposed to be doing?
Characteristics of assessment
Relevance: Is it appropriate to the needs of the society or system?
Validity: Does the assessment tool really test what it intends to test?
Reliability: accuracy and consistency of results
Objectivity: Will the scores obtained by the candidate be the same if evaluated by two or more independent experts?
Feasibility: Can the process be implemented in practice?
RELEVANCE
Relevance refers to the appropriateness of the evaluation process with reference to the jobs the student will perform after qualification; the process should therefore reflect the health needs of the society.
The relevance of the process should be obvious to both teachers and students.
VALIDITY
Validity refers to the degree to which a test measures what it intends to measure.
In choosing an instrument, the first question the teacher should consider is the learning outcome to be measured.
Validity refers both to the results of the test and to the instrument itself.
Factors influencing validity
Test factors:
• Unclear directions
• Difficult or ambiguous wording of questions
• Poorly constructed items
• Inappropriate level of difficulty
• Inappropriate question for the outcome being measured
• Inappropriate arrangement of items
• Identifiable pattern of answers and clues
• Too short or too long a test
• Errors in scoring
• Adverse classroom and environmental factors
RELIABILITY
Reliability is the consistency with which an instrument measures a variable; it is a measure of the reproducibility of the test.
Reliability is a mathematical concept: a measure of the correlation between two sets of scores.
To obtain two sets of scores, one of three alternatives is available:
a. Test-retest: the same test is administered to the same students on two occasions and the two sets of scores are compared.
b. Equivalent tests: Two tests of equivalent form can be administered to the students to obtain two sets of scores.
c. Split-half method: a single test is split into two halves (for example, odd- and even-numbered MCQs) and the two sets of scores for each student are compared.
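The split-half idea can be sketched numerically. The following is a minimal illustration, not part of the lecture: the function names and the student scores are hypothetical. It correlates students' scores on the odd- and even-numbered items and then applies the standard Spearman-Brown correction, since each half is only half the length of the full test.

```python
# Split-half reliability sketch: correlate odd- vs even-item scores,
# then apply the Spearman-Brown correction.
# All data and function names are illustrative, not from the lecture.

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(item_scores):
    """item_scores: one row per student, one 1/0 entry per MCQ item.
    Splits items into odd/even halves and corrects with Spearman-Brown."""
    odd = [sum(row[0::2]) for row in item_scores]   # items 1, 3, 5, ...
    even = [sum(row[1::2]) for row in item_scores]  # items 2, 4, 6, ...
    r_half = pearson_r(odd, even)
    return 2 * r_half / (1 + r_half)  # Spearman-Brown correction

# Hypothetical scores for 5 students on a 6-item test
scores = [
    [1, 1, 1, 0, 1, 1],
    [1, 0, 1, 1, 0, 1],
    [0, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1, 0],
]
print(round(split_half_reliability(scores), 2))  # → 0.93
```

A value near 1 indicates that the two halves rank students consistently; low values suggest the test does not measure one thing reliably.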
OBJECTIVITY
Degree of agreement between the judgments of independent and competent examiners
Objectivity of the Evaluation process should be maintained
Steps to increase objectivity of scoring in conventional examinations:
• Structuring of questions
• Preparation of model answers
• Agreeing on the marking scheme
• Having papers independently evaluated by two or more examiners
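Agreement between independent examiners can also be quantified. One common statistic (not mentioned in the lecture, added here as an illustration with hypothetical ratings) is Cohen's kappa, which corrects raw agreement for the agreement expected by chance.

```python
# Inter-examiner agreement sketch using Cohen's kappa.
# The examiner ratings below are hypothetical, for illustration only.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two examiners' ratings."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Agreement expected by chance from each examiner's marginal totals
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two examiners independently grade 10 scripts pass/fail
a = ["P", "P", "F", "P", "F", "P", "P", "F", "P", "P"]
b = ["P", "P", "F", "P", "P", "P", "F", "F", "P", "P"]
print(round(cohens_kappa(a, b), 2))  # → 0.52
```

Here the examiners agree on 8 of 10 scripts (80%), but after removing chance agreement the kappa is about 0.52, i.e. only moderate objectivity.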
FEASIBILITY
Considering the ground realities, an evaluation process should be feasible.
Factors to be considered in deciding feasibility are:
• Time and resources required
• Availability of an equivalent form of the test for measuring reliability
• Ease of administration, scoring and interpretation
A systematically designed assessment:
• Is relevant to the curriculum
• Focuses on important skills
• Promotes learning skills
• Discriminates between good and poor students
• Provides feedback