
Psychometrics and Intellectual Assessment
Winter 2006, CPSE 647
Tuesday 12:15 - 2:50
Room: 359 MCKB

Instructors: Melissa Heath, Ph.D. | Rachel E. Crook Lyon, Ph.D.
Office: 340-K MCKB | 340-Q MCKB
Office Hours: By appointment | By appointment
Office Phone: 422-1235 or 422-3857 | 422-4375
Home Phone: 491-8386 | 407-6414
E-mail: [email protected] | [email protected]

REQUIRED TEXTBOOKS
Groth-Marnat, G. (2003). Handbook of Psychological Assessment- 4th Edition. New York: John Wiley & Sons.

ISBN # 0471052205

Flanagan, D.P. (2004). Essentials of WISC-IV Assessment. New York: John Wiley & Sons.

SUPPLEMENTARY READING:
Assessment of Children (Author: Jerome M. Sattler), ISBN # 0961820926
School Psychology Review, Volume 26, No. 2, 1997: Issues in the Use and Interpretation of Intelligence Tests in the Schools
APA Ethical Standards: http://www.apa.org/ethics/code2002.html
APA standards for testing language minority and culturally different children: http://www.apa.org/pi/psych.html
NASP website for testing guidelines: http://www.nasponline.org/culturalcompetence/index.html
http://www.nasponline.org/culturalcompetence/ortiz.pdf

American Educational Research Association, APA, & NCME (1999). Standards for educational and psychological testing. Washington, DC: Author. [ethics guidelines for assessments & test comparison report]

COURSE GRADING SYSTEM
POINTS   ACTIVITY
100   Written Assessment Reports (25 points per report; 4 reports due during the semester. The first two reports are "mock reports"; instructors provide the outline for your report.)

120   Protocols (12 graded protocols, 10 points per protocol): 2 WAIS-III, 2 WJ COG, 1 KAIT, 1 UNIT, 4 WISC-IV, 1 Stanford-Binet 5th ed., 1 WPPSI-III. Note: All protocols must be peer-reviewed.

110   Quizzes (10 points per quiz; 12 quizzes: drop the 2 lowest scores and "double score" the highest score; see the scoring sketch after the grading scale below)

50 Critical Test Review

200   Videotaped Test Administrations (50 points per test administration; 4 taped administrations): 1 WJ COG, 2 WISC-IV, 1 WAIS-III. Note: All videotaped testing requires a self-review.

70 Midterm (Study Guide will be provided.)

175 Final Exam

175   Final Test Administration (WISC-IV): Turn in the video of the test administration, protocol, and written report. Turn in self- and peer-reviews of the video, protocol, and report.

__________________________________________________________________________________________ 1000 Total points

4.0 A   940 - 1000 points (94 - 100%)
3.7 A-  900 - 939 points (90 - 93%)


3.4 B+  870 - 899 points (87 - 89%)
3.0 B   830 - 869 points (83 - 86%)
2.7 B-  800 - 829 points (80 - 82%)
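The quiz rule above (drop the 2 lowest scores and double-count the highest) and the points-to-grade conversion are simple arithmetic. The following Python sketch is not part of the syllabus and uses hypothetical function names; it shows one way those two calculations could be carried out.

    # Illustrative sketch of the CPSE 647 quiz rule and grade scale (names are hypothetical).

    def quiz_total(scores):
        """Drop the 2 lowest quiz scores, then double-count the highest remaining score."""
        kept = sorted(scores)[2:]          # drop the two lowest
        return sum(kept) + max(kept)       # "double score" the highest

    GRADE_SCALE = [                        # (minimum points, letter, GPA)
        (940, "A", 4.0), (900, "A-", 3.7), (870, "B+", 3.4),
        (830, "B", 3.0), (800, "B-", 2.7),
    ]

    def letter_grade(total_points):
        for cutoff, letter, gpa in GRADE_SCALE:
            if total_points >= cutoff:
                return letter, gpa
        return "below B-", None            # below the lowest grade listed in the scale

    if __name__ == "__main__":
        # e.g., 12 quiz scores out of 10 points each
        print(quiz_total([8, 9, 10, 7, 6, 9, 10, 8, 9, 7, 10, 8]))
        print(letter_grade(945))           # ('A', 4.0)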

During the first 10 – 15 minutes of each class, a short quiz will be administered on the assigned readings or test manuals. In special situations (illness or family/personal emergency), arrangements will be made to accommodate the student’s needs.

Evaluation of knowledge, skills, and disposition:
Student performance, specifically in the areas of knowledge, skills, and professional disposition, will be assessed during the course. This information will be reviewed during the semester faculty evaluations of student progress. Students will be apprised of their standing midway through the course and after all course assignments are graded. If a student's performance is unsatisfactory in any of these three major areas (knowledge, skills, and disposition), the professor will set up an interview with the student to discuss a possible remediation plan.
(1) Knowledge base: Students earning a semester total of less than 83% on their summed quiz scores, or below 86% on their final exam score, will be considered unsatisfactory in their knowledge base. Marginal performance will be designated for students earning 83-86% on their semester's total quiz scores or 86-89% on their final exam.
(2) Skills: Students will be provided with both peer and professor feedback on videotaped test administrations, protocols, and report writing. Students will also self-evaluate their own work, make plans for improvement, and set goals for improvement. NOTE: During practicum and internship, students' assessment/intervention skills will continue to be evaluated.
(3) Professional disposition will be assessed in terms of promptness to class; quality of preparation for class (completing readings and contributing to class discussion); sensitivity and responsiveness to ethical and legal matters; sensitivity to multicultural considerations and individual diversity; consistency of attention and interpersonal involvement in class; openness/responsiveness to feedback; cooperation and collaboration in group learning activities; and peer feedback regarding professional disposition.

Note: Attending class and arriving on time reflects your professional disposition. Those who miss 2 or more classes, or who are consistently late or consistently leave early (late is defined as arriving 6 or more minutes late; consistently is defined as 3 or more times during the semester), will receive a negative review during semester student evaluations. Behaviors considered unprofessional include responding to or making cell phone calls (except for emergency calls), reading the newspaper, sleeping, and completing other tasks not related to CPSE 647. If you must miss class or a substantial portion of class, you will turn in a reflection paper (2-page minimum) on the readings for that day at or before the next class period. If you miss class, it is your responsibility to obtain class lecture notes and assignments from classmates.

Each student is required to keep track of his or her class attendance and tardies on the class point sheet (attached to the syllabus). For each absence, the student is required to complete a five-page (single-spaced) review of the lecture topic, which must include a review of the lecture notes and assigned readings. The student is responsible for getting lecture notes from a classmate and for reading the assigned readings.

A set of criteria for professional disposition will be reviewed in class (Performance Appraisal System, PAS).

Feedback to students:
Students will be apprised of their progress throughout the semester (grades on quizzes, feedback on videotaped assessments, feedback on peer-reviewed work, etc.) and will receive formal written feedback from the professor midway through the course and upon completion of course assignments/requirements. Students will also receive a written summary of their performance in CPSE 647 that will be shared in the faculty meeting at the end of the semester.

POLICY: Late work turned in after the due date will receive a maximum of 70% of the possible points for the assignment. However, in situations involving a personal emergency, circumstances will be considered and appropriate accommodations made.

NOTE: Because of the critical importance of the knowledge and skills associated with intellectual assessment, and the responsibility associated with outcomes based on data-based decision making, students earning a grade below a "B" (829 points and below) will be required to retake the course.


Summary of Information Regarding Student Semester Evaluations:
Students earning a grade below 83% on the final and/or midterm, or for the entire course (total points), will receive an "unsatisfactory" rating for the semester student evaluation of "knowledge." Students receiving a grade of 83% to 86% on the final, midterm, or course grade (total points) will receive a "marginal" rating in the area of "knowledge."
Students arriving late to class (6 or more minutes late for three or more class periods) will receive a marginal rating on their faculty evaluation in the area of disposition.
Students earning a score below 83% on the test administrations will receive an "unsatisfactory" rating for the semester student evaluation in the area of "skills." Students earning a grade of 83% to 86% on the test administrations will receive a "marginal" rating in the area of "skills."

Prerequisite Courses and Remediation Plans:
A class in undergraduate statistics is a prerequisite for this course. Students with a limited background in statistics may be required to take an undergraduate statistics course prior to enrolling in this course. Poorly written reports, graded at or below 86%, will need to be rewritten using the feedback from the professor. Students demonstrating limited proficiency in writing skills will be required to successfully complete a remedial writing class prior to internship. This recommendation will be reviewed by at least two core faculty members, including the program coordinator.

Respecting individual and group differences is not only a professional issue; it is a basic tenet of Brigham Young University's honor code. Disrespect or discrimination will not be tolerated.

Preventing Sexual Harassment
Title IX of the Education Amendments of 1972 prohibits sex discrimination against any participant in an educational program or activity receiving federal funds. The act is intended to eliminate sex discrimination in education. Title IX covers discrimination in programs, admissions, activities, and student-to-student sexual harassment. BYU's policy against sexual harassment extends not only to employees of the university but to students as well. If you encounter unlawful sexual harassment or gender-based discrimination, please talk to your professor; contact the Equal Employment Office at 378-5895 or 367-5689 (24 hours); or contact the Honor Code Office at 378-2847.

Students With Disabilities
Brigham Young University is committed to providing a working and learning atmosphere that reasonably accommodates qualified persons with disabilities. If you have a disability that may impair your ability to complete this course successfully, please contact the Services for Students with Disabilities Office (378-2767). Reasonable academic accommodations are reviewed for all students who have qualified, documented disabilities. Services are coordinated with the student and instructor by the SSD Office. If you need assistance, or if you feel you have been unlawfully discriminated against on the basis of disability, you may seek resolution through established grievance policy and procedures. You should contact the Equal Employment Office at 378-5859, D-282 ASB.
================================================================================

CPSE 647 Winter 2006
CALENDAR OF LECTURES

Assignments & Quiz Topics (Schedule is subject to change)

(1) QUIZZES
Quizzes are administered during the first 10-15 minutes of class. The class lecture starts promptly at 12:15.

The QUIZ will consist of multiple choice, true/false, or short essays covering the assigned reading topic (listed in the syllabus) or may review previous lectures/reading assignments.

(2) VIDEOTAPING OF TESTING, SELF-REVIEW AND PEER-REVIEW
You will videotape and self-review your own work. Your testing partner (partners are rotated throughout the semester) will peer-review your protocols and reports. For the final videotaping, you will self-review all of your work and your testing partner will peer-review all of your work (this includes videotape, report, and protocol).

(3) ASSIGNED READING
In addition to the assigned textbook reading, all students are responsible for reading the testing manuals.

==========================================
January 10

History of Intelligence Testing: Key Names and Dates
Overview of Course Syllabus
Pre-Test: vocabulary terms and concepts


------------------------------------------------------------------------------
January 17

Critical Review of WISC-IV: _______________________
WISC-IV scoring and administration of subtests

Essentials of WISC-IV Assessment, Chapters 1-2
Handbook of Psychological Assessment, pgs. 37-68; 682-689
TOPIC: Ethics and issues related to assessment: APA & NASP guidelines

Prior to January 24, practice WISC-IV testing (full battery) with your testing partner. Each person should take turns being the examiner and the examinee.

------------------------------------------------------------------------------
January 24

Theoretical Underpinnings of Intellectual Assessment
Essentials of WISC-IV Assessment, Chapters 3-4
Handbook of Psychological Assessment- 4th Edition. Read pgs. 129-195.
Report writing: Handbook of Psychological Assessment- 4th Edition. Read pgs. 621-671.
Mock report writing in class (#1 REPORT will be completed in class)
#1 PROTOCOL DUE: WISC-IV protocol (score ahead of time and review in class; peer review in class)
------------------------------------------------------------------------------

January 31
Assessing adults with learning disabilities in a university setting. Guest lecturer: Dr. Ed Martinelli
Review information in Handbook of Psychological Assessment- 4th Edition, 129-195.
Assessment Interview, Handbook of Psychological Assessment- 4th Edition, 69-101.
#2 PROTOCOL DUE: WISC-IV protocol; #1 VIDEO: WISC-IV (self-review of video and protocol, peer review of protocol)
-------------------------------------------------------------------------------

February 7
Essentials of WISC-IV Assessment, Chapters 5-6
In class: case study and mock report writing (#2 REPORT will be completed in class)

WMS-III & WPPSI-III
Handbook of Psychological Assessment- 4th Edition. Read pgs. 197-212.
Critical Review of WMS-III: _______________________
Critical Review of WPPSI-III: ______________________
#3 PROTOCOL DUE: WISC-IV

------------------------------------------------------------------------------
February 14
WJ-COG, WJ-ACH

Critical Review of WJ-COG: ______________________
Critical Review of WJ-ACH: _______________________
Class Discussion: Differing definitions of "Learning Disability"; review handouts on identification of learning disability and specific interventions. Discuss the comparison of IQ standard scores with achievement standard scores.
TOPIC: The Functional Utility of Intelligence Tests with Special Education Populations
#4 PROTOCOL DUE: WISC-IV protocol and #3 REPORT (self-review of video, peer-review of protocol & report); #2 VIDEO: WISC-IV

------------------------------------------------------------------------------
February 21: NO CLASS – MONDAY INSTRUCTION

Check out WJ-COG and WJ-ACH kits and protocols during week (share a kit with a partner). Bring kits to class on Feb 28. Read and review test manuals and protocols prior to class.


February 28
Guest Lecturer: Michael Herbert
WJ-ACH
------------------------------------------------------------------------------

March 7
Guest Lecturer: Michael Herbert
Essentials of WJ-III ACH, Chapters 5-7
DUE: #5 PROTOCOL: WJ-COG (7 core subtests only)
------------------------------------------------------------------------------

March 14
DUE: #6 PROTOCOL: WJ-COG (all subtests)
VIDEO #3: WJ-COG (self-review of video, all subtests)

Stanford-Binet-5: Critical Review of Stanford-Binet, 5th ed.: ________________________
WIAT: Critical Review of WIAT: ________________________
WAIS-III: Critical Review of WAIS-III: ________________________

-----------------------------------------------------------------------------
March 21

DUE: Mid-term Examination
DUE: #7 PROTOCOL: Stanford-Binet-5, all subtests (peer-reviewed)
TOPIC: Assessing older students and adults: WAIS-III
REVIEW WAIS kits in class; begin to administer to partner in class and complete protocol by next week.
-----------------------------------------------------------------------------

March 28
DUE: #8 PROTOCOL: WAIS; VIDEO #4: WAIS

LEITER-R: Critical Review of LEITER-R: ________________________

UNIT: Critical Review of UNIT: ________________________
TOPIC: Special considerations in assessment issues, nonverbal tests, and testing students of diverse backgrounds.
VIDEO: Portraits of the Children: Culturally Competent Assessment (NASP, 2003)
(1) MINORITY ISSUES IN INTELLECTUAL ASSESSMENT: THE TRIPLE QUANDARY OF RACE, CULTURE, AND SOCIAL CLASS IN STANDARDIZED COGNITIVE ABILITY TESTING. Chapter 26, Contemporary Intellectual Assessment, pgs. 517-531. (chapter will be provided)

(2) SPECIAL TESTING CONSIDERATIONS: COGNITIVE ASSESSMENT OF LIMITED ENGLISH PROFICIENT AND BILINGUAL CHILDREN. Chapter 25, Contemporary Intellectual Assessment, pgs. 503-513. (chapter will be provided)

-----------------------------------------------------------------------------
April 4

DUE: #9 PROTOCOL: UNIT & #4 REPORT (peer-reviewed)
KAIT: Critical Review of KAIT: ________________________
K-ABC: Critical Review of K-ABC: _______________________

-----------------------------------------------------------------------------
CLASS ACTIVITY: Review KAIT manual and conduct an in-class administration of the KAIT

-----------------------------------------------------------------------------
April 11

TOPIC: Current Topics in Intellectual Assessment: Research in the past 5 years related to cognitive assessment
DUE: #10 PROTOCOL: KAIT (from the assessment conducted in class last week)

-----------------------------------------------------------------------------


April 18: REVIEW FOR FINAL
Discussion and review of important points. Review study guide for final.

FINAL EXAM: MONDAY, April 24, 11:00 a.m. – 2:00 p.m.
Due on or before 2:00 p.m., April 24: FINAL WISC-IV VIDEO (self- & peer-review), PROTOCOL (peer-review), and REPORT (peer-review)
-----------------------------------------------------------------------------


CRITICAL REVIEW OF TEST

Each student will choose 1 test to review. The due date is listed in the syllabus.

________________________________________WJ-COG

________________________________________WJ-ACH

_________________________________________KAIT

__________________________________________K-ABC

__________________________________________LEITER-most recent edition

__________________________________________STANFORD-BINET-5th EDITION

__________________________________________WPPSI-III

__________________________________________WMS-III

__________________________________________WIAT

__________________________________________UNIT

__________________________________________WAIS-III

__________________________________________WISC-IV
====================================================
DIRECTIONS: Prepare a brief 2-page summary of the test. Please use the following outline to organize your information. Your summary will be due on the day we discuss that particular test in class. Make copies for all class members. You will be the class expert. Please prepare a 10-20 minute presentation.
(1) History of Test Development (underlying theory of the test, key people in development of the test, need for/use of the test, previous editions of the test)
(2) Test Construction
(a) Format of test: What types of questions/activities? How were the test questions/activities selected?
(b) Norming (identify the norming sample: number and age of subjects in the norming sample, ethnicity, location of sites used to norm the test)
(c) Reliability and validity (this part of the critique should show evidence of your knowledge of the different types of reliability and validity): Does this test measure what it purports to measure? How stable are the test results? How do the test results compare with other IQ tests?
(3) Current Use of Test
What are the lower and upper age limits of those individuals who can be tested with this instrument? Who uses this test? What are the test results typically used for? Would the results of this test stand up in a court case? What do current practitioners/professionals think of this test? (Ask a few school psychologists, a few clinical psychologists, and a college professor who teaches an assessment course.) Look in the current critical reviews for critiques of this test (in assessment journals, test review articles, and letters to editors).
(4) Pros and Cons
(a) Ease and length of time to administer the test
(b) Cost of the test and protocol
(c) Time to score the test; also, is the test fairly easy to score?
(d) Training/qualifications to administer the test (who can administer this test?)
(e) Does the test have a unique use with a specific group that is difficult to test with other assessment instruments?
(f) Overall, how does this test stand up against other similar tests in use?
(g) How current are the test norms?


(h) How has this test withstood the test of time?


"Critical Review" Resources from the Harold B. Lee LibraryNOTE: Collected & Summarized by Steve Bird

The Supplement to the Eleventh Mental Measurements Yearbook
Jane Close Conoley and James C. Impara, Eds.
BF 431 .X1 C66 1994

Buros Desk Reference: Psychological Assessment in the Schools
James C. Impara and Linda L. Murphy, Eds.
General Note: "Reviews and information for over 100 frequently used instruments."
LB 3051 .B87x 1994

The Mental Measurements Yearbook on CD-ROM and Master Index to Test Information
Jack J. Kramer and Jane Close Conoley, Eds.
LB 3050 .M46x CD-ROM (1993-)

Tests in Print V: An Index to Tests, Test Reviews, and the Literature on Specific Tests
Linda L. Murphy, James C. Impara, Barbara S. Plake, Eds.
Summary: "The most comprehensive index to commercially available tests published in the English language, Tests in Print V contains information on over four-thousand instruments. Along with a brief description, entries include population, scoring, pricing, publisher information, and a reference list of professional literature citing articles relevant to individual instruments. Indexes of titles, classified subjects, names, and scores, as well as a publishers directory and index are included, with notations for out-of-print instruments. Information is given for tests in a wide range of areas, including education, psychology, counseling, management, health care, career planning, sociology, personnel, child development, social science, and research. Tests in Print V also provides a comprehensive index to the Mental Measurements Yearbook by directing readers to the appropriate volume or volumes for reviews of specific tests."
BF 431 .T47x 1999 Vol. 1
BF 431 .T47x 1999 Vol. 2 (1994 edition is also available as Tests in Print IV)

Test Critiques Compendium: Reviews of Major Tests From the Test Critiques Series
Daniel J. Keyser, Richard C. Sweetland, Eds.
BF 176 .T4195 1987 (1984 edition also available)

Tests: A Comprehensive Reference For Assessments in Psychology, Education, and Business
Richard C. Sweetland, Daniel J. Keyser, Eds.
BF 176 .T43 1986 (1983 edition also available)

Educational and Psychological Measurement and Evaluation
Kenneth D. Hopkins, Julian C. Stanley, B.R. Hopkins
LB 3051 .R59x 1990

Tests and Measurements
W. Bruce Walsh
BF 176 .W34 1989

Measures for Clinical Practice: A Sourcebook
Kevin Corcoran, Joel Fischer
BF 176 .C66 2000 Vol. 1 (1994, 1987 editions also available)

Journal of Psychoeducational Assessment
LB 1131.J8

Journal of Personality Assessment
BF 698.4 .J67


ASSESSMENT VOCABULARY
Please memorize the meaning of the following terms:

------------------------------------------------------------------------------------------------------------------------------------

Intelligence: One of the most popular definitions of intelligence was proposed by Wechsler (1958). He stated that intelligence is "the aggregate or global capacity of the individual to act purposefully, to think rationally and to deal effectively with his environment."
Validity: the degree to which a test truly measures what it was intended to measure.

Construct validity: a test score's tendency to relate systematically with the construct of interest. Test scores are correlated with an operationally defined representative of the construct, such as a well-accepted measure of IQ.
Content validity: the breadth and adequacy with which a test measures a construct.
Criterion-related validity: the instrument's test score correlates with a similar instrument (correlation-based validity).
(a) Predictive validity: test score correlates with a measure of performance at some point in the future.
(b) Concurrent validity: test score correlates with a current measure of performance.
Face validity: the test looks like what it is intended to measure.
The trend is to put all types of validity under the broad heading of construct validity.

Messick (1993) called this "unified validity." Messick states: "the essence of unified validity is that the appropriateness, meaningfulness, and usefulness of score-based inferences are inseparable and that the unifying force behind this integration is the trustworthiness of empirically grounded score interpretation."
------------------------------------------------------------------------------------------------------------------------------------
Reliability: the "stability" of test scores; the replicability of results.
Typically there are three basic markers used when describing reliability: (1) internal consistency, (2) test-retest, and (3) alternate forms.
Reliability is one of the primary qualities examiners should consider when selecting a test. Reliability sets the limits for validity. (pg. 488, Contemporary Intellectual Assessment)
RELIABILITY CRITERIA SET BY BRACKEN (1987) AND SALVIA, NUNNALLY, AND YSSELDYKE (1978, 1988):
SUBTEST SCORES: median subtest internal consistency criterion: .80
TOTAL TEST SCORE: total test internal consistency: .90
------------------------------------------------------------------------------------------------------------------------------------
ADEQUATE FLOOR: The extent to which there is a sufficient number of easy items for younger children or lower functioning individuals to discriminate low levels of functioning from the average and below-average levels of functioning.

ADEQUATE CEILING: The extent to which there is a sufficient number of difficult items on a test or subtest for older children and higher functioning individuals to discriminate higher levels of ability.

STANDARD ERROR OF MEASUREMENT

STATISTICAL SIGNIFICANCE VERSUS RARE OR UNUSUAL

STANDARD SCORES
Z SCORE
MEAN
NORMAL DISTRIBUTION
STANDARD DEVIATION


GRADE SHEET: CPSE 647
Winter Semester 2006
NAME: _______________________________

CLASS ATTENDANCE & WEEKLY QUIZ
T = TARDY, A = ABSENT, P = PRESENT
Attendance / Quiz (max 10 points per quiz)
NOTE: Drop the 2 lowest scores and "double score" the highest score.
______ Jan 10    pre-test
______ Jan 17    ______
______ Jan 24    ______
______ Jan 31    ______
______ Feb 7     ______
______ Feb 14    ______
______ Feb 28    ______
______ Mar 7     ______
______ Mar 14    ______
______ Mar 21    MIDTERM
______ Mar 28    ______
______ April 4   ______
______ Apr 11    ______
______ April 18  ______

FINAL EXAM Monday April 24 11:00 a.m. – 2:00 p.m.

_______ QUIZ TOTAL (max 110 points)

_______ MIDTERM (max 120 points)

_______ FINAL EXAM (max 175 points)


Name: _______________________
VIDEOTAPED TEST ADMINISTRATIONS (50 points per tape)
Videotaped Test Administrations (50 points per test administration; 4 taped test administrations): 1 WAIS, 1 WJ COG, 2 WISC
Note: All videotaped testing requires a self-review.

______ #1 WISC self-reviewed
______ #2 WISC self-reviewed
______ #3 WJ-COG self-reviewed
______ #4 WAIS self-reviewed
______ TOTAL VIDEOTAPED TEST ADMINISTRATIONS (max 200 points)

Protocols (12 points per protocol)
______ #1 WISC practice protocol: SCORE IN CLASS
______ #2 WISC peer-reviewed
______ #3 WISC peer-reviewed
______ #4 WISC peer-reviewed
______ #5 WJ-COG peer-reviewed
______ #6 WJ-COG peer-reviewed
______ #7 SB-5 peer-reviewed
______ #8 WAIS-III peer-reviewed
______ #9 UNIT peer-reviewed
______ #10 KAIT peer-reviewed

______ TOTAL PROTOCOL POINTS (max 120 points)

WRITTEN REPORTS (25 points per report)
______ Report #1 WISC: mock report, CLASS ACTIVITY
______ Report #2 WISC: mock report, CLASS ACTIVITY
______ Report #3 WISC: peer-reviewed
______ Report #4 UNIT: peer-reviewed

______ TOTAL WRITTEN REPORTS (max 100 points)

FINAL TEST ADMINISTRATION: WISC-IV
______ WISC-IV Video (self- & peer-reviewed) ----------- 75 points
______ WISC-IV Protocol (self- & peer-reviewed) -------- 50 points
______ WISC-IV Written report (self- & peer-reviewed) -- 50 points
______ TOTAL FINAL TEST ADMINISTRATION (max 175 points)
=========================================================
______ TOTAL POINTS FOR THE 647 COURSE (max 1000 points)
______ GRADE
4.0 A   940 - 1000 points (94 - 100%)
3.7 A-  900 - 939 points (90 - 93%)
3.4 B+  870 - 899 points (87 - 89%)
3.0 B   830 - 869 points (83 - 86%)
2.7 B-  800 - 829 points (80 - 82%)


Final video, protocol, and report
CPSE 647, Winter 2006

Student: ____________________
====================================================================
______ 75  Video (self- & peer-reviewed)
______ 50  Protocol: fill out as directed on previous protocols; record responses (self- & peer-reviewed)
______ 50  WRITTEN REPORT (self- & peer-reviewed)

____ 175 TOTAL
====================================================================
____ VIDEO: 75 points
______ 20 points  Accurate directions (word for word for each subtest); starting & stopping at the correct place

______ 20 points Watch for timing. Be accurate.

______ 20 points  Smoothness of test administration
NOTE: Before testing, make sure you read the sections that require you to determine 0-, 1-, or 2-point responses. This will make you familiar with the responses.

______ 15 points Quality of recording (SO I CAN HEAR IT and SEE IT)

====================================================================

____ PROTOCOL: 50 points
______ 25 points  Accurate recording of responses

______ 25 points  Accurate scoring of responses, accurate addition/arithmetic, accurate conversion to standard scores

====================================================================

____ REPORT: 50 points
______ 5 points  Accuracy of information taken from the protocol

______ 10 points  Using the outline/skeleton and filling in appropriate information

______ 15 points Appropriate interpretation

______ 5 points  Professional language, but understandable to the average person

______ 5 points  Grammar/style/clarity of writing

______ 5 points  Clean, concise summary
______ 5 points  Recommendations that are on target, research-based, and practical

Accuracy is VERY important. I will require students to re-do their testing/video/report if they earn below 93%. If there are questions regarding the grading, please come in and talk with me.


Glossary of Measurement Terms

BASIC MEASUREMENT CONCEPTS

Ability: A characteristic indicative of an individual’s competence in a particular field. The word "ability" is frequently used interchangeably with aptitude, although many psychologists use "ability" to include what others term "aptitude" and "achievement." (See Aptitude.)

Academic Aptitude (See Scholastic Aptitude.)

Achievement/Ability Comparison (AAC): The relationship between an individual’s score on a subtest of the Stanford Achievement Test Series or the Metropolitan Achievement Tests and the scores of other students of similar ability as measured by the Otis-Lennon School Ability Test. If a student’s achievement test score is higher than those of students of similar ability, the AAC is HIGH. If the achievement score is about the same as the scores of similar-ability students, the AAC is MIDDLE; if the score is lower, the AAC is LOW.

Age Norms: The distribution of test scores by age of test takers. For example, a norms table may be provided for 9 year olds. This age-norms table would present such information as the percentage of 9 year olds who score below each raw score on the test. (See Norms.)

Anecdotal Data: Data obtained from a written description of a specific incident in an individual’s behavior (an anecdotal record). The written report should be an objective account of behavior considered significant for the understanding of the individual.

Aptitude: A combination of characteristics, whether native or acquired, that are indicative of an individual’s ability to learn or to develop proficiency in some particular area if appropriate education or training is provided. Aptitude tests include those of general academic (scholastic) ability; those of special abilities, such as verbal, numerical, mechanical, or musical; tests assessing "readiness" for learning; and tests that measure both ability and previous learning, and are used to predict future performance—usually in a specific field, such as foreign language, shorthand, or nursing.

Calibrated Difficulty Level: A scale value that expresses how difficult a test item is. This value differs from the conventional difficulty index. (See Difficulty Index.) The origin of the scale is arbitrary, but the lower the value, the easier the item.

Construct Validity (See Validity.)

Content Validity (See Validity.)

Correlation: The degree of relationship between two sets of scores. A correlation of 0.00 denotes a complete absence of relationship. A correlation of plus or minus 1.00 indicates a perfect (positive or negative) relationship. Correlation coefficients are used in estimating test reliability and validity.
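As an illustration (not part of the original glossary), the correlation coefficient used in estimating test reliability and validity can be computed directly from two lists of paired scores. The following Python sketch uses hypothetical data.

    import math

    def pearson_r(x, y):
        """Pearson correlation between two paired score lists (ranges from -1.00 to +1.00)."""
        n = len(x)
        mean_x, mean_y = sum(x) / n, sum(y) / n
        cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
        sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
        sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
        return cov / (sd_x * sd_y)

    # Example: scores on two equivalent forms of a test (hypothetical data).
    form_a = [12, 15, 9, 20, 17, 11]
    form_b = [14, 16, 10, 19, 18, 12]
    print(round(pearson_r(form_a, form_b), 2))   # close to +1.00, a strong positive relationship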

Criterion-Referenced (Content-Referenced) Test: Terms often used to describe tests that are designed to provide information about the specific knowledge or skills possessed by a student. Such tests usually cover relatively small units of content and are closely related to instruction. Their scores have meaning in terms of what the student knows or can do, rather than in (or in addition to) their relation to the scores made by some norm group. Frequently, the meaning is given in terms of a cutoff score, for which people who score above that point are considered to have scored adequately ("mastered" the material), while those who score below it are thought to have inadequate scores.

Criterion-Related Validity (See Validity.)

Cumulative Percent (See Percentile Rank.)

Deviation IQ (DIQ): An age-based index of general mental ability. It is based on the difference between a person’s score and the average score for persons of the same chronological age. Deviation IQ scores from most current scholastic aptitude tests are standard scores with a mean of 100 and a standard deviation of 15 or 16 for each defined age group. Thus, the DIQ is a transformed score equal to 15 (or 16) z + 100. (See z-score and Standard Score.) Some people are moving away from calling such a score on a mental or scholastic ability test an IQ. The Otis-Lennon School Ability Test, for example, reports a School Ability Index. (See School Ability Index.)


Deviation Score (x): The score for an individual minus the mean score for the group; i.e., the amount a person deviates from the mean.

Diagnostic Test: A test used to "diagnose" or analyze; that is, to locate an individual’s specific areas of weakness or strength, to determine the nature of his or her weaknesses or deficiencies, and, if possible, to suggest their cause. Such a test yields measures of the components or subparts of some larger body of information or skill. Diagnostic achievement tests are most commonly prepared for the skill subjects.

Difference Score: Difference between two scores for the same individual.

Difference Score Reliability: Reliability of the distribution of differences between two sets of scores. These scores could be on two different subtests, or on a pre- and posttest, where the difference score is typically called a gain score. The meaning of the term "reliability" is the same for a set of difference scores as for a distribution of regular test scores. (See Reliability.) However, since difference scores are derived from two somewhat unreliable scores, difference scores are often quite unreliable. This must be kept in mind when interpreting difference scores.

Difficulty Index: The percent of students who answer an item correctly, designated as p. (At times defined as the percent who respond incorrectly, designated as q.)

Discrimination Index: The extent to which an item differentiates between high-scoring and low-scoring examinees. Discrimination indices generally can range from -1.00 to +1.00. Other things being equal, the higher the discrimination index, the better the item is considered to be. Items with negative discrimination indices are generally items in need of rewriting.

Distracter: An incorrect choice in a multiple-choice or matching item (also called a foil).

Equivalent Forms: Any of two or more forms of a test that are closely parallel with respect to content and the number and difficulty of the items included. Equivalent forms should also yield very similar average scores and measures of variability for a given group. Also called parallel or alternate forms.

Error of Measurement: The amount by which the score actually received (an observed score) differs from a hypothetical true score. (See also Standard Error of Measurement.)

Frequency: The number of times a given score (or a set of scores in an interval grouping) occurs in a distribution.

Frequency Distribution: A tabulation of scores from low to high or high to low showing the number of individuals who obtain each score or fall within each score interval.

Gain Score: Difference between a posttest score and a pretest score.

Grade Equivalent (G.E.): A norm-referenced score; the grade and month of the school year for which a given score is the actual or estimated average. A grade equivalent is based on a 10-month school year. If a student scores at the average of all fifth graders tested in the first month of the school year, he/she would obtain a G.E. of 5.1. If the score was the same as the average for all fifth graders tested in the eighth month, the grade equivalent would be 5.8. There are some problems with the use of grade equivalents, and caution should be used when interpreting this type of score. For example, if a student at the end of fourth grade obtains a G.E. of 8.8 on a math subtest, this does not mean that the child can do eighth-grade work. Rather, it means that the child obtained the same score as an average student in the eighth month of the eighth grade, had the eighth-grade student taken the fourth-grade test.

Grade Norms: The distribution of test scores by the grade of the test takers. (See Age Norms and Norms.)

Item Analysis: The process of examining students’ responses to test items to judge the quality of each item. The difficulty and discrimination indices are frequently used in this process. (See Difficulty Index and Discrimination Index.)
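A hypothetical Python sketch of a simple item analysis (illustrative only, not from the glossary): difficulty is the proportion answering an item correctly, and one common discrimination index contrasts the proportions correct in the top- and bottom-scoring groups.

    def difficulty_index(item_responses):
        """p = proportion of examinees answering the item correctly (1 = correct, 0 = incorrect)."""
        return sum(item_responses) / len(item_responses)

    def discrimination_index(item_responses, total_scores, group_size):
        """Upper-group p minus lower-group p, using total test score to form the groups."""
        order = sorted(range(len(total_scores)), key=lambda i: total_scores[i])
        low, high = order[:group_size], order[-group_size:]
        p_high = sum(item_responses[i] for i in high) / group_size
        p_low = sum(item_responses[i] for i in low) / group_size
        return p_high - p_low   # -1.00 to +1.00; negative values flag items in need of rewriting

    # Hypothetical data: 8 examinees' responses to one item and their total test scores.
    item = [1, 1, 0, 1, 0, 1, 0, 0]
    totals = [38, 35, 12, 30, 15, 33, 10, 18]
    print(difficulty_index(item))                  # 0.5
    print(discrimination_index(item, totals, 3))   # 1.0: the item separates high and low scorers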

Item Difficulty: (See Difficulty Index.)

Item Discrimination: (See Discrimination Index.)


Latent-Trait Scale: A scaled score obtained through one of several mathematical approaches collectively known as Latent-Trait procedures or Item Response Theory. The particular numerical values used in the scale are arbitrary, but higher scores indicate more knowledgeable people or more difficult items. (See Scaled Score.)

Local Percentile (See Percentile.)

Mastery Level: The cutoff score on a criterion-referenced or mastery test. People who score at or above the cutoff score are considered to have mastered the material; people who score below the cutoff score are considered to be nonmasters. "Mastery" in this sense is an arbitrary judgment. A cutoff score can be determined by several different methods. Each method often results in a different cutoff score.

Mastery Test: A test designed to determine whether a student has mastered a given unit of instruction or a single knowledge or skill; a test giving information on what a student knows, rather than on how his or her performance relates to that of some norm group.

Mean (X̄): The arithmetic average of a set of scores. It is found by adding all the scores in the distribution and dividing by the total number of scores.

Median (Md): The middle score in a distribution or set of ranked scores; the point (score) that divides a group into two equal parts; the 50th percentile. Half the scores are below the median, and half are above it.

Mode: The score or value that occurs most frequently in a distribution.

N: The symbol commonly used to represent the number of cases in a group.

National Percentile (See Percentile.)

Normal Curve Equivalents (NCEs): Normalized standard scores with a mean of 50 and a standard deviation of 21.06. (See Standard Score.) The standard deviation of 21.06 was chosen so that NCEs of 1 and 99 are equivalent to percentiles of 1 and 99. There are approximately 11 NCEs to each stanine. (See Stanines.)

Normal Distribution: A distribution of scores or other measures that in graphic form has a distinctive bell-shaped appearance. In a normal distribution, the measures are distributed symmetrically about the mean. Cases are concentrated near the mean and decrease in frequency, according to a precise mathematical equation, the farther one departs from the mean. The assumption that many mental and psychological characteristics are distributed normally has been very useful in test development work.

Figure 1 below is a normal distribution. It shows the percentage of cases between different scores as expressed in standard deviation units. For example, about 34% of the scores fall between the mean and one standard deviation above the mean.

Figure 1. A Normal Distribution.

Norms: The distribution of test scores of some specified group called the norm group. For example, this may be a national sample of all fourth graders, a national sample of all fourth-grade males, or perhaps all fourth graders in some local district.

Norms vs. Standards: Norms are not standards. Norms are indicators of what students of similar characteristics did when confronted with the same test items as those taken by students in the norms group. Standards, on the other hand, are arbitrary judgments of what students should be able to do, given a set of test items.

Norm-Referenced Test: Any test in which the score acquires additional meaning by comparing it to the scores of people in an identified norm group. A test can be both norm- and criterion-referenced. Most standardized achievement tests are referred to as norm-referenced.

Objectives: Stated, desirable outcomes of education.

Out-of-Level Testing: The activity of administering a test level that is different from the one designated for a student of a particular age or in a particular grade. For example, a fourth grader might be given a test level designated for use in Grade 2. Out-of-level testing is used so that students can be tested on the content appropriate to their current level of functioning; that is, above or below their grade placement or age.

p-Value: The proportion of people in an identified norm group who answer a test item correctly; usually referred to as the difficulty index. (See Difficulty Index.)

Percentile: A point on the norms distribution below which a certain percentage of the scores fall. For example, if 70% of the scores fall below a raw score of 56, then the score of 56 is at the 70th percentile. The term "local percentile" indicates that the norm group is obtained locally. The term "national percentile" indicates that the norm group represents a national group.

Percentile Band: An interpretation of a test score that takes into account measurement error. These bands, which are most useful in portraying significant differences between subtests in battery profiles, most often represent the range from one standard error of measurement below the obtained score to one standard error of measurement above it. For example, if a student had a raw score of 35, and if the standard error of measurement were 5, the percentile rank for a score of 30 to the percentile rank for a score of 40 would be the percentile band. We would be 68% confident the student’s true percentile rank falls within this band. (See Standard Error of Measurement and True Score.)
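An illustrative Python sketch (not from the glossary) of a percentile band: the band runs from one SEM below to one SEM above the obtained score, and each end point is converted to a percentile rank under the assumption that the scores are normally distributed. All numbers are hypothetical.

    from statistics import NormalDist

    def percentile_band(obtained_raw, sem, mean, sd):
        """Percentile ranks for (raw - 1 SEM) and (raw + 1 SEM), assuming normally distributed scores."""
        dist = NormalDist(mu=mean, sigma=sd)
        low_pct = 100 * dist.cdf(obtained_raw - sem)
        high_pct = 100 * dist.cdf(obtained_raw + sem)
        return round(low_pct), round(high_pct)

    # Hypothetical example: raw score 35, SEM 5, on a test with mean 32 and SD 8.
    print(percentile_band(35, 5, 32, 8))   # roughly the 40th to 84th percentile band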

Percentile Rank: The percentage of scores falling below a certain point on a score distribution. (Percentile and percentile rank are sometimes used interchangeably.)

Profile: A graphic presentation of several scores expressed in comparable units of measurement for an individual or a group. This method of presentation permits easy identification of relative strengths or weaknesses across different tests or subtests.

Quartile: One of three points that divide the scores in a distribution into four groups of equal size. The first quartile (Q1), or 25th percentile, separates the lowest fourth of the group; the middle quartile (Q2), the 50th percentile or median, divides the second fourth of the cases from the third; and the third quartile (Q3), the 75th percentile, separates the top quarter.

Raw Score: A person’s observed score on a test, i.e., the number correct. While raw scores do have some usefulness, they should not be used to make comparisons between performance on different tests, unless other information about the characteristics of the tests is known. For example, if a student answered 24 items correctly on a reading test, and 40 items correctly on a mathematics test, we should not assume that he or she did better on the mathematics test than on the reading measure. Perhaps the reading test consisted of 35 items and the arithmetic test consisted of 80 items. Given this additional information we might conclude that the student did better on the reading test (24/35 as compared with 40/80). How well did the student do in relation to other students who took the test in reading? We cannot address this question until we know how well the class as a whole did on the reading test. Twenty-four items answered correctly is impressive, but if the average (mean) score attained by the class was 33, the student’s score of 24 takes on a different meaning.

Readiness Test: A measure of the extent to which an individual has achieved the degree of maturity, or has acquired certain skills or information, needed to undertake some new learning activity successfully. For example, a reading readiness test indicates whether a child has reached a developmental stage at which he may profitably begin formal reading instruction.

Regression Effect: Tendency of a posttest score (or a predicted score) to be closer to the mean of its distribution than the pretest score is to the mean of its distribution. Because of the effects of regression, students obtaining extremely high or extremely low scores on a pretest tend to obtain less extreme scores on a second administration of the same test (or on some predicted measure).

Reliability: The extent to which test scores are consistent; the degree to which the test scores are dependable or relatively free from random errors of measurement. Reliability is usually expressed in the form of a reliability coefficient or as the standard error of measurement derived from it. The reliability of a major classroom achievement test should be at least .60. The reliability of a standardized achievement or aptitude test should be at least .85. The higher the reliability coefficient the better, because this means there are smaller random errors in the scores. A test (or a set of test scores) with a reliability of 1.00 would have a standard error of zero and thus be perfectly reliable. (See Standard Error of Measurement.)

Reliability Coefficients: Estimated by the correlation between scores on two equivalent forms of a test, by the correlation between scores on two administrations of the same test, or through procedures known as internal-consistency estimates. Each of the three estimates pertains to a different aspect of reliability. One of the easier and more commonly used (by teachers) estimates of reliability is known as the Kuder-Richardson Formula #21 (KR-21) estimate. The formula is KR-21 = [k / (k − 1)] × [1 − M(k − M) / (k × s²)], where k is the number of items, M is the mean of the total scores, and s² is the variance of the total scores.
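As an illustration (not part of the glossary), KR-21 needs only the number of items, the mean, and the variance of the total scores. A minimal Python sketch with hypothetical data:

    from statistics import mean, pvariance

    def kr21(total_scores, num_items):
        """Kuder-Richardson Formula 21 estimate of internal-consistency reliability."""
        m = mean(total_scores)
        var = pvariance(total_scores)      # variance of the total test scores
        k = num_items
        return (k / (k - 1)) * (1 - (m * (k - m)) / (k * var))

    # Hypothetical total scores on a 40-item classroom test.
    scores = [31, 28, 35, 22, 30, 27, 33, 25, 29, 36]
    print(round(kr21(scores, 40), 2))      # about .57, below the .60 suggested above for a classroom test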

Reliability of Difference Scores (See Difference Score Reliability.)

Scaled Score: A mathematical transformation of a raw score. Scaled scores are useful when comparing test results over time. Most standardized achievement test batteries provide scaled scores for such purposes. Several different methods of scaling exist, but each is intended to provide a continuous score scale across the different forms and levels of a test series.

Scaled-Score Band: An individual’s scaled score plus and minus one standard error of measurement on the scaled-score metric. We can be 68% confident that the person’s true scaled score is between the two end points of this band. (See Standard Error of Measurement and True Score.)

Scholastic Aptitude: The combination of native and acquired abilities that are needed for school learning; the likelihood of success in mastering academic work as estimated from measures of the necessary abilities.

School Ability Index (SAI): A normalized standard score, obtained from the Otis-Lennon School Ability Test, with a mean of 100 and a standard deviation of 16. (See Deviation IQ and Standard Score.) An individual who had a School Ability Index of 116 would be one standard deviation above the mean, for example. This person would be at the 84th percentile for his or her age group. (See Normal Distribution.)

Standard Age Scores: Normalized standard scores provided for specified age groups on each battery of a test. Typically, standard age scores have a mean of 100 and a standard deviation of 15.

Standard Deviation (S.D.): A measure of the variability, or dispersion, of a distribution of scores. The more the scores cluster around the mean, the smaller the standard deviation. In a normal distribution of scores, 68.3% of the scores are within the range of one S.D. below the mean to one S.D. above the mean. Computation of the S.D. is based upon the square of the deviation of each score from the mean. One way of writing the formula is as follows:

S.D. = √[ Σ(X − X̄)² / N ]

(See Normal Distribution.)
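A brief Python illustration (not from the glossary) of the standard deviation formula above:

    import math

    def standard_deviation(scores):
        """S.D. = square root of the mean squared deviation from the mean."""
        n = len(scores)
        m = sum(scores) / n
        return math.sqrt(sum((x - m) ** 2 for x in scores) / n)

    print(standard_deviation([2, 4, 4, 4, 5, 5, 7, 9]))   # 2.0 (the mean is 5)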

Standard Error of Measurement (SEM): The amount an observed score is expected to fluctuate around the true score. For example, the obtained score will not differ by more than plus or minus one standard error from the true score about 68% of the time. About 95% of the time, the obtained score will differ by less than plus or minus two standard errors from the true score.

The SEM is frequently used to obtain an idea of the consistency of a person’s score or to set a band around a score. Suppose a person scores 110 on a test where the S.D. = 20 and the reliability coefficient (r) = .91. Then: SEM = S.D. × √(1 − r) = 20 × √(.09) = 6. We would thus say we are 68% confident the person’s true score is between (110 − 1 SEM) and (110 + 1 SEM), or between 104 and 116.
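The worked example above can be reproduced with a short Python sketch (illustrative only), using SEM = S.D. × √(1 − r):

    import math

    def sem(sd, reliability):
        """Standard error of measurement from the test's S.D. and reliability coefficient."""
        return sd * math.sqrt(1 - reliability)

    def confidence_band(obtained, sd, reliability, n_sems=1):
        """Band of +/- n SEMs around an obtained score (1 SEM ~ 68% confidence, 2 SEMs ~ 95%)."""
        e = n_sems * sem(sd, reliability)
        return obtained - e, obtained + e

    print(round(sem(20, 0.91), 2))            # 6.0
    low, high = confidence_band(110, 20, 0.91)
    print(round(low), round(high))            # 104 116, matching the example above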

Standard Score: A general term referring to scores that have been "transformed" for reasons of convenience, comparability, ease of interpretation, etc. The basic type of standard score, known as a z-score, is an expression of the deviation of a score from the mean score of the group in relation to the standard deviation of the scores of the group. Most other standard scores are linear transformations of z-scores, with different means and standard deviations. (See z-Score.)

Standards (See Norms vs. Standards)

Stanines: Expressed as a nine-point normalized standard score scale with a mean of 5 and a standard deviation of 2. Only the integers 1 to 9 occur. The percentage of scores at each stanine is 4, 7, 12, 17, 20, 17, 12, 7, and 4, respectively. While stanines are popular, they are actually less informative than, say, percentiles. For example, for three students with percentiles of 39, 41, and 59, the first would receive a stanine of 4, and the next two stanines of 5. We would thus be misled into inferring that the latter two students were the same, and different from the first with respect to the characteristic measured, whereas in reality the first two individuals are essentially the same, and different from the third.

Sometimes, the first three stanines are interpreted as being "below average," the next three as "average," and the top three stanines as "above average." This can be quite misleading. Suppose twins, Joe and Jim, have percentiles of 22 and 24, respectively. Joe would have a stanine of 3 and be considered "below average" whereas Jim would have a stanine of 4 and be considered average.

T-Score: A standard score with a mean of 50 and a standard deviation of 10. Thus a T-score of 60 represents a score one standard deviation above the mean. T-scores are obtained by the following formula: T = 10z + 50.
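A Python sketch (not part of the glossary) of the linear standard-score transformations defined in this glossary: each one rescales a z-score to a chosen mean and standard deviation. The raw-score values below are hypothetical.

    def z_score(raw, mean, sd):
        return (raw - mean) / sd

    def rescale(z, new_mean, new_sd):
        """Linear standard-score transformation: new score = new_sd * z + new_mean."""
        return new_sd * z + new_mean

    z = z_score(raw=60, mean=50, sd=10)   # hypothetical raw score one S.D. above the mean
    print(rescale(z, 50, 10))             # T-score: 60
    print(rescale(z, 100, 15))            # Deviation IQ (S.D. of 15): 115
    print(rescale(z, 100, 16))            # School Ability Index: 116
    print(rescale(z, 50, 21.06))          # Normal Curve Equivalent: about 71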

True Score: A score entirely free of error; a hypothetical value that can never be obtained by testing, since a test score always involves some measurement error. A person’s "true" score may be thought of as the average of an infinite number of measurements from the same or exactly equivalent tests, assuming no practice effect or change in the examinee during the testing. The standard deviation of this infinite number of scores is known as the standard error of measurement. (See Standard Error of Measurement.)

Validity: The extent to which a test does the job for which it is intended. The term validity has different connotations for different types of tests and, therefore, different kinds of validity evidence are appropriate for each.

1. Content validity: For achievement tests, content validity is the extent to which the content of the test represents a balanced and adequate sampling of the outcomes (domain) about which inferences are to be made.

Typically, but not always, we wish to make inferences about the degree to which students have learned the material in a course. In those cases, the question of content validity is a question of the match and balance between the test items and the course content. At other times we wish to make different inferences. For example, we may wish to know (make inferences about) how well a group of students can perform the basic arithmetic functions even though we have not been teaching them directly but have been teaching set theory, different number bases, exponents, etc. In such a case, the content validity of a test would be the degree to which the test questions represent a balanced and adequate sampling of the domain of "arithmetic functions." The match is always between the questions asked and the domain of behavior about which inferences are to be made.

2. Criterion-related validity: The extent to which scores on the test are in agreement with (concurrent validity) or predict (predictive validity) some criterion measure.

Predictive validity refers to the accuracy with which a test is indicative of performance on a future criterion measure, e.g., scores on an academic aptitude test administered in high school to grade-point averages over four years of college. Evidence of concurrent validity is obtained when no time interval has elapsed between the administration of the test being validated and collection of data. Concurrent validity might be obtained by administering concurrent measures of academic ability and achievement, by determining the relationship between a new test and one generally accepted as valid, or by determining the relationship between scores on a test and a less objective criterion measure.

3. Construct validity: The extent to which a test measures some relatively abstract psychological trait or construct; applicable in evaluating the validity of tests that have been constructed on the basis of an analysis of the trait and its manifestation.

Tests of personality, verbal ability, mechanical aptitude, critical thinking, etc., are validated in terms of their constructs by the relationships between their scores and pertinent external data.

Variability: The spread, or dispersion, of test scores, most often expressed as a standard deviation. (See Standard Deviation.)

Variance: The square of the standard deviation.

Weighting: The process of assigning different weights to different scores in making some final decision. To do weighting correctly, one must convert all scores to a common scale or metric. For example, we cannot average temperatures measured with both the Celsius and Fahrenheit scale until the temperatures from one scale are converted to the other scale. For educational data, we should first convert all data to a common scale such as a z-score, a T-score, or some other standard score. Then, to combine scores, we must determine how much weight to give each score. Weights are usually assigned subjectively, based on the importance and/or quality, e.g., reliability, of the data.
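A minimal Python sketch (illustrative only, with hypothetical data) of the weighting procedure described above: convert each measure to z-scores so they share a common scale, then form a weighted composite.

    from statistics import mean, pstdev

    def to_z(scores):
        m, sd = mean(scores), pstdev(scores)
        return [(x - m) / sd for x in scores]

    def weighted_composite(score_lists, weights):
        """Weighted average of z-scores across several measures, one composite per student."""
        z_lists = [to_z(scores) for scores in score_lists]
        total_w = sum(weights)
        return [sum(w * z[i] for w, z in zip(weights, z_lists)) / total_w
                for i in range(len(score_lists[0]))]

    # Hypothetical data: quiz totals and final-exam scores for four students, weighted 1:2.
    quizzes = [95, 88, 70, 102]
    finals = [160, 150, 120, 170]
    print([round(c, 2) for c in weighted_composite([quizzes, finals], [1, 2])])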

z-Score: A type of standard score with a mean of zero and a standard deviation of one. (See Standard Score.)

Thus, for example, if three individuals have z-scores of -0.5, 0, and 1.0, we know the first scored one-half a standard deviation below the mean, the second scored right at the mean, and the third scored one standard deviation above the mean. If the distribution were normal, these z-scores would have percentiles of about 31, 50, and 84, respectively. (See Normal Distribution.)
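The percentile ranks for these z-scores under a normal distribution can be checked with a short sketch (illustrative only):

    from statistics import NormalDist

    std_normal = NormalDist()   # mean 0, standard deviation 1
    for z in (-0.5, 0.0, 1.0):
        print(z, round(100 * std_normal.cdf(z)))   # about the 31st, 50th, and 84th percentiles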

from: Harcourt Assessment, Inc. | 19500 Bulverde Road | San Antonio, Texas 78259 | 1-800-211-8378
Copyright © 2004 Harcourt Assessment, Inc. All rights reserved.
