
TEST OF SPOKEN ENGLISH
&
SPEAKING PROFICIENCY ENGLISH ASSESSMENT KIT

2001-2002 EDITION

www.toefl.org

The TSE program does not operate, license, endorse, or recommend any schools or study materials that claim to prepare people for the TSE or SPEAK test in a short time or that promise them high scores on the test.

Educational Testing Service is an Equal Opportunity/Affirmative Action Employer.

Copyright © 2001 by Educational Testing Service. All rights reserved.

EDUCATIONAL TESTING SERVICE, ETS, the ETS logos, SPEAK, the SPEAK logo, TOEFL, the TOEFL logo, TSE, the TSE logo, and TWE are registered trademarks of Educational Testing Service. The Test of English as a Foreign Language, Test of Spoken English, and Test of Written English are trademarks of Educational Testing Service.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher. Violators will be prosecuted in accordance with both United States and international copyright and trademark laws.

Permissions requests may be made online at www.toefl.org/copyrigh.html or sent to:

Proprietary Rights Office
Educational Testing Service
Rosedale Road
Princeton, NJ 08541-0001, USA
Phone: 1-609-734-5032


Preface

This 2001 edition of the TSE Score User Guide supersedes the TSE Score User’s Manual published in 1995.

The Guide has been prepared for foreign student advisers, college deans and admissions officers, scholarship program administrators, department chairpersons and graduate advisers, teachers of English as a second language, licensing boards, and others responsible for interpreting TSE scores. In addition to describing the test, testing program, and rating scale, the Guide discusses score interpretation, TSE examinee performance, and TSE-related research.

Your suggestions for improving the usefulness of the Guide are most welcome. Please feel free to send any comments to us at the following address:

TSE Program Office
TOEFL Programs and Services
Educational Testing Service
PO Box 6157
Princeton, NJ 08541-6157, USA

Language specialists prepare TSE test questions. These specialists follow careful, standardized procedures developed to ensure that all test material is of consistently high quality. Each question is reviewed by several members of the ETS staff. The TSE Committee, an independent group of professionals in the fields of linguistics and language training that reports to the TOEFL Board, is responsible for the content of the test.

After test questions have been reviewed and revised as appropriate, they are selectively administered in trial situations and assembled into test forms. The test forms are then reviewed according to established ETS and TSE program procedures to ensure that the forms are free of cultural bias. Statistical analyses of individual questions, as well as of the complete tests, ensure that all items provide appropriate measurement information.

Table of Contents

Overview of the TSE Test
    Purpose of the TSE test
    Relationship of the TSE test to the TOEFL program
Development of the Original TSE Test
Revision of the TSE Test
    The TSE Committee
    Overview of the TSE test revision process
    Purpose and format of the revised test
    Test construct
    Validity of the test
    Reliability and SEM
Content and Program Format of the TSE Test
    Test content
    Test registration
    Administration of the test
    Individuals with disabilities
    Measures to protect test security
    TSE score cancellation by ETS
Scores for the TSE Test
    Scoring procedures
    Scores and score reports
    Confidentiality of TSE scores
    Requests for TSE rescoring
    TSE test score data retention
Use of TSE Scores
    Setting score standards
    TSE sample response tape
    Guidelines for using TSE test scores
Statistical Characteristics of the TSE Test: Performance of Examinees on the Test of Spoken English
Speaking Proficiency English Assessment Kit (SPEAK)
Research
    TOEFL research program
    Research and related reports
References
Appendices
    A. TSE Committee Members
    B. TSE Rating Scale, TSE and SPEAK Band Descriptor Chart
    C. Glossary of Terms Used in TSE Rating Scale
    D. Frequently Asked Questions and Guidelines for Using TSE or SPEAK Scores
    E. Sample TSE Test
Where to Get TSE Bulletins


    Overview of the TSE Test

    Purpose of the TSE test

The primary purpose of the Test of Spoken English (TSE®) is to measure the ability of nonnative speakers of English to communicate orally in a North American English context. The TSE test is delivered in a semidirect format, which maintains reliability and validity while controlling for the subjective variables associated with direct interviewing. Because it is a test of general oral language ability, the TSE test is appropriate for examinees regardless of native language, type of educational training, or field of employment.

There are two separate registration categories within the TSE program: TSE-A and TSE-P.

TSE-A is for teaching and research assistant applicants who have been requested to take the TSE test by the admissions office or department chair of an academic institution. TSE-A is also for other undergraduate or graduate school applicants.

TSE-P is for all other individuals, such as those who are taking the TSE test to obtain licensure or certification in a professional or occupational field.

The TSE test has broad applicability because performance on the test indicates how oral language ability might affect the examinee’s ability to communicate successfully in either academic or professional environments. TSE scores are used at many North American institutions of higher education in the selection of international teaching assistants (ITAs). The scores are also used for selection and certification purposes in the health professions, such as medicine, nursing, pharmacy, and veterinary medicine, and for the certification of English teachers overseas and in North America.

TSE scores should not be interpreted as predictors of academic or professional success, but only as indicators of nonnative speakers’ ability to communicate in English. The scores should be used in conjunction with other types of information about candidates when making decisions about their ability to perform in an academic or professional situation.

Relationship of the TSE test to the TOEFL program

The TSE program is administered by Educational Testing Service (ETS) through the Test of English as a Foreign Language (TOEFL) program.

Policies governing the TOEFL, TSE, and Test of Written English (TWE®) programs are formulated by the TOEFL Board, an external group of academic specialists in fields related to international admissions, student exchange and language education, and assessment. The Board was established by and is affiliated with the College Board and the Graduate Record Examinations Board.


    Development of the Original TSE Test

The original Test of Spoken English was developed during the late 1970s in recognition of the fact that academic institutions often needed an accurate measure of speaking ability in order to make informed selection and employment decisions. At that time there was an emphasis in the fields of linguistics, language teaching, and language testing on accuracy in pronunciation, grammar, and fluency. The test was designed to measure these linguistic features and to evaluate a speaker’s ability to convey information intelligibly to the listener. Test scores were derived for pronunciation, grammar, fluency, and overall comprehensibility.

In 1978 the TOEFL Research Committee and the TOEFL Board sponsored a study entitled “An Exploration of Speaking Proficiency Measures in the TOEFL Context” (Clark and Swinton, 1979). The report of this study details the measurement rationale and procedures used in developing the TSE test, as well as the basis for the selection of the particular formats and question types included in the original form of the test.

A major consideration in developing a measure of speaking ability was for it to be amenable to standardized administration at worldwide test centers. This factor immediately eliminated the subjective variables associated with direct, face-to-face interviewing. Providing the necessary training in interviewing techniques on a worldwide basis was considered impractical.

Another factor addressed during the development of the original TSE test was its linguistic content. Because the test would be administered in many countries, it had to be appropriate for all examinees regardless of native language or culture.

A third factor in test design considerations was the need to elicit evidence of general speaking ability rather than ability in a particular language-use situation. Because the test would be used to predict examinees’ speaking ability in a wide variety of North American contexts, it could not use item formats or individual questions that would require extensive familiarity with a particular subject matter or employment context.

Two developmental forms of the TSE test were administered to 155 examinees, who also took the TOEFL test and participated in an oral proficiency interview modeled on that administered by the Foreign Service Institute (FSI). The specific items included on the prototype forms were selected with the goal of maintaining the highest possible correlation with the FSI rating and the lowest possible correlation with the TOEFL score to maximize the usefulness of the speaking test.

Validation of the TSE test was supported by research that indicated the relationship between the TSE comprehensibility scores and FSI oral proficiency levels, the intercorrelations among the four TSE scores, and the correlation of university instructors’ TSE scores with student assessments of the instructors’ language skills (Clark and Swinton, 1980).

Subsequent to the introduction of the test for use by academic institutions in 1981, additional research (Powers and Stansfield, 1983) validated TSE scores for selection and certification in health-related professions (e.g., medicine, nursing, pharmacy, and veterinary medicine).


Revision of the TSE Test

Since the introduction of the original TSE test in 1981, language teaching and language testing theory and practice have evolved to place a greater emphasis on overall communicative language ability. This contemporary approach includes linguistic accuracy as only one of several aspects of language competence related to the effectiveness of oral communication. For this reason, the TSE test was revised to better reflect current views of language proficiency and assessment. The revised test was first administered in July 1995.

    The TSE Committee

In April 1992 the TOEFL Board approved the recommendation of the TOEFL Committee of Examiners to revise the TSE test and to establish a separate TSE Committee to oversee the revision effort.

TSE Committee members are appointed by the TOEFL Board Executive Committee. The TSE Committee includes specialists in applied linguistics and spoken English language teaching and testing, TSE chief raters, and representative score users. As the TSE test development advisory group, the TSE Committee approves the test specifications and score scale, reviews test questions and item performance, offers guidance for rater training and score use, and makes suggestions for further research, as needed.

Members of the TSE Committee are rotated on a regular basis to ensure the continued introduction of new ideas and perspectives related to the assessment of oral language proficiency. Appendix A lists current and former TSE Committee members.

Overview of the TSE test revision process

The TSE revision project begun in 1992 was a joint effort of the TSE Committee and ETS staff. This concentrated three-year project required articulation of the underlying theoretical basis of the test and the test specifications as well as revision of the rating scale. Developmental research included extensive pilot testing of both test items and rating materials, a large-scale prototype research study, and a series of studies to validate the revised test and scoring system. Program publications underwent extensive revision, and the TSE Standard-Setting Kit was produced to assist users in establishing passing scores for the revised test. Extensive rater training and retraining were also conducted to set rating standards and assure appropriate implementation of the revised scoring system.

Purpose and format of the revised test

At the outset of the TSE revision project, it was agreed that the test purpose remained unchanged. That is, the test would continue to be one of general speaking ability designed to evaluate the oral language proficiency of nonnative speakers of English who were at or beyond the postsecondary level of education. It would continue to be useful to the primary audience for the original TSE test (i.e., those evaluating prospective ITAs [international teaching assistants] and personnel in the health-related professions). In this light, it was designed as a measure of the examinee’s ability to communicate successfully in North American English in an academic or professional environment.

It was also determined that the TSE test would continue to be a semidirect speaking test administered via audio-recording equipment using prerecorded prompts and printed test books, and that the examinee’s recorded responses, or speech sample, would be scored independently by at least two trained raters. Pilot testing of each test form allows ETS to monitor the performance of all test questions.



    Test construct

The TSE Committee commissioned a paper by Douglas and Smith (TOEFL MS-9, 1997) to provide a review of the research literature, outline theoretical assumptions about speaking ability, and serve as a guide for test revision. This paper, Theoretical Underpinnings of the Test of Spoken English Revision Project, described models of language use and language competence, emphasizing how they might inform test design and scoring. The paper also acknowledged the limitations of an audio-delivered test compared to a direct interview.

As derived from the theory paper, the construct underlying the revised test is communicative language ability. The TSE test was revised on the premise that language is a dynamic vehicle for communication, driven by underlying competencies that interact in various ways for effective communication to take place. For the purposes of the TSE, this communicative language ability has been defined to include strategic competence and language competence, the latter comprising discourse competence, functional competence, sociolinguistic competence, and linguistic competence.

Critical to the design of the test is the notion that these competencies are involved in the act of successful communication. Using language for an intended purpose or function (e.g., to apologize, to complain) is central to effective communication. Therefore, each test item consists of a language task that is designed to elicit a particular function in a specified context or situation.

Within this framework, a variety of language tasks and functions were defined to provide the structural basis of the revised test. The scoring system was also designed to provide a holistic summary of oral language ability across the communication competencies being assessed.

    Validity of the test

A series of validation activities was conducted during the revision of the TSE test to evaluate the adequacy of the test design and to provide evidence for the usefulness of TSE scores. These efforts were undertaken with a process-oriented perspective. That is, the accumulation of validity data was used to inform test revision, make modifications as indicated, and confirm the appropriateness of both the test design and scoring scale.

Validity refers to the extent to which a test actually measures what it purports to measure.* Although many procedures exist for determining validity, there is no single indicator or standard index of validity. The extent to which a test can be evaluated as a valid measure is determined by judging all available evidence. The test’s strengths and limitations must be taken into account, as well as its suitability for particular uses and examinee populations.

Construct validity research was initiated in the theory paper commissioned by the TSE Committee (Douglas and Smith, TOEFL MS-9, 1997). This document discusses the dynamic nature of the construct of oral language ability in the field of language assessment and points the way to a conceptual basis for the revised test. As a result of the paper and discussion among experts in the field, the basic construct underlying the test was defined as communicative language ability. This theoretical concept was operationalized in the preliminary test specifications.

To evaluate the validity of the test design, Hudson (1994) reviewed the degree of congruence between the test’s theoretical basis and the test specifications. This analysis suggested a generally high degree of concordance. The test specifications were further revised in light of this review.

In a similar vein, the prototype test was examined by ETS staff for its degree of congruence with the test specifications. This review also led to modest revisions in the test specifications and item writing guidelines in order to provide a high degree of congruence between the theory, specifications, and test forms.

As a means of validating the test content, a discourse analysis of both native and nonnative speaker speech as elicited by the prototype test was conducted (Lazaraton and Wagner, TOEFL MS-7, 1996). The analysis indicated that the language functions intended were reliably and consistently elicited from both native and nonnative speakers, all of whom performed the same types of speech activities.

* The reader is referred to the American Psychological Association’s Standards for Educational and Psychological Testing (1999), as well as Wainer and Braun’s Test Validity (1988), for a thorough treatment of the concept of validity.


The test rating scale and score bands were validated through another process. ETS rating staff wrote descriptions of the language elicited in speech samples, which were then compared to the rating scale and score bands assigned to the samples, to determine the degree of agreement between elicited speech and the scoring system. The results confirmed the validity of the rating system.

The concurrent validity of the revised TSE test was investigated in a large-scale research study by Henning, Schedl, and Suomi (TOEFL RR-48, 1995). The sample for this study consisted of subjects representing the primary TSE examinee populations: prospective university teaching assistants (N=184) and prospective licensed medical professionals (N=158).

Prospective teaching assistants represented the fields of science, engineering, computer science, and economics. Prospective licensed medical professionals included foreign medical graduates who were seeking licenses to practice as physicians, nurses, veterinarians, or pharmacists in the United States. The subjects in both groups represented more than 20 native languages.

The instruments used in the study included an original version of the TSE test, a 15-item prototype version of the revised test, and an oral language proficiency interview (LPI). The original version and revised prototype were administered under standard TSE conditions.

The study utilized two types of raters: 16 linguistically “naive” raters who were untrained and 40 expert, trained raters. The naive raters, eight from a student population and eight from a potential medical patient population, were selected because they represented groups most likely to be affected by the English-speaking proficiency of the nonnative candidates for whom passing TSE scores are required. These raters were purposely chosen because they had little experience interacting with nonnative English speakers, and they scored only the responses to the prototype. The naive raters were asked to judge the communicative effectiveness of the revised TSE prototype responses of 39 of the subjects as part of validating the revised scoring method. The trained raters scored the examinees’ performance on the original TSE test according to the original rating scale and performance on the prototype revised test according to the new rating scale. (The rating scale used in this study to score the revised TSE test was similar though not identical to the final rating scale approved by the TSE Committee in December 1995, which can be found in Appendix B.)

The use of naive raters in this study served to offer additional construct validity evidence for inferences to be made from test scores. That is, untrained, naive raters were able to determine and differentiate varying levels of communicative language ability from the speech performance samples elicited by the prototype test. These results also provided content validity for the rating scale bands and subsequent score interpretation.

Means and standard deviations were computed for the scores given by the trained raters. In this preliminary study, the mean of the scores on the prototype of the revised test was 50.27 and the standard deviation was 8.66. Comparisons of the subjects’ performance on the original TSE test and the prototype of the revised test showed that the correlation between scores on the two versions was .83.

As part of the research study, a subsample of 39 examinees was administered a formal oral language proficiency interview recognized by the American Council on the Teaching of Foreign Languages, the Foreign Service Institute, and the Interagency Language Roundtable. The correlation between the scores on the LPI and the prototype TSE test was found to be .82, providing further evidence of concurrent validity for the revised test.
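The Guide does not state which correlation estimator was used; the conventional statistic for concurrent-validity coefficients of this kind is the Pearson product-moment correlation, sketched here for reference with illustrative notation ($x_i$, $y_i$ denote the paired scores of examinee $i$; these symbols are not from the study):

\[
r_{xy} \;=\; \frac{\sum_{i=1}^{N} (x_i - \bar{x})(y_i - \bar{y})}
{\sqrt{\sum_{i=1}^{N} (x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{N} (y_i - \bar{y})^2}}.
\]

Read this way, $r = .82$ across the $N = 39$ paired LPI and prototype scores means the two measures rank examinees very similarly, sharing roughly two-thirds of their variance ($r^2 \approx .67$).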


    Reliability and SEM

Reliability can be defined as the extent to which test scores are free from errors in the measurement process. A variety of reliability coefficients can exist because errors of measurement can arise from a number of sources. Interrater reliability is an index of the consistency of TSE scores assigned by the first and second raters before adjudication. Test form reliability is an index of internal consistency among TSE items and provides information about the extent to which the items are assessing the same construct. Test score reliability is the degree to which TSE test scores are free from errors when the two sources of error variation are accounted for simultaneously, that is, the variations of examinee-and-rating interaction and of examinee-and-item interaction. Reliability coefficients can range from .00 to .99.* The closer the value of the coefficient to the upper limit, the less error of measurement. Table 1 provides means of interrater, test form, and test score reliabilities for the total examinee group and the academic/professional subgroups over the 54 monthly administrations of the TSE test between July 1995 and January 2000.

* This reliability estimate was reached by the use of the Spearman-Brown adjustment, which provides an estimate of the relationship that would be obtained if the average of the two ratings were used as the final score.

The standard error of measurement (SEM) is an index of how much an examinee’s actual proficiency (or true score) can vary due to errors of measurement. SEM is a function of the test score standard deviation and test score reliability. An examinee’s TSE observed score is expected to be within the range of his or her TSE true score plus or minus two SEMs (i.e., plus or minus approximately 4 points on the TSE reporting scale) about 95 percent of the time. The average SEM is also shown in Table 1.

Table 1. Average TSE Reliabilities and Standard Errors of Measurement (SEM) — Total Group and Subgroups
(Based on 64,701 examinees who took primary TSE and SPEAK forms between July 1995 and January 2000.)

                           Total          Academic       Professional
                           (N = 64,701)   (N = 29,254)   (N = 35,447)
Interrater Reliability     0.92           0.91           0.92
Test Form Reliability      0.98           0.97           0.98
Test Score Reliability     0.89           0.89           0.90
SEM                        2.24           2.26           2.22


Content and Program Format of the TSE Test

Test content

The TSE test consists of 12 questions, each of which requires examinees to perform a particular speech act. Examples of these speech activities, also called language functions, include narrating, recommending, persuading, and giving and supporting an opinion. The test is delivered via audio-recording equipment and a test book. An interviewer on the test tape leads the examinee through the test; the examinee responds into a microphone, and responses are recorded on a separate answer tape.

The time allotted for each response ranges from 30 to 90 seconds; the timing is based on pilot testing results. All the questions asked by the interviewer, as well as the response time, are printed in the test book. The questions on the test are of a general nature and are designed to inform the raters about the candidate’s oral communicative language ability.

At the beginning of the test, the interviewer on the test tape asks some general questions that serve as a “warm up” to help examinees become accustomed to speaking on tape and to allow for adjustment of the audio equipment as needed. These initial, unnumbered questions are not scored. Next, the examinees are given 30 seconds to study a map and then are asked some questions about it. Subsequently, the examinees are asked to look at a sequence of pictures and tell the story that the pictures show. Then the examinees are asked to discuss topics of general interest and to describe information presented in a simple graph. Finally, the examinees are asked to present information from a revised schedule and indicate the revisions.

A short video, Test of Spoken English: An Overview, provides general information about the background, purpose, and format of the test. The video is approximately 20 minutes long and is available upon request. It is also included in the TSE Standard-Setting Kit.

    Test registration

The TSE test is administered 12 times a year at test centers throughout the world. TSE administration dates are published in the Information Bulletin for TSE.* The Bulletin includes a registration form, a general description of the test, the test directions, and a sample test. TSE candidates must complete the registration form and return it to TOEFL/TSE Services with the appropriate test fee. Copies of the Bulletin are distributed to TSE test centers, to American embassies, binational centers, language academies, and additional agencies and individuals who express interest in TSE. Often institutions or departments and employers that require TSE scores of applicants include copies of the Bulletin when responding to inquiries from nonnative speakers. A supply of Bulletins can also be obtained from TOEFL/TSE Services, PO Box 6151, Princeton, NJ 08541-6151, USA.

* Individuals who plan to take the TSE test in India, Korea, or Taiwan should refer to the Information Bulletin for TSE — India, Korea, Taiwan Edition. In the People’s Republic of China (PRC), where the Test of English as a Foreign Language is administered in the paper-based format, examinees must obtain the PRC Edition of Bulletin of Information for TOEFL, TWE, and TSE.



Administration of the test

The TSE test is administered under strictly controlled testing procedures. The actual testing time is approximately 20 minutes. The test can be administered to individuals with cassette tape recorders or to a group using a multiple-recording facility such as a language laboratory.

Because the scores of examinees are comparable only if the same procedures are followed at all test administrations, the TSE Program Office provides detailed guidelines for test center supervisors to ensure uniform administrations. The TSE Supervisor’s Manual is mailed with the test materials to test supervisors well in advance of the test date. This publication describes the arrangements necessary to prepare for the test administration, discusses the kind of equipment needed, and gives detailed instructions for the actual administration of the test.

TSE regulations, as listed in the Information Bulletin, are enforced to prevent cheating and attempts at impersonation.

At the beginning of the administration, before the start of the actual test, examinees are given sealed test books. Once the test begins, examinees listen to a tape recording containing the general directions and test questions. The tape recorders on which examinees’ responses are recorded are not stopped at any time during the test unless an unusual circumstance related to the test administration is identified by the administrator.

IMPORTANT: The TSE test is NOT administered as part of the TOEFL test. It is administered separately, at the present time.

Individuals with disabilities

The TSE Program Office, in response to requests from individuals with disabilities, will make special arrangements with test center supervisors, where local conditions permit, to administer the TSE test with accommodations. Among the accommodations that can be provided are extended testing time, breaks, test reader, sign language interpreter, other aids customarily used by the test taker, large print, nonaudio (without oral stimulus), and braille. All requests for accommodations must be approved in accordance with TSE policies and procedures.

Nonstandard scores

The TSE Program Office recommends that alternative methods of evaluating English proficiency be used for individuals who cannot take the TSE under standard conditions. Criteria such as past academic record, recommendations from language teachers or others familiar with the applicant’s English proficiency, and/or a personal interview are suggested in lieu of TSE scores.

However, as noted earlier, the TSE Program Office will make special arrangements to administer the test under nonstandard conditions for individuals with disabilities. Because the individual circumstances of nonstandard administrations vary so widely, the TSE Program Office is not able to compare scores obtained at such administrations with those obtained at standard administrations.

Measures to protect test security

To protect the validity of the test scores, the TSE Program Office continually reviews and refines procedures designed to increase the security of the test before, during, and after its administration. Because of the importance of TSE scores to applicants and to institutions, there are inevitably some individuals who engage in practices designed to increase their reported scores. The careful selection of supervisors, a low examinee-to-proctor ratio, and the detailed administration procedures given in the Supervisor’s Manual are all designed to prevent attempts at impersonation, theft of test materials, and the like, and thus to protect the integrity of the test for all examinees and score recipients.


Identification requirements

Strict admission procedures are followed at all test centers to prevent attempts by some examinees to have others with greater proficiency in English impersonate them at a TSE administration. To be admitted to a test center, every examinee must present an official identification document with a recognizable photograph, such as a valid passport.

Although the passport is the basic document accepted at all test centers, other specific photobearing documents are acceptable for individuals who may not be expected to have passports or who are taking the test in their own countries. Through foreign embassies in the United States and TSE supervisors in foreign countries, TOEFL/TSE Services verifies the types of official photobearing identification documents used in each country, such as national identity cards, registration certificates, and work permits. Detailed information about identification requirements is included in the Information Bulletin.

Photo file records

The photo file record contains the examinee’s name, registration number, test center code, and signature, as well as a recent photo that clearly identifies the examinee. The form is collected by the test center supervisor from each examinee before he or she is admitted to the testing room. In addition to verifying the photo identity of the examinee, the supervisor verifies that the name on the official identification document is exactly the same as the name on the photo file record.

Supervision of examinees

Supervisors and room proctors are instructed to exercise extreme vigilance during a test administration to prevent examinees from giving or receiving assistance in any way. While taking the test, examinees may not have anything on their desks but their test books, tape recorders, and admission tickets. They are not permitted to make notes or marks of any kind in their test books.

If a supervisor is certain that someone has given or received assistance on the test, the examinee is dismissed from the testing room and his or her score is not reported. If a supervisor suspects someone of cheating, a description of the incident is written on the Supervisor’s Irregularity Report (included in the Supervisor’s Manual), which is returned to ETS with the examinee’s tape. Suspected and/or confirmed cases of cheating are investigated by the Test Security Office at ETS.

Preventing access to test materials

To ensure that examinees have not seen the test material in advance, new forms of the test are developed regularly.

To help prevent the theft of test materials, procedures have been devised for the secure distribution and handling of these materials. Test tapes and test books (individually sealed and packed in sealed plastic bags) are sent to test centers in sealed boxes that supervisors are required to place in locked storage that is inaccessible to unauthorized persons. Supervisors count the test books upon receipt, after the examinees have begun the test, and at the end of the administration. No one is permitted to leave the testing room until all test books and examinee answer tapes have been accounted for.

TSE supervisors return the test materials to ETS, where they are counted upon receipt. The ETS Test Security Office investigates all cases of missing test materials.

    TSE score cancellation by ETS

TSE Services, on behalf of Educational Testing Service, seeks to report scores that accurately reflect the performance of the test taker. ETS has developed test administration and test security standards and procedures with the goals of assuring that all test takers have equivalent opportunities to demonstrate their abilities, and preventing some test takers from gaining unfair advantage over others. ETS reserves the right to cancel any test score if, in ETS’s judgment, there is an apparent discrepancy in photo identification, the test taker has engaged in misconduct in connection with the test, there is a testing irregularity, or there is substantial evidence that the test score is invalid for another reason.


Scores for the TSE Test

Scoring procedures

TSE answer tapes are scored by trained TSE raters who are experienced teachers and specialists in the field of English or English as a second language. Raters are trained at qualifying workshops conducted by ETS staff. Prior to each test scoring session, raters review answer tapes at various points on the TSE rating scale to maintain accurate scoring. Raters undergo retraining if score discrepancies indicate that it is warranted.

Each TSE tape is rated independently by two raters; neither knows the scores assigned by the other. Each rater evaluates each item response and assigns a score level using descriptors of communicative effectiveness that are delineated in the TSE rating scale (see Appendix B). Examinee scores are produced from the combined average of these independent item ratings. If the two ratings do not show adequate agreement, the tape is rated by a third independent rater. Final scores for tapes requiring third ratings are based on averaging the two closest averages and disregarding the discrepant average. The TSE and SPEAK Band Descriptor Chart (Appendix B) is used by raters.
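As an illustration only, the aggregation and adjudication logic just described can be sketched in code. This is a hypothetical reconstruction, not the ETS scoring software: the function names, the agreement threshold max_gap, and the exact rounding rule are assumptions made for this sketch; the 20-60 scale in increments of five comes from the score description in the next section.

```python
def rater_average(item_ratings):
    """Average one rater's item-level band ratings (each on the 20-60 scale)."""
    return sum(item_ratings) / len(item_ratings)

def final_score(rating_1, rating_2, rating_3=None, max_gap=5.0):
    """Combine independent ratings into a reported score (20-60, step 5).

    rating_1, rating_2: the two independent raters' item-rating lists.
    rating_3: a third independent rating, used only for adjudication.
    max_gap: assumed threshold for "adequate agreement" between the
             two rater averages (the actual ETS criterion is not published here).
    """
    averages = [rater_average(rating_1), rater_average(rating_2)]
    if abs(averages[0] - averages[1]) > max_gap:
        # Inadequate agreement: a third independent rating is obtained,
        # the two closest averages are kept, and the discrepant average
        # is disregarded, per the scoring procedure described above.
        if rating_3 is None:
            raise ValueError("third independent rating required")
        averages.append(rater_average(rating_3))
        averages.sort()
        if averages[1] - averages[0] <= averages[2] - averages[1]:
            averages = averages[:2]   # lower pair is closer (ties: keep lower pair)
        else:
            averages = averages[1:]   # upper pair is closer
    combined = sum(averages) / 2
    # Reported scores fall on the 20-60 scale in increments of five;
    # rounding to the nearest increment is an assumption of this sketch.
    return min(60, max(20, 5 * round(combined / 5)))
```

For example, rater averages of 45.0 and 49.0 agree within the assumed threshold and combine to 47.0, which this sketch would report as 45; averages of 40.0 and 55.0 would instead trigger the third-rater adjudication.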

    Scores and score reports

The TSE test yields a single holistic score of communicative language ability reported on a scale of 20 to 60. Assigned score levels are averaged across items and raters, and the scores are reported in increments of five (i.e., 20, 25, 30, 35, 40, 45, 50, 55, and 60). Score level performance is described below.

Scale   Description
60      Communication almost always effective: task performed very competently
55
50      Communication generally effective: task performed competently
45
40      Communication somewhat effective: task performed somewhat competently
35
30      Communication generally not effective: task performed poorly
25
20      No effective communication: no evidence of ability to perform task


If responses to more than one of the items are missing, no test score is reported and the examinee is offered a retest at no charge.

Two types of score records are issued for the TSE: the examinee’s score record, which is sent directly to the examinee, and official score reports, which are sent directly by ETS to institutions or agencies specified by the examinee on the TSE admission ticket. Payment of the test fee entitles the examinee to designate two recipients of the official score report. The official score report includes the examinee’s name, registration number, native country, native language, date of birth, test date, and TSE score. (See the sample report below.)

Additional score reports

TSE examinees may request that official score reports be sent to additional institutions at any time up to two years after they take the test.

Additional score reports, for which there is a fee, are mailed within two weeks after receipt of the Score Report Request Form found in the TSE Bulletin.

    Confidentiality of TSE scores

Information retained in the TSE files is the same as the information printed on the examinee’s score record and on the official score report. An official score report will be sent only to those institutions or agencies designated on the admission ticket by the examinee on the day of the test, on a score report request form submitted at a later date, or otherwise specifically authorized by the examinee.

The scores are not to be released by institutional recipients without the explicit permission of the examinees.

The TSE program recognizes the right of examinees to privacy with regard to information that is stored in data or research files held by Educational Testing Service and the program’s responsibility to protect information in its files from unauthorized disclosure. Therefore, ETS does not fax or give TSE results by telephone to examinees or institutions. The TOEFL/TSE office will not release TSE scores or other information without the examinee’s written consent.


[Sample Test of Spoken English OFFICIAL SCORE REPORT. The report form shows the examinee’s name (family or surname, given, middle), registration number, center number, test date (month/year), date of birth (month/day/year), sex, native language, native country, department, institution and department codes, examinee’s address, and TSE score; an explanation of scores appears on the other side. Issued by: Test of Spoken English, P.O. Box 6157, Princeton, NJ 08541-6157, USA.]

NOTE: If you have any reason to believe that someone has tampered with this score report, please call toll free, 800-257-9547, to have the scores verified. Remember, scores more than two years old cannot be verified. Photostat copies should not be accepted.

Examinee identification service

The examinee identification service provides photo identification of examinees taking the TSE. If there is reason to suspect an inconsistency between a high test score and relatively weak spoken English proficiency, an institution or agency that has received either an official score report from ETS or an examinee’s score record from an examinee may request a copy of that examinee’s photo file record for up to 18 months following the test date shown on the score report. The written request for examinee identification must be accompanied by a photocopy of the examinee’s score record or official report.

    Requests for photo file records should be sent to:

TOEFL/TSE Program Office
Educational Testing Service
PO Box 6157
Princeton, NJ 08541-6157
USA

    DOs and DON’Ts

DO verify the information on an examinee’s score record by calling TOEFL/TSE Services at 1-800-257-9547 (8:30 am – 4:30 pm New York time).

DON’T accept scores that are more than two years old.

DON’T accept score reports from other institutions that were obtained under the SPEAK program. SPEAK scores are only valid for the institution that administered the test.

    DON’T accept photocopies of score reports.

Score reports are valid only if received directly from Educational Testing Service. TSE test scores are confidential and should not be released by the recipient without written permission from the examinee. All staff with access to score records should be advised of their confidential nature.


    Requests for TSE rescoring

An examinee who questions the accuracy of the reported score may request to have the response tape rated again by a rater who did not score the tape previously. If the TSE score increases or decreases, a revised examinee’s score record is issued, and revised official score reports are sent to the institutions that received original scores. This revised score becomes the official TSE score. If rescoring confirms the original TSE score, the examinee is so notified by letter from TOEFL/TSE Services-Princeton.

Requests must be received within six months of the test date, and there is a fee for this service. The results of the rescoring are available about three weeks after the receipt at TOEFL/TSE Services-Princeton of the TSE Rescoring Request Form and fee. The form is available in the TSE Bulletin. Experience has shown that very few score changes result from this procedure.

    TSE test score data retention

Because language proficiency can change considerably in a relatively short period, TOEFL/TSE Services-Princeton will not report or verify scores that are more than two years old. Individually identifiable test scores are retained for only two years.

TSE test score data that may be used at any time for informational, research, statistical, or training purposes are not individually identifiable.


    Use of TSE Scores

    Setting score standards

Educational Testing Service does not set passing or failing scores on the TSE. Each institution or agency that uses TSE scores must determine what score is acceptable, depending on the level of oral communicative language ability it deems appropriate for a particular purpose. It should be noted that scores on the revised TSE and the original test are different in meaning. Because the tests are different, there cannot be a score-by-score correspondence on the two measures. The TSE program has prepared the TSE Standard-Setting Kit to assist institutions and agencies in arriving at score standards for the revised test.

    TSE sample response tape

The TSE program has developed a TSE Sample Response Tape as a supplement to this guide. The 30-minute audio tape contains selected sample responses from the revised TSE and is intended to provide score users with a better understanding of the levels of communicative effectiveness represented by particular TSE scores. The tape includes several speech samples elicited from nonnative English speakers of different native language backgrounds. The speech samples represent various levels of spoken English proficiency derived from the TSE rating scale and are arranged from high score to low score.

Guidelines for using TSE test scores

The following guidelines are presented to assist institutions in the interpretation and use of TSE scores.

1. Use the TSE score only as a measure of ability to communicate orally in English. Do not use it to predict academic or work performance.

2. Base the evaluation of an applicant’s potential for successful academic work or job performance on all available relevant information and recognize that the TSE score is only one indicator of ability to perform effectively in a given academic or professional context.

3. Consider the kinds and levels of English oral language required at different levels of study in different academic disciplines or in varied professional assignments. Also consider the resources available at the institution for improving the English speaking proficiency of nonnative speakers.

4. Consider that examinee scores are based on a 20-minute tape that represents spontaneous speech samples.

5. Review the TSE rating scale and TSE Sample Response Tape. The scale appears in Appendix B and the tape can be ordered from ETS.

6. Conduct a local validity study to assure that the TSE scores required by the institution are appropriate.

It is important to base the evaluation of international candidates’ potential performance on all available relevant information, not solely on TSE scores. The TSE measures an individual’s oral communicative language ability in English in a North American context, but does not measure listening, reading, or writing skills in English. The TOEFL and TWE tests may be used to measure those skills.

General oral communicative effectiveness is only one of many qualities necessary for successful academic or job performance. Other qualities may include command of subject matter, interpersonal skills, and interest in the field or profession. The TSE test does not provide information about aptitude, motivation, command of subject matter or content areas, teaching ability, or cultural adaptability, all of which may have significant bearing on the ability to perform effectively in a given situation.

As part of its general responsibility for the tests it produces, the TSE program is concerned about the interpretation and use of TSE scores by recipient institutions. The TSE Program Office encourages individual institutions to request its assistance with any questions related to the proper use of TSE scores.


Statistical Characteristics of the TSE Test: Performance of Examinees on the Test of Spoken English

This section contains information about the performance of examinees who took the Test of Spoken English between July 1995 and January 2000. The psychometric data were collected during the first five years of the administration of the revised TSE.

Contents

Reliability and SEM
    Table 1: Average TSE Score Reliabilities and SEMs — Total Group and Subgroups
Performance of Examinees on the TSE Test
    Table 2: Percentile Ranks for TSE Scores — Total Group
    Table 3: Percentile Ranks for TSE Scores — Academic Examinees
    Table 4: Percentile Ranks for TSE Scores — Applicants for Professional License
    Table 5: TSE Total Score Means and Standard Deviations — All Examinees Classified by Geographic Region and Native Language
    Table 6: TSE Total Score Means and Standard Deviations — All Examinees Classified by Geographic Region and Native Country


The data presented here are based on TSE test scores obtained by 82,868 examinees between July 1995 and January 2000. It should be noted that this test record database includes both first-time test takers and repeating examinees.

These tables summarize the performance of self-selected groups of examinees who took the TSE test during the period specified; the data are not necessarily representative of the general TSE population.

Table 2 gives the percentile ranks for the total scale scores for the total group between July 1995 and January 2000.

Tables 3 and 4 show the percentile ranks for the total scale scores for the total groups of academic and professional license examinees, as well as for the four largest language groups in each of these categories, between July 1995 and January 2000.

Table 3. Percentile Ranks for TSE Scores — Academic Examinees*

Score   Percentile ranks (total group and four largest native language groups)
60      98   >99   99   98   97
55      91    98   97   91   84
50      76    91   93   73   53
45      53    72   81   41   22
40      25    36   49   12    5
35       6     7   15    1
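As a reading aid (the Guide does not define the statistic on this page, and whether ETS counts scores strictly below or at-or-below a given level is an assumption here), a percentile rank of the kind tabled above gives the percentage of the reference group of size $N$ scoring below a given scale score $s$:

\[
\mathrm{PR}(s) \;=\; \frac{100}{N}\,\bigl|\{\, i : x_i < s \,\}\bigr|.
\]

For example, the leading 76 in the row for a score of 50 indicates that roughly 76 percent of academic examinees scored below 50.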


Tables 5 and 6 may be useful in comparing the performance on the TSE test of a particular examinee with that of other examinees from the same country and with that of examinees who speak the same language. It is important to point out that the data do not permit the generalization that there are fundamental differences in the ability of the various national and language groups to learn English or in the level of English proficiency they can attain. The tables are based simply on the performance of those examinees native to particular countries and languages who happened to take the TSE test.

Table 5. TSE Total Score Means and Standard Deviations(1) — All Examinees Classified by Geographic Region and Native Language
(Based on 82,868 examinees who took TSE between July 1995 and January 2000.)(2)

Native Language  |  Number of Examinees  |  Mean  |  Standard Deviation

AFRICAN
Afrikaans  374  56  5
Amharic  120  44  6
Bemba  *  *  *
Berber  *  *  *
Chichewa  *  *  *
Efik-Ibibio  *  *  *
Ewe  34  46  7
Fula (Peulh)  *  *  *
Ga  *  *  *
Ganda (Luganda)  *  *  *
Hausa  *  *  *
Ibo (Igbo)  420  46  5
Kanuri  *  *  *
Kikuyu  72  45  5
Kirundi  *  *  *
Lingala  *  *  *
Luba-Lulua  *  *  *
Luo  29  47  5
Malagasy  *  *  *
Malinke-Bambara-Dyula  *  *  *
Mende  *  *  *
Nyanja  *  *  *
Oromo (Galla)  *  *  *
Ruanda  *  *  *
Sesotho  *  *  *
Setswana  *  *  *
Shona  56  49  5
Siswati  *  *  *
Somali  34  43  8
Swahili  76  46  7
Tigrinya  40  43  5
Twi-Fante (Akan)  98  46  6
Wolof  *  *  *
Xhosa  *  *  *
Yoruba  454  46  6
Zulu  *  *  *

ASIAN
Assamese  31  49  7
Azeri  *  *  *
Bengali  782  49  6
Bhili  *  *  *
Bikol  182  44  5
Burmese  36  47  7
Cebuano (Visayan)  2,491  46  5
Chinese  15,066  42  5
Georgian  *  *  *
Gujarati  1,501  46  6
Hindi  2,547  49  6
Ilocano  960  44  5
Indonesian  240  43  6
Japanese  3,080  41  6
Javanese  27  41  7
Kannada (Kanarese)  384  50  5
Kashmiri  27  51  7
Kazakh  *  *  *
Khmer (Kampuchean)  *  *  *
Konkani  171  51  5
Korean  9,192  40  5
Kurdish  *  *  *
Lao  *  *  *
Malay  114  47  7
Malayalam  914  46  6
Marathi  873  49  5
Mongolian  *  *  *
Nepali  58  44  7
Oriya  76  47  6
Panay-Hiligaynon  1,139  45  5
Pashto  27  49  6
Punjabi  772  46  6
Samar-Leyte  138  45  5
Sindhi  118  49  6
Sinhalese  101  46  6
Sundanese  *  *  *
Tagalog  12,268  46  5
Tamil  1,759  50  6
Tatar  *  *  *
Telugu  1,121  48  5
Thai  464  41  6
Tibetan  *  *  *
Tulu  27  51  6
Urdu  825  48  6
Uzbek  *  *  *
Vietnamese  1,032  41  6

EUROPEAN
Albanian  39  46  6
Armenian  43  49  6
Basque (Euskara)  *  *  *
Belarussian  *  *  *
Bulgarian  173  48  6
Catalan (Provencal)  33  45  6
Czech  105  49  5
Danish  62  55  5
Dutch  444  53  6
English  2,065  56  6
Estonian  *  *  *
Finnish  143  49  7
French  1,511  48  7
Galician  *  *  *
German  1,412  53  6
Greek  478  49  6
Hungarian (Magyar)  223  48  6
Icelandic  28  53  5
Italian  395  48  6
Latvian  *  *  *
Lithuanian  34  46  8
Macedonian  29  47  6
Maltese  *  *  *
Norwegian  127  51  7
Polish  1,044  46  5
Portuguese  878  47  6
Romanian  526  48  6
Russian  1,099  46  6
Serbo-Croatian  477  47  6
Slovak  77  47  6
Slovene  *  *  *
Spanish  3,598  46  7
Swedish  289  53  6
Turkish  689  46  6
Turkmen  *  *  *
Ukrainian  141  47  6
Yiddish  *  *  *
Yupiks  *  *  *

MIDDLE EASTERN
Arabic  3,218  46  6
Farsi (Persian)  754  45  6
Hebrew  435  51  6

OTHER/NOT REPORTED
Not Reported  573  45  7
Other  979  45  6

PACIFIC REGION
Fijian  *  *  *
Madurese  31  40  5
Marshallese  *  *  *
Minankabau  *  *  *
Pidgin  *  *  *
Samoan  *  *  *
Tongan  *  *  *

SOUTH AMERICAN
Guarani  *  *  *
Quechua  *  *  *

* (1) Because of the unreliability of statistics based on small samples, means are not reported for subgroups of fewer than 25 examinees.
(2) Includes 573 examinees who did not report their native languages and 979 examinees who reported “other” languages.


Table 6. TSE Total Score Means and Standard Deviations(1) — All Examinees Classified by Geographic Region and Native Country
(Based on 80,218 examinees who took TSE between July 1995 and January 2000.)(2)

Geographic Region and Native Country  |  Number of Examinees  |  Mean  |  Standard Deviation

AFRICA
Algeria  36  44  7
Angola  *  *  *
Benin  *  *  *
Botswana  *  *  *
Burkina Faso  *  *  *
Burundi  *  *  *
Cameroon  52  44  6
Comoros  *  *  *
Congo Republic  *  *  *
Cote d’Ivoire  *  *  *
Egypt  1,712  45  5
Eritrea  29  43  5
Ethiopia  143  44  6
Gabon  *  *  *
Gambia  *  *  *
Ghana  175  46  6
Guinea  *  *  *
Kenya  199  46  46
Lesotho  *  *  *
Liberia  *  *  *
Libya  34  46  5
Madagascar  *  *  *
Malawi  *  *  *
Mali  *  *  *
Mauritania  *  *  *
Morocco  66  44  5
Mozambique  *  *  *
Namibia  *  *  *
Nigeria  1,071  46  6
Reunion  *  *  *
Rwanda  *  *  *
Sao Tome and Principe  *  *  *
Senegal  *  *  *
Seychelles  *  *  *
Sierra Leone  *  *  *
Somalia  33  42  8
South Africa  774  56  5
Sudan  97  46  5
Swaziland  *  *  *
Tanzania  32  49  8
Togo  *  *  *
Tunisia  *  *  *
Uganda  *  *  *
Zaire (Congo-DRC)  27  51  7
Zambia  *  *  *
Zimbabwe  71  51  6

AMERICAS
Anguilla  *  *  *
Argentina  536  46  6
Aruba  *  *  *
Bahamas  *  *  *
Barbados  *  *  *
Belize  *  *  *
Bolivia  29  47  8
Brazil  754  47  6
Canada  1,467  55  7
Chile  154  46  7
Colombia  643  46  6
Costa Rica  75  50  7
Cuba  125  41  6
Dominica (Commonwealth of)  *  *  *
Dominican Republic  55  46  7
Ecuador  61  45  7
El Salvador  *  *  *
Grenada  *  *  *
Guadeloupe  *  *  *
Guatemala  41  47  6
Guyana  *  *  *
Haiti  100  42  7
Honduras  30  47  7
Jamaica  31  53  5
Maldives  *  *  *
Mexico  480  47  7
Netherlands Antilles  *  *  *
Nicaragua  156  38  6
Northern Mariana Islands  *  *  *
Panama  61  45  7
Paraguay  *  *  *
Peru  233  44  6
Puerto Rico  145  47  7
St. Vincent and the Grenadines  *  *  *
Suriname  *  *  *
Trinidad and Tobago  60  52  5
United States of America  257  51  8
Uruguay  33  47  7
Venezuela  210  46  7

ASIA
Afghanistan  66  45  5
Azerbaijan  *  *  *
Bangladesh  247  47  6
Brunei Darussalam  *  *  *
Cambodia (Kampuchea)  *  *  *
China (People’s Republic of)  10,493  42  5
Hong Kong  2,010  44  6
India  10,802  48  6
Indonesia  256  43  6
Japan  3,133  41  6
Kiribati  *  *  *
Korea (DPR)  46  40  7
Korea (ROK)  9,150  40  5
Kyrgyzstan  *  *  *
Laos  *  *  *
Macau  27  44  8
Malaysia  182  47  7
Mauritius  *  *  *
Mongolia  *  *  *
Myanmar (Burma)  37  46  8
Nepal  52  44  7
Pakistan  783  48  6
Philippines  17,540  46  5
Singapore  196  49  7
Sri Lanka  301  45  6
Taiwan  2,503  42  5
Tajikistan  *  *  *
Thailand  463  41  6
Uzbekistan  36  45  5
Vietnam  1,036  40  6

EUROPE
Albania  31  46  6
Andorra  *  *  *
Armenia  *  *  *
Austria  114  52  5
Azores  *  *  *
Belarus  53  47  6
Belgium  228  50  6
Bosnia/Herzegovina  140  45  6
Bulgaria  173  48  6
Croatia  90  48  6
Cyprus  126  49  6
Czech Republic  105  49  6
Denmark  62  55  6
England  136  56  5
Estonia  *  *  *
Finland  151  49  7
Former Yugoslav Rep. of Macedonia  30  47  6
France  697  47  6
Georgia  *  *  *
Germany  1,133  53  6
Greece  367  48  6
Hungary  179  48  6
Iceland  29  53  5
Ireland  26  59  3
Italy  386  48  6
Kazakstan  37  46  5
Latvia  64  44  6
Lithuania  38  46  8
Luxembourg  *  *  *
Malta  *  *  *
Moldova  30  46  7
Monaco  *  *  *
Netherlands  *  *  *
Northern Ireland  *  *  *
Norway  128  51  7
Poland  1,038  46  5
Portugal  130  49  6
Romania  554  48  6
Russia  653  47  7
Scotland  *  *  *
Slovak Republic  69  47  6
Slovenia  *  *  *
Spain  516  47  6
Sweden  280  53  6
Switzerland  219  51  6
Turkey  676  46  6
Ukraine  345  46  6
United Kingdom  27  53  6
Wales  *  *  *
Yugoslavia  301  47  6

MIDDLE EAST
Iran  722  46  6
Iraq  320  46  5
Israel  489  50  6
Jordan  281  46  6
Kuwait  38  47  7
Lebanon  194  50  7
Oman  *  *  *
Saudi Arabia  128  47  6
Syria  364  48  6
United Arab Emirates  *  *  *
Yemen  *  *  *

OTHER/NOT REPORTED
Not Reported  370  46  7
Other  85  46  6

PACIFIC REGION
American Samoa  *  *  *
Australia  63  57  6
Fiji  *  *  *
Marshall Islands  *  *  *
New Caledonia  *  *  *
New Zealand  *  *  *
Papua New Guinea  *  *  *
Western Samoa  *  *  *

* (1) Because of the unreliability of statistics based on small samples, means are not reported for subgroups of fewer than 25 examinees.
(2) Includes 370 examinees who did not report their country of birth or who reported English as their native language.

  • 21
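Footnote (1)'s suppression rule can be stated operationally: a subgroup's mean and standard deviation appear only when at least 25 examinees contributed scores; smaller subgroups are shown as asterisks. A minimal sketch of that kind of tabulation in Python (the data below are hypothetical, and this is illustrative rather than ETS's actual processing code):

    # Illustrative only: summary statistics with the n < 25 suppression
    # rule described in footnote (1). All scores here are made up.
    from statistics import mean, pstdev

    MIN_N = 25  # reporting threshold from footnote (1)

    def summarize(scores):
        """Return (n, mean, sd), with mean and sd masked when n < MIN_N."""
        n = len(scores)
        if n < MIN_N:
            return n, "*", "*"
        return n, round(mean(scores)), round(pstdev(scores))

    groups = {
        "Tagalog": [45, 50, 40, 55] * 10,  # hypothetical ratings, n = 40
        "Guarani": [50, 45, 40],           # n = 3, so suppressed
    }
    for language, scores in groups.items():
        print(language, *summarize(scores))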

    SPEAK

The TSE program offers the Speaking Proficiency English Assessment Kit (SPEAK), which enables institutions to administer retired forms of the TSE test at their own convenience for local evaluation purposes.

SPEAK was developed by the TOEFL program to provide a valid and reliable instrument for assessing the English speaking proficiency of people who are not native speakers of the language. It can be used to select candidates for employment as teaching assistants or in other capacities, and it can also be used by intensive English language programs to place students at appropriate levels.

SPEAK is available for direct purchase for on-site testing by university-affiliated English language institutes, institutional or agency testing offices, intensive English language programs, government departments, and other organizations serving public or private educational programs. It is important to remember that SPEAK is designed for internal use only.

Although the TSE and SPEAK share the same test design, scores on the two tests are not equivalent: the TSE is administered and scored under standardized conditions, whereas the SPEAK test is administered and scored following standards set by each institution that uses it. Consequently, a SPEAK score is valid only at the institution where the test was administered. Additional information about SPEAK is available upon request.

The TSE Standard-Setting Kit is available to assist institutions in arriving at score standards for the revised TSE/SPEAK test.

Launched in the early 1980s, SPEAK was revised in 1996. It includes:

• SPEAK Rater Training Kit — includes materials for training staff to rate examinees' oral responses, plus general test administration information.

• Test Forms — six SPEAK test forms (A, B, C, D, E, and F) are available in exercise sets. Each form contains 30 test books, one cassette test tape, the rating scale, and a pad of score sheets.

• Examinee Practice Set — contains 15 identical practice test books and 15 practice test cassettes. The test provided is the disclosed sample TSE test found in TSE Bulletins and on the TOEFL Web site, with the audio component delivered via audio cassette. The materials enable examinees to become familiar with the format of the SPEAK test.



    Research

    TOEFL research program

The purpose of the TOEFL research program is to further knowledge in the field of language assessment and second language acquisition about issues related to psychometrics, language learning and pedagogy, and the proper use and interpretation of language assessment tools.

In light of these diverse goals, the TOEFL research agenda calls for continuing research in broad areas of inquiry, such as test validation, information, reliability, use, construction, implementation, examinee performance, and applied technology. The areas of inquiry for completed research projects are highlighted in the schema at the end of this section.

Since the studies are usually specific to the TOEFL tests and associated testing programs, most of the actual research work is conducted by Educational Testing Service staff members rather than by outside researchers. Many projects, however, include outside consultants and the cooperation of other institutions, particularly those with programs in the teaching of English as a foreign or second language.

The TOEFL Board supports this ongoing program. The TOEFL Committee of Examiners, an external committee of specialists in linguistics, language testing, or the teaching of English as a foreign or second language, along with language research specialists from the academic community, sets guidelines for the scope of the TOEFL research program and reviews and approves TOEFL-funded research projects.

    Research and related reports

An ongoing series of research studies and activities related to the revised TSE test continues to address issues of importance to the TSE and SPEAK programs, examinees, and score users. As needed, the TSE Committee suggests further TSE or SPEAK research. The results of research studies conducted under the direction of the TOEFL programs are available to the public in published reports.

To date, there are several TSE- or SPEAK-related listings in the TOEFL Research Report Series, the TOEFL Technical Report Series, and the TOEFL Monograph Series. Additional projects are in progress and under consideration. When a new research, technical, or monograph report is published, an abstract and ordering information are posted on the TOEFL Web site. The complete list of available research studies can be found at http://www.toefl.org/research/rrpts.html and http://www.toefl.org/research/rschindx.html.


    Research Reports

RR–4. An Exploration of Speaking Proficiency Measures in the TOEFL Context. Clark and Swinton. October 1979. Describes a three-year study involving the development and experimental administration of test formats and item types aimed at measuring the English-speaking proficiency of nonnative speakers; results were grouped into a prototype Test of Spoken English.

RR–7. The Test of Spoken English as a Measure of Communicative Ability in English-Medium Instructional Settings. Clark and Swinton. December 1980. Examines the performance of teaching assistants on the Test of Spoken English in relation to their classroom performance as judged by students; reports that the TSE® test is a valid predictor of oral language proficiency for nonnative English-speaking graduate teaching assistants.

RR–13. The Test of Spoken English as a Measure of Communicative Ability in the Health Professions. Powers and Stansfield. January 1983. Provides results of using a set of procedures for determining standards of language proficiency in testing pharmacists, physicians, veterinarians, and nurses and for validating the use of the TSE test in health-related professions.

RR–18. A Preliminary Study of Raters for the Test of Spoken English. Bejar. February 1985. Examines the scoring patterns of different TSE raters in an effort to develop a method for predicting disagreements; reports that the raters varied in the severity of their ratings but agreed substantially on the ordering of examinees.
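RR–18's central distinction, that raters can differ in overall severity while ranking examinees the same way, is easy to make concrete. In the hypothetical ratings below, rater B is ten points harsher on average than rater A, yet both order the five examinees identically:

    # Made-up ratings illustrating RR-18's finding: different severity,
    # same ordering of examinees.
    from statistics import mean

    rater_a = [60, 55, 50, 40, 30]   # hypothetical ratings, 5 examinees
    rater_b = [50, 45, 40, 30, 20]   # harsher, but same rank order

    def ranks(xs):
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0] * len(xs)
        for rank, i in enumerate(order):
            r[i] = rank
        return r

    print("severity gap:", mean(rater_a) - mean(rater_b))      # 10.0
    print("same ordering:", ranks(rater_a) == ranks(rater_b))  # True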

RR–36. A Preliminary Study of the Nature of Communicative Competence. Henning and Cascallar. February 1992. Provides information on the comparative contributions of some theory-based communicative competence variables to domains of linguistic, discourse, sociolinguistic, and strategic competencies and investigates these competency domains for their relation to components of language proficiency as assessed by the TOEFL, TWE, and TSE tests.

RR–40. Reliability of the Test of Spoken English Revisited. Boldt. November 1992. Examines effects of scale, section, examinee, and rater, as well as the interactions of these factors, on the TSE test; offers suggestions for improving reliability.

RR–46. Multimethod Construct Validation of the Test of Spoken English. Boldt and Oltman. December 1993. Uses factor analysis and multidimensional scaling to explore the relationships among TSE subsections and rating dimensions; results show the roles of test section and proficiency scales in determining TSE score variation.

RR–48.* Analysis of Proposed Revisions of the Test of Spoken English. Henning, Schedl, and Suomi. March 1995. Compares a prototype revised TSE with the original version of the test with respect to interrater reliability, frequency of rater discrepancy, component task adequacy, scoring efficacy, and other aspects of validity; results underscore the psychometric quality of the revised TSE.

RR–49. A Study of the Characteristics of the SPEAK Test. Sarwark, Smith, MacCallum, and Cascallar. March 1995. Investigates issues of reliability and validity associated with the original locally administered and scored SPEAK test, the "off-the-shelf" version of the original TSE; results indicate that this version of the SPEAK test is reasonably reliable for local screening and is an appropriate measure of English-speaking proficiency in U.S. instructional settings.

RR–58.* Using Just Noticeable Differences to Interpret Test of Spoken English Scores. Stricker. August 1997. This study explored the value of obtaining a Just Noticeable Difference (JND) — the difference in scores needed before observers discern a difference in examinees' English proficiency — for the current Test of Spoken English as a means of interpreting scores in practical terms, using college students' ratings of their international teaching assistants' English proficiency and adapting classical psychophysical methods. The test's concurrent validity against these ratings was also appraised. Three estimates of the JND were obtained. They varied considerably in size, but all were substantial when compared with the standard deviation of TSE scores, the test's standard error of measurement, and guidelines for the effect size for mean differences. The TSE test correlated moderately with the rating criterion. The JND estimates appear to be meaningful and useful in interpreting the practical significance of TSE scores, and the test has some concurrent validity.
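Two of the yardsticks RR–58 uses for the JND can be reproduced from quantities a score user already has. The standard psychometric relation SEM = SD x sqrt(1 - r), where r is the score reliability, gives the standard error of measurement. The sketch below uses a standard deviation of about 6 points, typical of Tables 5 and 6; the reliability and JND values are purely hypothetical placeholders, not the figures from the report:

    # Hypothetical illustration of the comparisons made in RR-58.
    import math

    sd = 6.0           # typical TSE total-score SD (see Tables 5 and 6)
    reliability = 0.9  # assumed value, for illustration only
    jnd = 5.0          # hypothetical JND estimate on the 20-60 scale

    sem = sd * math.sqrt(1 - reliability)   # standard error of measurement
    print(f"SEM = {sem:.2f} score points")
    print(f"JND = {jnd:.1f}, i.e., {jnd / sem:.1f} times the SEM")
    print(f"JND as an effect size against the SD: {jnd / sd:.2f}")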

* Studies related to current versions of the TSE and SPEAK tests launched in July 1995 and July 1996, respectively.


RR–63.* Validating the Revised Test of Spoken English Against a Criterion of Communicative Success. Powers, Schedl, Wilson-Leung, and Butler. March 1999. A communicative competence orientation was taken to study the validity of test score inferences derived from the current Test of Spoken English. To implement the approach, a sample of undergraduate students, primarily native speakers of English, provided a variety of reactions to, and judgments of, the test responses of a sample of TSE examinees. The TSE scores of these examinees, previously determined by official TSE raters, spanned the full range of TSE score levels. Undergraduate students were selected as "evaluators" because they, more than most other groups, are likely to interact with TSE examinees, many of whom become teaching assistants.

The objective was to determine the degree to which official TSE scores are predictive of listeners' ability to understand the messages conveyed by TSE examinees. Analyses revealed a strong association between TSE score levels and the judgments, reactions, and understanding of listeners. This finding applied to all TSE tasks and to nearly all of the several different kinds of evaluations made by listeners.

RR–65.* Monitoring Sources of Variability Within the Test of Spoken English Assessment System. Myford and Wolfe. June 2000. An analysis of TSE data showed that, for each of two TSE administrations, the examinee proficiency measures were found to be trustworthy in terms of their precision and stability. The standard error of measurement varied across the score distribution, particularly in the tails of the distribution.

The items on the TSE appear to work together; ratings on one item correspond well to ratings on the other items. Consequently, it is appropriate to generate a single summary measure to capture the essence of examinee performance across the 12 items. However, the items differed little in terms of difficulty, thus limiting the instrument's ability to discriminate among levels of proficiency.

The TSE rating scale functions as a five-point scale, and the scale categories are clearly distinguishable. Raters differed somewhat in the levels of severity they exercised when they rated examinee performances. The vast majority used the scale in a consistent fashion.
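The Facets analyses behind RR–65 (and TR–15 below) rest on many-facet Rasch measurement. The report's own equations are not reproduced here, but the standard formulation of that model expresses the log-odds that examinee n receives rating category k rather than k-1 from rater j on item i as

\[
\log\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k ,
\]

where B_n is the examinee's proficiency, D_i the item's difficulty, C_j the rater's severity, and F_k the difficulty of scale step k. The rater severity term C_j is precisely the quantity on which the report finds raters to differ somewhat.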



    Technical Reports

TR–15.* Strengthening the Ties That Bind: Improving the Linking Network in Sparsely Connected Rating Designs. Myford and Wolfe. August 2000. The purpose of this study was to evaluate the effectiveness of a strategy for linking raters when large numbers of raters are involved in a scoring session and the overlap among raters is minimal. In sparsely connected rating designs, the number of examinees any given pair of raters has scored in common is very limited, so connections between raters may be weak and tentative at best. The linking strategy employed involved having all raters in a Test of Spoken English scoring session rate a small set of six benchmark audiotapes, in addition to the examinee tapes that each rater scored as part of his or her normal workload. Using output from Facets analyses of the rating data, the researchers looked at the effects of embedding blocks of ratings from various smaller sets of these benchmark tapes on key indicators of rating quality. The researchers found that all benchmark sets were effective for establishing at least the minimal connectivity needed in the rating design to allow placement of all raters and all examinees on a single scale. When benchmark sets were used, the highest scoring benchmarks (i.e., those examinees who scored 50s and 60s across the items) produced the highest quality, that is, the most stable, linking. The least consistent benchmark sets (i.e., those somewhat harder to rate because an examinee's performance varied across items) tended to provide fairly stable links. The most consistent benchmarks (i.e., those somewhat easier to rate because an examinee's performance was similar across items) and middle scoring benchmarks (i.e., those from examinees who scored 30s and 40s across the items) tended to provide less stable linking. Low scoring benchmark sets provided the least stable linking. When a single benchmark tape was used, the highest scoring single tape provided higher quality linking than either the least consistent or most consistent benchmark tape.
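The "connectivity" at issue in TR–15 can be pictured as a graph whose nodes are raters and examinees, with an edge wherever a rater scored an examinee: all raters and examinees can be placed on a single scale only if this graph is connected, which is what the shared benchmark tapes ensure. A minimal sketch of such a check, with hypothetical raters and tapes rather than the study's actual design:

    # Hypothetical sketch: does a sparse rating design form one
    # connected network once a shared benchmark tape is added?
    from collections import defaultdict, deque

    ratings = [            # (rater, tape) pairs, including a benchmark
        ("R1", "E1"), ("R2", "E2"), ("R3", "E3"),
        ("R1", "BENCH"), ("R2", "BENCH"), ("R3", "BENCH"),
    ]

    graph = defaultdict(set)
    for rater, tape in ratings:
        graph[rater].add(tape)
        graph[tape].add(rater)

    # Breadth-first search from an arbitrary node.
    start = next(iter(graph))
    seen, queue = {start}, deque([start])
    while queue:
        for neighbor in graph[queue.popleft()]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)

    print("design is connected:", len(seen) == len(graph))  # True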

    Monograph Series

MS–7.* The Revised Test of Spoken English: Discourse Analysis of Native Speaker and Nonnative Speaker Data. Lazaraton and Wagner. December 1996. Describes a qualitative discourse analysis of native speaker and nonnative speaker responses to the current TSE test; results indicated that the match between intended task functions (as per the content specifications) and the actual functions employed by native speakers was quite close.

MS–9.* Theoretical Underpinnings of the Test of Spoken English Revision Project. Douglas and Smith. May 1997. The purpose of this paper is to lay a theoretical foundation for the revisions leading to the current Test of Spoken English. The revision project was undertaken in response to concerns expressed by researchers and score users about the validity of the TSE test and to a request by the TOEFL Committee of Examiners to make the Test of Spoken English more reflective of current thinking on the assessment of oral language skills. The paper first discusses communicative competence as a basis for understanding the nature of language knowledge, and then describes sociolinguistic and discourse factors that influence spoken language performance. Test method characteristics that influence test performance are also discussed, as are the types of evidence necessary for establishing the reliability and validity of the current TSE test. The paper concludes with a discussion of the implications of the theory for the interpretation of examinee performance with regard to academic and professional contexts of language use.



TOEFL Research Reports, Technical Reports, and Monographs Related to TSE and SPEAK Tests*

AREA                                TSE/SPEAK

TEST VALIDATION
  Construct Validity                RR-4, 7, 13, 36, 46, 48,** MS-7,** MS-9**
  Face/Content Validity             RR-49
  Predictive Validity               RR-7, 13, 49, 63**
  Concurrent Validity               RR-4, 7, 48,** 49, 58**
  Response Validity

TEST INFORMATION
  Score Interpretation              RR-36
  Underlying Processes              RR-36
  Diagnostic Value
  Performance Descriptors
  Reporting/Scaling                 RR-48,** 58**

EXAMINEE PERFORMANCE
  Difference Variables
  Language Acquisition/Loss
  Sample Dimensionality
  Person Fit

TEST USE
  Decisions/Cut Scores              RR-13
  Test/Item Bias
  Socio/Pedagogical Impact
  Satisfying Assumptions
  Examinee/User Populations

TEST CONSTRUCTION
  Format Rationale/Selection        RR-48**
  Equating                          RR-58**
  Item Pretesting/Selection
  Component Length/Weight           RR-48**

TEST IMPLEMENTATION
  Testing Time
  Scoring/Rating                    RR-4, 18, 48,** 49, 65,** 66,** TR-15**
  Practice/Sequence Effects

TEST RELIABILITY
  Internal Consistency              RR-40
  Alternate Forms
  Test-Retest
  Inter-/Intrarater                 RR-4, 7, 18, 40, 49

APPLIED TECHNOLOGY
  Innovative Formats
  Machine Test Construction
  Computer-Adaptive Testing
  Item Banking

* Research Reports are identified by their series number preceded by "RR"; Technical Reports are listed by their series number preceded by "TR"; Monographs are preceded by "MS."

    **Studies related to current versions of the TSE and SPEAK tests launched in July 1995 and July 1996, respectively.


References

American Psychological Association. Standards for Educational and Psychological Testing. Washington, DC: American Psychological Association, 1999.

Bejar, I. A Preliminary Study of Raters for the Test of Spoken English (TOEFL Research Report 18). Princeton, NJ: Educational Testing Service, 1985.

Boldt, R. F. Reliability of the Test of Spoken English Revisited (TOEFL Research Report 40). Princeton, NJ: Educational Testing Service, 1992.

Boldt, R. F., and Oltman, P. Multimethod Construct Validation of the Test of Spoken English (TOEFL Research Report 46). Princeton, NJ: Educational Testing Service, 1993.

Clark, J. L. D., and Swinton, S. S. An Exploration of Speaking Proficiency Measures in the TOEFL Context (TOEFL Research Report 4). Princeton, NJ: Educational Testing Service, 1979.

Clark, J. L. D., and Swinton, S. S. The Test of Spoken English as a Measure of Communicative Ability in English-Medium Instructional Settings (TOEFL Research Report 7). Princeton, NJ: Educational Testing Service, 1980.

Douglas, D., Murphy, J., and Turner, C. The St. Petersburg Protocol: An Agenda for a TSE Validity Mosaic (ETS internal document). Princeton, NJ: Educational Testing Service, 1996.

Douglas, D., and Smith, J. Theoretical Underpinnings of the Test of Spoken English Revision Project (TOEFL Monograph Series 9). Princeton, NJ: Educational Testing Service, 1997.

Henning, G., and Cascallar, E. C. A Preliminary Study of the Nature of Communicative Competence (TOEFL Research Report 36). Princeton, NJ: Educational Testing Service, 1992.

Henning, G., Schedl, M., and Suomi, B. K. Analysis of Proposed Revisions of the Test of Spoken English (TOEFL Research Report 48). Princeton, NJ: Educational Testing Service, 1995.

Hudson, T. A Conceptual Validation of the Theory to Test Specification Congruence of the Revised Test of Spoken English (ETS internal document). Princeton, NJ: Educational Testing Service, 1994.

Lazaraton, A., and Wagner, S. The Revised TSE: Discourse Analysis of Native Speaker and Nonnative Speaker Data (TOEFL Monograph Series 7). Princeton, NJ: Educational Testing Service, 1996.

Myford, C. M., and Wolfe, E. W. Monitoring Sources of Variability Within the Test of Spoken English Assessment System (TOEFL Research Report 65). Princeton, NJ: Educational Testing Service, 2000.

Myford, C. M., and Wolfe, E. W. Strengthening the Ties That Bind: Improving the Linking Network in Sparsely Connected Rating Designs (TOEFL Technical Report 15). Princeton, NJ: Educational Testing Service, 2000.

Pike, L. W. An Evaluation of Alternative Item Formats for Testing English as a Foreign Language (TOEFL Research Report 2). Princeton, NJ: Educational Testing Service, 1979.

Powers, D. E., and Stansfield, C. W. The Test of Spoken English as a Measure of Communicative Ability in the Health Professions: Validation and Standard Setting (TOEFL Research Report 13). Princeton, NJ: Educational Testing Service, 1983.

Powers, D. E., Schedl, M. A., Wilson-Leung, S., and Butler, F. A. Validating the Revised Test of Spoken English Against a Criterion of Communicative Success (TOEFL Research Report 63). Princeton, NJ: Educational Testing Service, 1999.

Sarwark, S. M., Smith, J., MacCallum, R., and Cascallar, E. C. A Study of the Characteristics of the SPEAK Test (TOEFL Research Report 49). Princeton, NJ: Educational Testing Service, 1995.

Stricker, L. J. Using Just Noticeable Differences to Interpret Test of Spoken English Scores (TOEFL Research Report 58). Princeton, NJ: Educational Testing Service, 1997.

Wainer, H., and Braun, H. I. (Eds.) Test Validity. Hillsdale, NJ: Lawrence Erlbaum Associates, 1988.


    Appendices

    Appendix A

TSE Committee Members (2001-2002)

Richard F. Young, Chair (2000-2003) University of Wisconsin-Madison; Member (1997-2000)

    Tim McNamara (2001-2004) University of Melbourne, Australia

James E. Purpura (1997-2003) Teachers College, Columbia University

    Emma Castillo (2000-2002) Philippine Normal University

    Barbara Hoekje (1999-2002) Drexel University

    Marysia Johnson (2000-2003) Arizona State University

    Julia Delahunty (ex officio) Middlesex County College

    Mark C. Miller (ex officio) University of Delaware

Former Members (1992-2001)

    Frances Butler, Chair (1992-1994) University of California-Los Angeles

Dan Douglas, Chair (1994-1997) Iowa State University; Member (1992-1994)

Miriam Friedman Ben-David (1992-1996) Educational Commission for Foreign Medical Graduates (ECFMG)

    Richard Cameron (1999-2000) University of Illinois-Chicago

    Richard Gaughran (1997-2001) Comenius University, Slovakia

    Frederick L. Jenks (1994-1997) Florida State University

    Mark Miller (1992-1994) University of Delaware

    Joseph A. Murphy (1994-1997) Nagasaki Junshin Catholic University, Japan

    Cynthia L. Myers (1996-1999) Iowa State University

    Barbara S. Plakans (1996-1999) The Ohio State University

    Jennifer St. John (1992-1995) University of Ottawa, Canada

    Jan Smith (1992-1996) University of Minnesota

Carolyn E. Turner, Chair (1997-2000) McGill University, Canada; Member (1995-1997)


    Appendix B

TEST OF SPOKEN ENGLISH (TSE) RATING SCALE
Approved by TSE Committee, December 1995

    60 Communication almost always effective: task performed very competently.

    Functions performed clearly and effectively

    Appropriate response to audience/situation

    Coherent, with effective use of cohesive devices

    Use of linguistic features almost always effective; communication not affected by minor errors

    50 Communication generally effective: task performed competently.

    Functions generally performed clearly and effectively

    Generally appropriate response to audience/situation

    Coherent, with some effective use of cohesive devices

    Use of linguistic features generally effective; communication generally not affected by errors

    40 Communication somewhat effective: task performed somewhat competently.

    Functions performed somewhat clearly and effectively

    Somewhat appropriate response to audience/situation

    Somewhat coherent, with some use of cohesive devices

    Use of linguistic features somewhat effective; communication sometimes affected by errors

    30 Communication generally not effective: task generally performed poorly.

    Functions generally performed unclearly and ineffectively

    Generally inappropriate response to audience/situation

    Generally incoherent, with little use of cohesive devices

    Use of linguistic features generally poor; communication often impeded by major errors

    20 No effective communication: no evidence of ability to perform task.

    No evidence that functions were performed

    No evidence of ability to respond appropriately to audience/situation

    Incoherent, with no use of cohesive devices

    Use of linguistic features poor; communication ineffective due to major errors

Copyright © 2001 by Educational Testing Service. All rights reserved.
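As the RR–65 summary earlier notes, ratings on the 12 TSE items cohere well enough to support a single summary score on this 20-60 scale. The Guide does not spell out the operational scoring algorithm, so the following is only a schematic sketch, assuming the reported score is the mean of the item-level band ratings snapped to the 5-point reporting metric:

    # Schematic sketch only; the official TSE scoring procedure is not
    # reproduced here. Assumes the total is the mean of 12 item-level
    # band ratings (20-60), reported to the nearest 5 points.
    item_ratings = [50, 50, 40, 50, 40, 50, 50, 40, 50, 50, 40, 50]

    assert len(item_ratings) == 12
    assert all(20 <= r <= 60 for r in item_ratings)

    raw = sum(item_ratings) / len(item_ratings)
    total = 5 * round(raw / 5)   # snap to the reported 5-point metric
    print(f"raw mean = {raw:.1f}, reported score = {total}")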

60 Communication almost always effective: task performed very competently.

Speaker volunteers information freely, with little or no effort, and may go beyond the task by using additional appropriate functions.
• Native-like repair strategies
• Sophisticated expressions
• Very strong content
• Almost no listener effort required

Functions performed clearly and effectively
Speaker is highly skillful in selecting language to carry out intended functions that reasonably address the task.

Appropriate response to audience/situation
Speaker almost always considers register and demonstrates audience awareness.
• Understanding of context, and strength in discourse and linguistic competence, demonstrate sophistication

Coherent, with effective use of cohesive devices
Response is coherent, with logical organization and clear development.
• Contains enough details to almost always be effective
• Sophisticated cohesive devices result in smooth connection of ideas

Use of linguistic features almost always effective; communication not affected by minor errors
• Errors not noticeable
• Accent not distracting
• Range in grammatical structures and vocabulary
• Delivery often has native-like smoothness

Overall features to consider:

Functional competence is the speaker's ability to select functions to reasonably address the task and to select the language needed to carry out the function.

Sociolinguistic competence is the speaker's ability to demonstrate an awareness of audience and situation by selecting language, register (level of formality), and tone that is appropriate.

Discourse competence is the speaker's ability to develop and organize information in a coherent manner and to make effective use of cohesive devices to help the listener follow the organization of the response.

Linguistic competence is the effective selection of vocabulary, control of grammatical structures, and accurate pronunciation, along with smooth delivery, in order to produce intelligible speech.

50 Communication generally effective: task performed competently.