

Studies in Educational Evaluation 35 (2009) 16–20

A paradigm shift toward evidence-based clinical practice: Developing a performance assessment

Nancy Wentworth a, Lynnette B. Erickson b, Barbara Lawrence c, J. Aaron Popham d, Byran Korth e

a 205 B MCKB, Brigham Young University, Provo, UT 84602, United States
b 201 X MCKB, Brigham Young University, Provo, UT 84602, United States
c 206 U MCKB, Brigham Young University, Provo, UT 84602, United States
d 201 J MCKB, Brigham Young University, Provo, UT 84602, United States
e 206 H MCKB, Brigham Young University, Provo, UT 84602, United States

ARTICLE INFO

Keywords: Evaluation research; Program evaluation; Student teacher evaluation; Teacher evaluation; Accreditation (institutions); Assessment instruments (individuals)

ABSTRACT

The Clinical Practice Assessment System (CPAS), developed in response to teacher preparation program accreditation requirements, represents a paradigm shift of one university toward data-based decision-making in teacher education programs. The new assessment system is a scale aligned with INTASC Standards, which allows for observation and evaluation of teacher candidate performance over time to show growth from novice to high level proficiency. This article describes the creation of the CPAS and examines results from its implementation in early childhood, elementary, and secondary programs in both early and capstone clinical experiences. Three years of implementation experiences have informed decisions on supervisory practices, program offerings and requirements, alignment of course outcomes, and understanding of strengths and weaknesses of individual licensure programs.

© 2009 Elsevier Ltd. All rights reserved.


Recent accreditation mandates have called for teacher education programs to base their decision-making on cutting-edge research and solid evidence. Issues of evidence-based decision-making have become central to the debate on educational policies and teacher preparation requirements. Cochran-Smith (2006) asserts, "Teacher education's current preoccupation with evidence is consistent with the way the standards movement has evolved and with the trend toward evidence-based practice in education writ large" (p. 6). Teacher education programs seeking national accreditation are striving to meet many and varied expectations and standards established by national programs and organizations (McConney, Schalock, & Schalock, 1998). Evaluation and accreditation of schools of education will now be based on the performance of their graduates on licensure tests, commonly accepted outcomes, and national program standards.

Since 1982 this university has chosen to align with the National Council for Accreditation of Teacher Education (NCATE) in defining and providing required evidence of knowledge, skills, and dispositions for teacher candidates (NCATE, 2002), but recently NCATE requirements had changed. We realized that our unit definition, course outcomes, and assessments did not meet the expectations articulated in current NCATE requirements. We began to make the required changes in our program.

First, the unit definition was expanded to a university-wide Educator Preparation Program (EPP), which more accurately represents the shared responsibility for educator preparation beyond the School of Education to encompass all colleges and departments across the university. The new EPP consists of 7 colleges and 26 departments. The EPP articulated assessment requirements applicable to and accepted by all licensure programs within the unit and identified the structures for collecting, analyzing, reporting, and using data for program improvement. The Interstate New Teacher Assessment and Support Consortium (INTASC) Standards (1992), which had been adopted by the State Board of Education as a part of its licensure process for entry-level educators, were selected as the foundation of all assessments.

The challenge then was to find an instrument that would assess what we valued. Darling-Hammond (2006) asserts that performance assessments not only provide evidence for individual performances, but also help to represent program goals, create a common language, focus understanding, and provide information about program strengths and weaknesses. However, finding or creating reliable and valid instruments to provide data appropriate and meaningful for individual programs can be challenging, time and labor intensive at best (Darling-Hammond, 2006). We began with the INTASC Standards, which had already been designated as covering the knowledge, skills, and dispositions we most desired. The purpose of the planned assessment was to link these standards to individual candidate performance in authentic practicum settings as well as on-campus academic contexts. The Clinical Practice Assessment System (CPAS), based on the INTASC Standards, was developed specifically for the purpose of assessing the performance quality of teacher candidates in clinical settings at all levels of their development. Although the initial motivation for developing the instrument was self-directed, the final product lends itself to use and critique by a broader audience of teacher educators.

1. Instrument creation and implementation

In this section of the paper we will describe the creation and initial implementation of the CPAS instrument. Discussion will include the purpose and vision for the instrument, the pilot and, finally, changes made in the original instrument based on feedback.

1.1. Creation of the Clinical Practice Assessment System

The CPAS was developed to provide data on candidate field performance directly tied to program goals, and to "ensure [our] candidates are skilled in applying the latest proven techniques for instruction in their subject matter area" (U.S. Department of Education, 2005, p. 11). The goal was to construct a scale that would allow for observation and evaluation of teaching performance over time using the same expectations of quality. We hoped to document candidate progress from a level typical of a beginner to a level of proficiency appropriate to a veteran teacher, using the same criteria and expectations throughout the teacher training experience and across all licensure programs.

We began by reviewing each of the INTASC Standards to identify the elements that were most likely observable in a teaching setting. Based on this analysis, we developed the initial version of the instrument with 44 indicators associated with the Standards. The grading rubric consisted of a 5-point scale (4 = distinguished, 0 = not acceptable), with descriptors written at each scale point for each indicator. University field supervisors were invited to review the documents and participate in training/discussion sessions to provide feedback on this pilot version of the assessment. Following considerable discussion and feedback, resulting in a series of revisions in the descriptors and the format, field supervisors reluctantly agreed to field test the instrument during the remainder of that semester, using it alongside the existing form.
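To make the structure of the pilot instrument concrete, the following Python sketch models one indicator and its per-scale-point descriptors as a small data structure. The indicator wording and all descriptors are invented placeholders, not the actual CPAS text; only the scale anchors (4 = distinguished, 0 = not acceptable) come from the article.

```python
from dataclasses import dataclass
from typing import Dict

# Scale anchors named in the article: 4 = distinguished, 0 = not acceptable.
# The intermediate labels are placeholders, not the actual CPAS wording.
SCALE_LABELS = {
    4: "distinguished",
    3: "placeholder: strong",
    2: "placeholder: adequate",
    1: "placeholder: emerging",
    0: "not acceptable",
}

@dataclass
class Indicator:
    """One observable indicator tied to an INTASC Standard, with a descriptor
    written for each scale point, as described for the pilot instrument."""
    standard: str                 # e.g. "Instruction"
    text: str                     # what the observer looks for (hypothetical wording)
    descriptors: Dict[int, str]   # scale point -> descriptor for that level

# A single hypothetical indicator; the pilot instrument had 44 such indicators.
example = Indicator(
    standard="Instruction",
    text="Uses a variety of instructional strategies (hypothetical wording)",
    descriptors={point: f"Level {point}: placeholder descriptor" for point in SCALE_LABELS},
)

for point in sorted(example.descriptors, reverse=True):
    print(point, SCALE_LABELS[point], "->", example.descriptors[point])
```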

Table 1
CPAS scale-based split-half reliability.

Scale/standard   # Indicators   Rel. coeff. El Ed   Rel. coeff. Sc Ed   Item rel. range El Ed   Item rel. range Sc Ed   Problem items El Ed   Problem items Sc Ed
Content          4              0.493               0.360               0.22-0.36               0.14-0.26               4                     None
Learning         4              0.694               0.471               0.46-0.51               0.20-0.32               None                  None
Diversity        3              0.807               0.475               0.60-0.69               0.22-0.37               None                  10
Instruction      4              0.889               0.683               0.74-0.80               0.40-0.52               None                  None
Management       6              0.958               0.712               0.83-0.90               0.38-0.52               None                  None
Communication    4              0.943               0.439               0.83-0.88               0.19-0.33               None                  None
Planning         2              0.950               0.357               0.95                    0.23                    None                  None
Assessment       3              0.963               0.597               0.91-0.93               0.39-0.42               None                  None
Reflection       2              0.982               0.599               0.97                    0.46                    None                  None
Relationships    2              0.967               0.318               0.94                    0.21                    None                  None

After the initial field testing we convened a follow-up training meeting with a broader group of field supervisors to orient them to the format and descriptors and to obtain more feedback on the instrument. Recommendations from this session included reducing the length of the assessment by eliminating redundant or inapplicable descriptors and modifying the format. This revised version, which then included only 34 indicators associated with the INTASC Standards accompanied by the grading rubric, was piloted for a semester. Following this pilot, we sought additional feedback through meetings with district representatives from our university-public school partnership. Their suggestions resulted in a further change in the rubric, eliminating the "distinguished" category and creating a 0-3 scale for the instrument.

The training meetings served three very important purposes in the process of developing the assessment. First, they provided a setting to collect valuable feedback to be applied in strengthening and refining the instrument. Second, they centered the participants firmly in the focus of the instrument, the INTASC Standards, giving them a common basis for their ratings of candidate performance, thus providing a foundation for establishing reliability in using the instrument. Third, discussion begun in these meetings led to the development of an electronic version of the observation form for use in the actual classroom observations, which could be downloaded to our electronic assessment system database, facilitating analysis and summary of data for instrument evaluation as well as individual student assessment.

Three programs have used the revised CPAS instrument to evaluate their candidates: Early Childhood Education (ECE), Elementary Education (El Ed), and Secondary Education (Sc Ed). The following sections will describe the reliability and validity of the instrument, data analysis comparing the many programs using the instrument, growth of candidates in particular programs, and comparisons between scores from university supervisors and mentor teachers. Selected data from the three programs are represented.

1.2. CPAS validation

After the initial implementation of the instrument, a statistical analysis of its psychometric properties was conducted. The first step was to check the reliability of the separate subscales (indicators associated with each of the Standards) using a Cronbach's Alpha scale reliability (see Table 1). For the elementary education supervisors, those who had been involved in the development or had had significant training in using the scale, the reliabilities were acceptable on all scales, but a little low on the first two, Content and Learning. Inadequate reliabilities among those rating secondary education candidates were due to insufficient training of the raters in the INTASC Standards as well as in use of the CPAS scale, resulting in inconsistency in use of the scale.
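As a rough illustration of the subscale reliability check described above (not the analysis code used in the study), the Python sketch below computes Cronbach's alpha for one subscale from a matrix of candidate-by-indicator ratings; the ratings shown are invented.

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for one subscale.

    ratings: 2-D array, rows = observed candidates, columns = the indicators
    that make up the subscale (e.g., the four Content indicators).
    """
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                            # number of indicators
    item_vars = ratings.var(axis=0, ddof=1).sum()   # sum of indicator variances
    total_var = ratings.sum(axis=1).var(ddof=1)     # variance of subscale totals
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Invented ratings for six candidates on a four-indicator subscale (0-3 scale).
demo = [
    [2, 2, 3, 2],
    [1, 1, 2, 1],
    [3, 3, 3, 3],
    [2, 3, 2, 2],
    [1, 2, 1, 1],
    [3, 2, 3, 3],
]
print(round(cronbach_alpha(demo), 3))
```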


Fig. 1. Rasch reliability data including elementary clinical and pre-clinical.

Table 2
Program CPAS scores.

        Early childhood education (n = 121)   Elementary education (n = 447)   Secondary education (n = 698)
Mean    2.62                                   2.61                             2.62
Max     3.00                                   3.00                             3.00
Min     1.88                                   1.35                             1.00


The issue of rater reliability was particularly difficult because there were so many users of the scale and so few opportunities to train and build consensus. In addition, no concurrent data existed among the users: that is, every observation was carried out by only one person, so there was no opportunity to compare ratings of the same observation by more than one individual. Thus, a second attempt to determine the reliable use of the tool was conducted using a Rasch analysis for polytomous items. The placement of the item curves for each scale point indicates the extent to which each scale point is being used properly relative to the other options (see Fig. 1). If the scale is being used to its full capacity (each scale point actually being used as an indicator of performance), the curve representing each scale point should cross the curve for the adjacent point before crossing the curve for any other point. This indicates that all raters represented are tending to use the scale points consistently. Note that the data represented above include those from the pre-clinical practicum for the elementary education candidates as well as from their clinical experience, student teaching.
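The category-curve check described above can be illustrated with the partial credit form of the Rasch model. The sketch below is a self-contained illustration, not the software used in the study: it computes the probability of each scale point across a range of candidate ability and reports whether adjacent category curves cross in order, which happens exactly when the step difficulties are ordered. The threshold values are invented.

```python
import numpy as np

def pcm_category_probs(theta, thresholds):
    """Partial credit model probabilities for one item.

    theta: 1-D array of person ability values.
    thresholds: step difficulties delta_1..delta_K for an item with K+1
    scale points (categories 0..K).
    Returns an array of shape (len(theta), K+1) with category probabilities.
    """
    theta = np.atleast_1d(np.asarray(theta, dtype=float))
    deltas = np.asarray(thresholds, dtype=float)
    ks = np.arange(len(deltas) + 1)                        # categories 0..K
    cum_delta = np.concatenate(([0.0], np.cumsum(deltas)))
    log_num = np.outer(theta, ks) - cum_delta              # k*theta - sum(delta_1..k)
    log_num -= log_num.max(axis=1, keepdims=True)          # numerical stability
    num = np.exp(log_num)
    return num / num.sum(axis=1, keepdims=True)

# Invented step difficulties for a 0-3 item. Adjacent category curves cross in
# order (each scale point has a region where it is the most probable response)
# exactly when these thresholds are increasing.
thresholds = [-1.5, 0.0, 1.2]
theta = np.linspace(-4, 4, 9)
print(np.round(pcm_category_probs(theta, thresholds), 2))
print("Ordered thresholds (scale used to full capacity):",
      bool(np.all(np.diff(thresholds) > 0)))
```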

1.3. Data analysis

An initial analysis of the CPAS data computed average scores of candidates in each of the three programs (Table 2). All of the 34 indicators were averaged together, rather than averaging the INTASC Standards separately and then averaging those ten scores. (The INTASC Standard average scores were analyzed by program and are reported later in this paper.) Although the number of candidates in the programs varies widely, the average scores are very consistent.
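To show why the two averaging schemes mentioned above can differ, the short sketch below compares an overall mean across all 34 indicators with a mean of per-standard means when standards contain different numbers of indicators. The indicator counts follow Table 1, but the ratings are invented.

```python
import numpy as np

# Number of indicators per INTASC-based subscale, as in Table 1 (total = 34).
indicators_per_standard = {
    "Content": 4, "Learning": 4, "Diversity": 3, "Instruction": 4,
    "Management": 6, "Communication": 4, "Planning": 2, "Assessment": 3,
    "Reflection": 2, "Relationships": 2,
}

# Invented ratings for one candidate: 3 on every Management indicator and 2 on
# everything else, chosen so the two schemes visibly diverge.
ratings = {std: [3 if std == "Management" else 2] * n
           for std, n in indicators_per_standard.items()}

all_items = [score for items in ratings.values() for score in items]
overall_mean = np.mean(all_items)                        # weights each indicator equally
standard_means = [np.mean(items) for items in ratings.values()]
mean_of_means = np.mean(standard_means)                  # weights each standard equally

print(f"Mean over all 34 indicators:   {overall_mean:.3f}")
print(f"Mean of the ten standard means: {mean_of_means:.3f}")
```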

Fig. 2. Initial, mid, and capstone field experience CPAS (El Ed).

A primary purpose of the CPAS instrument is to document growth in candidates throughout the program. Fig. 2 shows the average scores of elementary candidates in their initial, mid, and capstone field experiences. Initially the INTASC Standard average scores were spread widely. By the end of the program the scores were much closer together. The lowest initial scores went up, which indicates growth of candidates in initially weak areas.

The CPAS instrument provides information on the performance of our candidates on the INTASC Standards. Fig. 3 is a summary of the Sc Ed data for the INTASC Standards, comparing scores for university supervisors and mentor teachers in the schools. The weakest standard for our secondary candidates reported by both university supervisors and mentor teachers is Diversity, followed by Assessment. The highest two scores are Reflection and Relationships, followed by Instructional Strategies. These highs and lows are consistent in most secondary content areas and in the early childhood and elementary programs.

1.4. User feedback

The use of INTASC Standards as the foundation for the CPAS instrument has frustrated some instructors. The elementary instructors are the most satisfied with the standards. One elementary university supervisor stated, "The 34 indicators [for the INTASC Standards] help me mentor my candidates. They give me a great way to bring up areas where they need to improve." One secondary supervisor was not as positive. He stated,

The INTASC Standards have been all right, but the CPAS form itself has been more trouble than it's worth. The standards and their descriptors tend to be too vague to be of much use to supervisors or candidates; some descriptors don't apply well (or at all) to secondary teachers. The biggest concern for us in English is the CPAS lack of content-specific language, and we have already written indicators that we're happy with. In that respect, the INTASC Standards helped us write indicators that covered a good range of teaching concerns.


Fig. 3. Multiple secondary education data.


Even though the INTASC Standards and the CPAS form have frustrated this supervisor, he and his colleagues in the English program are working to add content-specific language in each indicator so that the CPAS form does reflect what they are teaching in their program. The ECE faculty has expressed the same concern and has likewise added language to the CPAS form they use. They score the same 34 indicators as the other programs, but they include additional language for many items that help specify what that indicator "looks like" in an early childhood classroom.

The CPAS form is used by both university supervisors and classroom mentor teachers. The university supervisors remain more constant as a group than the mentor teachers and thus are more likely to have been trained in the use of the CPAS instrument. There are far fewer university supervisors than mentor teachers each semester, and each university supervisor evaluates several candidates, while the mentor teacher evaluates one or two. Fig. 4 represents a candidate-by-candidate score from the university supervisor and the mentor teacher, arranged according to the difference between the two scores. For some candidates the mentor teacher gives a higher score; for others the university supervisor's score is higher. For some the scores are almost identical. The interesting finding here is that the two scores for most candidates seem to be parallel, even though they may not be the same. With improved training and experience for raters, these scores should become more consistent.
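A sketch of the kind of arrangement shown in Fig. 4 (not the authors' actual analysis or plotting code): pair each candidate's two ratings, compute the difference, and sort candidates by that difference. The scores below are invented.

```python
import numpy as np

# Invented per-candidate mean CPAS scores (0-3 scale) from the two rater types.
candidates = ["C1", "C2", "C3", "C4", "C5"]
supervisor = np.array([2.7, 2.4, 2.9, 2.1, 2.6])   # university supervisor
mentor     = np.array([2.5, 2.6, 2.9, 2.4, 2.3])   # classroom mentor teacher

diff = mentor - supervisor          # positive: mentor rated the candidate higher
order = np.argsort(diff)            # arrange candidates by the difference

for i in order:
    print(f"{candidates[i]}: supervisor {supervisor[i]:.1f}, "
          f"mentor {mentor[i]:.1f}, difference {diff[i]:+.1f}")
```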

1.5. CPAS revisions

Fig. 4. Comparison of mentor teacher and university supervisor scores (Sc Ed).

Based on a statistical analysis of the CPAS system itself, on data from 2 years of scores, and on comments from the experience of university supervisors and mentor teachers using it, some additional changes were made. The rating scale was changed from a 4-point (0-3) to a 6-point (0-5) scale, with the 4 and 3 scale points being identified as high competent and competent, and the 2 and 1 scale points being identified as emerging and low emerging. This wider scale gives observers a larger range within which to rate candidates, allowing greater differentiation among excellent, average, and poor performers. It was determined that the 0 point on the scale should be indicated any time an observer does not see a particular indicator demonstrated during an observation and that the 0 would be counted in the final score. The need for this 0 option occurs frequently during practicum observations and occasionally during student teaching/internship visits. However, candidates and supervisors should ensure that there is no need for a 0 on the candidate's final evaluation because every element of the CPAS should have been addressed at some point during the student teaching or internship experience. A few of the descriptors on the rubric were edited to be less "absolute" in the statement of expectation (eliminating all, always, every), even at the highest levels. In addition a few descriptors were changed to better reflect the new scale points. Two of the Diversity indicators were rewritten to better reflect the intent of the program for this aspect of teacher training. Supplementary information was removed from three indicators to make them less confusing and potentially contradictory.
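As a small worked example of why a remaining 0 matters under the rule described above (the numbers are invented), counting a "not observed" indicator as 0 pulls the final average down noticeably compared with leaving it out:

```python
# Invented final-evaluation ratings on the revised 0-5 scale for ten indicators,
# with one indicator never observed (scored 0 under the rule described above).
scores = [4, 4, 5, 3, 4, 4, 3, 5, 4, 0]

with_zero = sum(scores) / len(scores)             # the 0 is counted, per the rule
observed = [s for s in scores if s != 0]
without_zero = sum(observed) / len(observed)      # what the average would be otherwise

print(f"Final average with the 0 counted:  {with_zero:.2f}")    # 3.60
print(f"Average over observed items only:  {without_zero:.2f}") # 4.00
```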

A system has now been created that requires university supervisors and mentor teachers to enter the CPAS Final Form electronically. Doing so requires only one entry of the form, gathers electronic signatures from all individuals who are required to sign the form, submits the form to the Education Advisement Center as well as the Placement Office, and greatly increases the speed with which this information can be made available to candidates and prospective employers.

2. Discussion

Ongoing analysis of data from the CPAS instrument provides focus for continuing discussions within the EPP and in individual licensure programs. Additionally, field notes and reflections recorded by designers and users during and after training sessions with university field supervisors have deepened understanding of differing points of view, including both practical and theoretical positions for programmatic decisions. Production, implementation, and evaluation of training materials for CPAS use have allowed us to improve the instrument. Finally, actual collection and analysis of CPAS data during at least three semesters for teacher candidates in Early Childhood, Elementary, and Secondary Education Programs in pre-clinical and clinical experiences have provided data to inform discussions of supervisory practices, program offerings and requirements, alignment of course outcomes, and strengths and weaknesses of individual licensure programs within the EPP. These will be discussed in the following section of the paper.

2.1. Paradigm shift to professional standards

The shift to a data-based decision-making paradigm based on professional standards has gone beyond mere compliance with accreditation requirements to meaningful application and function for stakeholders in the teacher preparation process. The CPAS serves multiple functions. Foremost, it provides assessment of teacher candidate performance based on professional standards. In addition, because of this foundation in professional standards it can provide ongoing educational benefits both to the teacher candidate and to those who are assessing the teaching performance. Darling-Hammond, Hammerness, Grossman, Rust, and Shulman (2005) assert,


Part of the reason these kinds of performance assessments appear to stimulate learning is that they focus teachers' reflection on professional standards in specific content areas that are used for scoring and that candidates are asked to consider when evaluating their own practice. Thus teaching practices can be reviewed, revised, and discussed in light of some shared common language about teaching and learning, which helps ground and focus the work. Furthermore, the standards serve as public criteria by which performances can be measured. (pp. 423-424)

2.2. Paradigm shift to consistent expectations and understandings

Introduction of the CPAS instrument resulted in a paradigm shift among university field supervisors (and later mentor teachers) in how pre-clinical and clinical teacher candidates would be observed and rated at the university. Instruments used prior to the CPAS were devised to represent the level of skill expected of candidates at a given point in their development, so a candidate could receive the highest mark on the scale with a level of experience and performance that was only in the developing stages. In contrast, the CPAS was developed to allow observation and evaluation of growth over time, and to show candidates their progress from a beginning level to higher proficiency. This pattern is still the basis of the scale, but as a result of feedback from our partners in the schools, we lowered the top end of the scale, increasing the possibility for candidates in their capstone field experience to achieve the top mark. Our supervisors, however, still find it difficult to limit themselves to the lower end of the scale for candidates still in practicum and candidates who have not progressed as well as others. We are working to ensure that all users of the instrument truly use the elements of the rubric to give feedback to candidates and show them where they are actually performing.

Additionally, the CPAS instrument provides teacher candidates and supervisors with greater consistency in expression of expectations, a common language for communication, and a basis for evaluation, as well as a history of teaching performance. CPAS data also provide useful information for making programmatic decisions and adjustments. For example, low scores in the area of diversity have prompted attention to faculty development in the preparation of candidates to better address issues of diversity in the classroom.

2.3. Paradigm shift to reliability

A Cronbach’s Alpha reliability calculated for each subscale(ranging from 2 to 6 items) indicates that raters in elementaryeducation show acceptable reliability on the subscales (INTASCStandards), although the first two standards (Content and Learning)show the lowest levels of reliability, indicating need for some workto redefine the indicators. Additionally, these data can be interpretedas indicating that these elements need more attention in courseworkand candidate preparation, so that they are more reliably demon-strated during the observations. Additional data from the Raschanalysis add information about the difficulty levels of the indicatorsand use of the full extent of the scale.

3. Conclusions

These multiple paradigm shifts have created a global paradigm affecting the entire Educator Preparation Program. We have become a data-centered university unit committed to improving our programs and enhancing candidate performance based on instruments aligned with national standards. The CPAS instrument has been found to be reliable and valid when university supervisors and mentor teachers are adequately trained; however, its development is ongoing, requiring continual evaluation and adjustment. Current discussion centers on the CPAS instrument itself: the range of the evaluation scale, the rubric descriptors, and the indicators, along with instrument applicability to specific licensure programs. For example, continual revisiting of the CPAS has resulted in the decision to adopt a 6-point (0-5) scale to increase accuracy in discriminating candidate performance; this change has necessitated a review and refinement of the CPAS rubric.

An issue currently being discussed but not yet resolved concerns whether there are substantial differences in how the INTASC Standards apply across early childhood, elementary, and secondary school settings. CPAS is currently a one-size-fits-all instrument. Secondary programs have given evidence to support their claim that the indicators for each standard are too generic and do not reflect the specific knowledge and skills that the teacher candidates need to teach specific content areas effectively. They would like to modify the indicators by adding language that reflects the ways candidates can demonstrate the necessary skills in their clinical experiences. The EPP has agreed to review all modifications of the CPAS indicators to ensure that the content-specific language maintains the integrity, meaning, and original purpose of the CPAS instrument and its basis in the INTASC Standards.

Demands for data-based decision-making in teacher education programs, though not initially welcomed, have compelled critical scrutiny of the assessment tools within our university's Educator Preparation Program. Benefits from this scrutiny have extended beyond the necessary evaluation and assessment to impact building of professional communities, alignment of expectations between clinical mentors and university supervisors, and strengthening of professional development for all contributors to pre-service teacher preparation. The paradigm shifts made throughout the process of instrument development and data analysis have strengthened program outcomes and candidate performance. The discussions continue.

References

Cochran-Smith, M. (2006). Taking stock in 2006: Evidence, evidence everywhere. Journal of Teacher Education, 57(1), 6-12.

Darling-Hammond, L. (2006). Assessing teacher education: The usefulness of multiple measures for assessing program outcomes. Journal of Teacher Education, 57(2), 120-135.

Darling-Hammond, L., Hammerness, K., Grossman, P., Rust, F., & Shulman, L. (2005). The design of teacher education programs. In L. Darling-Hammond & J. Bransford (Eds.), Preparing teachers for a changing world: What teachers should learn and be able to do (pp. 391-441). San Francisco: Jossey-Bass.

INTASC. (1992). Model standards for beginning teacher licensing, assessment and development: A resource for state dialogue. Retrieved February 11, 2008 from http://www.ccsso.org.

McConney, A., Schalock, M., & Schalock, H. D. (1998). Indicators of student learning in teacher evaluation. In J. Stronge (Ed.), Evaluating teaching: A guide to current thinking and best practice (pp. 162-192). Thousand Oaks, CA: Corwin Press.

NCATE. (2002). Professional standards for the accreditation of schools, colleges, and departments of education. Washington, DC.

U.S. Department of Education. (2005). The secretary's fourth annual report on teacher quality: A highly qualified teacher in every classroom. Washington, DC: U.S. Department of Education, Office of Postsecondary Education.