
State Consortium on Educator Effectiveness

August 16, 2012

Student Learning Objectives and Educator Evaluation

Webinar Logistics

Everyone is muted

Use the chat function to make a comment or ask a question

You may chat privately with individuals on your team

If you have problems, you may send Naz Rajput a message via the chat function or an email at [email protected]


Welcome

Janice Poda, CCSSO
Initiative Director, Education Workforce

Presenters

Scott Marion, Associate Director, Center for Assessment

Presenters: RI Student Learning Objectives Team

Laura Kacewicz: Assessment Specialist, Office of Instruction, Assessment, and Curriculum

Jessica Brown: Assessment Specialist, Office of Instruction, Assessment, and Curriculum

Jessica Delforge: Education Specialist, Office of Educator Quality


The Issue

States and districts are creating evaluation systems that include measures of student academic performance

Using Value-Added Models (VAM) and/or Student Growth Percentiles (SGP) with state standardized tests is a significant challenge

How do we incorporate student performance results from non-tested grades and subjects (NTGS) into educator evaluation systems?


What is comparability?

Educators within the units of analysis are held to similar levels of expectations, at least in some relative sense

For example, it would be a threat to the system if the teachers of grades 4-8 reading and math received noticeably lower ratings than the rest of the teachers in the school (those in non-tested grades and subjects, or NTGS)
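To make the comparability check concrete, here is a minimal sketch in Python; the teacher ratings, the 1-4 scale, and the 0.5-point flag threshold are all hypothetical illustrations, not part of any actual system.

```python
# Hypothetical within-school comparability check: compare summative ratings
# of teachers in tested grades/subjects with those in non-tested grades and
# subjects (NTGS). All ratings and the 0.5-point gap threshold are invented.
from statistics import mean

ratings = {
    # teacher_id: (group, summative rating on a hypothetical 1-4 scale)
    "t01": ("tested", 2.4),
    "t02": ("tested", 2.6),
    "t03": ("tested", 2.5),
    "t04": ("ntgs", 3.2),
    "t05": ("ntgs", 3.4),
    "t06": ("ntgs", 3.1),
}

tested = [r for g, r in ratings.values() if g == "tested"]
ntgs = [r for g, r in ratings.values() if g == "ntgs"]

gap = mean(ntgs) - mean(tested)
if abs(gap) > 0.5:  # hypothetical tolerance for "noticeably lower"
    print(f"Comparability flag: mean rating gap of {gap:.2f} between groups")
```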


Is Comparability Important?

At what levels is comparability important?

Within schools? Clearly yes.

Within districts? Probably yes.

Within states? It would be nice, but it might be too high a bar right now.

Student Learning Objectives as a Framework

Goal: A comprehensive and thoughtful approach that includes the tested subjects/grades, the “non-tested” content area teachers, and other licensed professionals

“Tested” and “non-tested” subjects and grades can be viewed as special cases of the comprehensive framework


Creating SLOs

SLOs offer more promise than “drop from the sky” assessments for improving practice, but they have more “moving parts”

We need to focus on three key components, all of which need to be in place for SLOs to work well (a sketch of these components follows below):

The Objectives

The Targets (for both students and teachers)

The Assessment(s)
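As an illustration only, the three components might be captured in a structure like the one below; every field name, and the 75% teacher target, are hypothetical rather than part of any state's SLO template.

```python
from dataclasses import dataclass, field

@dataclass
class SLO:
    """Hypothetical container for the three key SLO components."""
    objective: str       # the learning goal statement
    student_target: str  # what each student is expected to achieve
    teacher_target: str  # what counts as the teacher meeting the SLO
    assessments: list = field(default_factory=list)  # tools measuring the objective

# Example drawing on the Grade 11 writing objective discussed later in this webinar
slo = SLO(
    objective=("Students will improve their ability to analyze informational "
               "text and to write arguments grounded in germane textual evidence."),
    student_target="score Proficient or above on the end-of-course writing assessment",
    teacher_target="at least 75% of students reach their target",  # hypothetical
    assessments=["district common writing assessment"],
)
print(slo.objective)
```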


Use of Student Learning Objectives in the Rhode Island Educator Evaluation Model

CCSSO Special SCEE Webinar, August 16, 2012

Made in Rhode Island


Educators representing 23 districts and organizations contributed to the development of the content.

Six working groups of local educators designed the major components of the system.

The Advisory Committee for Educator Evaluation Systems (ACEES) reviewed the content, along with the Technical Advisory Committee (TAC).

RIDE field tested the model in two districts and one charter in 2010-11.


Rhode Island Model: Multiple Measures

Student Learning

Professional Practice

Professional Responsibilities


Measures of Student Learning

Student Learning Objectives
• Measure learning against a set of academic standards aligned with school and district goals
• Measurement tools include purchased assessments, district or regionally produced assessments, and common or teacher-created assessments designed by schools or teacher teams
• Use Rhode Island student content standards or other nationally recognized standards (e.g., CCSS)

Growth Model
• Used for contributing educators in Literacy and Math, grades 3-7

Gradual Implementation: SY2011-12

Time Frame | Edition I: Gradual | Edition II: Full
Summer 2011 | Training begins for evaluators |
Fall 2011* | Gradual Implementation begins; Beginning-of-Year Conference | Collecting feedback
Winter 2011/2012* | Observations; school visits; Mid-Year Conference |
Early Spring 2012* | Observations; school visits | Model refinement
Late Spring 2012* | End-of-Year Conference; summative ratings collected | V2 Launch; increased communication
Summer 2012 | Training Academies |
Fall 2012 | | Full Implementation

* Indicates ongoing support and training via ISPs and Modules

Gradual Implementation: SY2011-12

It allowed more time for training, so that evaluators felt adequately prepared to take on this role.

It provided everyone with the opportunity to practice and get hands-on experience with the process.

It provided time to learn and talk about the components and make informed revisions before full implementation.


Gradual vs. Full Implementation for Teachers

Component: Evaluation Criteria
Edition I (Gradual Implementation, 2011-2012): Student Learning; Professional Practice; Professional Responsibilities
Edition II (Full Implementation, 2012-2013): Same

Component: Number of Evaluation Conferences
Edition I: 3 evaluation conferences between the teacher and the evaluator (Beginning, Middle, and End-of-Year)
Edition II: Same

Component: Classroom Observations
Edition I: At least 4, including 1 long (30+ minutes), announced, and 3 short (15+ minutes), unannounced; written feedback required after each observation; post-observation conference required after the announced observation
Edition II: At least 3, including 1 announced and 2 unannounced, each at least 20 minutes; written feedback required after each; pre- and post-observation conferences are optional (local decision)

Component: Professional Growth Goals
Edition I: At least 3 set at the beginning of the year
Edition II: At least 1 set at the beginning of the year

Component: Student Learning Objectives
Edition I: At least 2-4 per teacher; 3 performance levels for individual SLOs; 5 performance levels for sets of SLOs
Edition II: At least 2 per teacher (no more than 4); 4 performance levels for both individual and sets of SLOs

Component: Rhode Island Growth Model
Edition I: Not applicable
Edition II: Same

Component: Teacher Professional Practice Rubric
Edition I: Holistic rubric with 21 competencies; classroom observations and evidence collection required to assess competencies
Edition II: Observation rubric with 8 competencies; all competencies are 100% observable (additional evidence collection not required)

Component: Professional Responsibilities Rubric
Edition I: Holistic rubric with 10 competencies
Edition II: Holistic rubric with 8 competencies

RI has been gathering educator feedback from the start…


Early Adopters Warwick and Jamestown have helped us learn from full implementation.

Surveys & Focus Groups help us learn what’s most important to teachers and building administrators.

Data from the Field helps us prioritize the refinements based on real experiences in the classrooms and schools.

Feedback from District Evaluation Committees gives us another valuable perspective on refinements.

Five Priorities for Model Refinement


…resulting in five key priorities for model refinement:

Streamline the Model

Strive for Accuracy & Consistency

Clarify Expectations, Requirements & Timelines

Align to Other Initiatives

Focus on Measures of Student Learning

By listening to educators in Warwick, Jamestown, and the gradual implementation districts all year, RIDE has identified five priorities for model refinement that will help make the RI Model stronger and more practical to use.

Priority: Clarify Expectations, Requirements and Timelines


What We Heard

• Teachers who have received information from their building administrators about the RI Model tend to understand it better than those who have not.

• Administrators and teachers report confusion over which aspects of the model are optional and which are required, as well as which decisions should be made locally and which are made at the state level.

Priority: Clarify Expectations, Requirements and Timelines


What We Are Doing

• Clarified expectations around the requirements for SLOs

• Requiring educators to set at least 2 and no more than 4 SLOs

SLO Claims, Challenges & Supports

Scott Marion


Claims to evaluate SLOs


We can create claims for SLOs and then consider the challenges and support for these claims

These claims would be part of a theory of action and validity argument

I present just a few examples to illustrate this point, but this should be done in more detail prior to implementing such an approach on a large scale

Learning Goal/Objective Claim


Claim: Teachers have the knowledge, skills, and attitudes (& ethics) to set meaningful, ambitious, and fair learning goals for students

Challenge: Who will guide, monitor, and/or evaluate the quality of these learning goals? This adds an extra (or at least different) significant validation requirement beyond test-based approaches.

It also places principals in the role of instructional leaders, and many might not have the skills (but this could be an opportunity if successful).

Opportunity: Teaching quality would likely improve if teachers, working with good leaders, were supported in the way they use data to establish goals for their students.

The Challenge of the “Objectives”

We have not seen evidence that teachers and other educators can generate high-quality SLOs without significant practice and training

Identifying meaningful learning goals appears to be quite difficult

Just like we learned with performance assessment, teachers need deep subject matter knowledge to do this well

We might be able to draw from some familiar work, such as Wiggins & McTighe's Understanding by Design and the assessment specifications being developed by both large-scale assessment consortia, but it is still a huge challenge


Grain Size

Finding the right grain size for the “objectives” is a major challenge. Some things to keep in mind:

There is no perfect grain size; we are searching for a "Goldilocks" criterion

The grain size should be inversely related, in part, to the number of SLOs on which the teacher will be evaluated

Larger grain size "objectives" might require multiple assessments, while more specific SLOs could probably be measured with a single performance task or other assessment


Grain Size Question?

Which will provide a more reliable estimate of a teacher's contribution to student performance:

A single large grain size SLO measured with multiple assessments, or

Several more specific SLOs, each measured with one assessment?

Might what we learned about the relative effects of adding tasks or raters on the generalizability (reliability) of performance assessments apply here? One way to reason about this appears below.
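One familiar result from that literature is the Spearman-Brown prophecy formula, which predicts the reliability of a composite of k parallel measures from the reliability of a single measure; treating SLO assessments as parallel forms is a strong assumption, so this is only a rough analogy:

$$\rho_k = \frac{k\,\rho_1}{1 + (k-1)\,\rho_1}$$

For example, if one assessment has reliability ρ₁ = 0.5, a composite of k = 3 such assessments would have reliability 3(0.5)/(1 + 2(0.5)) = 0.75, which hints at why a single larger-grain SLO measured with multiple assessments could yield the more reliable estimate.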


Rhode Island Examples: Grade 11, Writing Arguments

This objective statement is too broad: Students will improve their ability to write in response to informational text.

This objective statement is too narrow: Students will improve their ability to include textual evidence in written arguments.

This objective statement is acceptable: Students will improve their ability to analyze informational text and to write arguments informed by their analysis, grounded in germane textual evidence.

Assessment Claims, Challenges & Opportunities


Assessment and Analysis Claim


Claim: Teachers/schools/districts have assessment and analytic tools and capacity sufficient for judging whether students have reached the intended goals

Challenge 1: Are classroom assessment tools capable of validly measuring ambitious goals?

Challenge 2: If external assessments are used, would that lead to narrow goals to match the more limited tools (tail wagging the dog)?

Challenge 3: Is the analytic capacity available for supporting inferences about teacher contributions to student performance?

Opportunity: Could this be a lever for improving the quality of classroom assessment and evaluation tools and processes?

Assessments

We need high quality assessments to evaluate the extent to which students have achieved the goals

We do not have time today to talk about the challenges associated with finding assessments to evaluate SLOs, but for now:

Think broadly about "assessment"

Do not let the assessment drive the goal; the assessment should be used to support learning goals

The learning goal and assessment should be things that teachers would use in the classroom as part of good instructional practice


RI Priority: Focus on Accurate Measures of Student Learning

What We Heard in Rhode Island

• Teachers report that there is value in setting measurable goals for student learning, but more common assessments are needed to do this consistently across schools and districts.

Priority: Focus on Accurate Measures of Student Learning

What We Are Doing

• Connecting with other RI RttT initiatives, including the Comprehensive Assessment System project, which promotes assessment literacy, and the Interim Assessments project, which will provide tests in math, ELA, science, and social studies

• Transitioning from scoring sets of Student Learning Objectives on a 5-point rubric to a 4-point rubric

• The resulting 4-point scale translates to a Final Summative Matrix that becomes 4x4 instead of 4x5. The new matrix still highlights the critical importance of student learning as a primary indicator of educator effectiveness. (A sketch of how such a matrix lookup works appears below.)
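As a sketch of how a 4x4 summative matrix lookup works, the function below combines a student learning rating with a professional practice rating, each on a 1-4 scale. The axes and every cell value are purely illustrative assumptions, not RIDE's actual matrix.

```python
# Hypothetical 4x4 final summative matrix; rows index the student learning
# rating (1-4), columns the professional practice rating (1-4). Cell values
# are illustrative only, NOT RIDE's actual matrix.
RATING_LABELS = ["Ineffective", "Developing", "Effective", "Highly Effective"]

SUMMATIVE_MATRIX = [
    [1, 1, 2, 2],
    [1, 2, 2, 3],
    [2, 2, 3, 3],
    [2, 3, 3, 4],
]

def final_rating(student_learning: int, professional_practice: int) -> str:
    """Look up the final summative rating from two component ratings (each 1-4)."""
    cell = SUMMATIVE_MATRIX[student_learning - 1][professional_practice - 1]
    return RATING_LABELS[cell - 1]

print(final_rating(4, 3))  # -> "Effective" under this illustrative matrix
```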


Growth and Starting Levels

Many rules associated with incorporating student performance require the use of "growth" measures, which are generally defined as the difference between two measures at different time points

There are many reasons why this policy is misguided, most of which are discussed in detail in Marion et al. (2012), Considerations for Analyzing Educators' Contributions to Student Learning in Non-tested Subjects and Grades with a Focus on Student Learning Objectives, www.nciea.org

Bottom line is that we need to figure out a way to contextualize students’ varying starting points…


A “Rough Conditioning” Approach for SLOs

Using prior performance information (e.g., last year) or some early assessments in the current year, we can group students into 3-4 “performance” groups

SLO targets would then be differentiated according to the students’ starting group.

At least two ways to differentiate targets (see the sketch below):

Different levels of achievement (e.g., basic, proficient)

Different proportions of students reaching the same target (e.g., 80% of Level 3 students will achieve the target, 65% of Level 2 students will achieve the goal)
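A minimal sketch of the second option follows; the score bands that define the starting groups, and the Level 1 proportion, are hypothetical, while the 80% and 65% proportions come from the example above.

```python
# Rough conditioning sketch: group students by prior performance, then hold
# each group to a differentiated proportion reaching a common target.

def performance_group(prior_score: float) -> str:
    """Assign a starting group from last year's score (0-100); cut scores are hypothetical."""
    if prior_score >= 70:
        return "Level 3"
    if prior_score >= 50:
        return "Level 2"
    return "Level 1"

# Expected share of each group reaching the end-of-year target.
# Level 3 and Level 2 values follow the slide's example; Level 1 is hypothetical.
TARGET_PROPORTIONS = {"Level 3": 0.80, "Level 2": 0.65, "Level 1": 0.50}

def slo_met_by_group(students: list) -> dict:
    """For each starting group, check whether the share of students who
    reached the target meets that group's differentiated expectation."""
    groups = {}
    for s in students:
        groups.setdefault(performance_group(s["prior_score"]), []).append(s["met_target"])
    return {g: sum(flags) / len(flags) >= TARGET_PROPORTIONS[g]
            for g, flags in groups.items()}

students = [
    {"prior_score": 82, "met_target": True},
    {"prior_score": 74, "met_target": False},
    {"prior_score": 55, "met_target": True},
    {"prior_score": 41, "met_target": True},
]
print(slo_met_by_group(students))
# -> {'Level 3': False, 'Level 2': True, 'Level 1': True}
```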



Lessons Learned …

And Still To Be Learned

Rhode Island: Lessons Learned from Development of SLOs

• Understanding of Process for all involved

• Consistency in implementation

• Participant investment

• Balancing many contexts (e.g., special education)


Rhode Island: Lessons Learned from Development of SLOs

• Support and Training

• From State to Evaluators, from Evaluators to Educators

• Ongoing support and training is needed

• Materials as guidance for implementation


Rhode Island: Lessons Learned from Development of SLOs

• Collaboration is key

• Within schools and districts

• Teacher team SLOs

• Encourage best practices (e.g., scoring each other's work, using the same curriculum)

• Eases burden on evaluators


Scott Marion: We Still Have Much to Learn

If anyone tells you they have all the issues related to SLOs or NTGS solved, hold onto your wallet

This is all relatively new and there does not yet exist a body of research to shed light on the validity of using SLOs for educator evaluation

We think it offers tremendous potential for improving practice, but not without significant support


Scott Marion: We Still Have Much to Learn

We need to create an evaluation framework to help us design appropriate studies of SLOs and other aspects of educator evaluation, fortunately…


For more information on RI and to download detailed documents: http://www.ride.ri.gov/educatorquality/EducatorEvaluation

Center for Assessment: Evaluating Educator Evaluation Systems

The Center for Assessment’s annual conference (Reidy Interactive Lecture Series—RILS)

September 13-14, 2012 in Boston

Focused specifically on developing approaches for evaluating educator evaluation systems, with a terrific set of speakers, including:

Courtney Bell (ETS), Henry Braun (BC), Heather Hill & Corinne Herlihy (Harvard), and representatives from NYC, Denver, and Montgomery County public schools

For information: www.nciea.org or [email protected]


Continue the Conversation

Please continue the conversation in the private Evaluation Discussion Group thread

To Join the Private Evaluation Discussion Group

SCEE members may log into the collaboration site (www.ccsso.org/scee)

Mouse-over the “Discussions” navigation tab

Click on “Discussions Directory”

Click on “Evaluation Private Discussions”

Under the discussion group title and icon, click on "Request an Invitation"


Upcoming Webinars

September 11:

What’s next with Common Core Implementation?

October 3: SPECIAL SCEE WEBINAR

The Principal’s Role in Evaluating Teachers

October 9:

The Complexity of Teaching and Leading: Observing Teacher Practice


Thank you
