

Source: coe.wayne.edu/accreditation/addendum.pdf


Self-Study Report – Addendum

March 18, 2017

Introduction

On behalf of Wayne State University’s College of Education Initial Certification Program, we would like to take this opportunity to thank the CAEP Review Team for the detailed formative feedback review. In this addendum, we provide clarification and further evidence as requested. We recognize that while evidence was presented in our Self-Study Report, we used acronyms without definition, and visual representations might have helped to illuminate the narrative. As such, we hope that the following information provides the important details needed to demonstrate our capacity to prepare teacher candidates for student learning. The addendum is organized according to CAEP Standards 1-5 as presented in the Formative Feedback Review. We hope that in providing clarification and responding to each question, we have addressed the areas for improvement and stipulations. We also believe that the cross-cutting themes and selected improvement plan items are addressed through our answers within each standard.

Standard 1 Addendum: Tasks and Areas for Improvement

Task 1: Training Issues on Clinical Observation Assessment Tool

What were the training issues alluded to in the self-study? What training subsequently was provided to ensure reliability in the cooperating teachers' and supervisors' use of the clinical observation tool?

One of the training issues alluded to in the self-study was the use of the Framework for Teaching (Danielson, 2013) as an evaluation tool to score interns’ enacted classroom practice. Instructional coaches were not using the rubric consistently to score the assignment; this was noted when discussing the assignment during the Office of Clinical Experience’s (OCE) team meetings, when instructional coaches shared their stories of practice and raised questions for clarification. The Director of OCE, the Assistant Director, and the Clinical Instructional Specialist supported the instructional coaches with focused time as a team re-examining the assignment description, requirements, and rubrics. Group discussion of the coaches’ scoring, and their rationale for scoring drawn from stories of practice, provided an authentic context for improving their understanding of the assignment and how to score it. As a result of the discussion and the scores from Fall 2015, training on the use of the Framework for Teaching (FfT) (Danielson, 2013) and observation techniques was provided through professional development workshops in Winter 2016, in which the instructional coaches watched videos of teaching and used the FfT to score the teachers’ practice. Evidence of the agenda and work session is found in Exhibits 1.1a and 1.1b – OCC-PD-Agendas 1 and 2. Instructional coaches scored the teaching without discussing the video, then engaged in rich dialogue afterward to share their rationale for scoring. This exposed the wide range of scores in some areas of the FfT and helped the OCE team develop shared norms for what teaching should look like at the pre-student and student teaching levels.
This kind of professional development had taken place many years earlier, but with several changes in leadership in OCE and newly revised assessments, a return to this intentional, focused work was needed for the instructional coach team. These opportunities for watching videos, evaluating, discussing, and norming continued in the ongoing training in which the OCE team engaged with the New Teacher Center.


Much of this training also included the opportunity to engage mentor teachers from partner P-12 districts. This allowed use of the FfT as an evaluation tool to become more standardized, and expectations of teaching practice at the pre-student and student teaching levels to become more widely shared across sites.

Task 2: MTTC Test Scores

Please explain the discrepancy between the data provided by the state and the self-study narrative.

In the original self-study document, there was a discrepancy between the data provided by the state and what we examined in the narrative. This came about because we looked only at data for our completers, whereas the reports from MDE included all qualified test takers. We have since reviewed the data for all qualified test takers whose initial attempt fell within the MDE reporting dates of 08/01/2012 through 07/31/2015. If the content area test was passed on the initial attempt, it was counted as an initial pass. If it was eventually passed during the period reviewed, it was counted as a subsequent pass. If the test was never passed during the period reviewed, it was counted as never passed. An individual was counted only once (they either passed it initially, passed it eventually, or never passed it).
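The counting logic above can be expressed as a short sketch. This is illustrative only: the record shape (taker ID, attempt date, pass/fail) and the function name are hypothetical and are not drawn from the MDE or Wayne State reporting systems.

```python
from datetime import date

# Sketch of the counting logic described above (hypothetical data shape):
# each attempt is a tuple (taker_id, test_date, passed). Each qualified test
# taker is counted exactly once, in one of three mutually exclusive categories.
def classify_takers(attempts, start=date(2012, 8, 1), end=date(2015, 7, 31)):
    by_taker = {}
    for taker_id, test_date, passed in attempts:
        by_taker.setdefault(taker_id, []).append((test_date, passed))

    counts = {"initial_pass": 0, "subsequent_pass": 0, "never_passed": 0}
    for history in by_taker.values():
        history.sort()  # chronological order of attempts
        first_date, first_passed = history[0]
        # Only takers whose initial attempt falls within the reporting window.
        if not (start <= first_date <= end):
            continue
        if first_passed:
            counts["initial_pass"] += 1
        elif any(passed for _, passed in history):
            counts["subsequent_pass"] += 1
        else:
            counts["never_passed"] += 1
    return counts
```

Because each taker falls into exactly one category, the three counts sum to the total number of qualified test takers, which is the property the side-by-side comparison in Exhibit 1.2 relies on.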

Exhibit 1.2 (MTTC 3-year data report) is a side-by-side comparison of MDE’s numbers (left side of the report) and the numbers we obtained using the logic above (right side of the report). Our numbers differ; we have 105 more test takers than MDE reports. Our pass rates, however, are very similar: MDE’s cumulative percentage is 80.5%, while Wayne State’s result is 81.29%. Because of the difference in results, we will contact the state and ask for the underlying data that identifies each test taker. This will allow us to cross-reference the data, determine the discrepancies between our data and the state’s data, and correct our processes.

Another issue addressed in the Formative Feedback report was that some of our scores consistently fall below the benchmark. We observe that elementary social studies MTTC scores are below the state average. All content knowledge for elementary social studies pre-service teachers was previously provided outside of the College of Education via multiple courses. We addressed this issue through curriculum review and submitted a revision to the Elementary Program for elementary social studies pre-service teachers. Previously these majors took a minimum of 7 social studies related courses, including economics, world history, two courses in American history, and geography, and this was an area in which most of our elementary majors struggled. We concluded that the information in these courses did not connect to the requirements on the test. After exploring how other EPPs delivered the material for this content area, we decided to create two content area social studies courses for elementary teachers. This program was approved by the state, and candidates now take elementary social studies content courses within the College of Education that provide the content knowledge specifically expected of elementary teachers. The courses they now take are SSE 5720: Social Studies Content for Elementary Teachers 1 (see Exhibit 5.21 SSE 5720 syllabus); beginning Winter 2018, the second course will be available: SSE 6720: Social Studies Content for Elementary Teachers 2. This process began with an examination of our MTTC scores and proceeded through discussion with the departments in the College of Liberal Arts & Sciences, curriculum revision, and MDE approval (see Exhibit 1.7c MDE El Ed approval letter).

Task 3: Alignment to Career & College Ready Standards

Which multiple indicators/measures and corresponding rubrics evaluate candidates' ability to provide effective instruction for all students linked to college- and career-ready standards?
What do these data indicate about candidates' preparedness to teach to college- and career-ready standards?

We collect multiple measures to ensure our candidates demonstrate effective instruction that is linked to college- and career-ready standards. The assessments include the lesson plan, the digital self-study, and the clinical observation. Our candidates are expected to address student outcomes and learning goals that are based on the state standards, which are aligned with College & Career Ready Standards


such as:

• Common Core State Standards for ELA
• Michigan Science Standards
• Michigan Mathematics Standards, which are the Common Core State Standards for Mathematics.

As a whole data set, our completers averaged well over the 85% proficiency benchmark on the “Outcomes” element of the Lesson Planning Framework for Effective Instructional Design, which examines P-12 student learning outcomes. The 85% benchmark indicates proficiency for all levels of our candidates, including undergraduate (UG), post-bachelor (PB), and Master of Arts in Teaching (MAT); to reach proficiency on the “Outcomes” portion of the assessment, candidates must score a minimum of 3 on the 4-point rubric. This benchmark covers how candidates address outcomes and learning goals in all lesson plans that they design and implement (see Exhibit 1.3 Lesson Plan data F15-F16, pp. 1-4). The assignment asks teacher candidates to connect their lessons to state standards and student outcomes/learning goals, which are aligned with College and Career Ready Standards. In the Lesson Plan Rubric, these outcomes are assessed on whether they promote rigorous, high-level learning in the discipline, and can be located in the “Outcome” section (see Exhibit 2.6 – Lesson Plan Rubric). Similar to the Lesson Plan, the Digital Self-Study assessment also examines P-12 student learning outcomes aligned to College and Career Ready Standards on the “Setting Instructional Outcomes” element of its rubric. Candidates are assessed on how they design learning activities that follow a coherent sequence and align to instructional goals, found on the Digital Self-Study Rubric under 1e Designing Coherent Instruction (see Exhibit 2.5 – Digital Self Study Rubric). The data indicate that, for all three levels of our completers (UG, PB, and MAT), average scores meet or exceed the 85% benchmark that indicates proficiency, as scored with a minimum of 3 on the 4-point rubric (see Exhibit 1.4 – Digital Self Study data F15-F16, pp. 2-4).
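The benchmark arithmetic used throughout this section can be illustrated with a brief sketch. The scores below are invented for illustration and are not actual candidate data; the function name is hypothetical.

```python
# Illustrative sketch only: proficiency is a rubric score of at least 3 on
# the 4-point scale, and the benchmark is met when at least 85% of candidates
# reach proficiency on a given rubric element. Scores here are made up.
def percent_proficient(scores, cutoff=3):
    proficient = sum(1 for s in scores if s >= cutoff)
    return 100.0 * proficient / len(scores)

outcomes_scores = [4, 3, 3, 2, 4, 3, 3, 4, 3, 3]  # hypothetical "Outcomes" ratings
meets_benchmark = percent_proficient(outcomes_scores) >= 85.0
```

With nine of ten hypothetical candidates at 3 or above, the element would sit at 90% and clear the 85% benchmark.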
For the observational tool, candidates are also expected to set instructional learning outcomes for P-12 students that are aligned to College and Career Ready Standards. Both their mentor teacher and our instructional coaches assess the candidates. Ratings for this element of the rubric for our completers averaged well over the 85% proficiency cut rate for all levels (undergraduate, post-bachelor, and MAT), as indicated by scoring a minimum of 3 on a 4-point scale (see Exhibit 1.5 – Clinical Observation data F15-16, pp. 2-4). The data from these three assessments indicate that our program as a whole prepares our completers to proficiently address College and Career Ready expectations when planning and delivering their lessons. In the disaggregated data, very few licensure areas fall below the 85% proficiency benchmark on elements that examine P-12 student learning outcomes. Each program area examines its data annually, with last year being the first year of examining the data we currently use. Faculty had only one cycle of this data to examine, and many areas had Ns too low to support adequate recommendations. Analysis of all three cycles of data collection (Fall 15, Winter 16, and Fall 16) will occur at the March 22, 2017 Teacher Education Forum meeting. By examining data disaggregated by licensure area (see Exhibits 1.3-1.5 data reports) to discuss “College and Career Ready” practices, faculty will consider trends and implications for the entire program and will also examine results by licensure and endorsement area in order to make changes to courses, syllabi, and assignments throughout the program, ensuring each program has woven College and Career Ready Standards throughout its courses. We expect to make these recommendations and changes by the end of this semester for the coming academic year.


Task 4: Administration of Assessment Instruments

Exactly when during the candidates' tenure in their program does the EPP administer and score each assessment tool? Which assessments are administered prior to student teaching and which are administered during student teaching? What is the cycle for each program and each level?

A complete list of the Initial Certification Program assessment cycle is located in Exhibit 1.6 (Initial Certification Assessment Cycle). Table 1 below illustrates when candidates complete the performance assessments related to their coursework that are scored during pre-student teaching and student teaching. During pre-student teaching, 3 lesson plans, 3 clinical observations, the teaching self-study with digital video, and the e-portfolio are collected. During the student teaching semester, the same assignments are collected, along with the case study. Final results in the student teaching semester are used for accreditation reporting purposes. During the pre-student teaching semester, assignments from the following four courses, across all programs, are collected and scored: TED 5150 Analysis of Elementary School Teaching, TED 5650 Pre-Student Teaching: Clinical Experience for Secondary School Teaching, KIN 5780 Pre-Student Teaching for Kinesiology, and MED 4560 Practicum in Music Education. During the student teaching semester, data is collected from the following courses, which differ based on the candidate’s program and major: TED 5780/5790 Student Teaching/Early Childhood and Special Education, DNC 4410/4420 Student Teaching for Dance, KIN 5790 Student Teaching for Kinesiology, HE 5780 Student Teaching for Health Educators, and MED 4570 Student Teaching for Music Education.

Table 1: Key Assignments for Clinical Work

Pre-Student Teaching (TED 5150, TED 5650, MED 4560, KIN 5780): Lesson Plans 1-3; Clinical Observations 1-3; Teaching Self-Study with Digital Video; E-portfolio

Student Teaching (TED 5780/5790, DNC 4410/4420, KIN 5790, HE 5780, MED 4570): Lesson Plans 1-3; Clinical Observations 1-3*; Teaching Self-Study with Digital Video; Case Study; E-portfolio

Student Teaching, Special Areas (TED 5790: Early Childhood, Special Education): Lesson Plans 1-3; Clinical Observations 1-3*; Teaching Self-Study with Digital Video; Case Study; E-portfolio

*One of these observations requires the use of content-area specific technology (discussed in AFI 2).

Task 5: State Review of Programs

Where are the documents indicating the MDE's review and approval of each program at each level?

The MDE review letters are in Exhibit 1.7a MDE Licensure Program Approval Letters. Faculty review each licensure area to determine whether major changes need to be made to the program. If there are significant changes from how the program was originally approved, a proposal to the state is required. A recent example of such state approval came from our Elementary Education faculty, as described in Task 2 above.


This documentation of MDE approval for our Elementary Education program Option 1, a proposal request to reduce hours for the planned major, consists of the proposal to the MDE and their approval letter (see Exhibit 1.7b: Elementary Education Proposal and Exhibit 1.7c MDE El Ed approval).

AREAS FOR IMPROVEMENT

AFI 1: STATE REVIEW

The EPP provides no evidence of the state's review of each program to ensure that candidates apply their content and pedagogical knowledge, as reflected in the state's standards. Further, the data indicate that not all candidates in all programs pass the MTTC test of content.

The response in Task 5 indicates that the review letters are part of the report and that the programs are reviewed by the state. Wayne State University has an MTTC pass rate of 80.5% as a three-year aggregate score for the MTTC from August 2012 through July 2015. This rate includes students who have not completed the program; only those students who meet all requirements of the program are recommended for teacher certification. We believe this justification addresses the issue for this AFI.

AFI 2: TECHNOLOGY

None of the rubrics provided indicate that the EPP assesses the candidates’ ability to model and apply technology standards as they design, implement, and assess learning experiences.

As part of our areas for improvement from TEAC accreditation, the CAEP advisory team has been examining the use of technology in the teacher education programs. Even though each program individually fosters the use of technology by focusing on specific content-area technologies, we wanted our candidates to have two common experiences with technology during their teacher education program:

1. We created an assignment to be completed early in the program. This introductory task is assessed in the TED 6020 Computer Applications in Teaching course for the majority of our candidates, and in other technology-infused courses for candidates who do not take TED 6020.

2. One of the lessons observed by the clinical supervisor during student teaching (at the end of the program) requires the use of content-specific technology for instruction.

The International Society for Technology in Education (ISTE) Standards for Teachers were used when creating the assignments and the rubric. We paid special attention to whether our assignments and rubrics address the following ISTE standards for teachers:

• “Facilitate and inspire student learning and creativity
o Teachers use their knowledge of subject matter, teaching and learning, and technology to facilitate experiences that advance student learning, creativity, and innovation.
• Design and develop digital age learning experiences and assessments”
(https://www.iste.org/standards/standards/standards-for-teachers)

Initial Technology Task: We created a rubric for an initial technology task that candidates complete at the beginning of the teacher education program. The majority of our candidates take the TED 6020 course, whose description promises a “variety of hands-on experiences where technology is used as a tool to support instruction and assessment purposes in K-12 classrooms. Course activities introduce students to educational technology standards.” Moreover, we worked with faculty from other program areas (such as physical education and health education) in developing the rubric for this initial technology task. Faculty teaching the course score this technology-related assessment. Candidates are expected to reach


the level of proficiency by being rated a 3 on a 4-point rubric (see Exhibit 1.8. Technology Rubric). Even though the rubric is the same for all candidates across the different programs, the assignments in the courses where the technology-based assessment is scored can differ.

In TED 6020, candidates evaluate two lesson plans in their discipline and reflect by focusing on how technology is integrated to engage learners and build deeper learning in their content-specific area (see Exhibit 1.9. TED 6020 Technology Assignment). In the physical education course, candidates create and present two different posters: one is a QR (Quick Response, a 2-dimensional bar-code link) skill poster that includes an instructional video link, and the other is a Public Address (PA) promotional poster that includes a QR code with additional information (see Exhibit 1.10. PE Assessment for Technology).

Content-Area Specific Technology Use during Student Teaching: One of the lessons observed by instructional coaches during student teaching (at the end of the program) requires the use of content-specific technology for instruction, and the lesson is specifically assessed with regard to its technology use. The items in the lesson plan assignment (1d, 1e, 2e, 3c, domain 5) should address technology used to:

a. engage learners and build deeper learning,
b. demonstrate knowledge of resources, and
c. organize physical space.

To document this requirement, a content-specific technology criterion was added to the observation rubric (see Table 2).

Table 2: Content-Specific Technology Criterion

Component: Content-Specific Technology Integration

Unsatisfactory: Teachers use technology as a stand-alone tool, which does not facilitate, or only poorly facilitates, student learning in the content area.

Basic: Teachers attempt to use their knowledge of subject matter, teaching and learning, and technology, which may partially facilitate student learning in the content area.

Proficient (Expectation Level): Teachers use their knowledge of subject matter, teaching and learning, and technology to facilitate experiences that advance student learning in the content area.

Distinguished (The Rare Intern): Teachers use their knowledge of subject matter, teaching and learning, and technology to facilitate high-level, rigorous experiences that advance content-specific learning, creativity, and innovation.

We envision that through this intentional documentation of effective technology use (the initial technology task, the development of the ISTE standards in methods courses, and the observation during student teaching), we can demonstrate how we foster our candidates’ effective and appropriate technology use to advance student learning (see Figure 1). At this point, we expect our candidates to become familiar with general and some content-specific technologies and with the ISTE standards for teachers and P-12 students, and to develop a critical eye for appropriate use of technology. As stated earlier, each content area program nurtures candidates’ technology knowledge and skills in its methods courses. In these courses, the aim is to help our candidates focus on how they will advance student learning by using their knowledge of subject matter, teaching and learning, and technology. Finally, at the last stage, we want to observe how they plan and teach with technology.


Figure 1. Fostering Technology throughout the Teacher Education Program: (1) initial technology task familiarizing candidates with technology; (2) methods courses fostering effective and appropriate use of content-area technologies; (3) content-area specific technology use during student teaching.

AFI 3: DISAGGREGATED DATA

CAEP requires that three cycles of data be presented, disaggregated by program and level, and analyzed for strengths, weaknesses, and trends.

Faculty review their programs’ data annually near the end of our winter semester. With the many changes in our data from previous accreditation reports, faculty reviewed only one of the two cycles of data that we submitted last summer. Examining strengths, weaknesses, and trends by licensure program area was not trustworthy because many programs had Ns too low to support adequate recommendations. We currently have three cycles of disaggregated data, from Fall 2015 to Fall 2016. The CAEP advisory committee has preliminarily examined the data and notes that almost all licensure program areas are above the 85% proficiency benchmark that indicates our candidates are prepared to be effective teachers. Initial examination of the data by licensure area for each data set on the Lesson Plan, Digital Self-Study, Clinical Observation, Case Study, and E-Portfolio is described below. It should be noted that even though several results fall below the 85% benchmark, when averaged across multiple semesters, for the most part all licensure areas met or exceeded it.

LESSON PLAN

The Lesson Plan rubric consists of 5 major components for assessment (see Exhibit 2.6 – Lesson Plan Rubric). The expectation is that our completers meet an average of 85% proficiency on this assessment, meaning that candidates receive a minimum of 3 out of 4 on the 4-point scale. Results from the Lesson Plan assignment (Exhibit 1.3) indicate areas for further analysis where completers’ proficiency scores fall below 85%. Completers with a concentration in English scored 75%, and completers with a concentration in Social Studies or Elementary Language Arts scored 80%, for the Assessment area of the Lesson Plan in Fall 2015. Completers with a concentration in Elementary Language Arts also scored 80% for the Outcomes area of the Lesson Plan in Fall 2015. Completers with a major in Visual Art (n = 3) scored 66.67% for the Instructional Practice area of the Lesson Plan in Winter 2016.

DIGITAL SELF-STUDY



The Digital Self-Study rubric consists of 21 major components for assessment (see Exhibit 2.5 – Digital Self-Study Rubric). The expectation is that our completers meet an average of 85% proficiency on this assessment, meaning that candidates receive a minimum of 3 out of 4 on the 4-point scale. Results from the Digital Self-Study assignment (Exhibit 1.4) indicate areas for further analysis where completers’ proficiency scores fall below 85%. Completers with a concentration in Chemistry (n = 2) had an overall score of 66.67% for Winter 2016. Post-bachelor degree completers scored 57.1% for the Using Questioning and Discussion Techniques area in Winter 2016. Cognitive Impairment teacher candidates scored 80% for the Designing Student Assessments, Using Assessment in Instruction, Reflecting on Teaching, Maintaining Accurate Records, Participating in the Professional Community, Growing & Developing Professionally, and Showing Professionalism areas in Fall 2016. Early Childhood General & Special Education completers scored 80% in the Managing Classroom Procedures area in Fall 2015 and 77.8% in Using Questioning & Discussion Techniques in Winter 2016; completers in the same area (n = 2) scored 50% in the areas of Demonstrating Knowledge of Resources, Designing Student Assessments, Maintaining Accurate Records, and Participating in the Professional Community in Fall 2016. Elementary Education candidates scored 81.3% in the Designing Student Assessments category in Fall 2016. Completers with a concentration in English scored 75% in the areas of Using Questioning & Discussion Techniques and Participating in the Professional Community in Fall 2015. Completers with a concentration in Mathematics (Elementary or Secondary) (n = 4) scored 75% in the areas of Setting Instructional Outcomes, Designing Student Assessments, Using Assessment in Instruction, and Maintaining Accurate Records for Fall 2016.
Social Studies teacher candidates scored 80% in the areas of Setting Instructional Outcomes, Managing Student Behavior, Using Questioning & Discussion Techniques, and Reflecting on Teaching; 70% in the areas of Designing Coherent Instruction and Managing Classroom Procedures; and 60% in Designing Student Assessments for Fall 2015.

CLINICAL OBSERVATION

The Clinical Observation rubric consists of 19 major components for assessment (see Exhibit 2.7 – Clinical Observation Rubric). The expectation is that our completers meet an average of 85% proficiency on this assessment, meaning that candidates receive a minimum of 3 out of 4 on the 4-point scale. Results from the Clinical Observation assignment (Exhibit 1.5) indicate areas for further analysis where completers' scores fall below 85%. Additional analysis is suggested where clinical instructors' and mentor teachers' ratings do not match. Instructional coaches rated Cognitive Impairment teacher candidates 80% in the area of Maintaining Accurate Records in Fall 2016. Mentor teachers rated Early Childhood-General & Special Education teacher candidates 77.8% in the areas of Managing Classroom Procedures and Participating in the Professional Community in Winter 2016. Mentor teachers rated English teacher candidates below proficiency in multiple categories. In Fall 2015 (n = 6), candidates scored 50% for Setting Instructional Outcomes, Using Questioning & Discussion Techniques, and Demonstrating Knowledge of Resources; 66.7% for Managing Student Behavior; and 83.3% in the areas of Demonstrating Knowledge of Content & Pedagogy, Managing Classroom Procedures, Communicating with Students, and Showing Professionalism. In Winter 2016 (n = 5), teacher candidates scored 80% in the areas of Demonstrating Knowledge of Students, Designing Coherent Instruction, Creating an Environment of Respect & Support, Establishing a Culture for Learning, Managing Classroom Procedures, Managing Student Behavior, Communicating with Students, Using Questioning & Discussion, Engaging Students in Learning, and Demonstrating Flexibility & Responsiveness.
In Fall 2016 (n = 6), mentor teachers indicated a score of 66.7% in the area of Designing Student Assessments, and scores of 83.3% in the areas of Demonstrating Knowledge of Students, Demonstrating Knowledge of Resources, Using Assessment in Instruction, Demonstrating Flexibility & Responsiveness, Participating in the Professional Community, and Showing Professionalism. Mentor teachers rated Mathematics completers (Elementary or Secondary) below proficiency in multiple categories. In Fall 2015, 77.8% proficiency was indicated for the areas of Managing Classroom Procedures, Using Questioning & Discussion Techniques, Engaging Students in Learning, and Participating in the Professional Community. In Winter 2016, mentor teachers rated Managing Student Behavior at 83.3%. In Fall 2016 (n = 4), mentor teachers indicated proficiency at 75% in the areas of Managing Student Behavior, Managing Classroom Procedures, and Participating in the Professional Community. Instructional coaches rated Social Studies completers at 80% proficiency in Fall 2015 for Designing Coherent Instruction. In Fall 2015, mentor teachers rated completers 77.8% proficient in Demonstrating Knowledge of Students, Managing Student Behavior, Using Questioning & Discussion Techniques, and Demonstrating Flexibility & Responsiveness. In Winter 2016, mentor teachers indicated completers were 66.7% proficient in Managing Student Behavior, and 83.3% proficient in the areas of Managing Classroom Procedures, Using Questioning & Discussion Techniques, Engaging Students in Learning, Reflecting on Teaching, and Maintaining Accurate Records. In Fall 2016, mentor teachers indicated 66.7% proficiency in the areas of Designing Student Assessments and Using Assessments in Instruction; 83.3% proficiency was indicated for Demonstrating Knowledge of Students, Managing Student Behavior, Using Questioning & Discussion Techniques, and Demonstrating Flexibility & Responsiveness.

CASE STUDY

The Case Study rubric consists of 5 major components for assessment (see Exhibit 2.4 – Case Study Rubric). The expectation is that our completers meet an average of 85% proficiency on this assessment, meaning that candidates receive a minimum of 3 out of 4 on the 4-point scale. Results from the Case Study assignment (Exhibit 1.11) indicate areas for further analysis where completers' proficiency scores fall below 85%. Completers with a concentration in English scored 75% in the areas of Designing Behavior and Academic Intervention Plan, Behavior and Academic Intervention Plan (Physical Space), and Results and Discussions in the Fall 2015 and Winter 2016 semesters. Completers with a concentration in Mathematics (Elementary or Secondary) scored 83.3% for Designing Behavior and Academic Intervention Plan and 75% for Behavior and Academic Intervention Plan (Physical Space) in Fall 2015. Social Studies teacher candidates scored 70% in the areas of Designing Behavior and Academic Intervention Plan, Behavior and Academic Intervention Plan (Intervention Sequence), and Results and Discussions in Fall 2015.



E-PORTFOLIO

The E-Portfolio rubric consists of 8 major components for assessment (see Exhibit 2.8 – E-Portfolio Rubric). The expectation is that our completers meet an average of 85% proficiency on this assessment, meaning that candidates receive a minimum of 3 out of 4 on the 4-point scale. Results from the E-Portfolio assignment (Exhibit 1.12) indicate areas for further analysis where completers' proficiency scores fall below 85%. Teacher candidates with a concentration in Mathematics (Elementary or Secondary) scored 66.7% for having created a LinkedIn Profile and Digital Resume in Fall 2016. Similarly, Social Studies candidates received a below-proficient rating of 71.4% for creating a LinkedIn Profile and Digital Resume in Fall 2016.

Further analysis of the data will occur on March 22, 2017 at a Teacher Education Forum meeting. Program faculty will analyze the data for all three cycles (Fall 2015, Winter 2016, and Fall 2016). By examining performance assessment data disaggregated by licensure area (see Exhibits 1.2, 1.3, 1.4, 1.5, 1.11, and 1.12 – CAEP data reports) as well as other CAEP data, program faculty will discuss strengths, weaknesses, and trends for each specific program. Trends will then be discussed across programs to determine whether program-wide changes are needed. We will make recommendations for changes for the 2017-18 academic year, and we will be ready to discuss some of these recommendations at the April 9-11 visit. Going forward, analysis of program data will occur annually near the end of the winter semester by all program faculty. The results will be shared with part-time faculty, and changes to courses, syllabi, and assignments may follow.

AFI 4: COLLEGE AND CAREER READY

None of the EPP's rubrics assess candidates' ability to demonstrate skills and commitments to providing P-12 students with access to college- and career-ready standards.

The issues discussed in Task 3 should address the college- and career-ready standards that are evaluated on multiple assessments. The data from three assessments (the lesson plan, the digital self-study, and the observations) indicate that our program as a whole prepares completers to proficiently address college- and career-ready expectations when planning and delivering lessons, with a whole-program average score close to 98%. This meets our benchmark that 85% of our completers receive at least a 3 on the 4-point scale on items assessing candidates' ability to demonstrate skills and commitment to providing P-12 students with access to college- and career-ready standards.


Standard 2 Addendum: Tasks and Areas for Improvement Task 1: Co-selection of the clinical educators/mentor teachers C-1: How and when does the EPP and its P-12 partners come together to engage in co-selection of clinical educators’/mentor teachers? Signed MOUs with districts (reviewed and signed each academic year) articulate how we co-select classroom teachers from partner P-12 schools to serve as mentor teachers for teaching candidates. These MOUs state that the district shall: “Co-identify with the university effective mentor teachers to work with interns.” In order to serve as a mentor teacher, those interested must first meet criteria established in the MOUs (listed in the section below titled “Co-Selection of Mentor Teachers”). Those who meet the criteria are invited by the district and the Office of Clinical Experiences to join them for an informational meeting; the purpose of this meeting is to clarify the expectations, roles, and responsibilities of serving as a Wayne State mentor teacher. The meetings give potential mentors a context to meet the university team and to ask questions about the program, co-teaching, and coaching models. The meetings also give the university team a context for meeting potential mentor teachers informally, sometimes for the first time. After the mentor teacher has attended the informational meeting and has been selected as a mentor, instructional coaches hold ongoing check-ins with mentor teachers; mentors also meet with the Clinical Instructional Specialist, who checks in with all mentor teachers and provides informal professional development with them to norm mentoring practice across classrooms, buildings, and sites. Figure 1 provides a graphic representation of this process.

Figure 1: Co-Selection of Clinical Educators

Co-Selection of Mentor Teachers

The Office of Clinical Experiences (OCE) identifies and selects mentor teachers in collaboration with partner schools’ administrators and/or instructional coaches. A teacher must evidence the items on the checklist in Figure 2 in order to be considered as a mentor teacher for WSU teaching interns. This list is adapted for specific partners as necessary. In general, the discussion begins with these criteria.

Figure 2: Mentorship Criteria Checklist

• He must be a PK-12 classroom teacher in one of our partner schools.
• She must hold a current, valid State of Michigan teaching certificate in the grade level and/or content area for which she would like to mentor teaching interns.
• His school administrator(s) must be supportive of his decision to serve as a mentor teacher.
• She must be identified as an effective teacher by school/district evaluations of her teaching and by her students’ academic growth and data.
• He must be committed to creating opportunities for equity and success for all learners in his classroom.
• She must have a positive classroom climate and culture, as identified by observations of her teaching by school administrators and/or instructional coaches.
• He must be deeply reflective about his teaching practice and view teaching as a profession that requires ongoing inquiry into practice for professional growth.
• She must be willing to open her teaching practice to interns as co-teaching colleagues and be willing to mentor them using a coaching model.
• He must be willing to participate in university-led professional development to learn the tools for mentoring and coaching that he will need to grow as an effective mentor teacher.
• She must be willing to partner with the university instructional coach in mentoring interns.
• He must model positive professional dispositions and commitment to the teaching profession.

There is not a specific requirement on the number of years a teacher must have taught prior to serving as a mentor teacher because the overall skills and dispositions listed above are of much greater importance than number of years served; however, teaching interns will not be placed with a teacher who is a first-year teacher or a teacher in his/her first year in a new building. C-2: What is the EPP’s process for the ongoing evaluation of clinical educators? Mentor teachers are identified as being effective teachers by their impact on P-12 student achievement and observations of their teaching, as determined by the school district partner. The university’s Office of Clinical Experience (OCE) leads professional development for mentor teachers, with particular emphasis on how to lead coaching conversations with teaching interns and how to mentor interns with a co-teaching model. Mentor teachers are invited and some have participated with university instructional coaches in shared professional development with the New Teacher Center, focused on coaching conversations to facilitate the growth of teaching interns (see Exhibit 1.1a OCE-Prof. Dev. - Agenda 1 and 1.1b OCE Prof. Dev. - Agenda 2). Mentor teachers are evaluated on an on-going informal basis via informal feedback from interns and instructional coaches as well as observations of the Clinical Instructional Specialist during professional development with the mentor teachers. University Instructional Coaches are part-time faculty and OCE evaluates them as clinical educators in accordance with the WSU part-time faculty contract and the respective collective bargaining agreement as it relates to job postings, hiring, and evaluation of field instructors. For OCE field instructors, after the first, fourth, seventh, and thirteenth time teaching the clinical course, they are required to submit an
adjunct faculty binder and be observed in their class. These binders ask for the field instructor’s Student Evaluation of Teaching (SET) scores, syllabus, and grading of student materials for assessment purposes (see Exhibit 2.1a – Part-time faculty evaluation). Outside of these formal evaluations of their teaching, field instructors work with the Clinical Instructional Specialist to engage in meta-coaching and receive feedback on their coaching of interns. The process involves each university instructional coach having a coaching conversation with a teaching intern while the Clinical Instructional Specialist observes the conversation and takes notes based on a coaching goal the field instructor has identified. After the coaching conversation, the intern returns to the classroom, and the field instructor and Clinical Instructional Specialist have a meta-coaching conversation in which the latter supports the instructor’s reflective practice to facilitate professional growth, in order to support more effective clinical coaching and, hence, interns’ growth. This meta-coaching also provides ongoing, informal feedback on field instructors’ practice and surfaces areas that need to be addressed in upcoming OCE team meetings and professional development in order to support field instructors. This feedback is peer-to-peer and not part of the field instructors’ official evaluation. C-3: How does the EPP use the provided checklist data to implement programmatic changes or improvements in the selection of its mentor teachers? The goal of the Office of Clinical Experiences (OCE) is to provide support to classroom teachers in how to mentor a new teacher. We invite mentor teachers to professional development workshops regarding observational techniques and effective coaching in teaching (see Exhibits 1.1a OCE-Prof. Dev. - Agenda 1 and 1.1b OCE-Prof. Dev. - Agenda 2).
When OCE receives feedback from instructional coaches, P-12 school leaders, and/or teaching candidates that a mentor teacher is struggling with mentoring candidates, an OCE support staff member documents the communication and a review is conducted. The checklist in Figure 2 guides decisions regarding mentor teachers, such as whether the candidate completes student teaching with that mentor and/or whether incoming candidates will be placed with the same mentor teacher in the future. The checklist has supported decisions not only to continue the work with effective mentor teachers; we have also removed candidates from specific classrooms and created a list of teachers evaluated as ineffective based on the criteria presented in the checklist (see Exhibit 2.1aa: Areas of Concern re Mentor Teachers). The mentor teachers’ names and districts have been removed for confidentiality, but as evidence, notes regarding the rationale for removing the candidate from the classroom are included. We believe that teaching and mentoring are overlapping but somewhat distinct skill sets. We are committed to supporting the growth of mentor teachers and do so through contexts such as the shared professional development in which we engaged with partner school mentor teachers, instructional coaches, and the New Teacher Center, as well as the ongoing professional development that OCE provides for mentor teachers, which is required per the MOUs that P-12 partners sign.
It is stated in the mentor teacher’s responsibilities: “Mentor teacher commits to engage in OCE-led professional development to grow use of mentoring and coaching tools.” In the district-level roles and responsibilities section of the MOU, it states that the district will “Release mentor teachers from classroom teaching on mutually agreed upon dates for professional development with university related to mentoring, coaching, co-teaching, etc.; PD will be provided by OCE at no cost to district; teaching interns will be in classrooms teaching PK-12 students so subs will not be required.” Task 2: Placement of candidates in settings with P-12 students from diverse backgrounds A-1: The EPP provides no evidence that it ensures that all candidates have clinical experiences with P-12 students from diverse populations. (This should be a breakdown by demographics of school districts chosen.)


OCE places teaching candidates in P-12 schools where interns teach underserved children in urban communities and inner-ring suburbs, including Detroit, Dearborn, Ferndale, Van Dyke, etc. Teaching interns teach in Title I schools that serve diverse families, including those living in poverty, students of color, language learners, immigrant households, and other underserved children. As an urban university, Wayne State is committed to preparing teacher candidates who are prepared to teach underserved children. See Table 2.1 for partnership districts’ ethnic breakdown and Table 2.2 for their school demographics.

Table 2.1: District Ethnicity

District         American Indian   Asian   African American   Hispanic/Latino   Native Hawaiian/Other   Two or More Races   White
Ferndale         0%                1%      63%                3%                0%                      7%                  26%
U Prep           0%                0%      96%                0%                0%                      4%                  0%
Wayne Westland   0%                1%      35%                5%                0%                      4%                  55%
Van Dyke         0%                4%      55%                3%                0%                      9%                  29%
Dearborn         0%                1%      4%                 2%                0%                      0%                  *93%
Grosse Pointe    0%                2%      16%                3%                0%                      4%                  75%
DPS              0%                1%      82%                13%               0%                      0%                  4%

*Note: There is not a category for Arab-American; however, Dearborn has the largest proportion of Arab Americans in the United States and is home to a growing number of refugees of war, whose children are served by Dearborn Public Schools.

Table 2.2: School Demographics

District/School                                     Breakfast*   Free/Reduced Lunch**   Econ. Disadvantaged   4-Year Grad Rate (2014-15)   Avg Class Size K-3
Ferndale                                            59.3%        64.5%                  62.0%                 60.93%                       13.4
University Preparatory Academy                      22.3%        61.7%                  74.4%                 91.67%                       12.8
University Preparatory Science and Math Academy     6.1%         53.3%                  65.3%                 93.5%                        20.5
Wayne Westland: Winter-Walker Elementary School     100%         74.0%                  n/a                   n/a                          n/a
Van Dyke                                            84.6%        92.0%                  63.6%                 66.34%                       21.8
Dearborn                                            36.3%        68.1%                  68.5%                 89.88%                       22.5
Grosse Pointe                                       8.3%         55.3%                  13.3%                 Greater than 95%             17.5
Detroit City School District                        81.0%        64.6%                  74.0%                 77.35%                       16.7

*Total breakfast participation as a percentage of total lunch participation.
**Free and reduced lunch participation by eligible students.

A-2: The EPP does not provide a description of the selection process of placement sites. Prior to 2014-15, OCE placed students in districts that were widely distributed across the Metro Detroit area. With the call from CAEP Standard 2 to focus on deeper school partnerships, OCE has narrowed its scope of partnerships and focused on schools that could provide our students with an urban, Title I school environment. To narrow the scope of potential partners, the director and assistant director of OCE visited multiple school districts in 2014-15 and met with school superintendents, assistant superintendents, principals, assistant principals, and staff. The director and assistant director visited school locations and observed classrooms in order to find districts that would be a good match for developing partnerships. OCE focused on selecting schools that served diverse, underserved communities; had a positive school culture and climate; were committed to education as a tool for equity; enacted rigorous curriculum and instruction; and welcomed developing clinical models with co-teaching and the potential for a year-long internship in partnership with the university. The schools that were selected to become partnership schools, in essence, could prepare effective urban educators, in accordance with our college theme. B-1: Partners co-construct mutually beneficial P-12 school and community arrangements, including technology-based collaborations, for clinical preparation and share responsibility for continuous improvement of candidate preparation. The MOUs that OCE has on file with the partnership schools define the roles and responsibilities of all stakeholders. Every term, OCE meets with district partners to complete a check-in on the partnership. There are also meetings with mentor teachers throughout the academic year in order to provide professional development, which benefits our partnership.
School district representatives and mentor teachers serve as reviewers for the College of Education Capstone Conversation event and are invited to attend our OCE Advisory Board meetings to provide feedback and input. The focus with our school partners is to build a teacher pipeline, which allows our students to become employed in these districts when vacancies exist. A great example of this is one of our partnership districts, University Preparatory Academy (UPA). This partnership district recently had vacancies for the start of the 2016-17 academic year, and WSU students filled close to 80% of these teaching vacancies. This was unprecedented, and UPA’s district leaders attribute it to the teacher pipeline that has been co-constructed between WSU and UPA. C-1: How does the EPP ensure that items included in the various MOUs are achieved? OCE ensures that items included in the MOUs are achieved by meeting formally with the school partners on a regular basis, at least once each term, and informally through candidate, supervisor, and mentor teacher
feedback. The MOUs are discussed on a regular basis and are open for re-interpretation. OCE also receives ongoing, daily feedback on the enactment of the MOUs through field instructors and other OCE team members who are in the P-12 schools. We are a team that is engaged in schools and regularly visits schools and classrooms; this ongoing connection allows us to see in real time the ways that MOUs are enacted in daily contexts with partners. These regular visits to schools also make for open communication with school partners and quick resolution of any matters that arise. C-2: What process is used to track candidates’ placements to ensure that all candidates in all programs at all levels (undergraduate, post-baccalaureate, graduate) have experiences with P-12 students from diverse populations? As defined by the InTASC standards and adapted by CAEP, diversity is defined as

“(1) Individual differences (e.g., personality, interests, learning modalities, and life experiences), and (2) group differences (e.g., race, ethnicity, ability, gender identity, gender expression, sexual orientation, nationality, language, religion, political affiliation, and socio-economic background) (InTASC Model Core Teaching Standards, p. 21)”

The Metropolitan Detroit area is home to a uniquely diverse population of both individual and group differences, which affords all of our candidates who complete their student teaching an opportunity to work with a diverse student population. Tables 2.1 and 2.2 above reflect the diversity of our school partners and the general demographics of the surrounding Detroit area. All of our candidates at all program levels teach in the Metropolitan Detroit area and therefore work with individual and group differences. The Office of Clinical Experiences creates a master list of all candidates in various field experiences for each term (see Exhibits 2.1c-e Master lists); these spreadsheets include district and school information. C-3: What is the sequence of field experiences for post-baccalaureate and graduate candidates? The sequence of field experiences for post-baccalaureate and graduate candidates is the same as for our undergraduate students. All candidates are required to complete one phase of pre-student teaching followed by one phase of student teaching. (Note: Candidates in specialty areas such as early childhood, art education, and special education are required to complete a third phase of field experience per Michigan Department of Education requirements.) C-4: Who chooses the placement sites? How does the EPP ensure diversity of placements? The director and assistant director choose the placement sites based upon the site visits that were conducted during the 2014-15 academic year, as described in Task 2, A-2 above. To ensure the diversity of placements, we continually work with our candidates and school partners to receive feedback and make frequent site visits. OCE places teaching candidates in P-12 schools where interns teach underserved children in urban communities and inner-ring suburbs, including Detroit, Dearborn, Ferndale, Van Dyke, etc.
Teaching interns teach in Title I schools that serve diverse families, including those living in poverty, students of color, language learners, immigrant households, and other underserved children. As an urban university, Wayne State is committed to preparing teacher candidates who are prepared to teach underserved children, as described in Tables 2.1 and 2.2 above. Task 3: Training provided for P-12 partners A-1: Data collected from the trainings and professional development that drive programmatic changes.


During a review of data from Fall 2015 assessments, the Accreditation Advisory Committee noticed that scores were lower for clinical observations than in prior semesters and asked OCE to consider ways to improve the performance of our candidates. At the same time, clinical coaches expressed challenges in using the Danielson Framework while incorporating multiple newly revised assessments at one time. They discussed being overwhelmed and felt that they were not being as supportive of the student teachers because they were more focused on understanding the use of the Danielson Framework. As a result of the lower observation scores and the feedback from the clinical coaches, professional development was introduced. On February 8 and March 23, clinical instructional coaches participated in professional development led by the Office of Clinical Experience. As part of the agenda, the coaches watched multiple videos of student teacher performance and were asked to rate each candidate (see Exhibit 1.1a – OCE-Prof. Dev.-Agenda 1 and Exhibit 1.1b – OCE-Prof. Dev.-Agenda 2). After rating, the coaches discussed scoring expectations and common ways that the group would review candidates’ performance. After a review of the Winter 2016 data, the Accreditation Advisory Committee noticed that clinical observation scores had improved and asked a faculty expert in statistical evaluation to provide a quantitative analysis of the changes in mean Clinical Observation scores from Fall 2015 to Winter 2016. We wanted to know whether the change was statistically significant and, if so, whether the professional development may have made a difference. Professional development regarding the use of the Danielson Framework had not occurred in the Fall; in the Winter, however, uniform training was provided to the clinical educators through professional development.
Cronbach’s alpha, a measure of internal consistency reliability, was computed after listwise deletion on the N = 161 cases with full records. The result was α = .959 based on the 22 items on the rating scale, which is an extremely high level of internal consistency.

A two-independent-samples t-test was conducted on the average clinical observation scores (1 = unacceptable to 4 = distinguished) of the student teachers, as rated by the clinical supervisors. The preliminary test of the underlying assumption of homoscedasticity was statistically significant (Levene’s F = 23.99, p < .001), indicating the assumption was violated. Therefore, the Welch-Aspin t-test with the Satterthwaite adjustment to the degrees of freedom was used in place of the usual t-test. The result, t = -2.94, df = 109.002, was statistically significant (p = .004). This indicated that the student teachers in the Winter 2016 cohort (mean = 3.28, n = 71) scored statistically significantly higher than the Fall 2015 cohort (mean = 3.06, n = 60). As a result of the data above, the Office of Clinical Experiences recognized the need to incorporate professional development regularly and will continue to rely upon data to make programmatic changes. A-2: Verification of training offered to mentor teachers and field instructors. Training offered to mentor teachers and field instructors ranged from formal training with the New Teacher Center, during which mentor teachers engaged in shared training on coaching with field instructors, to informal P-12 school site-based meetings with districts and teachers when issues, concerns, or needs for clarification arose. District leaders emailed or phoned OCE, and OCE also reached out to partnership schools (see Exhibit 2.2 – training email verification). This email, sent from OCE to one of our partnership schools, is an example of an informal needs assessment for professional development with mentor teachers. It demonstrates the ongoing communication between school partners and WSU to support the training of mentor teachers. Exhibits 1.1a OCE-Prof. Dev. - Agenda 1 and 1.1b OCE-Prof. Dev. - Agenda 2 provide additional evidence and include the notes from the professional development.
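The reliability and mean-comparison analyses reported under A-1 above can be sketched as follows. This is a minimal illustration only: the score arrays are randomly generated stand-ins, not the actual rating records; only the cohort sizes (n = 60 and n = 71), the 22-item scale, and the use of Welch's unequal-variance adjustment are taken from the report.

```python
import numpy as np
from scipy import stats


def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_cases x n_items) matrix of ratings."""
    item_vars = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of total scores
    k = item_scores.shape[1]
    return k / (k - 1) * (1 - item_vars.sum() / total_var)


# Stand-in data: 161 complete cases rated on 22 items (1-4 scale).
rng = np.random.default_rng(0)
base = rng.integers(1, 5, size=(161, 1))                       # shared signal
scores = np.clip(base + rng.integers(-1, 2, size=(161, 22)), 1, 4)
alpha = cronbach_alpha(scores.astype(float))

# Stand-in cohort averages; Levene's test checks homoscedasticity, and
# equal_var=False gives the Welch/Satterthwaite-adjusted t-test used when
# that assumption is rejected.
fall_2015 = rng.normal(3.06, 0.6, size=60)
winter_2016 = rng.normal(3.28, 0.3, size=71)
levene = stats.levene(fall_2015, winter_2016)
t_res = stats.ttest_ind(fall_2015, winter_2016, equal_var=False)
print(round(alpha, 3), round(t_res.statistic, 2), round(t_res.pvalue, 4))
```

With the real rating data in place of the stand-ins, this procedure would reproduce the α = .959 and Welch t-test figures reported above.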
B: “The OCE also holds technology trainings so that mentor teachers and field instructors can learn and practice how best to use digital tools in the classroom with their interns (Other Measures/OCE/iTEACH Agenda Summer).”

OCE provided training to field instructors at the iTEACH summer workshop; these field instructors then trained mentor teachers on the use of the digital tools that interns would be using to document their teaching practice. OCE used this teach-the-teacher method to help mentor teachers support teaching candidates with the digital teaching self-study assignment, for which interns gather artifacts in the classroom. The assignment requires candidates to videotape a lesson they teach, gather artifacts of the lesson, and use those digital artifacts to analyze the P-12 students’ learning and their own teaching. Because digital tools play a key role in this assignment, we supported mentor teachers and field instructors with training.

C-1: How does the OCE verify selection and evaluation of all clinical faculty, not just cooperating teachers?

University instructional coaches are part-time faculty, and OCE evaluates them as clinical educators in accordance with the WSU part-time faculty contract and the respective collective bargaining agreement as it relates to job postings, hiring, and evaluation of field instructors. Our selection of clinical faculty for pre-student and student teaching instructional coach positions requires us to detail the duties expected for each position. Exhibit 2.1b is a job description for one of our elementary education instructional coach positions, which highlights the importance of “engaging in collaborative coaching; and engaging in clinical instructional team professional development.” For evaluation, OCE field instructors are required to submit an adjunct faculty binder after the first, seventh, and thirteenth time they teach the clinical course, and the fourth time they teach the course they are formally observed in their class (see Exhibit 2.1a). These binders include the field instructor’s Student Evaluation of Teaching (SET) scores, syllabus, and graded student materials for assessment purposes.
Outside of these formal evaluations of their teaching, field instructors work with the Clinical Instructional Specialist to engage in meta-coaching and receive feedback on their coaching of interns. Each university field instructor has a coaching conversation with a teaching intern while the Clinical Instructional Specialist observes the conversation and takes notes based on a coaching goal the field instructor has identified. After the coaching conversation, the intern returns to the classroom, and the field instructor and Clinical Instructional Specialist have a meta-coaching conversation in which the latter supports the instructor’s reflective practice to facilitate professional growth, more effective clinical coaching, and, in turn, interns’ growth. This meta-coaching also provides ongoing, informal feedback on field instructors’ practice and surfaces areas to address in upcoming OCE team meetings and professional development. This feedback is peer-to-peer and is not part of the field instructors’ official evaluation.

C-2: What evidence demonstrates improved use of digital tools in the classroom as a result of trainings the EPP has held? What are the requirements for working in settings with diverse students?

All candidates currently use digital tools to record their teaching and gather artifacts of their teaching practices, such as P-12 student work samples. These artifacts are part of the candidates’ teaching self-study, a component that did not previously exist in our programs (see Exhibit 2.5, Digital Teaching Self-Study rubric, CAEP aligned).
Candidates then use the Framework for Teaching (Danielson, 2013) as they watch the digital videos of their practice and reflect on and evaluate their own practice; emphasis is placed on goal-setting and engaging in a coaching conversation with their mentor teachers and university instructional coaches as part of the process. Instructional coaches also use iPads to record short digital video clips and take digital photos of artifacts of teaching practice during interns’ lessons. These digital documents become artifacts that support evidence-based coaching conversations. Digital artifacts of teaching practice are also a regular part of clinical seminars, which are structured as professional learning communities. A regular part of these seminars is candidates sharing videos of practice and engaging in reflective dialogue and professional thought partnering. It should be noted that the MOUs with partner districts included media releases for education purposes, and interns must adhere strictly to district media use policies.

Areas for Improvement

Area For Concern 1: 3 cycles of data, rubrics:

AFI 1a: Disaggregated Data for 3 cycles: The data for all three cycles (Fall 15, Winter 16, and Fall 16 semesters) have been disaggregated by program level (UG, PB, MAT) as well as by licensure. Data reports are found in Exhibits 1.2 (MTTC data), 1.3 (Lesson Plan), 1.4 (Digital Self Study), 1.5 (Clinical Observation), 1.11 (Case Study), and 1.12 (E-Portfolio). This is also addressed in Standard 1 Addendum, AFI 3.

AFI 1b: Rubrics tagged to CAEP and level of proficiency is not clear:

Rubrics are tagged to CAEP and InTASC standards for the following assessments: Capstone Conversation, Case Study, Digital Teaching Self Study, Lesson Plan, Clinical Observation, and E-Portfolio (see Exhibits 2.3-2.8). Candidates must earn a 3 on a 4-point scale to be considered proficient on each of these assignments, and we expect that 85% of our completers will reach the level of proficiency on each assignment. NOTE: The Capstone Conversation will no longer be part of our data set. Although we will still conduct this event with completers who will be recommended for certification, we decided that it should not be a high-stakes event but more of a celebration. Starting in the Winter 2017 semester, the event will take on a different format with no scoring rubric attached.

Area For Concern 2: The EPP states diversity is addressed but does not clearly list proficiencies that are required for all candidates or provide evidence that all candidates demonstrate these proficiencies.

All teaching candidates must design and teach lesson plans based on the Wayne State Lesson Planning Framework and complete the Digital Self-Study, both of which include use of a Universal Design for Learning model and consideration of diverse learners (see Exhibits 2.6 and 2.5; also see candidate data results for three cycles in Exhibits 1.3 and 1.4). Candidates must design instruction that begins by considering the strengths and competencies of learners (cultural, linguistic, ethnic, religious, differently abled, etc.) through assets lenses as opposed to deficit lenses. All initial certification candidates take specific courses in their program (such as BBE 5000, KIN 5600, and MED 3550) that engage candidates in learning about diversity in education more deeply through assets lenses. Candidates also develop e-portfolios in which they provide evidence of their commitment to teaching for diversity; this section must contain artifacts of their teaching practice to support their claims about their practice (see Exhibit 1.12, E-Portfolio data, which presents three cycles of candidate data, and Exhibit 2.8, E-Portfolio rubric, CAEP aligned).

The Framework for Teaching (see Exhibit 2.9 – Danielson, 2013), which is used as the evaluation tool for candidates’ lesson plans and each lesson observation, also contains specific evaluation criteria relevant to diversity. For example, domain 1b, Demonstrating Knowledge of Students, includes addressing P-12 students’ cultural heritages as assets in instruction. 85% of pre-student teachers are expected to be at level 2 (basic) by the end of their internship: “The teacher also purposefully acquires knowledge from several sources about groups of students’ varied approaches to learning, knowledge and skills, special needs, and interests and cultural heritages.” 85% of student teachers are expected to be at level 3 (proficient) by the end of their internship: “The teacher also systematically acquires knowledge from several sources about individual students’ varied approaches to learning, knowledge and skills, special needs, and interests and cultural heritages.” Domain 2a, Creating an Environment of Respect and Rapport, includes at level 2 (basic): “Patterns of classroom interactions, both between teacher and students and among students, are generally appropriate but may reflect occasional inconsistencies, favoritism, and disregard for students’ ages, cultures, and developmental levels.” 85% of pre-student teachers are expected to be at this level (2) by the end of their internship. Level 3 for the same domain is the expectation for student teachers: “Teacher-student interactions are friendly and demonstrate general caring and respect. Such interactions are appropriate to the ages, cultures, and developmental levels of the students.” 85% of candidates are expected to reach this level (3) by the end of student teaching.

In the Digital Self-Study assignment, candidates are assessed on the same diversity-related criteria as in the lesson plan described above: 1) demonstrate knowledge of their students; 2) create an environment of respect and rapport; and 3) engage students in their learning. Through the courses listed above (BBE 5000, KIN 5600, MED 3550), our candidates examine why these components are important, and then they practice pedagogy such as Universal Design for Learning in their methods and clinical courses. Scores for these assessments over three cycles of data exceed the 85% benchmark for proficiency on each of these components (see Table 2.3 below). From this overview, we conclude that our candidates are well prepared to address issues of diversity. These data are disaggregated by each licensure area and will be examined at a March 22, 2017 TED forum meeting.

Table 2.3: Components addressing Diversity through UDL (percent of candidates at proficiency)

                                             MAT                  Post Bachelor           UG
Component                            Fa 15  Wi 16  Fa 16    Fa 15  Wi 16  Fa 16    Fa 15  Wi 16  Fa 16
Number of Students                      20     20      4        5      7      3       58     38     46
1b: Demonstrating Knowledge
    of Students                        100     95    100      100    100    100     98.3    100    100
2a: Creating an Environment of
    Respect and Rapport                100    100    100      100    100    100     96.6    100    100
3c: Engaging Students in Learning      100    100    100      100    100    100     96.6   97.9   97.2

Area For Concern 3: Clinical Educators:

3-A: The EPP describes the minimal criteria for the selection of clinical educators, but does not describe how the EPP and its P-12 partners select clinical educators. Nor does the EPP describe its procedures for evaluating clinical educators or how the results are shared.

We believe the points in Task 1 (C-1 and C-2) at the top of the document address this area for concern.

3-B: The EPP provides no description of the sequence of clinical experiences required in each program at each level. Further, the EPP provides no description of the tasks associated with each clinical experience or how the candidates are evaluated to ensure candidates develop effectiveness and have a positive impact on students' learning and development.

As noted in Task 2, C-3 above, all candidates are required to complete one phase of pre-student teaching followed by one phase of student teaching. (Note: Candidates in specialty areas such as early childhood, art education, and special education are required to complete a third phase of field experience per Michigan Department of Education requirements.) We provided a table (see Standard 1 Addendum, Task 4, Table 1: Key Assignments for Clinical Work) that lists the key assessments for each level of clinical experience (pre-student teaching, student teaching, and special area student teaching). The Framework for Teaching (Danielson, 2013) is used to evaluate interns’ lesson plans and observations of their teaching; this includes domain 3d, Using Assessment in Instruction. By the end of pre-student teaching, 85% of candidates are expected to be at level 2 (basic): “Students appear to be only partially aware of the assessment criteria, and the teacher monitors student learning for the class as a whole. Questions and assessments are rarely used to diagnose evidence of learning. Feedback to students is general, and few students assess their own work.” However, by the end of student teaching, 85% of candidates are expected to be at level 3 (proficient): “Students appear to be aware of the assessment criteria, and the teacher monitors student learning for groups of students. Questions and assessments are regularly used to diagnose evidence of learning. Teacher feedback to groups of students is accurate and specific; some students engage in self-assessment.” Every lesson plan and observation is evaluated, and every coaching conversation following a lesson observation includes discussion of evidence of student learning (i.e., assessment). Ongoing evaluation of how candidates are developing their teaching practice to have a positive impact on P-12 students' learning and development is inherent to OCE’s work with candidates.

Standard 3: Tasks and Areas for Improvement

Task 1: Assessment of candidates’ dispositions

C-1: What evidence indicates that the quality of the disposition survey is sufficient to make decisions about the continuation of a candidate in a program?

Evidence of candidates’ professional dispositions is part of our phase-in plan. The process for developing an effective dispositions plan and policy has been supported by a review of the literature (see Exhibit 3.1aa, Bibliography of Dispositions, and Exhibit 3.1a, “Dispositions and Teacher Assessment: The Need for a More Rigorous Definition,” which reflects one of the sources of literature reviewed); examination of multiple dispositional policies from institutions in our state (see Exhibit 3.1b, Northern Michigan University (NMU) Dispositional Plan, as a sample); and the knowledge and background of the faculty on the disposition committee. In Fall 2016, a committee of teacher education faculty and advisors created the dispositional checklist instrument and implemented it with a pilot group of eight faculty, of whom seven responded (see Exhibit 3.1c, Instructions for Disposition Pilot Survey with Methods Faculty, and Exhibit 3.1d, Dispositional Pilot Survey). For the Fall 2016 dispositional pilot, faculty across multiple programs scored candidates in their courses, and the results showed that almost all candidates received a high score. Further, there was little to no reflection for the student on what the score meant (Exhibit 3.1e, Dispositional Pilot Results). As a result of the Dispositional Committee’s analysis, it was determined that the pilot was insufficient to continue and that a new plan needed to be developed. The committee has since revised the dispositional phase-in, which will be discussed as a revision of our Support and Growth Plan described in C-3 (see Exhibit 3.1f).

C-2: What process will be followed to establish reliability and validity of the disposition form?

Based on a review of the literature regarding dispositions, the program will use the Candidate Support and Growth Plan for Professionalism (Exhibit 3.1f), which relies upon widely accepted dispositional qualities in teacher education and the language used in the InTASC standards. We believe that using the language of the InTASC standards offers validity, consistency with our EPP-created assessments, and alignment with the CAEP standards. To establish reliability, beginning Fall 2017, faculty will receive detailed information regarding the implementation of the COE Support and Growth Plan at the Teacher Education Division monthly meeting. A pilot group of six faculty will be identified and asked to evaluate candidate behaviors to establish inter-rater reliability for the dispositional qualities as presented in the Candidate Support and Growth Plan for Professionalism. An analysis of those scores will be discussed with the pilot group and presented to the Accreditation Advisory Committee. The committee will provide feedback to the Standard 3 subcommittee, which will finalize plans for official implementation in all methods and student teaching courses in Winter 2018.

C-3: How does the EPP plan to help its candidates improve their dispositions when their behaviors are found to be in need of change?

When candidates have an area that needs support, the following protocol is currently established and remains open to revision.

1) In methods courses, the faculty member will initiate a conversation with the student indicating suggestions for improvement. The conversation should be summarized in an email by the faculty member and sent to the candidate, their advisor, and the program coordinator.

2) If a candidate has repeated support indicators across multiple courses in the same semester or across two or more semesters, a meeting will be called with the candidate, advisor, program coordinator, and faculty member to determine how the candidate can best be supported in strengthening those areas while remaining in the program. The candidate will address these areas in an academic plan of action document (see Exhibit 1 below), which the candidate will sign.

3) If, after multiple attempts at support, a candidate does not meet the plan of action expectations, a decision will be made as to whether the candidate should remain in the program (see Exhibit 3.1c Support and Growth Plan).

C-4: Which disposition form is the EPP using?

We are phasing in the WSU Professional Dispositions listed below in Table 1. Upon meeting with various stakeholders (faculty, academic staff, students), the committee will make revisions to the current document as we obtain more information from the pilot study. The elements of the WSU Professional Dispositions will be part of prospective students’ personal statements and the recommendation letter from an educator, as described in Task 4 for Standard 3. They will also be reviewed during the admissions orientation as well as in methods course syllabi.

Table 1: WSU Professional Dispositions

Professional Behaviors:
• Actively and respectfully engages with diverse perspectives
• Uses self-reflection to enact continued professional growth

Professional Commitment:
• To ethical teaching and relationships
• To building an inclusive society

Task 2: Increasing the number of teachers in fields with shortages and from underrepresented populations

C-1: What are the EPP’s goals for increasing the number of candidates who will become teachers in each of the high needs teaching fields and for the number of candidates from underrepresented minority groups? What are the baseline data and goals for five years?

A historical chart that presents our student enrollment by licensure area is found in Exhibit 3.2a. Note that the chart includes data regarding retired major codes. Highlighted are the current programs in established high-needs areas based on the U.S. Department of Education Teaching Shortage Areas (TSA) (see Exhibit 3.2b, Teaching Shortage List). The high-needs areas for the state of Michigan for the 2015-16 and 2016-17 school years are found on page 83 of that document and include Early Childhood, Mathematics, several Career & Tech options, Special Education, and World Languages. The urban Detroit area is in greater need, where most teaching areas are in crisis. Detroit Public Schools Community District (DPSCD) had over 200 vacancies starting the 2015-16 school year that were staffed by substitutes for the entire year. This school district hires more of our graduates than graduates from any other EPP in the state, so we make the case that, because of the district’s teaching needs, many of our teaching majors are high needs. Unfortunately, the federally funded TEACH Grant is open only to the approved TSAs.

Even though our enrollment has decreased each year over the past eight years, we are starting to see an increase in enrollment in some of the state-identified high-needs teaching areas. For example, our Cognitive Impairment program enrolled 18 students in Fall 2014, whereas by Fall 2016 the number had jumped to 46 (an increase of over 100%). The Early Childhood program numbers were disaggregated from Elementary Education in 2015; enrollment for Winter 2016 was 19 students, and there are currently 23, about a 21% increase over a one-year span. Our goal is to increase overall enrollment in high-needs areas and among diverse student populations by 2% within five years. We note that these increases are in direct relationship to our efforts to focus on high-needs areas as well as to recruit people of color into the teaching profession.

C-2: What data provide evidence that the EPP is making progress in meeting its goals/benchmarks? Who monitors the progress? To whom are the data reported?

The College of Education continues to place emphasis on retention, certification, and graduation rates of students enrolled in teacher education programs to help meet our goals and benchmarks. Our annual Title II report requires our institution to provide information on our candidates by demographics (including age, ethnicity, and race) and by teaching major. Faculty use this reporting process to provide data when examining the previous year’s goals and benchmarks for enrollment in particular teaching majors and when examining the demographics of our student population for high-needs teaching areas and recruitment of diverse students (see Exhibit 3.2b, 2016 Title II report). The Data Manager provides the enrollment data used to generate the Title II report and disseminates the information to the Dean of the College of Education, Assistant Deans, program area coordinators, and the CAEP Advisory Committee to monitor the growth of programs. The university administration oversees recruitment and retention efforts and monitors enrollment. Enrollment affects the overall budget allotment for the college, and because the college has seen a continuous decrease in enrollment, there has been a concentrated effort to focus on this issue.

One of our successful ways to recruit students into high-needs teaching areas is a focus on our STEM teaching majors. The Teach DETROIT program provides significant scholarship money and resources to candidates pursuing Elementary Education with a teaching major in Mathematics; candidates can receive up to a $10,000 stipend to participate in the program. Other examples of targeted recruitment for teachers in high-needs areas include advisor presentations to students in current STEM programs outside Education and targeted messages sent to students graduating in STEM fields regarding the EPP’s Masters of Arts in Teaching program.
Two successful programs that focus on recruiting diverse students into the teaching profession are the Dream Keepers program and our Morris Hood Scholars program. The Dream Keepers program is an Urban Teacher Residency program, in collaboration with DPSCD and the Michigan Department of Education, in which we work with substitute teachers in the district who want to become certified teachers. This year we recruited our first cohort, consisting of 24 teachers of color, including five males. Our other long-standing program is the Morris Hood Scholars program and the Pre-Morris Hood Learning Community, whose primary goal is to increase the number of underrepresented teachers in today’s classrooms. Both Morris Hood programs work in partnership to provide academic and professional support to targeted underrepresented students. The Data Manager provides demographic data each semester to monitor students for special programs like these. The Morris Hood Scholars program has a long history in the college; baseline demographic data are provided to its coordinator and to the Pre-Morris Hood Learning Community program coordinator (see Table 2). The goal of the Morris Hood programs is to increase participation and membership by 7.5% and to increase the number of certified teachers graduating from the program by 5% within the next five years (see Exhibit 3.3, Morris Hood Scholars Annual Report 2016).

Table 2: Baseline Demographic Data

All students (term codes are year and month, e.g., 201409 = Fall 2014, 201501 = Winter 2015):

IPEDS Race/Ethnicity                        201409  201501  201509  201601  201609  201701
2 or more races                                  9       8       8       7       7       7
American Indian or Alaskan Native                2       1       2       1       1       -
Asian                                           16      14       9       9      15      13
Black or African American                       76      68      51      48      37      59
Hispanic or Latino                              22      21      20      20      17      20
Native Hawaiian and Other Pacific Islander       -       -       -       1       1       2
Non-Resident Alien                               4       4       3       2       2       3
Unknown                                         21      22      16      15       8      11
White                                          339     320     254     238     222     215
(blank)                                          3       5       7       6       2       2
Grand Total                                    492     463     370     347     312     332

By gender:

Female                                      201409  201501  201509  201601  201609  201701
2 or more races                                  7       6       6       4       3       4
American Indian or Alaskan Native                2       1       2       1       1       -
Asian                                           14      12       8       9      11      10
Black or African American                       58      53      37      34      31      47
Hispanic or Latino                              18      15      13      13      11      13
Native Hawaiian and Other Pacific Islander       -       -       -       1       1       2
Non-Resident Alien                               3       3       3       2       1       3
Unknown                                         13      16      12       9       5       8
White                                          241     229     184     171     163     159
(blank)                                          3       4       5       4       1       1
F Total                                        359     339     270     248     228     247

Male                                        201409  201501  201509  201601  201609  201701
2 or more races                                  2       2       2       3       4       3
Asian                                            2       2       1       -       4       3
Black or African American                       18      15      14      14       6      12
Hispanic or Latino                               4       6       7       7       6       7
Non-Resident Alien                               1       1       -       -       1       -
Unknown                                          8       6       4       6       3       3
White                                           98      91      70      67      59      56
(blank)                                          -       1       2       2       1       1
M Total                                        133     124     100      99      84      85

Grand Total                                    492     463     370     347     312     332

Task 3: Disaggregate data on the three initial programs

A-1: Analysis of disaggregated data across level and licensures: Disaggregated data across the three initial licensure programs (undergraduate, post-baccalaureate, and Masters of Arts in Teaching), for each of the three required data cycles, are provided in Exhibits 1.2, 1.3, 1.4, 1.5, 1.11, and 1.12 (CAEP data reports). Each report begins with an overall summary page; the data are then disaggregated by level (UG, PB, and MAT) and next by licensure area, ending with a summary of percentages by each licensure area. Initial analysis of the data is submitted as part of Standard 1’s addendum (see Standard 1 Addendum, AFI 4). Final analysis will occur by licensure area at a Teacher Education Forum meeting on March 22, where strengths, weaknesses, and trends will be discussed and action plans formed.

B-1: Clarification for admissions results in the following:

We currently have two levels at which students can be admitted into initial certification programs. The first is the undergraduate program, which also encompasses the post-baccalaureate (PB) program. The second is our graduate program, the Masters of Arts in Teaching. “Students can be admitted at two different levels into the COE initial certification programs: as undergraduates (UG) or as graduate students (Masters of Arts in Teaching, or MAT).”

C-1: How do the entrance requirements compare for the three programs?

Entrance requirements are comparable across all three program levels (UG, PB, and MAT). All potential teacher candidates must successfully pass all three parts of the Professional Readiness Exam (PRE) and/or earn alternative qualifying scores on either the ACT or SAT exam. (Note: These requirements will be changing as we phase out the PRE and phase in the ACT/SAT.) Candidates at all three levels must also show proof of work experience totaling 40 hours, verified by an educator. All three levels require that potential teaching candidates submit a personal reflection statement as well as complete a criminal history check. The admission requirements for our teacher certification programs are summarized in Table 3.

Table 3: Admission Requirements for Teacher Certification Programs

Undergraduate (Level 2) Admission Requirements:
• A minimum 2.5 GPA at Wayne State University, or previous institution if a new transfer student
• Completion of a minimum of 53 semester hours of coursework
• A passing score on each of the three sections of the MTTC Professional Readiness Exam, or Michigan Department of Education approved alternative pass measures
• TED 2250 and/or 40 hours of group work with children
• Specific coursework requirements for teacher certification majors (C or better)
• Completion of Intermediate Composition (C or better)
• Current Negative TB Test
• Personal Statement
• Criminal History Check

Post-Bachelor Certification (Undergraduate Status) Admission Requirements:
• Bachelor’s degree with a minimum of a 2.5 GPA
• Completed BA/BS
• A passing score on each of the three sections of the MTTC Professional Readiness Exam, or Michigan Department of Education approved alternative pass measures
• TED 2250 and/or 40 hours of group work with children
• Completion of prerequisite courses with a C or better
• Transcript evaluation form
• Current Negative TB Test
• Personal Statement
• Criminal History Check

Masters of Arts in Teaching Admission Requirements:
• Bachelor’s degree with a minimum of a 2.75 GPA
• Completed BA/BS
• A passing score on each of the three sections of the MTTC Professional Readiness Exam, or Michigan Department of Education approved alternative pass measures
• 40 hours of group work with children
• Completion of prerequisite courses with a C or better
• Transcript evaluation form
• Current Negative TB Test (to go into effect Fall 2017)
• Personal Statement
• Criminal History Check

C-2: How do the data from the assessments for monitoring candidates’ progress from admission to completion compare among the three types of programs?

There are several key gateways our candidates must complete in order to progress through to graduation and/or certification. The chart in Exhibit 1.6 represents the assessment cycle for UG, PB, and MAT candidates and identifies where and how evidence and assessments are collected and required; all are aligned to the InTASC and CAEP standards. Each program also has particular program requirements that are discussed through advising and the creation of a plan of work. An example of progress through the Early Childhood Education program is presented in a flowchart (see Exhibit 3.4a).

Task 4: Use of non-academic criteria in selection of candidates

The Standard 3 Committee recommends that, prior to admission, an Admissions Committee be formed to review prospective candidates’ application submissions and determine whether admission should be granted, denied, or whether more information is needed. This Admissions Committee should consist of one faculty member representing each program area (i.e., elementary, secondary, physical education, special education, art education, music education, and dance education) and one academic advisor. The committee will be responsible for creating a rubric and cut score for rating prospective candidates’ submission materials to determine admission. Initially, the EPP had chosen to use the InTASC Standards to assess student dispositions. The committee has since chosen not to use the InTASC Standards as part of the dispositions required for admission into one of the teacher education licensure programs. In lieu of having students sign the InTASC Standards sheet and present it at orientations, our EPP has recommended updating our reference letter and personal statement to be included as part of the admission process.
This non-academic assessment criterion for the selection and admission of candidates, as stated above in Task 3, includes an updated reference letter from a school/organization for the prospective candidate. The school/organization must provide one example of Professional Behaviors and one example of Professional Commitment that the prospective candidate has demonstrated in the classroom/organization setting. The prospective candidate must also complete a professional statement, which asks students to provide an example of a professional behavior and of professional commitment to education. Exhibits 3.4 (Professional Recommendation) and 3.5 (Personal Statement) are examples of the non-academic forms used for the selection and admission of candidates into a teacher licensure program. These documents will need to be reviewed and approved by faculty for validity and reliability before they can be phased into the admissions process. The professional dispositions in Table 5 will be discussed in detail, and examples will be provided to students during their initial admission orientation. This will assist students who may have questions or concerns regarding the dispositions and the expectations as they transition into their methods courses, where the dispositions will be reviewed by full-time and adjunct faculty members and placed in course syllabi.

Table 5: WSU Professional Dispositions

Professional Behaviors: Actively and respectfully engages with diverse perspectives; Uses self-reflection to enact continued professional growth

Professional Commitment: To ethical teaching and relationships; To building an inclusive society

Task 4’s questions: How will a signature on a copy of the InTASC standards be used to discriminate among applicants as a non-academic measure of quality and preparation?

A signature on the InTASC standards is not in alignment with our new disposition framework and is therefore not sufficient to discriminate among applicants on non-academic measures of quality and preparation. As part of the Disposition Pilot work the EPP is undertaking, we have redesigned the required letter of recommendation (see Exhibit 3.4) to align with the dispositions. A principal or educator who has seen the candidate interact with children in a formal learning environment must complete the form. This form will be completed as a companion to the personal statement form (see Exhibit 3.5) that is required of all applicants in the Level II application for undergraduate and post-bachelor programs. For prospective graduate candidates, these documents will be part of the online graduate application.

Area for Improvement: Evidence of the use of non-academic criteria in the selection and admission of its candidates

Standard 3.2 is part of our phase-in plan for the SIP. In the tasks above we have demonstrated how we are addressing this area of concern through the phase-in plan of adding and revising non-academic criteria in the selection and admission procedures for our candidates at all levels (UG, PB, and MAT).

Stipulation: The EPP is unable to provide evidence that each cohort of candidates performed in the top 50th percentile on a nationally normed assessment.

Since the EPP does not require a nationally normed ability/achievement assessment for admission, we cannot provide evidence that its accepted cohorts meet the CAEP requirement of performing in the top 50 percent.
Action Plan for Meeting Standard 3.2: Relationship to Standard or Component

Based upon current discussions and a memorandum from the Michigan Department of Education (see Exhibit 3.6 PRE-SAT), and the recent decision by CAEP not to accept the Professional Readiness Exam (PRE) as an alternative state-normed assessment, the EPP has created an action plan to implement the requirement of the ACT and/or SAT for admission into all teacher certification programs (undergraduate, post-bachelor, and MAT). This is a requirement for Standard 3.2. The objective of implementing this admission requirement is to provide evidence that each entering cohort at the EPP meets the CAEP group-average performance requirement on a nationally normed ability/achievement assessment. The rationale for accepting either the ACT or the SAT is based upon the EPP’s current population of candidates who have taken the ACT and the new requirement from the MDE for high school students to take the SAT. This will allow the flexibility of accepting both the ACT and the SAT from potential new candidates.

Timeline and Resources

Figure 1: Implementation Timeline

Page 29: Self-Study Report – Addendum March 18, 2017 Introductioncoe.wayne.edu/accreditation/addendum.pdfSelf-Study Report – Addendum March 18, 2017 Introduction On behalf of Wayne State

29

Implementation of the new admission requirement will start in Winter 2017, with full implementation scheduled for the Fall 2018 entering cohorts. The data-measuring capabilities currently exist at the EPP, with institutional reporting tools for ACT and SAT scores. The Data Manager will be responsible for collecting the SAT or ACT cohort data for each entering class. Beginning Winter 2017, the EPP will update program literature regarding the new admission requirement, including the EPP’s website, admission applications, and curriculum guides. It will be critical for current and prospective candidates to be informed about the new admission requirement. During the Fall 2017 semester, current candidates will receive official communication from the Teacher Education Division that the ACT or SAT will be required for all candidates starting the certification program in Fall 2018. Primarily, this communication will go to all Level 1 undergraduate candidates and post-bachelor candidates working on prerequisite courses at the EPP. This will allow candidates 4(?) ACT/SAT testing cycles to complete the admission requirement for Fall 2018 admission. The EPP will be able to provide ACT and SAT cohort data to CAEP for all candidates entering the Fall 2018 cohort.

Winter 2017: New admission requirements will be added to the website, application, and other program literature.

Fall 2017: Current students will receive official communication regarding the ACT or SAT admission requirement.

Winter 2018: Candidates will submit admission applications (due February 1st or June 1st) for the Fall 2018 cohort; ACT or SAT scores are required.

Fall 2018: ACT and SAT cohort data available for CAEP.


Standard 4: Tasks and Areas for Improvement

Task 1: Explanation of the Completer Case Study Assessment and Analysis of Data: What is the case the EPP wishes to make relative to the assessment task (exhibit 1) and data (exhibit 7)? How are these data used in the EPP's decision making? Is this a candidate assessment tool or a completer assessment tool?

In the original submission of this report, the wrong attachments (exhibits 1 and 7) were indicated for this Completer Case Study. New attachments will be added for our Completer Case Study (Exhibit 4.1), along with a further description of the data and how they will be used.

Through this case study of our completers, we wish to make the case that our program results in an increase in PK-12 student growth. The faculty teaching the master’s seminars (TED 7000/ED 7999) will conduct a case study that examines our completers’ effect on PK-12 student learning. Participants will be selected from completers who have been certified for 1-5 years and are currently enrolled in one of our master’s programs. In order to have a representative sample, we will purposefully select completers from elementary, secondary, special education, and PK-12 certifications. Approximately 150 students receive their certification each year, and our plan is to follow up with at least 5% of these students each year through the designated courses. Completers will design and implement an inquiry/action research project specific to an area of their practice to examine student growth.

Because this is part of our selective improvement plan, we began data collection with a pilot study in the 2016-17 academic year. Faculty teaching the master’s seminars will score their students’ case projects at the end of the Winter 2017 semester using the Completer Case Study Rubric (see Exhibit 4.1), which is aligned to both the InTASC and CAEP standards. The rubric is based on a 4-point scale; acceptable performance on this assessment is defined as 85% or more of scores at 3 or higher on indicators 3-6. The rubric includes line items for both accurate assessment of student growth and completers’ reflections on student learning. The graduate program coordinator will collect data at the end of each semester in the ED 7999 course. The analysis of the data will be provided to the CAEP advisory team to disaggregate by program area and teaching major as evidence of student learning. The results from this completer assessment tool will be disseminated to all initial certification program faculty as a way to examine the program’s effectiveness in preparing graduates to affect PK-12 student growth.

Tasks 2, 3, and 5 overlap; our responses below address completer impact on student learning, the EPP’s use of data, and completers’ perceptions of their preparation.

Task 5: Survey aligned with InTASC and CAEP standards

Table 1. Teacher Candidate and Year-Out Survey Composite Results

Survey Area | Candidate Score | Year-Out Score (same completers) | InTASC | CAEP
High-Quality Learning Experiences | 93% | 90% | (4, 5) 7, 8 | 1
Critical Thinking | 94% | 91% | 5 | 1
Connecting Real-World Problems and Local and Global Issues | 91% | 86% | 5 | 1
Using Technology | 89% | 89% | 3, 6, 8, 9 | 1
Addressing the Needs of Special Populations | 89% | 83% | 2 (3) | 1
Organizing the Learning Environment | 93% | 96% | 3, 9 | 1
Effective Use of Assessment Data | 91% | 74% | 6 | 1
Field Experiences and Clinical Practice | 92% | 79% | – | 2
Support for Your Job | n/a | 69% | – | 4

Task 5: Interpretation and Use of Data

Task 2: How representative of the EPP’s completers are the data presented?

Thirty-one graduates responded to the Year-Out State Survey, in comparison to the 194 candidates who responded the previous year; the 31 responses represent about 15% of our completers for that year. Note that the response rate for candidates is artificially high because the survey is a requirement at an on-site college event, while the Year-Out survey is sent only to first-year teachers who have taken positions in Michigan public schools (excluding charter and all private schools) and is a voluntary electronic survey. The low response rate means that each response accounts for roughly a 3% change. Thus, for example, the percentage of respondents that rated critical thinking skills as positive or very positive dropped by 3% from candidacy to year-out, which corresponds to a difference of just one student’s mean Likert rating falling below 3 (on the four-point scale) for that set of items.
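The per-response weight cited above follows directly from the sample size. As a rough arithmetic check (an illustrative sketch only; the respondent counts of 31 and 194 are the only figures taken from the report):

```python
# Each of the 31 Year-Out respondents carries 1/31 of the composite
# percentage, so one response shifts a category score by roughly
# 3 percentage points.
year_out_n = 31
per_response_weight = 100 / year_out_n
print(round(per_response_weight, 1))  # 3.2 percentage points, i.e. "roughly 3%"

# 31 respondents against 194 prior-year candidate respondents is about
# 16%, in line with the report's "about 15% of our completers" estimate.
print(round(100 * 31 / 194, 1))  # 16.0
```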

We expect scores to dip slightly from candidacy to year-out status as teachers take on full responsibility for classrooms with a lower level of support. However, two areas of concern with a larger than expected decline on the Year-Out survey are Effective Use of Assessment Data and Field Experiences and Clinical Practice. In both of these areas, 4-6 students’ scores (21-26% of the small sample) dropped below an average of 3. This tells us that we need to put a stronger focus on the use of PK-12 assessment data to inform instruction, as well as on our institution’s field experiences and clinical practice. These data are shared at faculty meetings annually and addressed by faculty in the various divisions. For example, during the 2016-2017 academic year, all reading and language arts courses are being revised to include a stronger focus on administering assessments, analyzing data, and using the analysis to guide instruction. In terms of field experiences, emphasis was placed on growing support for teaching candidates’ clinical work through the following mechanisms:

a) implementing the Framework for Teaching (Danielson, 2013) as the evaluation and feedback tool for clinical observations;

b) extensive professional development for university Clinical Instructional Coaches (field instructors), in collaboration with the New Teacher Center, to transition coaches from an evaluation model to a coaching model in their work with teaching candidates;

c) professional development for PK-12 partner school Mentor Teachers on the coaching model to support teaching candidates in their internships; and

d) meta-coaching, in which the university’s Assistant Director of Clinical Experiences observes Clinical Instructional Coaches engage in coaching conversations with teaching candidates and then engages 1:1 with the coaches in dialogue about how to grow their own coaching practice to better support the teaching candidates’ growth.


Task 5 - How MDE Calculates Percentages from Likert Scales: The scores on the report represent the total rate of efficacy, defined as the overall percentage of “3” and “4” responses on the 4-point Likert scale across each category.
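The stated definition can be sketched as follows (a hypothetical illustration only, not MDE’s actual tabulation procedure; the handling of blank responses is an assumption, as the report does not specify it):

```python
def efficacy_rate(responses):
    """Percentage of Likert responses that are favorable (a 3 or 4 on the 4-point scale)."""
    favorable = sum(1 for r in responses if r in (3, 4))
    return 100 * favorable / len(responses)

# Hypothetical category with ten responses: eight of them are 3s or 4s.
sample = [4, 4, 3, 3, 3, 4, 2, 3, 4, 1]
print(efficacy_rate(sample))  # 80.0, i.e. an 80% efficacy rate for the category
```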

Michigan Public Law 173 mandated that, for the 2015-2016 school year, 25% of a teacher’s effectiveness rating be based on student growth and assessment data. Beginning in the 2017-2018 school year, these data must come from state assessments; however, in 2015-2016 (the time period for our data) the data could come from state assessments (growth information provided by the state) or from “multiple research-based growth measures or alternative assessments that are rigorous and comparable across schools within the school district, intermediate school district, or public school academy.” Additional information on teacher effectiveness scores in the state of Michigan can be found at http://www.michigan.gov/mde/0,4615,7-140-5683_75438---,00.html.

At this point, the State will provide growth scores for each teacher based on statewide tests; however, these data did not have to be used to calculate the growth score that determined 25% of the effectiveness rating for the 2015-2016 school year. Beginning in 2017-2018, all schools will use this state-provided data to calculate effectiveness ratings. Because the data are provided by the state, disaggregated by teacher, we will be able to disaggregate student growth data by teacher and thus have normative data from which a cut score could be used to determine acceptable teacher impact on student growth for our institution.

Task 2. Completers' impact on student learning: What proportion of the effectiveness rating is linked with impact on student learning versus observation data? What evidence is there that provides a clear link between the EPP's completers' effectiveness rating and impact on student learning?

Task 3: What instrument is used to gather completer observation data?

The remaining 75% of the effectiveness rating is based primarily on teacher performance as measured by an observational tool. This tool can be developed or adopted by the school district, intermediate school district, or public school academy and approved by the State. However, the State also provides a list of pre-approved tools, including Charlotte Danielson’s Framework for Teaching, the Marzano Teacher Evaluation Model, The Thoughtful Classroom, and the 5 Dimensions of Teaching and Learning.

At this point the state does not disaggregate effectiveness ratings by licensure area, nor does it provide individual, identified teacher data to the University. We plan to address this by intentionally selecting case study participants from a variety of licensure areas for our completer case study, as described in Task 1 above.

Task 4: Employers’ satisfaction with completers’ preparation: Are the data provided in exhibit 5 from completers’ supervisors, or are these data gathered from the EPP’s student teaching supervisors (and cooperating teachers)?

The information in exhibit 5 does not relate to employer satisfaction; it is an efficacy score from the candidate at graduation, from the candidate’s supervisor and mentor teacher, and from the completer after teaching for one year. To obtain employer satisfaction data, we are in the process of designing and collecting data for a pilot study of a Principal Survey (see Exhibit 4.2, pages 12-13). It is designed for building administrators to assess the extent to which they are satisfied with the preparation of our program completers currently serving in their schools as teachers in their first five years of teaching. The Michigan Association of Colleges for Teacher Education (MACTE) created the survey (see exhibit 4.2, pp. 1-11, for the original report and methodology) using the InTASC standards as criteria, based on a 4-point Likert scale (strongly agree, somewhat agree, somewhat disagree, strongly disagree). Criteria for acceptable performance will


be that an average of 85% of the scores indicate a 3 or higher on the survey for each criterion. Because the survey is anonymous, we cannot disaggregate the scores by program or teaching major; the data will need to be examined as a whole set to inform the entire program of our graduates’ effectiveness. Results from this pilot will be completed and analyzed across all three institutes at a MACTE Board meeting before the survey is sent to all school administrators who have Wayne State University alumni in their first through fifth years of teaching, starting in 2017-18.

Because this is part of our selective improvement plan, we have not completed data collection for the pilot study. As scheduled, Wayne State University and two other EPIs in the state are conducting a pilot study in the 2016-2017 school year, with our first round of data collected during the winter semester; this consortium of Michigan EPIs piloted the principal survey in winter 2017 with the intention of testing both the measure and the process. The survey was sent to administrators in districts along with a list of our 1-5-year-out completers, and administrators were asked to complete the survey if they had one or more of the teachers on the list currently teaching in their school. Wayne State University received 16 responses, though we do not know how many teachers this reflects, as some administrators may have had more than one teacher in their building but would have filled out only one response. With this iteration of the survey, we are not sure of the response rate and thus cannot speculate on the representativeness of the data; however, for the next round of administration we will add a question asking how many 1-5-year-out teachers are in the building. That said, we do have preliminary data (see Exhibit 4.3), which we report here with the caveat that only 16 surveys were returned, so the power of the analysis is low. Ninety-one percent of survey responses at the item level indicated that our completers are effective in the area of learners and learning (2% of questions were not answered), 89% in content knowledge, 84% in instructional practices, and 93% in professional responsibility (3% of questions were not answered). The narrative comments of the administrators were mostly positive, stating that our students were well prepared in the areas of content knowledge, understanding of diversity, technological skills, and pedagogy. Negative comments cited issues with classroom management, monitoring learning, and content knowledge.
The administrators also stated, and we agree, that this information is bound to be somewhat conflicting, as there is a good deal of variation in skill sets across the teachers in their buildings (e.g., one completer in a building may have great content knowledge while another is not as strong). These responses will be shared and discussed as part of the CAEP data reports at our March 2017 faculty forum meeting, where we will examine all data to determine the effectiveness of our completers on PK-12 student learning. These results will lead to plans for improvement within our programs, courses, and syllabi, which will also be shared with part-time faculty who may not be in attendance at this meeting.

Area for Improvement:

The EPP provides no valid interpretations of its completers' teacher effectiveness rating (which is a combination of observation and impact on student learning data).

While the Michigan Department of Education provides measures of the EPP's teacher effectiveness, the EPP's self-study reports the state-provided data but engages in no interpretation of the provided data or its impact on decisions it makes to revise its programs.

In the preceding discussion, we addressed all tasks for improvement, concentrating on the area for improvement where interpreting data to revise programs needed to be addressed. Standard 4 is part of our selective improvement plan, and many of our data sources have either not yet been collected for the first time or not yet been thoroughly analyzed as of the writing of this document. There are no


data sets collected yet for the completer case study that addresses Task 1; these will be collected at the end of the Winter 2017 semester. Initial analysis occurred for the pilot of the principal survey that addressed Task 4, but with the low N we anticipate skewed results; and a more thorough analysis of the EPI score, addressing Tasks 2, 3, and 5, has been discussed. With these in mind, and because much of Standard 4 is part of our improvement plan, we believe we have addressed the reviewers’ concerns as well as possible with the data we currently have available. As we continue to collect data on program impact, we will be better able to examine strengths, weaknesses, and trends for our program.


Standard 5: Tasks and Areas for Improvement

Task 1: Other Measures/COE – Complete list of changes – Accreditation Advisory Committee, updated through Feb. 2017

A: The self-study report analysis of evidence for Standard 5 references this document, but it could not be located in AIMS. B: "A list of annual changes and recommendations are presented to the committee each year to make considerations for the upcoming semester (Other Measures/COE - Complete List of Changes - Accreditation Adv Comm 1/2015 - 5/2016)."

EPP Response: The complete list of changes below reflects the results of our review of the initial certification program from Fall 2015 – Winter 2017. Key areas of improvement included: a revision of assessments and rubrics for the EPP-created assessments collected in student teaching, as recommended by the 2012 TEAC report (Standard 1); stronger clinical partnerships and an enacted memorandum of understanding with partnering districts (Standard 2); development of an action plan to implement SAT/ACT requirements for incoming candidates and a dispositional policy plan (Standard 3); and multiple pilot studies to evidence program impact (Standard 4). We also recognized the need to detail our system of data collection and explore how we make informed decisions based on the evidence (Standard 5).

Exhibit 5.1: Complete List of Changes for the Initial Certification Program, January 2015 – February 2017

CAEP Standard(s) Addressed | Accreditation Need | Action | Status (Updated Feb. 2017)

1-5 | 1. Evidence to be collected for the 5 CAEP standards | Created Summary of Evidence charts by each subcommittee (Feb-May 2015) | Submitted SSR Aug 2016

2 | 2. Evidence of clinical partnerships | Met with OCE January 2016; developed multiple partnerships with schools and districts (list provided); OCE developed a Memorandum of Understanding | Completed May 2016

3 | 3. Dispositional policy | Will work with our university legal team | Piloted dispositional survey to faculty; results to be reviewed for onsite visit

1, 5 | 4. Assessments and rubrics per TEAC 2012 report feedback needed revisions | Summer 2015: revised the Lesson Plan, Case Study, Observation Evaluation, and E-Portfolio assignments | Collected Fall 2015 and Winter 2016 data; analyzed results at May 2016 TED Forum; findings presented in SSR August 2016

1, 2 | 5. Need assessment with video reflection | Summer 2015: OCE created and implemented Teaching Self-Study with Video | Collected Fall 2015 and Winter 2016 data

1, 5 | 6. Need clarity on purpose and goals of Capstone Conversation | February 2016: writing team explored and defined purpose of Capstone Conversation for TED review in March 2016 and implementation in Winter 2016 | Changes to Capstone Conversation tabled November 2016 in Accreditation Advisory Committee due to need for accreditation evidence

5 | 7. Need to add updates to the website | Submitted assessments to Donna February 2016 | Updated April 2016

1 | 8. Need to classify license areas in order to disaggregate data | Created coded chart by grouping, February 2016 | Completed February 2016

1 | 9. Align all assessments to InTASC | Completed February 2016 | Completed February 2016

1, 5 | 10. Meet with part-time and full-time faculty to review common knowledge across program areas and expected knowledge of candidates for the student teaching semester | Pedagogical Knowledge Professional Development, January 7, 2016 | Licensure areas meet with part-time faculty individually and collectively

1, 5 | 11. Program review by licensure group | May 2016: program areas to review assessment data from student teaching to monitor, improve, and make recommendations | Each program area submitted analysis for SSR June 2016

5 | 12. Submit SSR to CAEP | Complete analysis, write findings | Submitted August 2016

1, 5 | 13. Student Teaching Case Study | Review at TED Forum | Reviewed at November 2016 meeting; developed subcommittee to further develop revised case study (met January 9, 2017); continuing to draft revised case study for faculty feedback

1 | 14. Technology development | Accreditation Advisory Committee recommends improvements in technology integration due to SSR findings | Developed subcommittee to meet Winter 2017 semester; subcommittee identified rubric from National Technology Standards to incorporate in technology course and PE courses

C-1: What data are used to make decisions? Exhibit 1.6, the Initial Certification Assessment Cycle, is presented below. The chart provides an overview of the key assessments and evidence the Initial Certification program collects to monitor our program and candidates’ performance. Sources are classified as national, EPP-created, or state-designed assessments where applicable, and all are aligned to the CAEP standards. All of the EPP-created assessments have also been aligned to the InTASC standards. We also have additional data sources, aligned to each CAEP substandard, which are presented in Exhibit 5.2 – Evidence Matrix of All Sources Aligned to CAEP. This spreadsheet represents all tagged evidence as provided in the initial self-study report, with each source aligned to the CAEP standards.


Exhibit 1.6 - The Initial Certification Assessment Cycle for Undergraduate, Graduate, and Post-Bachelor Candidates

Evidence/Assessment | Courses Where Introduced/Developed (where applicable) | When and Where Data Are Collected/Required | How Data Are Collected | Type | InTASC Alignment | InTASC Category | CAEP Alignment

GPA | – | University Admissions, through Academic Services | High school/other college transcripts | – | – | – | 3.5

ACT | – | Entry into Level 2, through Academic Services | – | National | – | – | 3.2

SAT | – | Entry into Level 2, through Academic Services | – | National | – | – | 3.2 (phase in through 2018)

MTTC – PRE | – | Entry into Level 2 | Through the State – Pearson | State | – | – | 3.2 (phase out through 2018)

MTTC – Content | – | Entry into student teaching | Through the State – Pearson | State | – | – | 1.1, 1.2, 3.4

Case Study | Pre-student teaching courses: TED 5150, TED 5650, MED 4560; student teaching courses: TED 5780/5790, DNC 4410/4420, KIN 5780, HE 5780, MED 4570 | – | Through Data Manager – Blackboard | EPP-created | 1, 2, 9 | 1, 4 | 1.1, 1.2

Clinical Observations | – | – | Through Data Manager – Blackboard | Danielson Framework/EPP-created | 1, 2, 3, 5, 6, 7, 8, 9, 10 | 1, 2, 3, 4 | 1.1, 1.4, 2.2, 3.5

Digital Video | – | – | Through Data Manager – Blackboard | EPP-created | 1, 2, 3, 4, 6, 7, 8, 9 | 1, 2, 3, 4 | 1.1, 1.2

E-Portfolio | – | – | Through Data Manager – Blackboard | EPP-created | 8, 9, 10 | 4 | 1.1, 1.2, 1.5, 2.3, 3.4

Lesson Plan | Licensure area methods courses: TED 5150, TED 5650, MED 4560 | – | Through Data Manager – Blackboard | EPP-created | 1, 2, 3, 4, 6, 7, 8, 9 | 1, 2, 3, 4 | 1.1, 1.4, 1.5

MDE Student Exit Survey | – | – | Qualtrics | State | – | – | 1.3, 3.6

MDE Supervisor Exit Survey | – | – | Qualtrics | State | – | – | 1.3, 3.6

EPI WSU Compiled Survey Efficacy Scores | – | Collected annually, sent to EPI March/April | Qualtrics | State | – | – | 1.3, 2.3, 5.1

EPI Performance Score Report | – | Annual, sent to EPI each summer | Qualtrics | State | – | – | 1.3, 2.3, 4.1, 4.4

Principal Survey re: Completers | – | Annual, sent out March/April to partner schools | Qualtrics | MI institutions-created | – | – | 4.3

Observation Scores from Completers | – | Partner schools | Data Manager | School district | – | – | 4.2

Completer Case Study | TED 7000 (Master’s course) | Collected end of semester in ED 7999 (Master’s Final Project) | Blackboard | EPP-created | – | – | 4.1


C-2: How are stakeholders involved in the decision-making process?

Stakeholder Involvement: The decision-making process is a collaborative effort of administration, faculty, mentor teachers, Clinical Instructional Coaches, district representatives, multiple COE departments, and committees. Exhibit 5.3, Decision-Making Cycle of the COE Initial Certification Program, is a diagram of the four phases in the program where input from stakeholders is needed. Upon entry into the Initial Certification Program (COE Level 2), undergraduates, graduates, and post-bachelors are evaluated by Academic Services officers. Criteria include GPA (2.5 for undergraduates and post-bachelors; 3.0 for graduates), personal statement, and verification of work with children (see the Standard 3 addendum for further details and documentation of exhibits). During the program, undergraduates, graduates, and post-bachelors in the initial certification program are reviewed by Academic Services, the Accreditation Advisory Committee, and content-area programs as they monitor candidate progress through EPP assessment scores (Standard 1), grades in coursework (Standard 3), and scores by program area in TED meetings (Standard 5, Exhibits 5.19 and 5.20). Documentation of each has been provided in the corresponding standard addendum; see also Exhibit 1.6, the Initial Certification Assessment Cycle, for a diagram of where each of the stakeholders is involved. When candidates enter pre-student teaching, they are reviewed and monitored by Mentor Teachers in the field and Clinical Instructional Coaches, a process led by the Office of Clinical Experiences (OCE); criteria are presented in the pre-student teaching handbook and include multiple measures such as observation scores, a professionalism evaluation (rubric provided), and lesson plan scores (see Exhibit 5.5 – Winter 2016 Pre-Student Teaching Handbook).
At the end of the program, candidates enter student teaching and are evaluated and monitored for performance and disposition by mentor teachers in the field, Clinical Instructional Coaches, and Capstone Conversation reviewers; criteria are presented in the student teaching handbook and include multiple measures such as observation scores, assessment scores from lesson planning, a self-study reflection with digital video, and a professionalism evaluation (see Exhibit 5.6 – Winter 2016 Student Teaching Handbook). As part of our phase-in/self-improvement plan (SIP), we have incorporated multiple measures to inform us about our program impact, and we are currently in the pilot phase for the following processes. After candidates exit the program, the Program Impact Subcommittee (Standard 4 of the Accreditation Advisory Committee) requests observation scores from completers one to five years out, sends out a principal survey of completer efficacy, and collects PK-12 student learning data via the Completer Case Study from graduates in the Master's program. Districts send observation scores for completers, and analyses of these scores are initially reviewed in the Accreditation Advisory Committee for monitoring and program improvement. The Standard 4 addendum provides additional information regarding these measures. Please see Exhibit 5.3 below, which may provide additional clarification of the program review cycle and process, and Exhibit 5.4 – Key Stakeholders Description and Responsibilities.

Exhibit 5.3 - Decision Making Cycle of COE Initial Certification Programs


C-3: How are data collected? Data collection for the Initial Certification Program is managed by the College's data manager, who collaborates with several departments to gather data for program review. Several data systems are involved: Blackboard, a candidate/faculty interface where assignments can be presented, uploaded, and graded; STARS, a University-wide system that houses candidates' personal profiles, grades, and plans of work; and Qualtrics, which houses all surveys (State, Principal Survey, etc.) (also see Exhibit 1.6, column "How Data is Collected"). Upon the collection of data, the manager creates reports at each phase of the program (beginning, during, and after) for analysis of program impact, proficiency ratings, trends, and any candidates with areas of concern (see the data reports of key assessments as presented for Standard 1 and Exhibit 5.15).

C-4: What process or plans have been established for analyzing and interpreting results from assessments? The reports generated by the data manager are presented to the Accreditation Advisory Committee at the beginning of each semester for the previous semester. The committee reviews the reports to determine areas of strength and weakness and reports findings and recommendations to the following groups: Teacher Education Program Faculty, Academic Services, and Program Coordinators. At the end of each school year, Teacher Education faculty, first as a whole and then by program area, review the data collected from the two prior semesters and the recommendations from the Accreditation Advisory Committee. Responses from the annual data review inform program systematic monitoring and programmatic improvement for the next school year (see Exhibit 5.7 – Initial Certification Annual Schedule and Process for Data Collection and Analysis).

[Exhibit 5.3 diagram: Responses from the Annual Data Review inform Program Systematic Monitoring and Programmatic Improvement across four phases: Beginning of Initial Certification for Teacher Candidates (Primary Reviewer: Academic Services); During Program of Initial Certification for Teacher Candidates (Primary Reviewer: Accreditation Advisory Committee); End of Program of Initial Certification for Teacher Candidates (Primary Reviewer: OCE); After Program (Primary Reviewer: Accreditation Advisory Committee).]


Exhibit 5.4 – Key Stakeholders, Descriptions, and Responsibilities

CAEP Standards 3, 5 – Academic Services (Administrators, Academic Officers, Support Staff)
Description: Academic Services screens all applicants for COE and audits completers who seek licensure. They make recommendations to the State for completers.
Responsibilities: Admissions; Recruitment/Retention; Certification Audit; Licensure Recommendation to the State.

CAEP Standards 1-5 – Accreditation Advisory Committee
Description: The Accreditation Advisory Committee comprises 18 members who represent the broad group of stakeholders integral to the initial certification program.
Responsibilities: Data Analysis; Monitoring of Program, Policies, and Procedures; Recommendations.

CAEP Standards 1, 4, 5 – Teacher Education Department (Administrators, Faculty)
Description: The Teacher Education Department (TED) comprises all licensure areas, which are grouped by similar content area into programs led by program coordinators. At the end of the year, program coordinators review the results of assessment scores from teacher candidates during their student teaching semester to closely monitor candidate performance and program effectiveness.
Responsibilities: Data Analysis; Programmatic and Licensure Area Review; Completer Case Study (Graduate Courses); Recommendations.

CAEP Standards 2, 5 – Office of Clinical Experiences (Advisory Board, Clinical Instructional Coaches)
Description: The Office of Clinical Experiences coordinates clinical experiences for pre-student and student teachers; holds seminars for the candidates where assessments are taught, gathered, and collected for accreditation; and maintains key partnerships with school districts.
Responsibilities: Data Collection; Clinical Experiences; Clinical Partnerships.

CAEP Standards 2, 4 – Partnership Schools (District Administrators, Principals, Mentor Teachers)
Description: Partnership schools provide clinical experiences for our student teachers and evaluation of proficiency both in student teaching and after graduation as the candidates enter the classroom as teachers.
Responsibilities: Clinical Experiences; Clinical Partnerships; Principal Survey; Completer Observation Scores.

CAEP Standards 1-5 – Data Manager
Description: The Data Manager collects data from multiple systems and distributes them to the departments where applicable.
Responsibilities: Data Collection, Organization, and Dissemination.


Exhibit 5.7 – Initial Certification Annual Schedule and Process for Data Collection and Analysis

Steps in the Data Collection and Analysis Annual Cycle, with Time Periods:

1. Data Manager prepares all assessment data results from the previous year and presents results to the Accreditation Advisory Committee. (September 1 – September 30)

2. Accreditation Advisory Committee conducts data analysis for the following: Administrative Dissemination; Teacher Education Department (TED) Forum, including PE, Health, and KHS; Program Area. (October 1 – October 31)

3. Areas of concern are sent to the Accreditation Advisory Subcommittee for further discussion and/or curriculum changes. (November 1 – January 31)

4. Subcommittee deliberates and determines next steps; recommendations are sent to the Accreditation Advisory Committee for discussion and presented to TED. If approved by TED, they are sent back to the Accreditation Advisory Committee for the implementation process; if not approved, they return to the subcommittee for further review, are tabled, or are eliminated. (February 1 – February 28)

5. Accreditation Advisory Council reviews all recommendations, determines the phase-in plan, and writes the phase-in plan. (March 1 – April 30)

6. Accreditation Advisory Council presents the final phase-in plan and implementation process to the following: Administrative Dissemination; TED Forum, including PE, Health, and KHS; Program Area. (May 1 – May 31)

7. CAEP yearly report findings are posted on the Accreditation website and presented at the Final Assembly meeting. (May 31)

8. Advisory Chair prepares for Fall phase-in plans. (June 1 – August 31)


Task 2: Revision to Capstone Conversation Questions for EPP concerning additional evidence, data, and/or interviews

1) The capstone conversation rubric provided is not tagged to InTASC or CAEP standards. How does the Capstone Conversation provide evidence of candidates' readiness for teaching and impact on student learning? How are candidate responses and artifacts collected and analyzed? How is inter-rater reliability established?

While we presented the Capstone Conversation as part of earlier accreditation and of the CAEP Self-Study Report, the TED faculty has determined that the Capstone Conversation will not be used as a CAEP assessment. The purpose of this culminating event is to celebrate the achievements of the candidates as they reflect on and articulate their experiences in student teaching and coursework. TED members agreed that if candidates have made it to the Capstone Conversation, they will have successfully met all requirements of the program, and this is an opportunity to welcome them as educators in the field. Therefore, no data are presented regarding the Capstone Conversation. Task 3: Validity and Consistency of Data Analysis Questions for EPP concerning additional evidence, data, and/or interviews

1) How are validity and reliability established in the interpretation of assessment data? Two sample data reports, analyzed in May 2016 and then in July 2016, are provided as evidence (see Exhibit 5.8 – COE 2015-2016 Lesson Plan by Area, May 2016 Report and Exhibit 5.9 – Lesson Plan by Area, July 2016 Report). To clarify: as the data manager completes the process of data collection, she notes missing data and has the accreditation coordinator and a representative from the Office of Clinical Experiences verify that the spreadsheets represent all data. She sends out an initial report, tags any missing scores, and asks us to follow up; we then ensure that the spreadsheets represent all data to be analyzed. Assessment data for our programs are collected using instruments that have been validated and are known to be reliable. Our programs follow the recommendations from instrument authors regarding interpretation of data. The instruments we use include Danielson's Framework for Teaching (2013) (Exhibit 5.10), which has been aligned to the InTASC standards, is used nationally, and has been externally validated (see Exhibit 5.11 – Danielson Framework Validity Article, May 2016). We also use the State of Michigan's certification examinations (MTTC and PRE) and Educator Preparation Institute (EPI) data from the Michigan Department of Education for performance score reports as required by the State. The State has provided a technical manual and analyses of the Candidate and Supervisor Surveys, which detail the methods for establishing validity and reliability for the MTTC assessment, Candidate Survey, and Supervisor Survey (see Exhibit 5.12a – 2015 EPI Technical Manual, Exhibit 5.12b – MTTC 2013-2014 Technical Report Appendix, Exhibit 5.13 – Candidate Survey Analysis, and Exhibit 5.14 – Supervisor Survey Analysis).
Candidates' e-portfolios are read by multiple scorers, including their clinical instructional coaches, the faculty panel for their Capstone Conversations, and their mentor teachers. These qualitative data supplement the data collected by the aforementioned instruments. For a full overview of all the evidence collected for each candidate, see Exhibit 1.6 – Initial Certification Assessment Cycle. Task 4: Data Discrepancy and Outliers Questions for EPP concerning additional evidence, data, and/or interviews

1) How are discrepancies identified? What is the process for evaluating outliers? What program recommendations and improvements resulted from the analysis? When is final


reporting done? At the end of the calendar year? School year? Who examines the data and makes recommendations and improvements?

Discrepancies in the data are identified by reviewing the reports run by the data manager. These reports are presented to the Accreditation Advisory Committee at the beginning of each semester for the previous semester. Collective scores that fall below proficiency levels are probed to clarify which candidates scored below passing in that particular area. An example of what the Accreditation Advisory Committee receives for review is located in Exhibit 5.15 – A Review of Issues with Student Scores on Assessments.
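This probing step, from collective below-proficiency scores down to the individual candidates behind them, can be sketched with a small filtering routine. This is an illustrative reconstruction, not the EPP's actual reporting code: the column names, candidate IDs, and scores below are hypothetical stand-ins for the data manager's report, and only the proficiency cutoff of 3 comes from the document.

```python
import pandas as pd

# Hypothetical semester report; column names and all values are illustrative.
report = pd.DataFrame({
    "candidate_id": [101, 102, 103, 104],
    "subject": ["English", "English", "Mathematics (Elem or Sec)", "Social Studies"],
    "academic_period": [201509, 201509, 201509, 201609],
    "case_study_score": [3.2, 2.0, 2.5, 3.8],
})

PROFICIENCY = 3.0  # success criterion: proficient (3) or better

# Identify candidates scoring below passing in a given area, then
# summarize by subject and period for committee review.
below = report[report["case_study_score"] < PROFICIENCY]
summary = below.groupby(["subject", "academic_period"]).size()
```

A listing like Exhibit 5.15 is essentially the `below` table with names and IDs redacted, grouped by assessment component.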


Exhibit 5.15 – A Review of Issues with Student Scores on Case Study Assessment
(Names and IDs have been redacted; each entry lists the subject, the recorded value, and the academic period.)

Behavioral/Academic Intervention Plan:
English 1 (201509); English 2 (201509)
Mathematics (Elem or Sec) 2 (201509); Mathematics (Elem or Sec) 2 (201509)
Social Studies 1 (201509); Social Studies 2 (201509); Social Studies 2 (201509)

Behavioral/Academic Intervention Plan (Physical Space):
English 2 (201601); English 2 (201601)

Behavioral/Academic Intervention Plan (Intervention Sequence):
Mathematics (Elem or Sec) 2 (201509); Mathematics (Elem or Sec) 2 (201509); Mathematics (Elem or Sec) 2 (201509)
Social Studies 1 (201509); Social Studies 1 (201509); Social Studies 2 (201509)

Results and Discussion:
English 2 (201509); English 2 (201509)
English 2; English 2 (201609)
Social Studies 2 (201509); Social Studies 2 (201509); Social Studies 2 (201509)
Social Studies 2 (201609); Social Studies 2 (201609)

Note: Scores in the individual licensure areas caused the overall assessment to dip below 85% in 201609.


The Office of Clinical Experiences incorporates multiple (at least three) coaching conversations with clinical instructional coaches throughout the student teaching semester; see Exhibit 5.6 – Winter 2016 Student Teaching Handbook for specific details. These conversations are when candidates receive feedback regarding their lessons and instructional practice. The clinical instructional coaches want candidates to do well, and it is our goal to help them be successful in the program. The data manager for the Accreditation Advisory Committee completes final reporting after each semester, once all grades are in and the State has sent the survey responses (generally two months after the end of the semester). The Teacher Education Department and program coordinators review data annually. At the end of each school year, Teacher Education faculty, first as a whole and then by program area, review the data collected from the two prior semesters and the recommendations from the Accreditation Advisory Committee. Responses from the annual data review inform program systematic monitoring and programmatic improvement for the next school year (see Exhibit 5.7 – Initial Cert Annual Schedule and Process for Data Collection and Analysis).


Task 5: Evidence 3: COE - Clinical Observation Evaluation and Rubric C-1: How did the t-test provide useful data? C-2: The EPP indicates that initial scores were low, but with training the scores were a more accurate reflection of candidate performance. What data and evidence support these statements?

The Accreditation Advisory Committee asked a faculty expert in statistical evaluation to provide a quantitative analysis of the changes in mean scores from Fall 2015 to Winter 2016. We wanted to know if the change was significantly different and, if so, whether the professional development may have made a difference. Professional development regarding the use of the Danielson Framework had not occurred in the Fall. On February 8 and March 23, clinical instructional coaches participated in professional development led by the Office of Clinical Experiences. As part of the agenda, the coaches watched multiple videos of student teacher performance and were asked to rate the candidate (see Exhibit 1.1a – OCE-Prof. Dev.-Agenda 1 and Exhibit 1.1b – OCE-Prof. Dev.-Agenda 2). After rating, the coaches discussed the scoring expectations and common ways the group would review candidates' performance. Although six clinical educators conducted ratings, it was determined that establishing inter-rater reliability would not be appropriate, because doing so would have required duplication of effort, time, and expense (multiple raters rating the same teacher candidates). However, uniform training was provided to the clinical educators through intensive professional development. Cronbach's alpha, a measure of internal consistency reliability, was computed after listwise deletion on the N = 161 cases with full records. The result was α = .959 based on the 22 items on the rating scale, an extremely high level of internal consistency.
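The internal-consistency computation described above can be sketched as follows. This is an illustrative reconstruction, not the statistician's actual script; the function name and test data are hypothetical, but the formula is the standard Cronbach's alpha, with listwise deletion of incomplete records as the report describes.

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for an (n_cases x n_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)),
    computed after listwise deletion of incomplete records.
    """
    ratings = np.asarray(ratings, dtype=float)
    complete = ratings[~np.isnan(ratings).any(axis=1)]  # listwise deletion
    k = complete.shape[1]                               # number of rubric items
    item_vars = complete.var(axis=0, ddof=1)            # per-item sample variances
    total_var = complete.sum(axis=1).var(ddof=1)        # variance of each case's total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

Applied to the 161 complete records on the 22 Framework for Teaching items, a computation of this form is what would yield a value such as the reported α = .959.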

A two-independent-samples t-test was conducted on the average clinical observation score (1 = unacceptable to 4 = distinguished) of the student teachers, as rated by the clinical supervisors. The preliminary test of the underlying assumption of homoscedasticity was statistically significant (Levene's F = 23.99, p < .001). Therefore, the Welch-Aspin t-test with the Satterthwaite adjustment to the degrees of freedom was used in place of the usual t-test. The result, t = -2.94, df = 109.002, was statistically significant (p = .004), indicating that the Winter 2016 cohort (mean = 3.28, n = 60) scored statistically significantly higher than the Fall 2015 cohort (mean = 3.06, n = 71). The table below contains the results of tests of the statistical significance of the change in means from F15 to W16 for each of the Clinical Observation Standards. The usual t-test assumes equal standard deviations for each semester. Where the degrees of freedom (df) are not whole numbers, Levene's test for homoscedasticity (equal variances) indicated the assumption was violated, and the tabled values for t and p represent the Welch-Aspin adjustment to the t-test and the Satterthwaite adjustment to the df; this adjustment is known as the separate-variances version of the t-test. As indicated in the table, there was a statistically significant increase from F15 to W16 for all Standards, with nominal alpha set at the 0.05 level. Exhibit 5.16 – Sample Clinical Observation Statistical Analysis

Clinical Observation Analysis

Test of Difference in Means, F15 vs. W16

Standard   F15 Mean (SD)   W16 Mean (SD)   df        t        p
1a         3.06 (.374)     3.27 (.490)     109.002   -2.937   0.004
1b         2.93 (.457)     3.34 (.602)     110.86    -4.397   0.000
1c         2.87 (.445)     3.30 (.495)     121.902   -5.115   0.000
1d         2.85 (.601)     3.34 (.513)     130       -5.087   0.000
1e         2.94 (.410)     3.31 (.534)     111.703   -4.385   0.000
1f         2.75 (.603)     3.21 (.487)     130       -4.839   0.000
2a         3.09 (.329)     3.46 (.565)     93.667    -4.533   0.000
2b         3.01 (.316)     3.30 (.527)     94.959    -3.638   0.000
2c         2.90 (.419)     3.21 (.609)     104.078   -3.371   0.001
2d         2.93 (.457)     3.13 (.618)     109.013   -2.099   0.038
2e         3.24 (.448)     3.40 (.527)     103.842   -2.077   0.040
3a         3.15 (.486)     3.33 (.539)     106.67    -2.178   0.032
3b         2.98 (.618)     3.21 (.581)     191       -2.432   0.016
3c         3.11 (.473)     3.28 (.552)     102.261   -2.019   0.046
3d         2.87 (.476)     3.23 (.462)     130       -4.347   0.000
3e         2.96 (.356)     3.31 (.534)     101.903   -4.404   0.000
4a         2.93 (.457)     3.39 (.556)     116.334   -5.16    0.000
4b         2.77 (.453)     3.33 (.539)     117.796   -6.32    0.000
4c         2.89 (.699)     3.20 (.654)     188       -2.869   0.005
4d         2.94 (.410)     3.30 (.615)     101.871   -3.796   0.000
4e         3.15 (.502)     3.34 (.513)     114.572   -2.445   0.016
4f         3.03 (.413)     3.41 (.496)     117.169   -4.758   0.000


Test of Variance for Observation Assessment

Group Statistics (S1)
Group          N    Mean   Std. Deviation   Std. Error Mean
Fall, 2015     71   3.06   .374             .044
Winter, 2016   60   3.28   .490             .063

Independent Samples Test (S1)
Levene's Test for Equality of Variances: F = 23.987, Sig. = .000

                              t        df        Sig. (2-tailed)   Mean Diff.   Std. Error Diff.   95% CI of the Difference
Equal variances assumed       -3.004   129       .003              -.227        .076               [-.377, -.077]
Equal variances not assumed   -2.937   109.002   .004              -.227        .077               [-.380, -.074]
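The two-step analysis reported above (Levene's test for equal variances, then the separate-variances Welch t-test) can be reproduced with standard scientific-Python tools. The score vectors below are hypothetical stand-ins for the actual Fall 2015 and Winter 2016 ratings, which are not published; only the group sizes and summary moments echo the Group Statistics table.

```python
import numpy as np
from scipy import stats

# Hypothetical rating vectors; n, mean, and SD loosely mirror Group Statistics.
rng = np.random.default_rng(42)
fall_2015 = rng.normal(3.06, 0.374, 71)
winter_2016 = rng.normal(3.28, 0.490, 60)

# Step 1: Levene's test for homoscedasticity. A significant result, as in
# the report (F = 23.987), means the pooled t-test's equal-variance
# assumption is violated.
levene_F, levene_p = stats.levene(fall_2015, winter_2016)

# Step 2: Welch's t-test. equal_var=False gives the separate-variances
# statistic with Satterthwaite-adjusted df, matching the
# "equal variances not assumed" row of the output above.
t_stat, p_value = stats.ttest_ind(fall_2015, winter_2016, equal_var=False)

# The Welch-Satterthwaite df can also be computed directly:
def welch_df(a, b):
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
```

The non-integer df values in Exhibit 5.16 (e.g., 109.002) are exactly what the Satterthwaite adjustment produces, which is why whole-number df rows correspond to the pooled test.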

Task 6: Danielson Framework for Teaching Rubrics C-1: How are the EPP-designed assignment tasks, CAEP standards, InTASC standards, and Danielson Framework rubrics aligned? Exhibit 5.2 is an Evidence Matrix of All Sources, which shows the alignment of all data to the CAEP sub-standards. Each assessment is presented by line item. The Digital Self-Study and Clinical Observations refer directly to the Danielson Framework domains by line item (see Exhibit 5.2). The lesson plan and case study rubrics are aligned to the Danielson Framework, InTASC, and CAEP standards as evidenced in the rubrics (see Exhibit 5.17 – Lesson Plan Rubric and Exhibit 5.18 – Case Study Rubric). Task 7: Candidate Monitoring and Decision-making C-1: How are candidates monitored in relation to proficiency of the CAEP and InTASC standards? EPP Response: Candidates are systematically monitored in relation to proficiency of the CAEP and InTASC standards beginning when they enter pre-student teaching, where the accreditation assessments are formally introduced. They are evaluated during clinical observations and on EPP-created assessments by clinical instructional coaches (see Exhibit 1.6 – The Initial Certification Assessment Cycle). Evaluation, led by the Office of Clinical Experiences, is conducted formally or informally by clinical coaches, mentor teachers, and partnering school administrators.


C-2: What are the success criteria for each assessment? EPP Response: The criterion for candidate success on each assessment is to score at proficiency (3) or better by the end of student teaching. Exhibit 1.6 – The Initial Certification Assessment Cycle displays the assessments conducted in student teaching. In addition to the formal assessments, Exhibit 5.6 – The Student Teaching Handbook identifies all assessments and provides a rubric that displays the qualities of proficiency for each assessment. Candidates are evaluated on professionalism and sign a contract of acknowledgment to uphold the InTASC standards. Each assessment is aligned to the InTASC standards, and Exhibit 5.2 – Evidence Matrix displays the CAEP alignment and InTASC standard by line item.

Below is the language regarding success criteria from the Student Teaching Handbook, page 11: In order to pass student teaching, 293 points must be earned. Beyond the number of points, however, a student must also meet all of the professionalism and professional disposition expectations for the teaching internship, as well as earn proficient scores in all areas of the Framework for Teaching during his/her teaching observations.

C-3: What happens when a candidate does not meet success criteria? EPP Response: When a candidate does not meet the success criteria, several steps are in place to support the candidate with his/her area of concern. Below is an excerpt from Exhibit 5.6 – Student Teaching Handbook, which addresses candidates who do not meet the success criteria.


From Student Teaching Handbook page 7

Termination of Student Teaching: Teaching interns performing overall at or below the proficient level may be required to meet with the field instructor, mentor teacher, and Office of Clinical Experiences to develop an Action Plan for growth. Once corrective measures are established, the field instructor schedules a formal follow-up observation and/or meeting with the teaching candidate in collaboration with the mentor teacher. In the event that adequate progress is not made, the field instructor will inform the Director, and the student will earn a failing grade in student teaching. A second attempt at student teaching may be granted only at the professional discretion of the Director; however, there is no guarantee that a second attempt will be provided. Documented evidence of any of the following conditions may be cause for termination of a student teacher placement:

§ Inappropriate personal or professional behavior, including inappropriate use of social media related to students and/or the teaching internship.
§ Ethical impropriety.
§ Not upholding the Michigan Professional Educator's Code of Ethics.
§ Not upholding the InTASC standards.
§ Not upholding the professional dispositions required of teaching candidates.
§ Violation(s) of community practices, standards, or policies.
§ Lack of professional judgment.
§ Inappropriate communication or contact with students, parents/guardians, school, or College/University personnel.
§ A legal conviction of a felony or a misdemeanor requiring a decision from the Michigan Department of Education.

Pupil learning is significantly impeded due to the teaching intern's:
§ Lack of content knowledge.
§ Inadequate planning.
§ Inadequate classroom organization and/or management.
§ Deficiency in oral and/or written communication skills.
§ Inability to relate with students in a meaningful manner.
§ Inability to conduct oneself as a professional.

Procedures for Termination: When a mentor teacher or field instructor has severe concerns regarding an individual intern, the information is shared with the Director of the Office of Clinical Experiences. This documentation may include written observations, field notes, video, or formal evaluations of the intern’s performance. The Director will determine if the case warrants immediate termination due to concerns for P-12 student safety or quality of learning. Teaching candidates should also know that school districts can independently dismiss an intern from their district. Following the dismissal, the teaching candidate is required to have a conference with the Director of the Office of Clinical Experiences to discuss the next steps. A second attempt at student teaching may be granted only at the professional discretion of the Director; however, there is no guarantee that a second attempt will be provided. If a teaching candidate appeals for another attempt at student teaching, s/he must submit an Action Plan that maps out in detail how s/he will ensure a second internship experience will be successful. This plan is reviewed by the Director and an OCE Advisory Board. If the Action Plan is not approved, a second attempt at student teaching will not be granted and termination from the program will be final.


Addressing the Areas for Improvement The EPP's quality assurance system is comprised of multiple measures that could be used to monitor candidates' progress and completers' achievements. However, the details of the system are not provided.

The EPP does not provide evidence to illustrate how relevant, verifiable, representative, cumulative and actionable measures are used to produce empirical evidence or that its interpretations of data are valid and consistent.

The EPP does not provide at least 3 cycles of data that have been analyzed for trends and used in decision-making.

It is our goal that the responses in Standard 5 provide the clear description needed to remove the Area for Improvement, specifically regarding the following:

1) What data are used to make decisions?
2) How are stakeholders involved in the decision-making process?
3) How are data collected and analyzed to evaluate decisions?
4) What process or plans have been established for analyzing and interpreting results from assessments?

Timelines, visual representations, and detailed explanations of the multiple measures in place are presented to clarify the system of the Initial Certification program. There are very few exceptions or alternatives for the program levels and areas (undergraduate, graduate, post-bachelors), and where there are, these exceptions or alternatives have been noted throughout each of the standards. The Accreditation Advisory Committee believes that preparing responses to the formative feedback report provided by CAEP has helped us to identify key diagrams and charts that illuminate our processes, procedures, and assessments and how we monitor candidates throughout their progression (see Exhibits 1.6 and 5.1-5.4). The empirical evidence referred to in the Self-Study Report has been documented above, and the rationale has been provided. Exhibit 5.16 provides a sample statistical analysis of our clinical observation. We have noted that with each phase-in plan and implementation of a newly revised assessment, we will need to conduct statistical analysis to better inform program improvements. We have provided three cycles of data, as presented in Exhibits 1.3, 1.4, 1.5, 1.11, and 1.12. Admittedly, we have completed only an initial analysis of trends and monitoring in order to report for this addendum; we will be prepared to discuss further analysis from program areas and from the Teacher Education Division's March 22, 2017 review at the On-Site visit. The third cycle of data was provided to the Accreditation Advisory Standard 1 and 5 Sub-Committee on March 13, 2017, four days prior to the Addendum due date. Addressing the Stipulations The EPP did not provide evidence that it regularly and systematically assesses performance against its goals and relevant standards, tracks results over time, tests innovations and the effects of selection criteria on subsequent progress and completion, and uses results to improve program elements and processes.

The EPP provides no measures of completer impact, including available outcome data on P-12 student growth.


As mentioned above, our goal is that this addendum has provided the necessary evidence that our program regularly and systematically assesses performance against its goals and standards. All of our data are analyzed in the Accreditation Advisory Committee, and we acknowledge the need to review, analyze, and make programmatic decisions with the involvement of the division as a whole and by program area on an annual basis. As such, each of the assessments presented in Exhibit 1.6 will be analyzed by program area and as a division at the TED retreat each May, following the schedule presented in Exhibit 5.7. Our EPP-created assessment data now report on our goals (Exhibits 1.3, 1.4, 1.5, 1.11, and 1.12). Instead of reporting mean scores, we have responded to the recommendation to analyze trends, such as how many candidates score at proficient and above, and we provide evidence that we identify those students who are challenged in multiple areas (Exhibit 5.15).

Regarding the revisions of EPP-created assessments, the key assessments – lesson plan, case study, clinical observation, and the self-study with digital video – were revised to adhere to the recommendations provided to us through the TEAC accreditation process in 2012. We were asked to provide rubrics for each assessment that are aligned to InTASC standards, and we were asked to consider valid instruments vetted nationally. We chose the Danielson Framework for Teaching because it is vetted nationally; research has been provided on its validity (Exhibit 5.11); and many of our school partners use it as their observation tool for teachers. We acknowledged in the Standard 1 and Standard 5 sections of this document that our process of implementing these assessments was not as effective as we wanted; the Office of Clinical Experiences noted the multiple concerns clinical coaches were having and the variance in the observation scores.
Therefore, as explained in the addendum, Exhibits 1.1a and 1.1b provide the agendas that document the professional development we implemented to address the norming of the Danielson Framework scoring. Additionally, the assessments and scores are presented and discussed in program areas and as a division as a regular part of the analysis cycle (see Exhibit 5.19, PowerPoint from the TED May 2016 Retreat, slides 17-31, as a sample of when the entire division and program areas reviewed and analyzed assessment data for undergraduate, graduate, and post-bachelor's candidates; also see Exhibit 5.20, Licensure Report Analysis and Findings, which presents the responses from the May 2016 Retreat program areas). Finally, Standard 4's addendum details our work to strengthen evidence of program impact. While much of our work has been in the form of phase-in plans, we have initial findings, and these are documented in the Standard 4 section of this Addendum. We acknowledge that this will also be our selected improvement, and it is our goal that by responding to the rubric CAEP provided, we have addressed the necessary tasks to ensure that our phase-in plan is effective. We have added the data analysis of program impact to our processes and systematic monitoring cycle (see Exhibits 1.6, 5.3, 5.4, and 5.7).