
Posted on 14-Nov-2021


Beginning Educator Case Study (BECS)

Narrative from the Case Study:

Completers chosen for the case study will be asked to provide the EPP with a work sample that includes pre-test/post-test data points for a unit or course of study taught by the completer during the current academic year. The unit or course of study must be aligned to Arkansas (AR) state standards within the grade level and content area for which the completer was prepared to teach by the EPP. Alternatively, completers may, when available, use formative assessment data points and analyze them against summative assessment data points provided by state standardized assessments. In addition, notes about the integration of technology into the lesson to enhance learning will be requested. An examination of the pre-test/post-test data (or the formative/summative data), together with the guided reflection questions, will show the completer's impact on student learning: the completer analyzes the growth of student knowledge for a significant difference after the completion of the unit. (NOTE: Pre-test and post-test data were chosen over external test instruments deemed more valid and reliable, such as Arkansas benchmark exams, due to the limits on subjects tested and other considerations, such as ongoing changes in the test options being used in the schools.)

In the case study conducted with our completers, all participating teachers provided unit or lesson plans aligned to specific state content standards. We have three elementary teachers, a 7th grade English teacher, and a 10th grade economics teacher in this set of data. The EPP reviewed paired data charts to ensure that there were matched pairs, and any student who did not have both pre- and post-test results was removed before running a t-test. Six sets of paired data for Year 1, ranging from 8 to 25 students in a class, were analyzed from the two sets of completers (2019-20, 2016-17). t-Tests were run on the pre- and post-test data, and all six sets of case study data showed significant learning, with a p-value of less than .05 for all pairs of supplied data. Case study participants also reflected on the learning experience used for the study.

1. Qualitative responses to the first reflection question, asking whether or not the lesson was successful, included terms like “…as evidenced by…”, “students applied the previously taught…”, “apply their knowledge”, “listening to their conversations”, “Connected with”, and “were able to see real-world application”.

2. Qualitative responses to the second reflection question, asking about the level of student engagement, included terms like “highly engaged”, “effective strategy”, “…evidence of this was…”, “(students) inquired”, “real-world relevance”, and “straddling the line of strategic compliance and engagement”. A participant discussed the challenges of COVID and virtual student engagement. This gave the EPP an indication of where we could support our candidates in working with strategies for student engagement in an online modality.

3. Qualitative responses to the third reflection question, asking what the teacher would do differently the next time they taught the lesson, included terms like “practice”, “collaboration with peers”, “groups smaller”, “Shorter span of time” (lesson), “clarify assignment instructions”, “incorporate more movement”, and “add relevant news and events to… connect real-world events to the class content”.


4. The fourth reflection question asked what resources were used for the lesson (and whether they would use the same or other resources the next time they taught it). Teachers mentioned that they would likely use the same resources, tweak some, or perhaps add an additional tool. Various resources were mentioned according to content area, lesson, age group, etc. Thoughtful answers were provided, indicating intentionality in the selection of resources.

5. Qualitative responses to the fifth reflection question, which asked how the classroom culture impacted student learning, included terms like “minimized distraction”, “respectful boundaries”, “mutual respect”, “understand our norms and practices”, “respect and kindness”, “helpful to each other”, “high level of respect for others”, and “seamless transitions”.

Impact on K-12 Learning and Development

Year 1

t-Test: Paired Two Sample for Means

                              Pre-test results   Post-test results
Mean                          0.4700             0.7745
Variance                      0.0847             0.0577
Observations                  83                 83
Pearson Correlation           0.7248
Hypothesized Mean Difference  0
df                            82
t Stat                        -13.6916
P(T<=t) one-tail              0.0000
t Critical one-tail           1.6636
P(T<=t) two-tail              0.0000
t Critical two-tail           1.9893

Learning from pre-test to post-test was significant, with p < .05.
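As an illustrative cross-check (not the EPP's actual worksheet), the reported paired t value can be recovered from the summary numbers in the table alone, using the standard identity for the variance of a difference of correlated variables.

```python
import math

# Year 1 case-study summary statistics, as reported in the table above.
mean_pre, mean_post = 0.4700, 0.7745
var_pre, var_post = 0.0847, 0.0577
r, n = 0.7248, 83

# Var(pre - post) = Var(pre) + Var(post) - 2 * r * SD(pre) * SD(post)
var_diff = var_pre + var_post - 2 * r * math.sqrt(var_pre) * math.sqrt(var_post)

# Paired t statistic: mean difference over its standard error.
t_stat = (mean_pre - mean_post) / math.sqrt(var_diff / n)
print(round(t_stat, 2))  # agrees closely with the reported t Stat of -13.6916
```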

[Chart: Beginning Educator Case Study Year 1 Pre- and Post-test Chart — Pre-Test Frequency and Post-Test Frequency (0-30 students) plotted against score bins from 10% to 100%.]


Year 3

t-Test: Paired Two Sample for Means

                              Pre Test   Post Test
Mean                          0.6005     0.8764
Variance                      0.0484     0.0111
Observations                  39         39
Pearson Correlation           0.7696
Hypothesized Mean Difference  0
df                            38
t Stat                        -11.1564
P(T<=t) one-tail              0.0000
t Critical one-tail           1.6860
P(T<=t) two-tail              0.0000
t Critical two-tail           2.0244

Learning from pre-test to post-test was significant, with p < .05.

[Chart: Beginning Educator Case Study Year 3 Pre- and Post-test Chart — Pre-Test Frequency and Post-Test Frequency (0-20 students) plotted against score bins from 10% to 100%.]


Wilbur D. Mills Educational Service Cooperative 2020-2021 Student Impact Data (Years 1, 2, and 3) Responses

Harding University does not know the total number of surveys sent from the Co-Op or the response rate for this survey. Harding received the data gathered from the survey, attributed to Harding as the university through which the respondents completed their program for initial licensure. Harding University was named in 11, 5, and 13 responses each year, respectively, as the EPP that trained the teachers (self-reported). It is unclear which initial teacher preparation program the novice teachers completed - traditional or alternate route. Regardless, the EPP is thankful to receive any data from our completers in their first three years of teaching in the field.

The Impact on Student Learning information asked in the survey includes the number of students tested, the pre-test class average before the chosen unit, and the post-test class average after the chosen unit. Survey respondents are given choices based on 5-percentage-point categories (examples include below 60%, 60-64%, 75-79%, and 95-100%) for both the pre- and post-test questions. To interpret the data as provided to us, we coded the average class scores of the K-12 students as follows.

Coding Information

1 = below 60%

2 = 60-64%

3 = 65-69%

4 = 70-74%

5 = 75-79%

6 = 80-84%

7 = 85-89%

8 = 90-94%

9 = 95-100%
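The banding above is a simple step function, which can be sketched as a small helper (the function name and the use of integer truncation are our own conventions, not part of the survey instrument):

```python
def code_score(pct):
    """Map a class-average percent score to the survey's 1-9 code."""
    if pct < 60:
        return 1
    if pct >= 95:
        return 9
    # Codes 2-8 cover the 5-point bands 60-64, 65-69, ..., 90-94.
    return 2 + (int(pct) - 60) // 5

print(code_score(58), code_score(62), code_score(77), code_score(97))
```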

Years 1 – 3 Aggregate Graph of Pre-test and Post-test Learning


Year 1 Responses

t-Test: Paired Two Sample for Means Year 1

                              Pre-test   Post-test
Mean                          2          6.091
Variance                      5.8        3.091
Observations                  11         11
Pearson Correlation           0.567
Hypothesized Mean Difference  0
df                            10
t Stat                        -6.708
P(T<=t) one-tail              0.000
t Critical one-tail           1.812
P(T<=t) two-tail              0.000
t Critical two-tail           2.228

Learning from pre-test to post-test was significant, with p < .05.


Year 2 Responses

t-Test: Paired Two Sample for Means Year 2

                              Pre-test   Post-test
Mean                          3          6.4
Variance                      4.5        2.3
Observations                  5          5
Pearson Correlation           0.933
Hypothesized Mean Difference  0
df                            4
t Stat                        -8.5
P(T<=t) one-tail              0.001
t Critical one-tail           2.132
P(T<=t) two-tail              0.001
t Critical two-tail           2.776

Learning from pre-test to post-test was significant, with p < .05.

[Year 2 Graph: pre-test and post-test class averages (before and after the unit was taught), plotted as coded percent-correct range (y-axis, 0-10) for each of the 5 participants (x-axis).]


Year 3 Responses

t-Test: Paired Two Sample for Means Year 3

                              Pre-test   Post-test
Mean                          2.154      6.615
Variance                      1.474      1.590
Observations                  13         13
Pearson Correlation           0.260
Hypothesized Mean Difference  0
df                            12
t Stat                        -10.679
P(T<=t) one-tail              0.000
t Critical one-tail           1.782
P(T<=t) two-tail              0.000
t Critical two-tail           2.179

Learning from pre-test to post-test was significant, with p < .05.

[Year 3 Graph: pre-test and post-test class averages (before and after the unit/skill was taught), plotted as coded percent-correct range (y-axis, 0-10) for each of the 13 participants (x-axis).]


Completer Impact on K-12 Student Learning Analysis using Transformed Mean Value-Added Scores (VAS) provided by the Arkansas Department of Education (ADE)

Addresses CAEP Standards R4.1, R5.4

Overall scores include the subjects of English Language Arts (ELA), Math, and Science (if available) and are reported as a weighted mean. The ADE transforms this mean score for ease of interpretation. The following explanation is provided: “A score of 80 equals the value-added score of 0. A standard deviation of 35 is used to spread the scores around 80 so that school mean scores will typically range from 65-95.” The formula used in Arkansas School Ratings is: Transformed Mean VAS = (Mean VAS x 35) + 80.

The ADE has provided the following guide to make meaning of the VAS.

Transformed Score = 80: On average, your completers’ students met expected growth.

Transformed Score greater than 80: On average, your completers’ students are exceeding expected growth. The higher the score above 80, the greater the magnitude with which students of your completers exceeded their growth expectation.

Transformed Score less than 80: On average, students of your completers are not meeting expected growth in achievement. The lower the score below 80, the greater the degree to which students, on average, failed to meet expected growth.
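The transformation formula and the ADE's reading guide can be sketched together as two small helpers (the function names and the interpretation strings are ours, chosen for illustration):

```python
def transform_vas(mean_vas):
    """Apply the ADE rescaling: Transformed Mean VAS = (Mean VAS x 35) + 80."""
    return mean_vas * 35 + 80

def interpret(transformed):
    """Read a transformed score against the ADE guide."""
    if transformed > 80:
        return "exceeding expected growth"
    if transformed < 80:
        return "not meeting expected growth"
    return "met expected growth"

# A raw mean VAS of 0 maps to 80, i.e., students met expected growth.
print(transform_vas(0.0), interpret(transform_vas(0.1)))
```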

The ADE provides three years of three cohorts of data (as it becomes available). Due to the privacy agreement signed by Harding University with the ADE and Harding University’s respect for the privacy of our completers and their K-12 students, the following are simply observations from the actual report received from the ADE. NOTE: One limitation to consider when analyzing the data is that the data provided are based on where the teacher prepared for licensure and what area he or she was teaching in the K-12 school - not necessarily that he or she is teaching in the area for which they were initially trained with the EPP.

What conclusions have been made from reviewing the data?

2015 Completer Cohort

Over the three-year period for our 2015 Completer Cohort, the calculated VAS for the three subjects ranged from 79.92 to 100. (The EPP considers the 100 to be an outlier for that subject and that year.) The mean VAS for the three years and three subjects is 82.96 (80.52 with the outlier removed). Based on this information, the EPP completers’ students are meeting expected growth rates.

The average VAS by all state providers for the three years and the three subjects for 2015 Completers ranged from 77.94 to 85.69. (The EPP considers the 85.69 to be an outlier for that subject and that year.) The mean of the calculated average VAS for the three years and three subjects is 79.98 (79.17 with the outlier removed). Based on this information, the EPP is slightly above the average for all state providers for completers in the 2015 cohort.

In addition, it is important to note that the number of teachers whose scores were included in the calculation dropped 11% over the three-year period (73 to 65). Since access to this data is new, the EPP will continue to look at this calculation to see if there is an attrition trend in the coming years.


2016 Completer Cohort

Over a three-year period for our 2016 Completer Cohort, the calculated VAS for the three subjects ranged from 76.9 to 81.88. The mean VAS for the three years and three subjects is 80.12. Based on this information, the EPP completers’ students are meeting expected growth rates.

The average VAS by all state providers for two years and for the three subjects for 2016 Completers ranged from 78.66 to 87.74. (The EPP considers the 87.74 to be an outlier for that subject and that year.) The mean of the calculated average VAS for the three years and three subjects is 80.26 (79.32 with the outlier removed). Based on this information, the EPP is just slightly above the average for all state providers for completers in the 2016 cohort (when removing the outlier); however, there is no significant difference between the EPP and the average of all state providers in expected student growth.

In addition, it is important to note that there was a 3.7% increase in the number of teachers from the cohort of completers included in the VAS calculations from the first year to the third year. The EPP will look at the report next year to see if these teachers remain in the classroom in the areas for which scores are calculated for VAS (ELA, Math, and Science).

2017 Completer Cohort

No data were provided for the 2017 Completer Cohort for Harding University.

2018 Completer Cohort

No data were provided for the 2018 Completer Cohort for Harding University.

2019 Completer Cohort

No report was created for the 2019 Completer Cohort for Harding University (due to COVID-19).

NOTE: This data comes from a research group in Arkansas that links teachers to students based on the course codes assigned to students and the teachers assigned to those courses. Denis Airola, PhD, Office of Innovation for Education (OIE) at the University of Arkansas.