

Final Summary Evaluation: State Projects Serving Individuals with Deaf-Blindness

October 23, 2006

NCDB Sponsored Webinar

Presented by Richard Zeller

Presentation Overview

Self-assessment and verification review purpose and design

Summary of Year One (37) and Year Two (11) project self-assessments

Summary of on-site reviews

Feedback from projects and review teams

Evaluator Recommendations

Discussion

Evaluation Requirement

During year 2, each project must...

"conduct a comprehensive self-evaluation. The evaluation must include a review of the degree to which the project is meeting proposed objectives and goals and an evaluation of outcome data... In addition, the Department of Education intends to conduct a limited number of on-site evaluations based on a stratified randomized sample of sites.”

(RFP, page C-4)

Evaluation Purposes

Summative project evaluation questions: Are projects' goals and objectives being achieved? Do projects address each RFP priority? Do projects have appropriate outcome data?

Formative project evaluation: Provide a continuous improvement process for individual projects to use.

National report - both summative and formative: provide information OSEP can use to guide needed system improvements.

Evaluation Constraints

48 projects, single and multi-state

Staffing from partial to several FTE

Common design to allow summaries

Resources come primarily from the projects

Assess whether projects are addressing RFP priorities

Evaluation Design

“Work scope” standards: Priorities (a) - (i) and General Requirements (a) - (c)

Priority questions:

What types of strategies are used?

Is work being completed in a timely fashion?

Are intended results being achieved?

Are outcome data available (efforts and effects)?

Are improvement plans in effect?

“General Requirements”: Are these requirements being appropriately addressed?

Evaluation Design (continued)

Self-assessments parallel MSIP’s Continuous Improvement and Focused Monitoring System (now the SPP & APR).

Verification Reviews (site visits): during this evaluation, reviews became a check on the self-evaluation process and a way to provide TA to the project.

Adjustments made in both self-assessment and review designs during implementation.

Self-Assessment Summary

Priorities (a) - (e):

Strategies

Timeliness

Results

Data - Effort and Effect

Adjustments/Future Plans

Priorities (g) - (h) and General (a) - (c):

Are the priorities addressed? (Yes/No)

Are there standards that apply?

Strategies Described

The relative use of “ongoing” strategies was higher in year two, and they were also more evenly distributed across projects than in year one.

Strategy type    Year One    Year Two
Linear           1%          1%
Cyclical         15%         17%
Ongoing          36%         45%
Combined         48%         38%

For all timeliness item ratings:

Rating                         Year One    Year Two
active/behind schedule         15%         19%
not implemented/on schedule    1.4%        3%
active/on schedule             83%         78%

For all result item ratings:

Rating                              Year One    Year Two
exceeding expectations              13%         13%
meeting expectations                72%         65%
below & approaching expectations    12%         20%
well below expectations             1%          1%
cannot rate                         3%          1%

For all effort item ratings:

Ratings cluster: 3 projects in year one and 2 in year two rated “extensive” data for more than 6 items

Rating                                      Year One    Year Two
no data                                     2%          2%
some level of effort data                   13%         33%
some quality of effort data                 6%          7%
some level & quality of effort data         62%         45%
extensive level & quality of effort data    18%         13%

For all effect item ratings:

Rating          Year One    Year Two
no data         12%         20%
outcome data    71%         68%
impact data     16%         12%

Clarification:

• Outcomes are the immediate results of your assistance (e.g., teacher skills gained in training)

• Impacts are the results your clients achieve when they apply what you have taught them (e.g., they teach and children learn communication skills)

Overall Item Ranking (hi to lo)

Year Two

(a)(3) R-B practices

(a)(1) State capacity

(d) Collaboration

(a)(4) Provider skills

(b)(1) Census

(a)(5) Address child/family needs

(e) Disseminate

(b)(2) Assess critical child needs

(a)(2) Systemic change

(c)(1) Evaluate effectiveness

(b)(3) Assess state needs

(c)(3) Advisory evaluation design

(c)(2) Measure child outcomes

Year One

(c)(3) Advisory evaluation design

(d) Collaboration

(b)(1) Census

(a)(1) State capacity

(a)(3) R-B practices

(a)(4) Provider skills

(b)(3) Assess state needs

(a)(5) Address family/child needs

(e) Disseminate

(a)(2) Systemic change

(c)(1) Evaluate effectiveness

(b)(2) Assess critical child needs

(c)(2) Measure child outcomes

Areas Needing More Attention?

(c)(2) Measure child outcomes

(a)(2) Systemic change

(b)(3) Assess state needs

(b)(2) Assess child needs

(c)(1) Evaluate effectiveness

(c)(3) Advisory evaluation design?

Adjustments/Future Plans

All projects/all strategies: about 81% (Year One) v. 63% (Year Two) of strategies are to “continue as proposed”

In Year One, 8 projects accounted for 55% of planned changes

In Year Two, all projects plan some changes in strategy, with 7 adopting new strategies

The most common areas of adjustment across both years were priorities (c)(1), evaluation, and (c)(2), measurement of child outcomes.

Priorities (g) & (h)

Affirmative Response         Year One    Year Two
(g) OSEP Directed TA         27%         0%
(g) Web-based TA             92%         82%
(g) Community of Practice    92%         82%
(h) Advisory Standards       97%         100%
(h) Act on Advisory Recs     100%        100%
(h) Advisory Change          38%         36%


General Priorities (a) - (c)

General Priority Area                    Year One    Year Two
(a) Employ people with disabilities?     59%         73%
(a) Try to employ people?                68%         73%
(a) Advance people with disabilities?    62%         36%
(a) Change employment practices?         35%         36%
(b) Involve people with disabilities?    100%        100%
(c) Does project have a website?         86%         91%
(c) Is website accessible?               81%         82%
(c) Planning website improvement?        95%         45%

How Projects Relate Priorities to Work Design (Part 1, Year 2)

Objs or Goals    Priority Cites    Priorities/Obj    Objs/Priority
33               41                1.2               1.7
20               38                1.9               1.1
23               88                3.8               1.2
34               164               4.8               1.8
20               120               6.0               1.1
6                42                7.0               0.3
26               182               7.0               1.4
14               116               8.3               0.7
3                27                9.0               0.2
14               133               9.5               0.7
10               111               11.1              0.5
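How to read the derived columns (an illustrative check using the first row; the Objs/Priority denominator appears to be the 19 priority and general-requirement areas tallied on the next slide):

Priorities/Obj = 41 priority cites / 33 objectives ≈ 1.2

Objs/Priority = 33 objectives / 19 priority areas ≈ 1.7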

Total Priority Cites by 11 Projects (Year 2, Part 1)

(a)(1) State capacity                       117
(a)(2) Systemic change                      76
(a)(3) R-B practices                        96
(a)(4) Provider skills                      87
(a)(5) Assess family & child needs          64
(a)(6) Other                                17
(b)(1) Census                               34
(b)(2) Assess critical child needs          60
(b)(3) Assess state needs                   54
(c)(1) Evaluate effectiveness               84
(c)(2) Measure child outcomes               49
(c)(3) Advisory evaluation                  42
(d) Collaboration                           70
(e) Disseminate                             80
(g) OSEP specified TA                       26
(h) Maintain Advisory                       23
(a) Employ Individuals with disabilities    7
(b) Involve Individuals                     43
(c) Web site accessibility                  33

Verification Visit Summary

Sites Visited (in order):

Year One: IN, FL, NJ, NY, WA, MO, CO, MT, CA

Year Two: KY, MI, NC, TN

Process: Team of 3 reviewers each rated their agreement with the Project’s ratings for each priority and offered comments on each priority.

Revisions to the process and report form were made during the first year (simplifications) and again before year two (in response to suggestions).

Site Review Participation

Site    # Staff    # Stakeholders
IN      4          20
FL      5          12
NJ      5          8
NY      5          19
WA      4          14
CO      4          9
MT      4          10
MO      5          7
CA      8          12
KY      6          5
MI      5          13
NC      2          12
TN      3          5

Who were the reviewers and how many sites did they visit?

Reviewer Name       Year One    Year Two
Zambone             1           -
Sharpton            1           1
Bove                2           -
McLetchie           2           2
Rafalowski Welch    2           -
Syler               2           2
Fankhauser          3           1
Dalke               4           3
Rachal              4           2
Steveley            6           1

Did site reviews tend to validate project self-assessments?

Agreement with project: each reviewer's rating of agreement with the project's self-assessment rating on each item

Agreed with Project    Year One    Year Two
Strongly Agree         85%         60%
Mostly Agree           10%         25%
Somewhat Agree         3%          2%
Somewhat Disagree      2%          1%
Strongly Disagree      <1%         0%

Were reviewer ratings (after discussion) reliable?

Agreement here was defined as “all reviewers rated the same way on a given item”

Year One: 32 actual disagreements, or 96.9% complete agreement on all items

Year Two: 1 disagreement, or 99%+ agreement on all items

Disagreements among reviewers occurred at only 5 sites, most of them at a single site
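A rough way to relate these figures (a sketch of the likely computation; the total item count is not stated on the slide): complete agreement rate = 1 - (items with any reviewer disagreement / total items rated), so Year One's 32 disagreements against 96.9% agreement would imply on the order of 1,000 rated items across the nine sites.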

How did projects and reviewers view the value of these two processes?

Self-assessment and improvement planning is a necessary function for the system of state projects.

The past and current processes and forms are complex and redundant, given the way the work is organized.

Both processes (self-assessments and review visits) have value, but both need substantial redesign.

Projects’ View of the Value of the Self-Assessments

Project Ratings (# reporting)     Year 1    Year 2
High value                        9         3
Moderate value                    18        7
Some value                        9         1
More trouble than it was worth    1         -

[Figure: Year Two]

What Some Projects Liked

Prompted communication with state program sites

Forced staff to consider value of work

The forms forced project to focus and limit narrative

Improvement over earlier self-evaluation processes

Aligned proposal to RFP, so not hard to use

Format was easy and more logical [than year 1]

Separate narrative allowed project to show how priorities were woven into goals & objectives

What Projects Didn’t Like

Priorities, criteria & proposed work not aligned

Evaluation rules were not in the RFP

Accessibility problems with form

Redundancy (e.g., attachments & narratives)

Word functions don’t work in the template form

Too many reports for one year

Too much time - takes away from TA

Form accessibility (couldn’t enlarge print?)

Narrative, priorities & ratings in three documents

Format - impossible to match priorities to activities

Review Team Recommendations to Sites:

Expand partnerships (B, C, 619, others) - others must do the work of system change

Family networking/support (parent-to-parent)

Define/structure TA and intent - child change, local capacity building, systems change

Systematize data collection (census, needs, efforts and effects on individuals/systems)

Use evaluation for program improvement

Review Team Suggestions

In future evaluation & review processes:

Better self-assessment instructions

Consolidate Progress Report & Project Evaluation

Clarify evaluation standards in the RFP

Cluster priorities (eliminate redundancies)

Value of the review process is the TA provided

Effort & effect need better definition

Change forms: neither Year 1 nor Year 2 worked for all

Align Priorities and evaluation model

Evaluator Recommendations

The next RFP should have 5 program priorities (e.g., skill development, system capacity/change, child census/performance, family services, dissemination of R-B practices)

Combine self-assessment and reporting in a single system with prescribed indicator measures for each priority for all projects

For larger projects (>$500K) adopt standard 3+2 procedures

Discussion: Were Evaluation Purposes Achieved?

Summative project evaluation questions: Are projects' goals and objectives being achieved? Do projects address each RFP priority? Do projects have appropriate outcome data?

Formative project evaluation: Provide a continuous improvement process for individual projects to use.

National report - both summative and formative: provide information OSEP can use to guide needed system improvements.
