Appendix L.2: Additional Surveys

This sub-appendix summarizes the data collection and analysis methods used to develop the selected evaluation findings discussed in Section 4 that were derived from surveys other than those administered for the 2014 APR. After a brief overview of methods applicable to all surveys, we present methods and graphed ratings for the following surveys: targeted TA post-event surveys addressing IC training, a post-event survey of the July 2014 meeting of Center partners and intensive TA recipients, and a survey addressing targeted TA recipients’ use of the IC tool on the NIC.

General Survey Methods

Design and Administration

All surveys were administered online. Respondents were asked to provide ratings and open-ended feedback. Rating items employed a 4-point Likert-type scale, with higher ratings representing more positive responses. In general, respondents were asked to rate the extent to which they agree with statements related to quality, relevance, and usefulness according to the scale where 1 = strongly disagree, 2 = disagree, 3 = agree, and 4 = strongly agree. For measures of impact, the scale was 1 = no impact, 2 = some impact, 3 = strong impact, and 4 = very strong impact.

Data Analysis Methods

Survey items with rating scales were quantitatively analyzed. Distributions for key items were graphed to allow for visual analysis of patterns (see figures below). Open-ended responses were qualitatively analyzed to add specificity and depth of understanding to the ratings.
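For illustration only, the sketch below shows the kind of distribution graphing described above, using hypothetical ratings for a single item on the 4-point scale. It is not the Center's actual analysis code, and the pandas/matplotlib tooling is an assumption.

```python
# Illustrative sketch: graph the distribution of 4-point ratings for one item.
# The ratings below are hypothetical, not actual survey data.
import pandas as pd
import matplotlib.pyplot as plt

SCALE = {1: "Strongly disagree", 2: "Disagree", 3: "Agree", 4: "Strongly agree"}

ratings = pd.Series([4, 3, 3, 4, 2, 4, 3, 3, 4, 4, 3, 4, 3])  # hypothetical responses

# Count responses at each scale point, keeping all four points even if one is unused.
counts = ratings.value_counts().reindex(list(SCALE), fill_value=0)
counts.index = [SCALE[k] for k in counts.index]

counts.plot(kind="bar", rot=0)
plt.ylabel("Number of respondents")
plt.title("Distribution of ratings for one item (hypothetical data)")
plt.tight_layout()
plt.show()
```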

IC Training in Targeted TA States

Post-event surveys were administered following on-site targeted TA visits. For two states, these meetings included IC training. In these instances, post-event surveys included three items asking respondents to rate the extent to which they agreed that the IC training was of high quality (i.e., well-organized and systematic), relevant to the work they would undertake, and useful for learning to use the online system. Surveys were administered immediately after each state’s meeting (October and December 2013). Figure 11 shows ratings of IC training from 13 respondents in these two states.


Figure 11. Ratings of IC Training

July 2014 Partners Meeting

A post-event survey was administered to Center partners and intensive TA recipients who attended the meeting held on July 1, 2014. Twenty respondents submitted the survey. Rating items addressed specific segments of the meeting, the extent to which the goals of the meeting were met, and the meeting overall. Figure 12 presents ratings of the presentations given by representatives from each intensive state. Participants were asked to rate the extent to which the state presentations (a) were of high quality, (b) were helpful in understanding the efforts of each state and relevant to their roles, and (c) would be useful in thinking about how to link state/organizational efforts.


Figure 12. State Presentations

Similar ratings were obtained for the issue discussions (Figure 13) and communication plan process (Figure 14).

Figure 13. Ratings of Issue Discussions


Figure 14. Ratings of Communication Plan Process

Figure 15 shows ratings of the extent to which the following meeting goals were met:

• Develop cross-state and cross-partner collaboration on key issues
• Identify potential strategies for addressing key issues around educator preparation policy and reform to improve college and career readiness of students with disabilities
• Identify products and services that partner organizations provide and that can be leveraged to address key issues raised by state partners
• Begin a communication plan for disseminating information about state leadership team plans and accomplishments
• Identify ways that representatives from partner organizations can assist with communication in the states


Figure 15. Ratings of Extent to Which Goals of Meeting Were Met

Figure 16 shows ratings of the meeting overall. The quality item was, “Overall, the meeting was well-organized and had a good flow.” The relevance item was, “Overall, the information presented and activities were relevant to our reform efforts.” The usefulness item was, “Overall, the meeting was a good use of time.”

Figure 16. Ratings of Partner Meeting Overall


NIC IC Tool Use in Targeted TA States

In July 2014, a survey was sent to IHE representatives from targeted TA states who, Center staff reported, were familiar with the IC tool on the NIC. The purpose of this survey was to gather preliminary feedback on the online tool, how the results of IC reviews are being used, and the impact of the IC tool and targeted TA. Five respondents submitted the survey, but only four completed the ratings of impact. Figure 17 presents ratings of the IC tool on the NIC. Figure 18 presents ratings of impact. See Section 4 for a summary of open-ended feedback.

Figure 17. Ratings of IC Tool on NIC


Figure 18. Ratings of Impact

 


Appendix L.3: Outputs Analysis

The Center evaluation plan discusses three sets of Center outputs: (a) activities, (b) participants, and (c) products and services. Evidence of the effectiveness of these outputs is summarized in Table A according to four indicators: evidence of reaching intended audiences, evidence that products and services are of high quality, evidence that audiences find activities relevant, and evidence that audiences find activities useful. Additional information is reported in Section 4.

Table A. Evidence of Effectiveness of Center Outputs

Indicator: Evidence of reaching intended audiences
Data sources: Surveys of
• Center partners (professional organizations)
• Targeted TA recipients
• Intensive TA recipients
Summary of selected analytic findings: The Center has reached out to each of its three key constituencies (i.e., SEA staff, IHE staff, and other Center consumers) as identified in OSEP’s RFA and the Center’s cooperative agreement. The Center also analyzed APR survey data to confirm that a purposive sample of each constituency rated Center products and services as being of high quality, relevant, and useful in aligning state professional learning systems.

Indicator: Evidence that products and services are of high quality
Data sources: Surveys on
• Knowledge development and universal TA products
• Targeted TA services
• Intensive TA services
• NIC tools and resources
Summary of selected analytic findings: On the 2014 APR, the Center met seven of seven performance measures on quality (1.a., 1.e., 2.a., 3.a., 3.d., 4.b., and 5.b.). Additional data corroborate these findings while also identifying specific areas that respondents recommended for improvement (e.g., clarifying which MOU provisions can and cannot be negotiated).

Indicator: Evidence that audiences find activities relevant
Data sources: Surveys on
• Knowledge development and universal TA products
• Targeted TA services
• Intensive TA services
• NIC tools and resources
Summary of selected analytic findings: On the 2014 APR, the Center met seven of seven performance measures on relevance (1.b., 1.g., 2.b., 3.b., 3.e., 4.c., and 5.c.). Additional data corroborate these findings while also providing respondent suggestions to tailor product topics and content to be more relevant to different segments of target audiences (e.g., products tailored for SEA vs. IHE staff).

Indicator: Evidence that audiences find activities useful
Data sources: Surveys on
• Knowledge development and universal TA products
• Targeted TA services
• Intensive TA services
• NIC tools and resources
Summary of selected analytic findings: On the 2014 APR, the Center met six of seven performance measures on usefulness (1.c., 1.i., 2.c., 3.c., 3.f., and 5.d. were met; 4.d. was not met). Analyses of additional qualitative data offered suggestions to help improve the usefulness of intensive TA (e.g., increasing the role of state steering committees in drafting MOU provisions for state leadership teams to consider).

Note: This table relates performance measures reported in the 2014 APR to indicators of quality, relevance, and usefulness. Reported measures not included in this table are related to impact (1.j. and 2.d.), subcontractor agreements (5.a.), work plan progress (6.b.), and cost of TA (7.b.). Impact items are discussed in the section on Outcomes.


Appendix L.4: Outcomes Analysis

The Center’s evaluation plan discusses anticipated outcomes for the Center’s operation, including short-term, intermediate, and long-term outcomes for target audiences who receive Center products and services. The summative evaluation has been designed to directly measure outcomes in a sample of IHEs participating in intensive TA. In order to gauge our progress thus far in achieving short-term and intermediate outcomes in our first cohort of intensive TA states, we analyzed artifacts that were produced as we delivered intensive TA over the last year. Each state was assigned a summary status or stage of progress for each outcome: Not Yet Started, Commitment/Planning (which encompasses Commitment, Planning, or both), and In Progress. Commitment is indicated by relevant language in a state’s signed MOU. Planning may be evidenced by documented plans for related meetings, workgroups, or other efforts; the most common Planning artifacts are relevant blueprint goals, objectives, or tasks. Summary statuses reflect consensus between TA and evaluation staff. Tables B and C provide (a) a numbered list of short-term and intermediate outcomes, respectively, (b) the extant data reviewed to determine progress, and (c) a summary of progress for each Cohort 1 intensive TA state. Summary graphs are found in Section 4.

Figure 19. Artifact Analysis of Short-Term Outcomes

Figure 20. Artifact Analysis of Intermediate Outcomes


Table B. Summary of Progress in Achieving Short-Term Outcomes in Intensive TA States

Short-Term Outcome 1: SEAs, LEAs, and IHEs increase awareness of CEEDAR TA tools and how to use them
Evidence analyzed: Artifacts of meetings (live or virtual), training events or resources, and other communications with CEEDAR staff.
• California: In Progress. Overview of ICs provided during in-person SLT meeting.
• Connecticut: In Progress. Tools have been introduced via SSC meeting, IHE webinar, SLT meeting, and professional development day for IHEs.
• Florida: In Progress. Tools have been discussed or demonstrated via SSC meeting, webinar, SLT meeting, and summer institute for IHEs across the state.
• Illinois: In Progress. Tools have been described via webinar for the SSC.
• South Dakota: In Progress. Tools have been discussed or demonstrated via SSC meeting, webinar, and in-person and virtual SLT meetings.

Short-Term Outcome 2: Completed SEA policy self-assessments guide TA plans
Evidence analyzed: MOUs; artifacts of NIC policy survey completion, other needs assessment, and blueprint (BP; TA plan) development.
• California: In Progress. 6 SLT members completed the policy survey; other needs assessment activities during meetings; BP entered on the NIC.
• Connecticut: In Progress. Policy review completed as part of Network for Transforming Educator Preparation (NTEP) process and used to guide BP development.
• Florida: In Progress. 6 SLT members completed the policy survey; certification workgroup informally surveyed stakeholders; BP developed via SLT virtual meetings and entered on NIC.
• Illinois: Commitment/Planning. Signed MOU indicates commitment to evaluating and revising policies and practices. Conducted survey on priority areas for reform and discussed needs and priorities at SLT meeting.
• South Dakota: In Progress. 7 SLT members completed the policy survey; SLT meeting addressed needs and prioritization; BP entered and substantially revised.

Short-Term Outcome 3: At IHEs, completed TE and LE needs assessments guide TA plans
Evidence analyzed: MOUs and BP artifacts.
• California: Commitment/Planning. Signed MOU indicates commitment to evaluating and revising programs. IHE needs assessments (see note 1) and blueprints to be completed at 9/18/14 SLT meeting.
• Connecticut: Commitment. Signed MOU indicates commitment to evaluating and revising programs.
• Florida: Commitment/Planning. Signed MOU indicates commitment to evaluating and revising programs. State BP goal addresses program revision.
• Illinois: Commitment. Signed MOU indicates commitment to evaluating and revising programs.
• South Dakota: Commitment/Planning. Signed MOU indicates commitment to evaluating and revising programs. State BP goal addresses preparation reform; differentiating IHE-level tasks.

Short-Term Outcome 4: IHEs gain increased knowledge and skill in implementing ICs and reform rubrics
Evidence analyzed: Artifacts of trainings, relevant meetings, and BPs.
• California: In Progress. IC training completed in person, April 2014.
• Connecticut: In Progress. Professional development day with IHE teams on use of the ICs and developing a BP.
• Florida: In Progress. Related BP task. IC training through webinar and summer institute.
• Illinois: Not Yet Started.
• South Dakota: In Progress. Multiple overviews of ICs, but no formal training.

Short-Term Outcome 5: IHEs gain increased knowledge of and skill in implementing and teaching EBPs
Evidence analyzed: MOUs and artifacts of BPs and relevant meetings or trainings.
• California: Commitment/Planning. Signed MOU indicates commitment to improving EBP learning opportunities. Some knowledge needs to be identified at 9/18/14 SLT meeting.
• Connecticut: Commitment/Planning. Signed MOU indicates commitment to improving EBP learning opportunities. Professional Development Institute on EBP related to literacy planned for 10/6/14.
• Florida: In Progress. Commitment and planning indicated through MOU and BP. ICs (see Outcome 4) increase knowledge of EBPs.
• Illinois: Commitment. Signed MOU indicates commitment to improving EBP learning opportunities.
• South Dakota: Commitment. Signed MOU indicates commitment to improving EBP learning opportunities.

Short-Term Outcome 6: SEAs and IHEs begin designing improved teacher and leader evaluation systems that use SWD outcome data
Evidence analyzed: MOUs.
• California: Commitment/Planning. Signed MOU indicates commitment. BP objective related to measuring special education teachers’ performance.
• Connecticut: Commitment/Planning. Signed MOU indicates commitment. BP objective to “collect and analyze assessment data on candidate demonstration of EBP.”
• Florida: Commitment. Signed MOU indicates commitment.
• Illinois: Commitment. Signed MOU indicates commitment.
• South Dakota: Commitment. Signed MOU indicates commitment.

Note 1: Self-assessments will be conducted using the rubric from Collaborative programs in general and special teacher education: An action guide for higher education and state policy makers (Blanton & Pugach, 2007).


Table C. Summary of Progress in Achieving Intermediate Outcomes in Intensive TA States

Intermediate Outcome 1: SEAs, LEAs, and IHEs increase knowledge of how to initiate and sustain successful preparation systems reform
Evidence analyzed: Artifacts of live and virtual meetings, resources provided to participants, relevant planning activities, etc.
• California: In Progress. SLT meetings provide related guidance, including activity on visioning and targeting efforts.
• Connecticut: In Progress. SEA meeting was held in November 2013 to discuss coherence among and leveraging of reform initiatives.
• Florida: In Progress. SLT meetings provide related guidance; initial meeting included activity on visioning and targeting efforts.
• Illinois: In Progress. SLT meeting in August 2014 included discussion of coherence among and leveraging of reform initiatives.
• South Dakota: In Progress. Initial SLT meeting included activity on visioning and targeting efforts.

Intermediate Outcome 2: SEAs align licensure, accreditation, and evaluation policy
Evidence analyzed: MOUs; artifacts of alignment plans and actions (e.g., in blueprints [BPs] or revised policies).
• California: Commitment/Planning. Signed MOU indicates commitment to alignment; BP goal addresses alignment of certification policy and preparation programs.
• Connecticut: Commitment/Planning. Signed MOU indicates commitment to alignment; meeting of SLT and Educator Preparation Advisory Council (EPAC) with focus on alignment planned for 11/21/14.
• Florida: Commitment/Planning. Signed MOU indicates commitment to alignment; BP goal addresses alignment of certification policy and preparation programs.
• Illinois: Commitment. Signed MOU indicates commitment to alignment.
• South Dakota: Commitment/Planning. Signed MOU indicates commitment to alignment; BP goal addresses alignment of certification policy and preparation programs.

Intermediate Outcome 3: IHEs infuse EBP content in TE and LE programs
Evidence analyzed: MOUs; artifacts of EBP infusion plans or actions (e.g., in BPs or revised courses or programs).
• California: Planning. BP objective related to general education program standards that address effectively working with SWDs.
• Connecticut: Planning. IHE BP goals on improving the inclusion of EBP around literacy and culturally responsive teaching.
• Florida: Planning (see note 2). Related goals and objectives in state BP. UDL workgroup established.
• Illinois: Not Yet Started.
• South Dakota: Planning. State BP goal addresses EBP in reading.

Intermediate Outcome 4: IHEs improve TE and LE pedagogy
Evidence analyzed: Artifacts of improved pedagogy (e.g., revised syllabi or programs) or related planning.
• California: Planning. Feedback to IHEs about their BPs will emphasize use of ICs to infuse EBP pedagogy into both general and special education programs.
• Connecticut: Not Yet Started.
• Florida: Not Yet Started.
• Illinois: Not Yet Started.
• South Dakota: Not Yet Started.

Intermediate Outcome 5: SEAs and IHEs pilot improved teacher and leader evaluation systems
Evidence analyzed: Artifacts of revised system planning or implementation.
• California: Not Yet Started.
• Connecticut: Not Yet Started.
• Florida: Not Yet Started.
• Illinois: Not Yet Started.
• South Dakota: Not Yet Started.

Note 2: Florida is currently planning to revise its special education certification policies before focusing on program revision.


Appendix L.5: Website Evaluation

Website Analytics

Google Analytics data for the CEEDAR website over the past 12 months are presented in Figure 21 below. These analytics lead to several important conclusions. First, although the number of sessions per month has increased steadily, the number is disappointing and remains below expectations. (The same might be said for the number of Newsletter subscribers: although that number has increased fourfold in the past 12 months, 1,410 is a disappointing total.) Pageviews have also trended upward in 2013-14, but their course has been less regular. The peak in pageviews in April 2014 is consistent with the national call application deadline. Because we see a need for improvement in website usage, we plan to report these data monthly to the Leadership Team and discuss performance, trends, and options for increasing website use.

Figure 21. 2013-2014 Google Analytics for ceedar.com Sessions


Website Usability Evaluation

Efforts to improve the Center website began in winter 2014 when the website team solicited feedback from Center staff on the website's usability. The team used this information to plan a website redesign and then solicited feedback from IHE and SEA representatives via a focus group and usability survey. Methods and findings are described below.

Data Collection Methods

Participants  

Two IHE and two SEA representatives were selected based on Center staff’s knowledge of their use of the current website and their willingness to provide feedback. The participants represented four states: two intensive TA states, one targeted TA state, and one state not currently participating in direct TA services. The same participants completed the usability survey.

Focus Group

The website team developed a protocol in collaboration with website and technology experts from UF, AIR, and the IRIS Center. Participants were asked to familiarize themselves with the website before the focus group, thinking about what worked well and what could be improved. The focus group was conducted in June 2014 by a Center staff member via telephone and Adobe Connect, which allowed the facilitator to demonstrate specific areas or features of both the current website and the preliminary redesign. Questions solicited feedback on several aspects of the current and redesigned websites, including content, appearance, organization and navigability, and clarity of the Center’s mission. The session was recorded and transcribed. Additionally, notes were taken by the lead website developer and an evaluation team member. One participant was unable to join the focus group due to technical difficulties and later provided feedback via email.

System Usability Scale

The System Usability Scale (SUS) is a valid, reliable survey for quickly assessing usability, even with a small sample (http://www.usability.gov/how-to-and-tools/methods/system-usability-scale.html). It consists of 10 statements, 5 positively phrased and 5 negatively phrased, about the system’s usability (see Table D for a full list of statements). Participants are asked to indicate their agreement with each statement using a 5-point Likert scale ranging from Strongly Disagree to Strongly Agree. The questionnaire items were entered into Survey Monkey for electronic data collection. A link was emailed to participants after the focus group.

Data Analysis Methods

Focus Group

The evaluation team reviewed the transcript and notes to identify themes in participants’ responses, as well as key suggestions for improvements.


System Usability Scale

Survey responses were scored using standard SUS procedures (http://www.usability.gov/how-to-and-tools/resources/templates/system-usability-scale-sus.html). Responses are scored on a 0-4 point scale that takes statement phrasing into account so that higher scores reflect higher usability. A score of 0 reflects strong disagreement with a positive statement or strong agreement with a negative statement. A score of 2 reflects a neutral response. A score of 4 reflects strong disagreement with a negative statement or strong agreement with a positive statement. Thus, a total of 40 points is possible across all 10 items. This sum is multiplied by 2.5 to yield an SUS score range of 0-100. Scores above 68 are considered above average. In addition to calculating the average SUS score across the four participants, the evaluation team examined the distribution of agreement ratings by item and across all items. It is not recommended, however, that individual item scores be interpreted in isolation.
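As a concrete illustration of this procedure, here is a minimal scoring sketch in Python. It follows the standard SUS convention that odd-numbered items are positively phrased and even-numbered items negatively phrased (consistent with the statements listed in Table D); the example responses are hypothetical.

```python
# Minimal sketch of standard SUS scoring for one respondent.
# Raw responses use the 5-point agreement scale: 1 = Strongly Disagree ... 5 = Strongly Agree.

def sus_score(responses):
    """Return the 0-100 SUS score for one respondent's 10 raw item responses."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for item, r in enumerate(responses, start=1):
        if item % 2 == 1:       # odd items are positively phrased
            total += r - 1      # maps 1-5 onto 0-4 points
        else:                   # even items are negatively phrased
            total += 5 - r      # reverses the scale so more points = higher usability
    return total * 2.5          # scale the 0-40 point sum to 0-100

# Hypothetical respondent who mostly agrees with positive items and disagrees with negative ones.
print(sus_score([4, 2, 4, 1, 3, 2, 4, 2, 5, 2]))  # 77.5
```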

The SUS will be administered repeatedly over time to look for changes as the website is revised.

Key Findings

Focus Group

Overall, participants reported that they liked the current website and the demonstrated portions of the proposed redesign. They generally liked the current website's appearance and felt it was navigable and user friendly. Particular resources they had found useful included the webinars and the innovation configurations. One user specifically mentioned liking the Tweets. They provided specific suggestions for improving organization to make important features easier to find. Broader suggestions included:

• Improve the clarity of the Center’s mission and intended audience, including its relevance for various stakeholder groups such as general educators
• Organize resources by topic area or stakeholder group
• Incorporate links (e.g., within graphics) to clarify terminology or provide additional information on a topic or component
• Clarify the National Call and all three levels of TA
• Explain the relevance or purpose of "important links"
• Focus on providing usable tools (e.g., syllabi and tools to help IHE deans and administrators engage in reform)
• Share examples of what other states are doing by
  o Sharing success stories or best practices, as well as lessons learned (e.g., challenges and strategies to overcome them)
  o Showcasing states working with CEEDAR by describing their efforts, sharing tools and processes they have used, and involving them in webinars

System Usability Scale

The overall SUS score, averaged across the four participants, was 71.9, suggesting above-average usability. Figure 22 shows the distribution of item scores across all items. As described above, each item is awarded 0-4 points, with higher scores reflecting higher usability. More than two thirds of ratings were better than neutral (2 points).


Figure 22. Item-Level Scores Across All Items and Raters

Table D shows the number of participants selecting each agreement rating for each item. These ratings were converted to the 0-4 scale described above to yield a mean item score. This shows that all but one item had an average rating that was better than neutral.


Table D. Mean Scores and Rating Distribution by Item

Each item is listed with its mean item score (0-4) and the counts of agreement ratings from Strongly Disagree to Strongly Agree.

Item 1. "I think that I would like to use this system frequently." Mean item score: 2.5. Ratings: 0, 1, 1, 1, 1
Item 2. "I found the system unnecessarily complex." Mean item score: 3.25. Ratings: 3, 0, 0, 1, 0
Item 3. "I thought the system was easy to use." Mean item score: 3.25. Ratings: 0, 0, 1, 1, 2
Item 4. "I think that I would need the support of a technical person to be able to use this system." Mean item score: 3.25. Ratings: 2, 1, 1, 0, 0
Item 5. "I found the various functions in this system were well integrated." Mean item score: 2. Ratings: 1, 1, 0, 1, 1
Item 6. "I thought there was too much inconsistency in this system." Mean item score: 3. Ratings: 1, 2, 1, 0, 0
Item 7. "I would imagine that most people would learn to use this system very quickly." Mean item score: 2.5. Ratings: 0, 1, 0, 3, 0
Item 8. "I found the system very cumbersome to use." Mean item score: 3. Ratings: 2, 1, 0, 1, 0
Item 9. "I felt very confident using the system." Mean item score: 2.75. Ratings: 0, 1, 1, 0, 2
Item 10. "I needed to learn a lot of things before I could get going with this system." Mean item score: 3.25. Ratings: 2, 1, 1, 0, 0
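As a check on these conversions, the sketch below recomputes the mean item scores and the overall SUS score directly from the rating counts in Table D, assuming the standard SUS phrasing direction (odd items positive, even items negative), which matches the statements above.

```python
# Recompute Table D mean item scores and the overall SUS score from the rating counts.
# Counts run from Strongly Disagree to Strongly Agree; odd items are positively phrased,
# even items negatively phrased.
counts = {
    1: [0, 1, 1, 1, 1], 2: [3, 0, 0, 1, 0], 3: [0, 0, 1, 1, 2],
    4: [2, 1, 1, 0, 0], 5: [1, 1, 0, 1, 1], 6: [1, 2, 1, 0, 0],
    7: [0, 1, 0, 3, 0], 8: [2, 1, 0, 1, 0], 9: [0, 1, 1, 0, 2],
    10: [2, 1, 1, 0, 0],
}

mean_scores = {}
for item, c in counts.items():
    # Convert each agreement level to 0-4 points, reversing negatively phrased items.
    points = range(5) if item % 2 == 1 else range(4, -1, -1)
    mean_scores[item] = sum(p * k for p, k in zip(points, c)) / sum(c)

overall = sum(mean_scores.values()) * 2.5
print(mean_scores)        # item 1 -> 2.5, item 2 -> 3.25, ..., matching Table D
print(round(overall, 1))  # 71.9, matching the reported overall SUS score
```

Summing the per-item means and multiplying by 2.5 gives the same result as averaging the four participants' individual SUS scores, since both calculations are linear in the ratings.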

Summary  

Both the SUS and the focus group suggest that CEEDAR’s original website has good overall usability, with some areas that can be improved. The focus group provided specific suggestions that have informed the website’s redesign. Ongoing evaluation will gauge the success of the redesign and guide future improvements.