An Implementation and Monitoring Toolkit
Version 1.0 - 21/09/2017
Created in Partnership by Born in Bradford and Better Start Bradford.
To be used alongside the ‘Service Design of Early Years Interventions: An Operational Guide’
You are welcome to use this guide; all we ask is that you acknowledge us in your work:
Implementation and Monitoring Toolkit. 2017: Born in Bradford & Better Start Bradford.
Monitoring · Progression Criteria · Reporting & Review · Proformas
Contents
Introduction
Monitoring
Important Implementation Tools
Implementation Timelines
Progression Criteria
Appendix 1: Intervention Review Form
Appendix 2: Guidelines for reporting data
Appendix 3: Project Satisfaction Questionnaire
Introduction
The Better Start Bradford Innovation Hub (part of Born in Bradford) and Better Start Bradford
have been working together to design, implement, monitor and evaluate numerous early
years interventions. We have experienced many challenges in implementing and monitoring
so many interventions at one time, whilst working across organisational boundaries. We
have realised that it is important to think carefully about monitoring early on during
development of an intervention.
As a result, we have developed this pragmatic guide to the implementation and monitoring
process, with tools to aid successful implementation, monitoring and reporting of
interventions. This guide is designed for use by any organisation that is monitoring an
intervention. For support with the design (or redesign) and planning of a new intervention or
service, please see our 'Service Design of Early Years Interventions: An Operational Guide'.
For support with more thorough evaluation of an intervention, please see our
Better Start Bradford Innovation Hub Monitoring & Evaluation Framework.
You are welcome to use this guide and adapt it to suit your needs; all we ask is that
you acknowledge us in your work: An Implementation and Monitoring Guide for
Interventions. 2017: Born in Bradford & Better Start Bradford.
Monitoring
Monitoring involves the regular collection and recording of data relating to the
implementation of key intervention activities. The data include process-type measures such
as recruitment (e.g. the number of families attending an intervention); reach (the extent to
which people who participate in an intervention are representative of the target population);
and fidelity (the extent to which the key ingredients of an intervention have been received by
participants as often and for as long as planned). Monitoring uses standard intervention data
as specified in the data requirements during the service design process (see 'Service Design
of Early Years Interventions: An Operational Guide'). These data should be reported regularly
throughout the course of intervention delivery.
Monitoring will answer questions such as:
• How many families is the intervention seeing?
• What are the demographics of the families participating in the intervention?
• Is the intervention reaching its target group?
Monitoring data will be reviewed within a framework of progression criteria (see below),
which are agreed separately for each intervention, with the aim of supporting the
intervention’s implementation and data capture, and to inform commissioning decisions. The
table below provides a summary of monitoring and what it can and can’t tell you.
Monitoring: summary
Objectives: to facilitate periodic review of intervention inputs, activities and outputs against progression criteria; to identify the need for support around data capture and implementation.
Data to be used: standard intervention data specified in the data requirements.
Method: descriptive statistics.
Outputs & timing: quarterly and annual reports.
What it will tell you: data quality and suitability (informs the evaluability assessment); the intervention's performance against progression criteria.
What it won't tell you: the effectiveness of the intervention; why the intervention is or isn't performing as expected.
What you need to produce this output: consent routinely collected; a system for data capture; agreed progression criteria.
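
To make the 'descriptive statistics' method above concrete, the short sketch below shows the kind of summaries a quarterly monitoring report might contain. It is illustrative only: the column names (quarter, ethnicity, sessions_attended) and the figures are invented, not a required data specification.

import pandas as pd

# Invented example records: one row per participating family
records = pd.DataFrame({
    "quarter": ["Q1", "Q1", "Q2", "Q2", "Q2"],
    "ethnicity": ["White British", "Pakistani", "Pakistani", "White British", "Other"],
    "sessions_attended": [6, 4, 8, 2, 5],
})

# Recruitment: how many families is the intervention seeing each quarter?
recruitment = records.groupby("quarter").size()

# Reach: what are the demographics of the participating families?
reach = records["ethnicity"].value_counts(normalize=True)

# Fidelity: are families receiving the intended number of sessions?
fidelity = records["sessions_attended"].describe()

print(recruitment, reach, fidelity, sep="\n\n")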
Important Implementation Tools
• Service Design Manual
• Operational Plans – Set Up & delivery
• Data collection plans – data specifications, data collection tools, roles and
responsibilities
• Risk logs
• Issues logs
• Lessons learnt/successes logs
• Reporting templates
• Agreed performance criteria and indicators, e.g. progression criteria
• Evaluation plans
Implementation Timelines
[Example implementation timelines are presented here as full-page charts in the original document.]
Progression Criteria
Interventions often collect far more data than can be analysed, so it is important to select the top
three indicators of progress to help determine whether an intervention is being delivered
as planned, needs more support, or requires a commissioning/contract review.
Step 1: Selecting progression criteria
The first step is to select the progression criteria that will be used to monitor the intervention.
Selection should be based on the key objectives of the intervention, with decision making shared
between all stakeholders. An overview of suggested progression criteria to consider is included in the
table below alongside example data that could be used to monitor progress.
Progression criteria and example data
• Recruitment: anticipated number of participants to be seen/attend each year; number of participants referred who were eligible for the intervention; number of eligible participants contacted; number of eligible participants who started the intervention.
• Reach: demographic characteristics of recruited participants compared with the local population.
• Fidelity: anticipated length of programme / anticipated number of sessions per participant; % of participants receiving the intervention according to protocol.
• Implementation: anticipated number of courses per year (where applicable); anticipated and actual numbers of staff trained to deliver the programme; staff/volunteer retention.
• Completion: proportion of participants completing the intervention (criteria defined during service design); proportion of participants who withdrew/dropped out/lost contact.
• Satisfaction: individuals' satisfaction with the project.
In our work developing this process with commissioners, intervention developers, community
partners, academics and parents, there was unanimous consensus that the collection of good-quality
and complete data is a fundamental prerequisite for successful monitoring. Only three progression
criteria should be selected for each intervention. The flow chart in figure 1 will help you to determine
the most important factors for successful delivery of an intervention. Where more than three
criteria are selected, you must prioritise the most important three.
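
As a purely illustrative note-keeping aid (not a toolkit requirement), the three prioritised criteria and the indicator used to monitor each could be recorded as simply as in the sketch below; the criteria, indicator names and wording are invented examples.

# Hypothetical example: the three prioritised progression criteria for one
# intervention and the indicator that will be used to monitor each of them
selected_criteria = {
    "Recruitment":    "number of families recruited per year",
    "Reach":          "% of participants from the target wards",
    "Implementation": "number of courses delivered per year",
}

for criterion, indicator in selected_criteria.items():
    print(f"{criterion}: monitored via {indicator}")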
Figure 1: Flow chart for selecting progression criteria (summarised here as a series of questions):
• Is the intervention part of a statutory service (e.g. participants receive the intervention as part of their standard care)? If not, Recruitment is a progression criterion.
• Is the intervention offered universally? The answer determines whether Reach is a progression criterion.
• Are implementation factors (e.g. staff recruited/trained, number of courses delivered) critical to the success of the intervention? If so, Implementation is a progression criterion.
• Is fidelity to the proposed model (e.g. caseload of staff, continuity of care) critical to the success of the intervention? If so, Fidelity is a progression criterion.
• Is participants' satisfaction a key outcome for the intervention, or a critical factor in its successful delivery? If so, Satisfaction is a progression criterion.
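
For teams who prefer to see the decision logic written out, the sketch below is one hedged reading of Figure 1 as a function. The parameter names are invented, and in particular the mapping of the universal-offer question onto Reach is an assumption that should be checked against the flow chart rather than taken from this code.

def candidate_progression_criteria(statutory_service: bool,
                                   offered_universally: bool,
                                   implementation_critical: bool,
                                   fidelity_critical: bool,
                                   satisfaction_key: bool) -> list[str]:
    """One possible reading of the Figure 1 flow chart (illustrative only)."""
    criteria = []
    if not statutory_service:
        # Participants are not reached through standard care, so recruitment matters
        criteria.append("Recruitment")
    if offered_universally:
        # Assumed reading: a universal offer makes representativeness (reach) a key check
        criteria.append("Reach")
    if implementation_critical:
        criteria.append("Implementation")
    if fidelity_critical:
        criteria.append("Fidelity")
    if satisfaction_key:
        criteria.append("Satisfaction")
    return criteria  # if more than three, prioritise the most important three

# Example: a targeted, non-statutory group programme
print(candidate_progression_criteria(False, False, True, True, False))
# -> ['Recruitment', 'Implementation', 'Fidelity']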
Step 2: Agree indicators with the project
The next step is to agree targets for each of the selected progression criteria. This is based
on a three-tiered system: Green (targets met, everything going to plan); Amber (falling short of targets;
initiate discussion about potential strategies); and Red (targets not being met, serious concerns). Figure
2 depicts an example scale on which progression criteria could be monitored.
Figure 2: Example scale for monitoring progression criteria.
The green threshold should be based on information an intervention may have previously provided
for service design, e.g. the anticipated number of families recruited. This is reflected as
meeting 100% of the target for any given progression criterion. Using proportions rather than absolute
figures (e.g. recruiting 20 families per quarter) allows for greater consistency and standardisation
when monitoring multiple interventions.
Where an intervention appears to be falling short of its target, this places it in amber. Intervention
teams may benefit from additional support at this point to identify the reasons why and to formulate
action plans. The point at which an intervention falls into 'red' may indicate serious concerns or the
need for a commissioning review. The proportions that constitute red should always be time- and
context-bound (e.g. an intervention would not normally go straight into red if figures look poor for one
quarter).
The work completed by the BSB Innovation Hub recommends the following Amber to Red percentage
cut-offs for each progression criterion:
1. Recruitment: Amber/Red cut-off at 70% of anticipated target
2. Reach: Amber/Red cut-off at 70% of anticipated target
3. Implementation: Amber/Red cut-off at 85% of anticipated target
4. Fidelity: Amber/Red cut-off at 80% of anticipated target
5. Satisfaction: Amber/Red cut-off at 80% of anticipated target
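
A minimal sketch of how these cut-offs could be applied is shown below, assuming one reading of the example scale in Figure 2 (at or above 100% of target is Green, between the cut-off and 100% is Amber, below the cut-off is Red); the example figures are invented and the thresholds should always be agreed with the intervention team.

# Suggested Amber/Red cut-offs from the list above, as % of the anticipated target
AMBER_RED_CUTOFF = {
    "Recruitment": 70,
    "Reach": 70,
    "Implementation": 85,
    "Fidelity": 80,
    "Satisfaction": 80,
}

def rag_rating(criterion: str, actual: float, anticipated: float) -> str:
    """Rate performance against a progression criterion (illustrative reading of Figure 2)."""
    percent_of_target = 100 * actual / anticipated
    if percent_of_target >= 100:
        return "Green"   # target met, everything going to plan
    if percent_of_target >= AMBER_RED_CUTOFF[criterion]:
        return "Amber"   # falling short: discuss potential strategies and support
    return "Red"         # serious concerns or possible commissioning review

# Example: 16 families recruited against an anticipated 20 (80% of target)
print(rag_rating("Recruitment", actual=16, anticipated=20))  # -> "Amber"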
Key issues to remember
• Moving from Green to Amber should instigate support for the intervention team, not criticism.
• Moving into Red should also result in support and instigate a discussion in which context is
considered. For example, if recruitment targets are not met because of unexpected staff absence,
or intervention implementation falls behind due to restructuring of services, this should be taken into
account.
• The cut-offs can apply to multiple interventions, but targets should be agreed in conjunction with
individual intervention teams. Targets should always be realistic, feasible and measurable.
Step 3: Progression criteria in action
Interventions should receive regular performance reviews where progression criteria are also
monitored (see appendix 1 for the quarterly review proforma).The data required to monitor an
intervention’s progress against the progression criteria should be agreed as part of the service design
process (see the example minimum data sets for implementation monitoring in our ‘Service Design of
Early Years Interventions: An Operational Guide’. Further guidance on reporting data and
maintaining anonymity can be found in appendix 2.
Visual depictions of intervention performance can be helpful for comparing trends over
different time periods. In figure 3 we provide example graphs showing how progression
criteria may be reported visually. For example, an intervention may be in red for recruitment for two
quarters of a year and still exceed the target by the end of quarter 4. Similarly, an intervention's reach
can be compared both over time and across different ethnic groups.
Figure 3: Example progression criteria graphs for reporting recruitment and reach
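
The sketch below shows one way a recruitment graph of the kind in Figure 3 could be produced; the quarterly figures, the target and the styling are invented for illustration only.

import matplotlib.pyplot as plt

quarters = ["Q1", "Q2", "Q3", "Q4"]
recruited = [12, 14, 22, 28]      # invented quarterly recruitment figures
quarterly_target = 20             # invented anticipated recruitment per quarter

plt.bar(quarters, recruited, color="steelblue", label="Families recruited")
plt.axhline(quarterly_target, color="green", linestyle="--", label="Quarterly target (100%)")
plt.axhline(quarterly_target * 0.7, color="red", linestyle=":", label="Amber/Red cut-off (70%)")
plt.ylabel("Number of families")
plt.title("Recruitment against target, by quarter")
plt.legend()
plt.show()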
Appendix 1: Intervention Review Form
[Service name, date and version number]
Completed By: [name] & [date]
Project Name
Reporting Period [date] to [date]
Date of Review Meeting:
Names of Attendees:
The following three reports accompany this intervention review document [linked here]:
1. Data report: single-page summary (dashboard style, including progression criteria) and full report
2. Implementation report from the intervention
3. Finance report from the intervention
Summary of findings from Report
Discussion of findings
Agreed Actions (SMART: Specific, Measurable, Achievable, Realistic, Time-defined)
Escalation of any issues arising
Example progression criteria
1. Recruitment
a. Comparison of number of women/families recruited to intervention with the anticipated number recruited
b. No. of eligible families not recruited or refused consent
2. Reach (including demographics, waiting lists)
3. Implementation
a. Staffing
b. Facilities/Resources
c. Deviations from manual/service design
d. Other issues affecting delivery
4. Outcomes
a. Short term
b. Long-term
5. Finances
6. Other risks identified
7. Successes (what has gone well)
8. Challenges / lessons learnt
9. AOB
Appendix 2: Guidelines for reporting data and protecting anonymity and confidentiality
We need to ensure that there is no breach of individuals' anonymity or confidentiality when
reporting quantitative or qualitative data. This is important in formal reports and academic
papers, and also in less formal communications such as social media posts or emails.
There are three key disclosure risks that we need to guard against:
• Ensure that no individuals can be identified from the data reported.
• Ensure that no information about an individual is revealed that is not already in the public domain.
• Ensure that it is not possible to combine outputs from the same or different sources (e.g. tables, graphs) to reveal information about an individual (residual disclosure).
The risk of disclosure will vary depending on a wide range of factors, including the number of
people interviewed or in a survey, and the number of different tables produced from the
same dataset. The researcher needs to assess this risk every time they report data. They
also need to consider the consequences of disclosure for individuals, especially for sensitive
information (e.g. teenage pregnancy data).
Reporting quantitative information
All data reported should be anonymised. No identifiable information (e.g. names, NHS
numbers) should be reported. Postcodes are potentially identifiable and should be shortened
or grouped.
If tables and graphs have small numbers, we can reduce the risk of disclosure by
suppressing numbers (e.g. writing <5 instead of the actual number) or by merging rows or
columns (e.g. by merging two ethnicity categories together to create a larger number). We
should also be aware of the risk of residual disclosure when a new table or graph could be
combined with information in the public domain from the same or another source, in order to
identify individuals. Sensitive information (e.g. mental health or teenage pregnancy data)
should be treated with extra care.
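
As an illustration of the suppression rule described above (not a prescribed procedure), the sketch below replaces any count under five with "<5" before a table is reported; the area names and counts are invented.

import pandas as pd

# Invented counts of recruited families by area
counts = pd.Series({"Ward A": 42, "Ward B": 3, "Ward C": 17, "Ward D": 2},
                   name="families_recruited")

def suppress_small_counts(series: pd.Series, threshold: int = 5) -> pd.Series:
    """Replace counts below the threshold with '<threshold' to reduce the risk of disclosure."""
    return series.apply(lambda n: f"<{threshold}" if n < threshold else str(n))

print(suppress_small_counts(counts))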
Reporting qualitative information
All data reported should be anonymised (e.g. using a pseudonym or participant number). We
also need to take care not to identify participants or individuals referred to through contextual
information, findings or quotes (e.g. by reporting job titles, or experiences of an event that
few individuals were involved in). In evaluation of local services it may be difficult to
completely protect anonymity of small teams or individuals. Therefore it is important to be
transparent in the informed consent process to ensure participants understand this. It is also
important to judge the most appropriate approaches to protecting anonymity for the given
evaluation, in relation to the aims, participants and findings.
Further information is available from: National Statistics (2006) Review of the Dissemination of Health Statistics: Confidentiality Guidance. London: Office for National Statistics.
Appendix 3: Project Satisfaction Questionnaire
Thank you for completing this questionnaire. We really appreciate your feedback about the
[NAME OF PROJECT]. Please note your answers will not affect the support you will receive
now or in the future.
Please tick the box which best describes your answer to each question.
1. Overall, I feel that the [NAME OF PROJECT] was helpful for [me/my family/my child]
☐ Strongly disagree  ☐ Disagree  ☐ Neither agree nor disagree  ☐ Agree  ☐ Strongly agree
2. I am satisfied with the level of support [I/my family/child] received
☐ Strongly disagree  ☐ Disagree  ☐ Neither agree nor disagree  ☐ Agree  ☐ Strongly agree
3. The information given was useful to [me/my family/my child]
☐ Strongly disagree  ☐ Disagree  ☐ Neither agree nor disagree  ☐ Agree  ☐ Strongly agree
4. It was easy for me to get onto the [NAME OF PROJECT]
☐ Strongly disagree  ☐ Disagree  ☐ Neither agree nor disagree  ☐ Agree  ☐ Strongly agree
5. I would recommend the [NAME OF PROJECT] to my friends and family
☐ Strongly disagree  ☐ Disagree  ☐ Neither agree nor disagree  ☐ Agree  ☐ Strongly agree
6. Overall, I am happy with the [NAME OF PROJECT]
☐ Strongly disagree  ☐ Disagree  ☐ Neither agree nor disagree  ☐ Agree  ☐ Strongly agree
Thank you for completing this questionnaire. If you have any further comments about [NAME
OF PROJECT] please write them in the box below.
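
Questionnaire responses could feed the Satisfaction progression criterion as, for example, the proportion of respondents answering 'Agree' or 'Strongly agree' to each statement. The sketch below is illustrative only; the response data are invented and the scoring rule is an assumption, not a toolkit requirement.

from collections import Counter

# Invented responses to question 1 of the questionnaire
responses_q1 = ["Strongly agree", "Agree", "Agree",
                "Neither agree nor disagree", "Strongly agree", "Disagree"]

def percent_satisfied(responses: list[str]) -> float:
    """Percentage of respondents who agreed or strongly agreed with a statement."""
    counts = Counter(responses)
    return 100 * (counts["Agree"] + counts["Strongly agree"]) / len(responses)

print(f"Question 1: {percent_satisfied(responses_q1):.0f}% agreed or strongly agreed")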