TRANSCRIPT
CSB Service Delivery Model Review
Benchmarking Report, 26th July 2019
CONTENTS
Executive Summary
Background and Approach
Current State Observations & Insights
Insight Themes & Conclusions
Appendix
Executive Summary
Benchmarking the CSB provides insights into how it can improve its current state delivery model and thus suggests some considerations for its future state.
Executive Summary | Benchmarking Objective and Assumptions

OBJECTIVE
The primary objective of the benchmarking exercise is to help identify key opportunities for improvement within the CSB's current service delivery model (SDM) and to suggest the implications they have for the future state SDM.

ASSUMPTIONS
1. The CSB's primary activities are as an L1 and L2 IT support provider.
  • L3 support is assumed to lie outside the CSB.
  • Metrics that characterise whole-of-IT operations (e.g. system availability) are not the focus of benchmarking.
2. Where necessary, inferences will be made from a sample set.
  • If a full population data set is not available, an inference has been made from a sample.
3. Definitions for metrics and their components are aligned.
  • As far as possible, metrics from benchmark sources and CSB data have been aligned.
  • Wherever this was not possible but a comparison was still necessary, differences between definitions or caveats have been stated.
4. Insight accuracy is dependent on data accuracy.
  • Insights drawn from metrics where data quality/accuracy issues have been identified should be treated as having the same level of accuracy.

APPROACH
The benchmarking exercise has been conducted by building observations and insights about the current state and leveraging them to form conclusions on what could be incorporated to improve the future state.
• Observations: What does the data show? What trends can be observed?
• Insights: What might be the root cause for this observation to occur?
• Conclusions: What could be done in the future state to address this insight?
Six dimensions across service demand and supply have been used to structure current state observations and insights.
Executive Summary | Insight Dimensions

INSIGHT DIMENSIONS (grouped into Demand Analysis, Supply Analysis and Customer Outcome Analysis)

Service Demand (Demand Analysis)
• Intent: Seeks to understand how demand for the CSB's services has been evolving and to contextualise it against industry peers.
• Metrics: Total tickets received by the CSB; HHS breakdown of tickets received; Tickets received per End User.

DSC Performance (Supply Analysis)
• Intent: Seeks to provide insight into the DSC's ability to meet current demand for its services.
• Metrics: Tickets resolved by the CSB; Average speed to answer (call wait times); Abandonment Rate.

DSC Capability (Supply Analysis)
• Intent: Seeks to understand the DSC's level of capability to resolve customer contacts.
• Metrics: First Contact Resolution Rate; First Level Resolution Rate; Average Handling Time.

DSC Capacity (Supply Analysis)
• Intent: Seeks to understand whether the DSC's resource capacity is adequate to meet service demand.
• Metrics: Agent Utilisation.

DPT Performance (Supply Analysis)
• Intent: Seeks to understand the DPT's ability to meet service demand and whether resource capacity is adequate.
• Metrics: Tickets resolved by the DPT; Mean Time to Resolve.

Customer Satisfaction (Customer Outcome Analysis)
• Intent: Seeks to identify whether the CSB's customers are satisfied with the level of service being provided.
• Metrics: Customer Satisfaction; Incidents resolved within SLAs.
Based on the CSB's metrics in comparison to benchmarks, six key conclusions have been developed around how the CSB may aim to actively reduce workload on its teams whilst improving its effectiveness and optimising its capacity. No conclusions could be drawn about customer satisfaction due to limitations in the data.
Executive Summary | Insights and Conclusions

INSIGHT THEMES
• Demand for DSC services is stable but well below benchmark averages.
• DSC workload appears to be higher than its resource capacity or capability.
• DSC appears to be less effective at resolving contacts compared to benchmarks.
• DSC appears to need additional capacity, particularly around peak periods.
• DPT may need to reorganise capacity towards resolving ticket related tasks.
• CSB customer satisfaction appears to be good, although there are limitations in the data.

CONCLUSIONS
1. Develop problem management capabilities: The CSB should consider opportunities to introduce problem management to identify and address underlying problems in order to reduce incidents (and therefore service demand).
2. Continue to automate: The CSB should continue to automate or divert (e.g. self-service) low complexity, transactional tasks to reduce service demand on DSC agents.
3. Improve knowledge management and quality of training: Consider improving usage of tools, knowledge management and quality of training for DSC agents in order to be more effective at resolving tickets at the first level and first contact.
4. Focus DPT capacity: There may be opportunities to deploy DPT capacity in order to assist with service demand management strategies or with some of the first level workload.
5. Additional call handling resources around peak periods: Additional call handling resources might be needed in order to handle current service demand, particularly around peak periods (depending on the efficacy of the previous recommendations).
6. Improve cross CSB and eHealth collaboration: Closer collaboration between CSB teams and the wider eHealth organisation may be needed to optimise and improve customer experience, by reducing ticket handoffs and thus delays in resolving customer tickets.
1 | Background and Approach
Benchmarking the CSB provides insights into how it can improve its current state delivery model and thus suggests some considerations for its future state.
Background and Approach | Benchmarking Objective and Assumptions

OBJECTIVE
The primary objective of the benchmarking exercise is to help identify key opportunities for improvement within the CSB's current service delivery model (SDM) and to suggest the implications they have for the future state SDM.

ASSUMPTIONS
1. The CSB's primary activities are as an L1 and L2 IT support provider.
  • L3 support is assumed to lie outside the CSB.
  • Metrics that characterise whole-of-IT operations (e.g. system availability) are not the focus of benchmarking.
2. Where necessary, inferences will be made from a sample set.
  • If a full population data set is not available, an inference has been made from a sample.
3. Definitions for metrics and their components are aligned.
  • As far as possible, metrics from benchmark sources and CSB data have been aligned.
  • Wherever this was not possible but a comparison was still necessary, differences between definitions or caveats have been stated.
4. Insight accuracy is dependent on data accuracy.
  • Insights drawn from metrics where data quality/accuracy issues have been identified should be treated as having the same level of accuracy.

APPROACH
The benchmarking exercise has been conducted by building observations and insights about the current state and leveraging them to form conclusions on what could be incorporated to improve the future state.
• Observations: What does the data show? What trends can be observed?
• Insights: What might be the root cause for this observation to occur?
• Conclusions: What could be done in the future state to address this insight?
IT support can be broken into service demand and service supply. When these factors are balanced, positive customer outcomes can be reached and optimal performance will be seen in metrics.
Background and Approach | Analysing IT Support Metrics

Service Demand: Workload from customers/end-users that IT support needs to service, irrespective of its source or form.

Service Supply
• Performance: IT support's ability to provide an adequate level of service that meets demand for its services. Consists of an organisation's capability and capacity.
• Capability: IT support's efficacy in processing customer issues. A product of skills, processes, experience, knowledge, tools and collaboration within the organisation.
• Capacity: IT support's baseline level of resources that is delivering services to customers. Includes optimising the utilisation of resources.

Customer Outcome
• Customer Satisfaction: Customers' satisfaction with IT support, first level support and IT as a whole organisation.

• Over-performing: over-resourced or over-skilled organisation; positive customer outcomes.
• Optimally performing: optimally resourced and skilled organisation; positive customer outcomes.
• Under-performing: under-resourced or under-skilled organisation; negative customer outcomes.

[Chart: service demand (quantity of contacts) plotted against service supply (cost/price per contact), with outcomes matched against supply showing the under-performing, "optimal" performance and over-performing points.]
Six dimensions across service demand and supply have been used to structure current state observations and insights.
Background and Approach | Insight Dimensions

INSIGHT DIMENSIONS (grouped into Demand Analysis, Supply Analysis and Customer Outcome Analysis)

Service Demand (Demand Analysis)
• Intent: Seeks to understand how demand for the CSB's services has been evolving and to contextualise it against industry peers.
• Metrics: Total tickets received by the CSB; HHS breakdown of tickets received; Tickets received per End User.

DSC Performance (Supply Analysis)
• Intent: Seeks to provide insight into the DSC's ability to meet current demand for its services.
• Metrics: Tickets resolved by the CSB; Average speed to answer (call wait times); Abandonment Rate.

DSC Capability (Supply Analysis)
• Intent: Seeks to understand the DSC's level of capability to resolve customer contacts.
• Metrics: First Contact Resolution Rate; First Level Resolution Rate; Average Handling Time.

DSC Capacity (Supply Analysis)
• Intent: Seeks to understand whether the DSC's resource capacity is adequate to meet service demand.
• Metrics: Agent Utilisation.

DPT Performance (Supply Analysis)
• Intent: Seeks to understand the DPT's ability to meet service demand and whether resource capacity is adequate.
• Metrics: Tickets resolved by the DPT; Mean Time to Resolve.

Customer Satisfaction (Customer Outcome Analysis)
• Intent: Seeks to identify whether the CSB's customers are satisfied with the level of service being provided.
• Metrics: Customer Satisfaction; Incidents resolved within SLAs.
Background and Approach | Metric Definitions

Benchmarked Metrics
• Tickets received per end-user (TPEU): Average number of tickets received per end-user over a year period.
• Average Handling Time (AHT): Average duration of agent handling time from connecting with the customer until the end of "after call work" (i.e. inclusive of both).
• Average Speed to Answer (ASA): Average duration between when the customer connects to the IT Service Desk (e.g. via IVR) and when a live agent picks up. Averaged over all incoming handled calls.
• Abandonment Rate (AR): Percentage of incoming calls that hang up or disconnect before they are answered.
• First Level Resolution Rate (FLR): Percentage of total tickets resolved by the DSC.
• First Contact Resolution (FCR): Percentage of DSC tickets received from customers that are resolved upon initial contact (excludes any type of hand-off, including warm or blind). Initial contact is defined as tickets resolved within 1 hour and with no hand-offs.
• Mean Time To Resolve (MTTR): Elapsed time taken to resolve an incident within the ticketing system (resolve is defined as close) which requires the assistance of an on-site technician (e.g. DPT).
• Agent Utilisation: Average proportion of handle time to time spent on shift for each DSC agent.
• Incidents resolved within SLA agreed timeframes: Proportion of total tickets resolved within SLA timeframes as specified within ServiceNow.
• Customer Satisfaction: Percentage of satisfied or very satisfied responses to end-of-ticket surveys (i.e. the sum of satisfied and very satisfied responses).
• Annual Contacts per DSC FTE: Total annual inbound contacts handled by DSC agents per DSC FTE.

Supporting Metrics
• Total tickets resolved by CSB/DPT: Total tickets that are resolved by the CSB/DPT over a 3 year period.
• HHS breakdown of tickets received: Comparison of non-ieMR and ieMR HHS service demand, after removing the effects of user growth (an ieMR HHS is defined as having at least one hospital with a full stack ieMR rolled out; a non-ieMR HHS does not have a hospital with full stack ieMR rolled out).
• Total tickets received by CSB: Total tickets that are received by the CSB over a 3 year period. Breakdown is an estimate only.
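To make the call-handling definitions above concrete, a minimal sketch of ASA, AR and AHT over a hypothetical contact extract follows; the column names (abandoned, wait_seconds, handle_seconds) are assumptions for illustration only, not the actual Amazon Connect fields used in this review.

```python
import pandas as pd

def average_speed_to_answer(calls: pd.DataFrame) -> float:
    """ASA: mean wait time (seconds) across handled incoming calls only."""
    handled = calls[~calls["abandoned"]]
    return handled["wait_seconds"].mean()

def abandonment_rate(calls: pd.DataFrame) -> float:
    """AR: share of queued incoming calls that disconnect before being answered."""
    return calls["abandoned"].mean()

def average_handling_time(calls: pd.DataFrame) -> float:
    """AHT: mean handle time (talk plus after-call work) for handled calls."""
    handled = calls[~calls["abandoned"]]
    return handled["handle_seconds"].mean()
```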
2 | Current State Observations & Insights
Service Demand
Demand for CSB services across teams and channels (including the DSC and DPT) has been stable across the past 2 years.
Tickets received by the CSB
BENCHMARK DATA
[Chart: Historical Volume of Total CSB Tickets Received (Opened)¹ – monthly ticket volumes received by channel (DSC (estimated), DPT (estimated), Online* (estimated), Total CSB), by date when ticket opened, Jan-16 to Jan-19. Breakdowns are estimates only. See Appendix for detail on how the ServiceNow ticket data was used to generate the graphs shown.]

CURRENT STATE OBSERVATIONS
• Volumes of tickets received for CSB services have been relatively stable over the past 2 years (since January 2017).
  • This trend is consistent across all channels, including the DSC and DPT.
• There is a clear annual cycle to the volume of tickets received.
  • A significant drop-off in volumes occurs in December of each year.

CURRENT STATE INSIGHTS
Overall demand for CSB services has been relatively stable across channels and teams.
• The introduction of new systems and services does not appear to have affected the overall volume of tickets being received by the CSB.
Demand for the DPT's services has been relatively stable over the last 2 years, which means that the DPT's overall ticket based workload has not changed. Further analysis (e.g. from problem management) may be required to maintain control of overall demand.
• Regular end user device hardware refresh cycles may be helping to limit growth in demand for DPT services.

1 CSB Data | ServiceNow Ticket Extract Received 29/4/2019 from CSB Manager, Reporting and Analysis, SM&I
* incl. OPS, Portal, Auto resolved (e.g. Amazon Connect), Email and OITS
Introduction of full ieMR to a HHS does not appear to have a significant impact on the service demand for the CSB’s services
from the relevant HHS.
HHS breakdown of tickets received (1 of 2)
BENCHMARK DATA
[Chart: Non full-ieMR Example HHS (Cairns & Hinterland) - CSB Received Tickets¹ – monthly ticket volumes received from Cairns & Hinterland HHS, Jan-17 to Jan-19.]
[Chart: Full ieMR Example HHS (Mackay) - CSB Received Tickets¹ – monthly ticket volumes received from Mackay HHS, Jan-17 to Jan-19.]
Mackay average decline per year (Jan 17 to Mar 19): ~4.86%
Cairns average decline per year (Jan 17 to Mar 19): ~1.35%

CURRENT STATE OBSERVATIONS
• There is no distinguishable change in tickets received resulting from the introduction of a full stack ieMR hospital to a HHS.
  • Metro South HHS appears to have a notable increase in total tickets received (explored further under 'HHS breakdown of tickets received (2 of 2)').
• There are no significant patterns that distinguish a non full stack ieMR HHS's volume of tickets received over time from an ieMR HHS's tickets received over time.
• Mackay and Cairns & Hinterland HHSs have been selected as examples since they receive similar volumes of tickets.

CURRENT STATE INSIGHTS
ieMR has no discernible impact on the overall volume of tickets received from a HHS, meaning it is unlikely that the introduction of ieMR to new HHSs will significantly change workload for the CSB.
• Tickets received is not, by itself, a direct measure of workload for the CSB; however,
  • the data suggests that there should not be a significant peak in volumes as a result of ieMR, and
  • relatively low volumes of ieMR tickets received may not affect CSB workload since the CSB is not the primary responsible resolver.

1 CSB Data | ServiceNow End-of-Ticket Survey Results Extracts Received 29/4/2019 from CSB Manager, Reporting and Analysis, SM&I
Although the Metro South HHS has seen an increase in the volume of tickets received, this is likely due to the rise in the number of end users rather than the introduction of new ieMR systems.
HHS breakdown of tickets received (2 of 2)

BENCHMARK DATA
[Chart: Full ieMR HHS (Metro South) - CSB Received Tickets¹ – monthly ticket volumes received from Metro South HHS, Jan-17 to Jan-19.]
[Chart: Full ieMR HHS (Metro South) - CSB Received Tickets per End User¹ – monthly ticket volumes received per end user from Metro South HHS, Jan-17 to Jan-19.]
Metro South average increase per year (Jan 17 to Mar 19): ~10.36%
Metro South average increase per end user per year (Jan 17 to Mar 19): ~0%

CURRENT STATE OBSERVATIONS
• There has been an average increase in the volume of tickets received from Metro South HHS of 10.36% since January 2017.
  • Multiple ieMR hospitals in Metro South HHS have gone live during this time period.
• There has been no change in the number of tickets received per end-user from Metro South HHS since January 2017.

CURRENT STATE INSIGHTS
An increase in the number of end-users is likely the primary driver for the increase in the volume of tickets received from the Metro South HHS.
• An ieMR implementation could contribute to an increase in demand if there is a corresponding increase in end users.
• The change in the volume of tickets received from Metro South has been gradual.
  • Seasonal spikes (as noted under 'Tickets received by the CSB') appear to have a more significant impact than periods when an ieMR implementation has "gone live".

1 CSB Data | ServiceNow End-of-Ticket Survey Results Extracts Received 29/4/2019
The DSC has been receiving a relatively low demand for its services compared to benchmarks. This may be due to the
implementation of relatively reliable systems.
DSC Tickets received per End User
BENCHMARK DATA
[Chart: DSC Service Desk Tickets per End-User¹,² – service desk encounters per user per year.]
DSC Average (Mar 18 to Mar 19): 3.59
Benchmark: 15.9 median; inter-quartile range 8.3 (est. 25th percentile) to 19.4 (est. 75th percentile)
[Chart: History of DSC Tickets Received per End-User (Estimate for Year)¹ – tickets received per end user, Jan-16 to Jan-19.]

CURRENT STATE OBSERVATIONS
• The DSC's tickets per end-user (TPEU) is well below the benchmark 25th percentile.
• Tickets received per end-user have been decreasing, largely driven by a decrease in DSC TPEU.
  • The rate of decline in DSC TPEU has reduced from January 2017 to the present.

CURRENT STATE INSIGHTS
Demand for DSC services is low compared to industry benchmarks and is relatively stable, possibly resulting from the implementation of relatively reliable systems.
• The relative reliability of supported systems means that users need to raise fewer tickets.
• This may be due to the timely removal of problematic legacy systems and the implementation of more evergreen solutions (e.g. Office 365 and Windows 10).
• The CSB likely has a relatively high number of "latent users" (i.e. users who need to engage with IT support on a less regular basis) compared to benchmark organisations.

1 Benchmark Data | Gartner - 2019 - IT Key Metrics Data 2019 Key Infrastructure Measures IT Service Desk Analysis
2 CSB Data | ServiceNow Ticket Extract Received 10/4/2019 & End User numbers Received 24/4/2019, both from CSB Manager, Reporting and Analysis, SM&I
CSB tickets resolved by live agents have been dropping over the previous 3 years, likely due to the implementation of more automated systems.
Tickets resolved by the CSB

BENCHMARK DATA
[Chart: Tickets resolved by the CSB (Monthly Totals)¹ – tickets resolved by the CSB (total), by date when ticket resolved, Jan-16 to Jan-19; series: CSB incl. Auto, CSB excl. Auto.]

CURRENT STATE OBSERVATIONS
• CSB tickets resolved by a live agent (e.g. DPT, DSC or SMI) have been decreasing over the previous 3 years.
  • Tickets auto-resolved (e.g. OPS, automated password resets) have been increasing.

CURRENT STATE INSIGHTS
The volume of live agent resolved tickets has been decreasing, possibly resulting from the automation of some services and a potential decline in the effectiveness of live agents.
• The introduction of automation, streamlined processes and other service management improvements means tickets are resolved by methods other than live agents.
• Additionally, live agents may be becoming less effective as automation leaves only higher complexity, non-transactional contacts.
• The decrease in resolved tickets over the past 3 years is in line with decreasing demand (see 'Tickets received by the CSB').

1 CSB Data | ServiceNow Ticket Extract Received 10/4/2019 from CSB Manager, Reporting and Analysis, SM&I
DSC Performance
Although average speed to answer has been sharply decreasing over the previous 12 months, on average it has been significantly higher than benchmarks, suggesting that there may be capacity or effectiveness issues within the DSC.
Average Speed to Answer

BENCHMARK DATA
[Chart: Average Speed to Answer (ASA)¹,² – incoming contact average speed to answer (s).]
DSC averages (Mar 18 to Mar 19): 434.3 s and 564.3 s (24/7 and core hours averages)
Benchmark: 46 s average; inter-quartile range 20.78 s (25th percentile) to 63.62 s (75th percentile)
[Chart: Historical DSC Average Speed to Answer (24/7 Average)² – DSC average speed to answer (s), Jan-16 to Jan-19.]

CURRENT STATE OBSERVATIONS
• The DSC's inbound call average speed to answer (ASA) is much higher than benchmark maximums and averages (~8-10 times the benchmark average).
• The DSC's ASA was relatively steady between December 2016 and February 2018 but saw a sharp increase between April 2018 and July 2018.
• From July 2018 to the present there has been a sharp decline in ASA, an improvement of approximately 33.2% (when incorporating callbacks as "0" waiting time).

CURRENT STATE INSIGHTS
High ASA indicates either resource capacity issues or first level effectiveness issues within the DSC.
• High wait times suggest the DSC does not have adequate resources or effectiveness to ensure it can support the demand it is receiving.
Customer wait times have been decreasing since July 2018, potentially due to the implementation of call-backs and the automation of low complexity, transactional contacts.
• The CSB has been successful in leveraging automation to reduce the volume of customers handled by DSC agents.

1 Benchmark Data | Gartner - 2019 - IT Key Metrics Data 2019 Key Infrastructure Measures IT Service Desk Analysis
2 CSB Data | Amazon Connect Extracts Received 10/4/2019 from CSB Manager, Reporting and Analysis, SM&I
The DSC's abandonment rate is significantly higher than benchmarks. This result correlates with the higher than benchmark ASA and supports the insight that the DSC is facing challenges in meeting demand for its services.
Abandonment Rate

BENCHMARK DATA
[Chart: Abandonment Rate (AR)¹,² – incoming contact abandonment rate (%).]
DSC averages (Mar 18 to Mar 19): 34.40% and 40.86% (24/7 and core hours averages)
Benchmark: 7% average; inter-quartile range 4.08% (25th percentile) to 8.92% (75th percentile)
[Chart: Historical DSC Average Abandonment Rate (24/7 Average)² – incoming contact abandonment rate (%), Jan-16 to Jan-19.]

CURRENT STATE OBSERVATIONS
• The DSC's inbound call abandonment rate (AR) is much higher than benchmark maximums and averages (~6-7 times the benchmark average).
• Although it increased rapidly between December 2016 and March 2017, the DSC's AR has since stabilised around its current average.

CURRENT STATE INSIGHTS
The DSC's high abandonment rate is likely being driven by its high call wait times, potentially during peak periods.
• Benchmarks indicate that in general there should be a near proportional correlation between ASA and AR (~7.19 seconds of ASA per % of AR).
• For reference, the DSC's current ratio is 9.1 seconds of ASA per % of AR, indicating that there is good correlation between ASA and AR.
• Further investigation is required to quantify AR and ASA during peak periods (and compare them to non-peak periods).

1 Benchmark Data | Gartner - 2019 - IT Key Metrics Data 2019 Key Infrastructure Measures IT Service Desk Analysis
2 CSB Data | Amazon Connect Extracts Received 10/4/2019 from CSB Manager, Reporting and Analysis, SM&I
DSC Capability
The CSB's first level resolution rate is relatively low compared to benchmark averages and has been steadily decreasing over the past 12 months, suggesting DSC agents are not adequately equipped to resolve the contacts they are encountering.
First Level Resolution Rate

BENCHMARK DATA
[Chart: First Level Resolution (FLR) Rate¹,² – first level resolution rate (%).]
DSC Average (Mar 18 to Mar 19): 48.65%
Benchmark: 77.3% average; inter-quartile range 73% (25th percentile) to 81.9% (75th percentile)
[Chart: Historical CSB First Level Resolution Rate over Time² – first level resolution rate (%), Jan-16 to Jan-19.]

CURRENT STATE OBSERVATIONS
• The CSB's first level resolution rate (FLR) is significantly lower than benchmark minimums.
• The CSB's FLR was relatively steady from July 2016 to December 2017.
• The CSB's FLR has been steadily declining over the previous 16 months (from January 2018 to the present).

CURRENT STATE INSIGHTS
DSC agents potentially do not have the optimum training, knowledge or tool usage to resolve tickets compared to industry peers.
• Peer organisations appear to have a greater capability to resolve tickets at the first level compared to the DSC.
• DSC agents may not be as adequately equipped, through tools or training/knowledge, to service customers as a peer level 1 agent.

1 Benchmark Data | MetricNet 2014 Service Desk Benchmark Data
2 CSB Data | ServiceNow Ticket Extract Received 10/4/2019 from CSB Manager, Reporting and Analysis, SM&I
The DSC's first contact resolution rate is relatively low compared to benchmark averages and has been steadily decreasing over the past 12 months, although this may be due to a combination of ServiceNow configuration and auto password resets.
First Contact Resolution Rate

BENCHMARK DATA
[Chart: First Contact Resolution (FCR) Rate¹,²,³.]
DSC Average (Mar 18 to Mar 19, 0 re-allocations): 49.5%
DSC Average (Mar 18 to Mar 19, 0-1 re-allocations): 75%
Benchmark: 77.3% average; inter-quartile range 73% (25th percentile) to 81.9% (75th percentile)
[Chart: Historical DSC First Contact Resolution Rate² – DSC first contact resolution rate (%) and auto password reset volumes, Jan-17 to Jan-19; series: FCR (15 minutes, 0-1 re-allocations), FCR (all durations, 0 re-allocations), Auto Password Resets.]
Note that for dates prior to June 2017 the FCR data source is different from the current FCR data source and thus cannot be meaningfully compared.

CURRENT STATE OBSERVATIONS
• The DSC's first contact resolution rate (FCR) is below benchmark averages assuming zero re-allocations in the ServiceNow data.
  • ServiceNow is currently configured to report a re-allocation when the auto-generated assignment is changed by the first DSC call agent.
• Taking into account the potential re-allocation prior to first contact, and assuming a time of 15 minutes is a reasonable proxy for first contact resolution, increases the apparent FCR by close to 20%.
• The DSC's FCR has been declining since April 2018 regardless of this ServiceNow configuration feature.

CURRENT STATE INSIGHTS
DSC agents potentially do not have the optimum training, knowledge or tool usage to quickly diagnose and resolve customer incidents compared to industry peers. This trend is exacerbated by the use of automation removing tickets that are more straightforward to resolve at first contact.
• If agents are unable to resolve contacts at first interaction, they may need to hand off tickets to other levels of support which do have the training, knowledge or tools to resolve them.
• Reconfiguration of ServiceNow may make service performance data more comparable with peer benchmarks in future.

1 Benchmark Data | Gartner - 2019 - IT Key Metrics Data 2019 Key Infrastructure Measures IT Service Desk Analysis
2 CSB Data | ServiceNow Ticket Extract Received 10/4/2019 from CSB Manager, Reporting and Analysis, SM&I
3 Benchmark Data | Computer Economics 2018 – Help Desk Staffing Ratios – Healthcare Industry
The DSC’s first contact resolution rate appears low when applying the strict definition of FCR due largely to the way in which
ServiceNow is configured.
First Contact Resolution Rate – ServiceNow Configuration
Process flow: the user calls the DSC, the IVR directs the call, and a DSC agent receives it. The agent raises a ticket using a template and template based allocation occurs (the clock starts). If the DSC cannot resolve the ticket, it is allocated to the appropriate non-DSC group. If the DSC can resolve it, either the agent resolves it directly (re-allocation count = 0, clock stops) or the agent re-allocates the ticket back to the DSC after the template allocated it elsewhere and then resolves it (re-allocation count = 1, clock stops). Both scenarios should fall into the FCR category, but only the re-allocation count = 0 case would under the strict definition of FCR (shown as FCR (all durations, 0 re-allocations) on the previous slide).
When a ticket is raised in ServiceNow, the DSC agent chooses a template on which to base the ticket. The templates are configured such that the ticket is auto-allocated to a support team based on the template chosen.
However, in some instances the DSC agent may be able to resolve the ticket and so re-assigns it back to the DSC. This results in an apparent re-allocation count of 1 in the ServiceNow data, which does not accurately represent the path the ticket has taken.
To obtain a realistic view of FCR, such tickets should also be included in the FCR count. However, a re-allocation count of 1 may also be a valid re-allocation rather than a result of the feature described above. It is unlikely, however, that such genuine cases would be resolved in a short timeframe.
To this end, tickets with a re-allocation count of 0 or 1 and a resolution time of 15 minutes or less have been included in the FCR (15 minutes, 0-1 re-allocations) data set shown on the previous slide.
Reconfiguration of ServiceNow to remove template based auto-allocation may make the DSC's FCR data more readily comparable with industry benchmarks in future.
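A minimal sketch of the two FCR views described above, assuming a DataFrame of DSC tickets with hypothetical columns reassignment_count and resolve_minutes (the actual ServiceNow extract fields will differ); this is illustrative only, not the calculation script used for the review.

```python
import pandas as pd

def fcr_strict(dsc_tickets: pd.DataFrame) -> float:
    """FCR (all durations, 0 re-allocations): the strict definition."""
    return (dsc_tickets["reassignment_count"] == 0).mean()

def fcr_adjusted(dsc_tickets: pd.DataFrame, max_minutes: float = 15.0) -> float:
    """FCR (15 minutes, 0-1 re-allocations): allows one template-driven
    re-allocation and uses a short resolution time as a proxy for genuine
    first-contact resolution."""
    quick = dsc_tickets["resolve_minutes"] <= max_minutes
    few_reallocations = dsc_tickets["reassignment_count"] <= 1
    return (quick & few_reallocations).mean()
```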
The DSC's average handling time is below benchmark averages but has been increasing over the last 3 years. This correlates with the low FCR: agents are unable to resolve jobs at first contact and thus need to hand off relatively quickly.
Average Handling Time

BENCHMARK DATA
[Chart: Average Handling Time (Inbound)¹,² – inbound average handling time (s).]
DSC averages (Mar 18 to Mar 19): 441.7 s and 462.3 s (24/7 and core hours averages)
Benchmark: 544.8 s average; inter-quartile range 505.8 s (25th percentile) to 587.4 s (75th percentile)
[Chart: Historical DSC Average Handling Time (24/7 Average)² – average handling time (s), Jan-16 to Jan-19.]

CURRENT STATE OBSERVATIONS
• The DSC's inbound call average handling time (AHT) is below the benchmark 25th percentile but above minimums.
• The DSC's AHT has been gradually increasing over the last 3 years (since October 2016).

CURRENT STATE INSIGHTS
DSC agents are handling contacts faster than benchmarks, potentially because they are not able to resolve customer contacts at the same rate as peers.
• Combined with a lower than benchmark FLR and FCR, a lower than benchmark AHT suggests agents are spending less time on calls because they are handing them off to other resolver groups.
Over the long term, DSC agents are handling more time consuming, non-transactional contacts.
• Non-transactional tasks such as troubleshooting are innately more time consuming due to the additional complexity involved in resolving them.
The dip in AHT from September 2018 may be due to the implementation of callbacks.
• Callbacks may be shorter in duration because end-users are self-helping (and no longer need DSC assistance) or because callbacks are reaching voicemail.

1 Benchmark Data | MetricNet 2014 Service Desk Benchmark Data
2 CSB Data | Amazon Connect Extracts Received 10/4/2019
DSC Capacity
The DSC is over-utilising its agents compared to industry benchmarks; the average DSC agent utilisation is above the maximum of the benchmark range. This suggests the DSC may find it difficult to further optimise the utilisation of its resources.
Agent Utilisation

BENCHMARK DATA
[Chart: Agent Utilisation¹,² – agent utilisation per day (%).]
DSC Average Utilisation (Estimated) (Mar 19): 74.45%
Benchmark: 55.7% average; inter-quartile range 50.7% (25th percentile) to 62.3% (75th percentile)

CURRENT STATE OBSERVATIONS
• The DSC's agent utilisation is much higher than the benchmark average and above the benchmark range maximum.

CURRENT STATE INSIGHTS
The DSC may not have additional capacity to handle incoming service demand.
• Individual agent utilisation cannot be increased to better handle service demand without further exceeding benchmark maximums.
• More agents may be required to adequately handle incoming service demand.
High agent utilisation in the DSC may drive higher turnover rates, resulting in the loss of knowledge. This could contribute to the decreasing FCR observed earlier.
• MetricNet analysis³ suggests that agent utilisation should be directly proportional to staff turnover.
• High staff turnover results in tacit knowledge and experience leaving the organisation whilst bringing in fresh replacement agents who are less efficient or capable of resolving contacts (hence lower FCR).

1 Benchmark Data | MetricNet 2014 Service Desk Benchmark Data
2 CSB Data | DSC Roster (31 Dec 18 to 30 Jun 19) & DSC Agent Handle Times (Mar 19), both received 15/4/2019 from DSC FAMMIS Team Leader
3 MetricNet, "The Seven Most Important Performance Indicators for the Service Desk", Jeff Rumburg & Eric Zbikowski
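As a rough illustration of how the Agent Utilisation figure above can be assembled from the roster and handle-time extracts, the sketch below assumes hypothetical per-agent tables with columns agent_id, handle_hours and shift_hours; it is not the actual calculation used for this review.

```python
import pandas as pd

def agent_utilisation(handle_times: pd.DataFrame, roster: pd.DataFrame) -> float:
    """Average per-agent ratio of handle time to rostered shift time.

    Both inputs are assumed to hold one row per agent; the column names are
    illustrative placeholders for the DSC roster and handle-time extracts.
    """
    merged = handle_times.merge(roster, on="agent_id")
    per_agent = merged["handle_hours"] / merged["shift_hours"]
    return per_agent.mean()
```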
DPT Performance
The DPTs have been resolving a constant volume of tickets over the past 3 years. Combined with the relatively constant volume of tickets received by the DPT, this indicates that the DPT is performing well under the workload.
Tickets resolved by the DPT

BENCHMARK DATA
[Chart: Tickets resolved by the CSB and DPT (Monthly Totals)¹ – tickets resolved (total), by date when ticket resolved, Jan-16 to Jan-19; series: CSB incl. Auto, DPT Only.]

CURRENT STATE OBSERVATIONS
• Total DPT resolved ticket volumes have remained relatively constant over the previous 3 years.
  • There has been a slight incline since January 2017; however, this increase is not significant, particularly when compared to the overall volume of tickets resolved by the CSB (see 'Tickets resolved by the CSB').

CURRENT STATE INSIGHTS
Given that both tickets resolved and tickets received volumes have remained constant, this implies there has not been any significant accumulation of tickets in the DPT's backlog.
• If the DPTs are resolving tickets at the same rate as they are receiving them, their backlog should not be growing significantly.

1 CSB Data | ServiceNow Ticket Extract Received 10/4/2019 from CSB Manager, Reporting and Analysis, SM&I
The DPT's mean time to resolution is well above benchmark maximums. This indicates that the DPT may be spending more time on non-ticket related tasks compared to peer organisations, although data quality issues may be skewing the metric.
Mean Time To Resolve

BENCHMARK DATA
[Chart: Mean Time to Resolution (On-Site Incidents)¹,² – mean time to resolution (business hours).]
DPT averages (Mar 18 to Mar 19): 175 h and 107.05 h (unfiltered and filtered (< 30 day) averages)
Average DPT effort to resolve: 1.48 h
Benchmark: 8.42 h average; inter-quartile range 3 h (25th percentile) to 17.8 h (75th percentile)
[Chart: Historical DPT MTTR (Incidents)² – mean time to resolve incidents (h), Jan-17 to Mar-19.]

CURRENT STATE OBSERVATIONS
• The DPT's mean time to resolve incidents (MTTR) appears to be far above benchmark averages and maximums.
• The DPT's MTTR has been gradually increasing from September 2016 to the present.

CURRENT STATE INSIGHTS
The DPT's increasing and relatively high MTTR indicates there could be delays in the DPT coordinating resolution with other support teams, or that the DPT is spending time on non-ticket tasks.
• DPT MTTR is calculated from tickets the DPT resolves, regardless of which group they are raised with first, i.e. averages are calculated inclusive of the time taken for the ticket to eventually reach a DPT resolver group.
  • The DPT reports an average of 1.48 hours of effort to resolve a ticket (as opposed to elapsed time).
• It is possible the DPT is experiencing delays (as per 'Tickets resolved by the DPT') and, anecdotally, this may be due to the recent CWP rollout, although such work is reported as being supported by additional resources so should not represent a resource constraint.
• Data quality could be compromised if there are large numbers of unclosed tickets. This may be obscuring a view of additional available DPT capacity.

1 Benchmark Data | MetricNet 2012 Service Desk Benchmark Data
2 CSB Data | ServiceNow Ticket Extract Received 10/4/2019 from CSB Manager, Reporting and Analysis, SM&I
The much higher MTTR compared with peer benchmarks could be due to a number of potential causes.
Mean Time To Resolve – Discussion

Process flow: a ticket is allocated to the DPT, the DPT investigates the ticket and resolves the incident (clock running). If the DPT closes the ticket, the ticket is resolved and recorded as closed (clock stops). If not, the ticket is resolved but not recorded as closed, and the clock continues running.

Potential Cause – Late Ticket Closure
A ticket may be resolved but not closed by the DPT. This restores service for the affected user(s), but the restoration is not recorded in ServiceNow. At some indeterminate point in the future the ticket is closed, and this extended period significantly skews the MTTR result, which is based on the period between ticket opening and closing.

Potential Cause – DPT Utilisation
The increase in MTTR may also be due to DPT staff being engaged in Fee for Service (FFS) work. Such work may not engage external resources to either undertake the work or to backfill the DPT resources. This would reduce the DPT capacity available to resolve tickets and thus increase the average MTTR. Anecdotally, however, it is reported that significant FFS work is usually supported with external resources. As such, the FFS work should not represent a significant additional impost on DPT capacity.

Summary
The ServiceNow data does not support analysis of the extent to which either of these potential causes may be the root cause of the high MTTR. Further analysis is therefore required to ascertain the reason for the CSB's MTTR being so much higher than peer benchmarks.
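To illustrate how late ticket closure can skew the elapsed-time MTTR, and how a "< 30 day" filtered figure like the one quoted earlier can be derived, the sketch below assumes a ticket extract with hypothetical opened_at and closed_at timestamp columns; it uses calendar elapsed hours, whereas the report's figures are in business hours, so it is illustrative only.

```python
from typing import Optional
import pandas as pd

def mean_time_to_resolve(tickets: pd.DataFrame, max_days: Optional[float] = None) -> float:
    """Mean elapsed open-to-close time in hours.

    Optionally drops tickets open longer than `max_days` as a crude guard
    against the late-closure skew described above.
    """
    hours = (tickets["closed_at"] - tickets["opened_at"]).dt.total_seconds() / 3600
    if max_days is not None:
        hours = hours[hours <= max_days * 24]
    return hours.mean()

# e.g. compare an unfiltered average with a "< 30 day" filtered average:
# unfiltered = mean_time_to_resolve(dpt_incidents)
# filtered = mean_time_to_resolve(dpt_incidents, max_days=30)
```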
Customer Service
CSB customer satisfaction appears to be above benchmark averages; however, better data quality is required in order to draw more accurate insights about customer satisfaction.
Customer Satisfaction

BENCHMARK DATA
[Chart: Customer Satisfaction¹,² – customer satisfaction (positive response, %).]
DSC and eHealth average satisfaction (Dec 18 to Jun 19): 89.77% and 86.87%
Benchmark: 78.2% average; inter-quartile range 73.2% (25th percentile) to 84.8% (75th percentile)
[Chart: Recent History of Customer Satisfaction² – customer satisfaction (% positive), May-18 to Mar-19; series: DSC Average Customer Satisfaction, eHealth Average Customer Satisfaction.]

CURRENT STATE OBSERVATIONS
• The DSC's and eHealth's overall customer satisfaction is above the 75th percentile but less than benchmark maximums.
• From the data the CSB has provided, customer satisfaction appears to have been relatively stable across the previous 12 months.

CURRENT STATE INSIGHTS
The CSB customer satisfaction data may not reflect true customer sentiment due to the higher than average abandonment rate; customers who have abandoned calls are likely not satisfied.
• This conclusion aligns with workshops and interviews with a broad range of CSB stakeholders.
• There should be a strong correlation between ASA and customer satisfaction.
• Industry trends demonstrate an expected drop-off in customer satisfaction when the ratio of ASA to AHT surpasses ~10% (MetricNet).
• For reference, the CSB's ASA to AHT ratio as of April 2019 is close to 48%.

1 Benchmark Data | MetricNet 2014 Service Desk Benchmark Data
2 CSB Data | ServiceNow End-of-Ticket Survey Results Extracts Received 5/4/2019 from CSB Review Senior BA
The CSB is resolving incidents within the SLA timeframes set in ServiceNow, indicating it is meeting agreed customer expectations, although this depends on the consistency of priority reporting and the leniency of its SLAs compared to industry.
Incidents resolved within SLA Timeframes

BENCHMARK DATA
[Chart: Incidents Resolved within SLA timeframes¹,² – percentage of incidents resolved within SLA timeframes (%).]
CSB Average (Apr 18 to Apr 19): 88.6%
Benchmark range: 81% (lower bound) to 90% (upper bound)
[Chart: Recent History of Incidents Resolved within SLA Timeframes² – percentage of incidents resolved within SLA timeframes (%), Apr-18 to Jan-19.]

CURRENT STATE OBSERVATIONS
• The CSB is resolving incidents within SLA timeframes at a rate within the benchmark upper and lower bounds (no median/average available).
• The CSB has been resolving incidents within SLA timeframes at a steady rate over the last 12 months, although the rate has been slightly decreasing.

CURRENT STATE INSIGHTS
The CSB appears to be resolving incidents within the SLA timeframes set in ServiceNow.
• CSB resolver groups appear to be effectively resolving incidents according to customer expectations.
• SLAs are defined according to the incident priority level entered into ServiceNow; SLAs are met if tickets are closed within the timeframe associated with the priority level. The CSB's performance against benchmarks therefore depends on:
  • the consistency with which tickets are assigned priority, and
  • the alignment of CSB SLAs with industry SLAs.
• ServiceNow ticket SLAs differ from the SLAs applying to DSC performance (e.g. for AHT etc.).

1 Benchmark Data | Deloitte Sourced Industry Benchmark (2018)
2 CSB Data | DSC Roster (31 Dec 18 to 30 Jun 19) & DSC Agent Handle Times (Mar 19), both received 15/4/2019 from DSC Analytics
3 | Insight Themes & Conclusions
The following insight themes have been drawn for each dimension, from the range of insights gathered by benchmarking the CSB against its industry.
Insight Themes & Conclusions | Insight Themes (1 of 2)

Service Demand | Demand for DSC services is stable but well below benchmark averages
• Overall demand for DSC services is stable (Total tickets).
• Demand for DPT and overall CSB services is also relatively stable.
• There is no observable impact of introducing ieMR on a HHS's demand for CSB services (see 'HHS breakdown of tickets received').

DSC Performance | DSC workload appears to be higher than its resource capacity or capability
• The DSC is still being overwhelmed by the volume of contacts it is receiving (ASA, AR) despite lower than benchmark average demand. This result could be driven by peak demand periods.

DSC Capability | DSC appears to be less effective at resolving contacts compared to benchmarks
• The DSC appears to be less effective than benchmarks at handling tickets at first contact (through chat or phone) (FCR).
• Potentially as a consequence of lower effectiveness, the DSC appears to need to hand off more contacts than peer organisations (FLR). DSC agents are thus spending less than benchmark times on calls (AHT).
• High agent utilisation may also be causing a loss in effectiveness (e.g. by driving staff turnover) (AU).
The following insight themes have been drawn for each dimension, from the range of insights gathered by benchmarking the CSB against its industry.
Insight Themes & Conclusions | Insight Themes (2 of 2)

DSC Capacity | DSC appears to need additional capacity, particularly around peak periods
• The DSC is being overwhelmed by the volume of contacts it is receiving (ASA, AR) on average.
  • This may be primarily driven by inadequate DSC capacity during peak periods.
• The DSC's agents are being highly utilised (AU), which means that at its current capability level it may be difficult to further optimise existing capacity within the DSC.

DPT Performance | DPT may need to reorganise capacity towards resolving ticket related tasks
• The DPT may be expending a high proportion of its capacity on non-ticket tasks compared to benchmark averages (MTTR, Effort to Resolve).
  • The DPT may also be waiting a relatively long time to receive the tickets it resolves, likely due to challenges coordinating with other teams.
• It is likely that the DPT's backlog is not growing significantly, since the volumes of tickets received and resolved have both been stable (Tickets received by the DPT, Tickets resolved by the DPT).

Customer Satisfaction | CSB customer satisfaction appears to be good although there are limitations in the data
• The DSC and eHealth have received above benchmark average customer satisfaction ratings (CS) from end-of-ticket surveys.
  • Data collected via this methodology does not capture the sentiment of abandoning customers, which is likely to be lower than benchmarks due to high ASA etc.
• The CSB as a whole is resolving incidents within ServiceNow ticket SLA timeframes at above benchmark averages (SLAs).
  • This insight is limited by its dependency on the leniency of the SLAs entered in ServiceNow and on data quality.
Problem management and automation can help the CSB to actively reduce the volume and complexity of tickets it receives.
Insight Themes & Conclusions | Conclusions (1 of 3)

Service Demand: How can the CSB actively reduce the complexity and volume of tickets it receives?

1. Develop problem management capabilities: The CSB should consider opportunities to introduce problem management to identify and address underlying problems in order to reduce incidents (and therefore service demand).
• DSC agents are handling an increasing proportion of time consuming, non-transactional contacts.
  • First Level Resolution Rate has been steadily decreasing over the last 12 months and is below benchmarks.
  • Average Handling Time has been gradually increasing from October 2016.
• Although demand is stable, problem management may help to make the DSC more effective by reducing the complexity of contacts it needs to handle.
• DSC agents potentially do not have the optimum training, knowledge or tool usage to resolve tickets.
  • First Contact Resolution Rate is below benchmark averages.
  • First Level Resolution Rate has been steadily decreasing over the last 12 months and is below benchmarks.

2. Continue to automate: The CSB should continue to automate or divert (e.g. self-service) low complexity, transactional tasks to reduce service demand on DSC agents.
• Customer wait times have been decreasing, potentially due to the implementation of call-backs and the automation of low complexity, transactional contacts.
  • Average Speed to Answer (i.e. call waiting times) has been rapidly decreasing since July 2018.
• The proportion of tickets resolved by automated services has been increasing (helping to decrease the workload on CSB live agents).
Improving the effectiveness of first level support through strong knowledge management and cross team collaboration can help to optimise the CSB's supply of services.
Insight Themes & Conclusions | Conclusions (2 of 3)

Service Supply: How can the CSB improve its current state Service Delivery Model?

1. Improve knowledge management and quality of training: Consider improving usage of tools, knowledge management and quality of training for DSC agents in order to be more effective at resolving tickets at the first level and first contact.
• DSC agents potentially do not have the optimum training, knowledge or tool usage to resolve tickets.
  • First Level Resolution Rate is below benchmark averages.
  • First Contact Resolution Rate is below benchmark averages.
  • Average Handling Time has been gradually increasing from October 2016.
• High agent utilisation in the DSC is likely driving high turnover rates and thus a loss of knowledge from the DSC.
  • Agent Utilisation is close to benchmark maximums.

2. Focus DPT capacity: There may be opportunities to deploy DPT capacity in order to assist with service demand management strategies or with some of the first level workload.
• There appear to be opportunities for the DPT to reorganise capacity towards ticket related tasks.
  • DPT Mean Time to Resolve is above benchmark maximums, although there appears to be a low effort to resolve as a proportion of MTTR.
  • DPT tickets resolved and received have been relatively stable, indicating the DPT backlog is likely not growing.
Improving the effectiveness of first level support through strong knowledge management and cross team collaboration can help to optimise the CSB's supply of services.
Insight Themes & Conclusions | Conclusions (3 of 3)

Service Supply: How can the CSB improve its current state Service Delivery Model?

3. Additional call handling resources around peak periods: Additional call handling resources might be needed in order to handle current service demand, particularly around peak periods (depending on the efficacy of the previous recommendations).
• The DSC appears to be handling more tickets than it is capable of resolving.
  • Average Speed to Answer (i.e. call waiting times) is much higher than benchmark averages.
  • Abandonment Rate is much higher than benchmark averages.
• The DSC does not have capacity to handle incoming service demand.
  • Agent Utilisation is close to benchmark maximums.

4. Improve cross CSB and eHealth collaboration: Closer collaboration between CSB teams and the wider eHealth organisation may be needed to optimise and improve customer experience, by reducing ticket handoffs and thus delays in resolving customer tickets.
• There may be delays in the DPT coordinating resolution with other support teams.
  • DPT Mean Time to Resolve is above benchmark maximums and increasing despite consistent DPT performance.
• Stronger collaboration within the CSB and wider eHealth organisation may help to improve first level support effectiveness.
  • First Level Resolution Rate is below benchmark averages.
  • First Contact Resolution Rate is below benchmark averages.
Appendix
Appendix A | Benchmark Calculations (1 of 2)
Metric | Calculation Components

Tickets received per end-user (TPEU)
1. Total number of tickets received by CSB channels (DSC comprises Chat, Auto Password Reset and Phone; other units/remainder estimated from proportions of "last assigned resolver group" tickets) between March 2018 and March 2019 (inclusive of start date).
2. Total number of end users supported by the CSB (total number of active accounts on AD).

Average Handling Time (AHT)
1. Total handling time between March 2018 and March 2019 (inclusive of start date).
2. Total incoming contacts handled (not inclusive of automated password resets or callbacks).

Average Speed to Answer (ASA)
1. Total waiting time for queued contacts between March 2018 and March 2019 (inclusive of start date).
2. Total contacts handled (inclusive of callbacks – therefore assumes callbacks represent zero waiting time).

Abandonment Rate (AR)
1. Total abandoned contacts between March 2018 and March 2019 (inclusive of start date).
2. Total incoming contacts queued (not inclusive of automated password resets or callbacks).

First Level Resolution Rate (FLR)
1. Total number of tickets from the ServiceNow ticket extract provided.
2. DSC resolved tickets.

First Contact Resolution (FCR)
1. DSC resolved tickets regardless of final resolver.
2. DSC resolved tickets with "0" reassignments (or "1" reassignment) and resolved within 1 hr (assumes first contacts resolved should last less than an hour in duration – this is likely fair since AHT ~ 10 minutes).
3. DSC tickets. Note: the filter is "actual restore hours" as it is an indicator of how long a ticket takes to close.

Mean Time To Resolve (MTTR)
1. DPT resolved incidents irrespective of number of hand-offs or hours spent to resolve.
2. Total hours elapsed between when a ticket is opened and when a ticket is resolved.
3. Note: the filter is "actual restore hours" as it is an indicator of how long a ticket takes to resolve.

Agent Utilisation
1. DPT resolved incidents irrespective of number of hand-offs or hours spent to resolve.
2. Total hours spent to resolve the incident.

Incidents resolved within SLA agreed timeframes
1. Total number of incident tickets logged in ServiceNow.
2. Incidents resolved within SLA timeframes according to priority and SLAs inputted into ServiceNow (using "actual restore hours").

Customer Satisfaction
1. Positive responses to end-of-ticket surveys expressed as a percentage of total responses.

DSC to DPT Hand-Offs
1. Proportion of the total tickets the DSC hands off that are assigned to the DPT (i.e. proportion of level 1 hand-offs to level 2).
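Each benchmarked ratio above is essentially a quotient of its two listed components. The sketch below illustrates the pattern for TPEU, FLR and SLA attainment; the inputs and column names (resolver_group, met_sla) are assumptions for illustration, not the actual ServiceNow fields.

```python
import pandas as pd

def tickets_per_end_user(total_tickets_received: int, active_ad_accounts: int) -> float:
    """TPEU: tickets received over the 12-month window per supported end user."""
    return total_tickets_received / active_ad_accounts

def first_level_resolution_rate(tickets: pd.DataFrame) -> float:
    """FLR: DSC-resolved tickets as a share of all tickets in the extract."""
    return (tickets["resolver_group"] == "DSC").mean()

def sla_attainment(incidents: pd.DataFrame) -> float:
    """Share of incidents resolved within their priority-based SLA timeframe."""
    return incidents["met_sla"].mean()
```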
Appendix A | Benchmark Calculations (2 of 2)
Metric | Calculation Components

Annual Contacts per Service Desk FTE
1. Total number of inbound contacts handled by the DSC for 2018.
2. Total FTE for the DSC (note: 2019 figure).

EUDs per EUC FTE
1. Total registered eHealth Queensland EUDs (including smart devices, printers, desktops and laptops).
2. Total DPT FTEs recorded as a role within an EUC team (by Org Chart) for 2019 (inclusive of TSB EUC FTEs).

Service Desk FTE to IT FTE ratio
1. Total DSC FTEs for 2019.
2. Total eHealth Queensland FTEs for 2019.

EUC FTE to IT FTE ratio
1. Total DPT FTEs recorded as a role within an EUC team (by Org Chart) for 2019 (inclusive of TSB EUC FTEs).
2. Total eHealth Queensland FTEs for 2019.
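The staffing ratios above are likewise simple quotients of the listed components; a minimal sketch follows, with the inputs supplied from the FTE and device counts described in the table (illustrative only).

```python
def annual_contacts_per_service_desk_fte(contacts_handled: int, dsc_fte: float) -> float:
    """Annual inbound contacts handled by the DSC per DSC FTE."""
    return contacts_handled / dsc_fte

def euds_per_euc_fte(registered_euds: int, euc_fte: float) -> float:
    """Registered end-user devices per EUC (DPT) FTE."""
    return registered_euds / euc_fte

def fte_ratio(team_fte: float, total_it_fte: float) -> float:
    """Team FTEs as a proportion of total IT FTEs (e.g. DSC-to-IT or EUC-to-IT)."""
    return team_fte / total_it_fte
```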
Benchmarking has been conducted through the first three weeks of the CSB SDM review in an iterative process of identifying, sourcing and analysing metrics.
Appendix B | Benchmarking Process

1. Contextualise
Set the context and objectives for benchmarking that will guide how metrics will be chosen/analysed.
• Define and agree on the benchmarking objectives.
  • Outcomes needed to support SDM design.
• Define and agree on benchmarking assumptions.

2. Define
Agree on metrics that best align with the agreed purpose of benchmarking.
• Consolidate and agree on a list of benchmarks.
• Estimate the effort required to extract/consolidate metrics.
• Identify key contacts and systems from which CSB metrics¹ need to be extracted.
¹ Key dependency: CSB data to be provided no later than CoB Friday 12 April 2019.

3. Extract & Consolidate
Work with key CSB contacts to extract agreed metrics.
• Ensure extracted metrics align with benchmark definitions.
• Determine and agree on alternative or additional metrics/proxies as necessary.
• Iterate and refine the benchmarking approach.

4. Compare & Analyse
Benchmark against analyst reports/data made available to eHealth.
• Finalise visualisation of data.
• Add contextual analysis (may require some additional engagement with stakeholders).
• Develop the benchmarking draft report, which will include:
  • Current state insights.
  • Future state considerations.

Note: some iteration may be necessary as the list of metrics to be benchmarked or used is refined.
The majority of DSC hand-offs are to DPT teams. Other major “2nd level” support channels include the HHSs and Business
Application teams.
Appendix C | DSC to DPT hand-offs
BENCHMARK DATA
[Chart: Recent History of DSC to DPT Hand-off Proportions – proportion of DSC tickets handed off to the DPT (%), by month, Mar-18 to Jan-19]
CURRENT STATE OBSERVATIONS
• The majority of DSC hand-offs are to DPT teams
• The proportion of DSC to DPT hand-offs is relatively steady and does not display an increasing trend
DSC Ticket Hand-Off Channels (Feb 18 to Feb 19 Avg):
• DPT: 57.74%
• Business Applications: 20.34%
• HHS: 7.82%
• Other: 7.07%
• Contemporary Workspace Program: 2.74%
• Hosting & Directories: 2.68%
• ieMR: 1.62%
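For context, the channel breakdown above could be derived from the ticket extract along the lines of the sketch below. This is illustrative only; "logged_by_group" and "last_assigned_group" are hypothetical column names, not the actual ServiceNow fields.

```python
# Rough sketch of deriving hand-off channel shares; column names are hypothetical.
import pandas as pd

def handoff_channel_shares(tickets: pd.DataFrame) -> pd.Series:
    # Keep only tickets the DSC logged and then handed off to another group.
    handed_off = tickets[(tickets["logged_by_group"] == "DSC")
                         & (tickets["last_assigned_group"] != "DSC")]
    # Share of hand-offs per receiving channel (DPT, HHS, Business Applications, ...).
    return handed_off["last_assigned_group"].value_counts(normalize=True) * 100
```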
[Chart: Contacts handled per Service Desk FTE (Annual)1,2 – annual contacts per Service Desk FTE, scale 0 to 16,000]
The DSC is handling fewer contacts per FTE than the benchmark average. This may mean the DSC is adequately resourced
compared to benchmark peers, or that the DSC is not adequately equipped for the volume of contacts it handles.
Appendix D | Staffing Ratios - Annual Contacts per Service Desk FTE
BENCHMARK DATA
• The DSC is handling fewer contacts per FTE than the benchmark average
• Total contacts handled by the DSC have been relatively constant (steady), particularly over the past 2 years
• This trend is in line with the DSC’s TPEU (see page 13)
CURRENT STATE INSIGHTS
Given that the DSC is handling fewer contacts per FTE than benchmarks, the DSC may be facing first level effectiveness issues and/or it may be adequately resourced compared to peer organisations.
• Relatively high contact complexity compared to peer organisations may be causing first level effectiveness issues
  • First level effectiveness issues are evident in other benchmarked metrics (FCR, FLR)
• Fewer contacts per FTE compared to benchmarks may imply that the DSC has adequate resources; however, this insight runs counter to:
  • High ASA and AR compared to benchmarks
  • High AU compared to benchmarks
  • Therefore it is more likely that the DSC is facing first level effectiveness issues
DSC Average (2018): 4,337 · Benchmark Average: 6,846 · Inter-Quartile Range: 4,049 (25th percentile) to 9,125 (75th percentile)
1 Benchmark Data | Gartner - 2019 - IT Key Metrics Data 2019 Key Infrastructure Measures IT Service Desk Analysis
2 CSB Data | Amazon Connect Extracts Received 10/4/2019 from CSB Manager, Reporting and Analysis, SM&I; DSC FTE number received from
[Chart: Historical Contacts Handled by DSC2 – total contacts handled by the DSC per year, 2016 to 2018, scale 0 to 600,000]
In terms of FTEs, the DSC’s size relative to eHealth Queensland’s size is close to benchmark averages. This suggests that
the DSC is close to adequately staffed compared to peer organisations.
Appendix D | Staffing Ratios – Service Desk FTEs to IT FTEs Ratio
BENCHMARK DATA
• DSC FTEs as a percentage of total IT FTEs is close to
benchmark averages (for 2019)
• Compared to industry benchmarks (specifically for
healthcare and for similarly sized IT organisations)
the CSB’s ratio of DSC to total IT FTEs is slightly
above averages
CURRENT STATE OBSERVATIONS
CURRENT STATE INSIGHTS
The DSC appears to be close to adequately staffed compared to benchmarks and industry averages.
• Relative to the number of eHealth FTEs, the number of DSC FTEs is close to industry and benchmark averages
  • The Service Desk FTEs to IT FTEs ratio is a more direct comparator of resource capacity for a Service Desk (compared to Agent Utilisation)
[Chart: Service Desk FTEs on Total IT FTEs Ratio1,2,3 – Service Desk (DSC) FTEs as a percentage of total eHealth IT FTEs, scale 0% to 30%]
DSC Average (2019): 8.05% · Benchmark Average: 10.6% · Inter-Quartile Range: 4.84% (25th percentile) to 14.51% (75th percentile) · Similar Sized IT Organisation Median: 5.2% · Healthcare Industry Average: 6.5%
1 Benchmark Data | Gartner - 2019 - IT Key Metrics Data 2019 Key Infrastructure Measures IT Service Desk Analysis
2 CSB Data | CSB FTE Data Received 10/5/2019 from CSB CCEO; DSC FTE Data Received 10/5/2019 from CSB CCEO
3 Benchmark Data | Computer Economics 2018 – Help Desk Staffing Ratios
[Chart: EUDs per EUC FTE1,2,3 – end user computing devices per end user compute FTE, scale 0 to 700]
Proportional to the number of EUC FTEs, the DPT supports more end-user devices than peer organisations. This potentially adds to the DPT’s workload outside direct ticket-related activities.
Appendix D | Staffing Ratios – EUDs per EUC FTE
BENCHMARK DATA
eHealth Average 2019: 612.3 (incl. TSB Presentations Team), 667.8 (excl. TSB Presentations Team) · Benchmark Average: 249 · Inter-Quartile Range: 168 (25th percentile) to 322 (75th percentile) · Similar Sized IT Organisation Median: 449 · Healthcare Industry Average: 545
• The EUDs to DPT EUC FTE ratio is much higher than
benchmark averages although closer to similarly sized IT
organisations and health specific organisations
• The number of EUDs supported include any
eHealth Queensland asset registered and in use
(assets purchased but not deployed, and retired
assets are not included).
• Assets include desktops, smart devices and
printers
CURRENT STATE OBSERVATIONS
CURRENT STATE INSIGHTS
Given that the ratio of devices to DPT staff supporting
them is high compared to benchmark – this may mean
that the DPT’s asset management workload is higher than
peer organisations.
• A higher overall number of devices means there
could be a proportionally higher asset management
(non-ticket related) workload particularly around,
• Asset monitoring (e.g. lifecycle)
• Vendor management
• Purchases and invoicing
• The true number of EUDs supported by the DPT may be even higher than shown in the chart if non-eHealth devices that DPTs still provide services for are considered.
1 Benchmark Data | Gartner – 2018 - IT Key Metrics Data 2018 Key Infrastructure Measures End User Computing Analysis
2 CSB Data | ServiceNow Ticket Extract Received 10/4/2019 from CSB Manager, Reporting and Analysis, SM&I
3 Benchmark Data | Computer Economics 2018 – Desktop Staffing Ratios
[Chart: EUC FTEs on Total IT FTEs Ratio1,2,3 – EUC (DPT) FTEs as a percentage of total eHealth IT FTEs, scale 0% to 30%]
DPT EUC FTEs as a proportion of total IT FTEs is close to benchmark averages. This suggests the DPT may be adequately
staffed compared to benchmarks – if EUC FTEs are a fair representation of the DPT’s size.
Appendix D | Staffing Ratios – EUC FTEs to IT FTEs Ratio
BENCHMARK DATA
eHealth Average 2019: 13.3% (incl. TSB Presentations Team), 12.2% (excl. TSB Presentations Team) · Benchmark Average: 10.3% · Inter-Quartile Range: 4.95% (25th percentile) to 12.9% (75th percentile) · Healthcare Industry Average: 12.7% · Similarly Sized IT Organisation: 15.5%
• DPT EUC FTEs as a percentage of total IT FTEs is
close to benchmark averages (for 2019)
• Ratio is between industry specific averages (higher
than healthcare but lower than similarly sized
organisations).
CURRENT STATE OBSERVATIONS
CURRENT STATE INSIGHTS
The DPT may be adequately staffed or slightly under-staffed compared to benchmarks.
• The DPT may have a slightly higher than average ratio of EUC FTEs to total eHealth FTEs due to:
  • The need to support significantly more EUDs compared to benchmarks
  • The CSB’s geographical spread compared to benchmark organisations requiring a greater “on-site technician” presence (this appears to be shown by the CSB’s slightly lower ratio compared to industry-specific medians)
• This insight assumes that the EUC FTE proportion of IT FTEs is a fair representation of the DPT’s overall proportion of IT FTEs.
  • Note that TSB EUC FTEs (Presentations Team) are equivalent to approx. 13 FTEs
1 Benchmark Data | Gartner – 2018 - IT Key Metrics Data 2018 Key Infrastructure Measures End User Computing Analysis
2 CSB Data | CSB FTE Data Received 10/5/2019 from CSB CCEO; DPT EUC FTE Data Received 10/5/2019 from CSB Business Support Officer 12/4/2019
3 Benchmark Data | Computer Economics 2018 – Desktop Staffing Ratios
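As a rough back-of-the-envelope consistency check (these figures do not come from the report), the two incl./excl. TSB pairs reported in this appendix can be cross-checked against the ~13 TSB Presentations Team FTEs noted above: both the device ratio and the FTE-share ratio imply a similar number of EUC FTEs. The sketch below assumes 612.3 and 13.3% are the incl.-TSB figures and 667.8 and 12.2% the excl.-TSB figures.

```python
# Rough consistency check only; all derived values are illustrative estimates,
# not figures stated in the report.
TSB_FTE = 13.0

# EUDs per EUC FTE: same device fleet, denominators differ by the TSB FTEs.
# devices = 612.3 * n_incl = 667.8 * (n_incl - TSB_FTE)
n_incl_from_devices = 667.8 * TSB_FTE / (667.8 - 612.3)   # ~156 EUC FTEs incl. TSB

# EUC FTE share of IT FTEs: same total IT FTEs, numerators differ by the TSB FTEs.
# 0.133 * it_fte - 0.122 * it_fte = TSB_FTE
it_fte = TSB_FTE / (0.133 - 0.122)                         # ~1,180 total IT FTEs
n_incl_from_share = 0.133 * it_fte                          # ~157 EUC FTEs incl. TSB

print(round(n_incl_from_devices), round(n_incl_from_share))
```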
Demand for CSB services across teams and channels (including the DSC and DPT) has been stable across the past 2 years.
Tickets received by the CSB – Data Analysis Approach
BENCHMARK DATA
DATA CONSIDERATIONS
• The ServiceNow ticket data does not provide a view of the first assigned group for a ticket; only the last assigned group
• A significant number of tickets are raised to undertake internal jobs and should therefore not be included in the overall ticket count, as this analysis is specifically concerned with user demand. These internal jobs include tasks undertaken by a number of eHQ branches and units, e.g. the SIM team.
• Given the ServiceNow data, distinguishing these internal tickets from genuine user jobs is difficult.
APPROACH TAKEN TO ANALYSIS
In the absence of data to identify the first point of contact of a
ticket, the following approach has been taken to generating the
graphs shown:
• The last assigned group of the ServiceNow ticket data has been
used as a proxy for whether a ticket was raised with CSB.
• This approach is based on an underlying assumption that if a
ticket was last assigned to a CSB unit, then it was also raised
with CSB. This assumption makes sense as the tickets assigned
to CSB units are those that constitute the majority of the load for
CSB. Other tickets will either never be assigned to CSB or will
be directed to another unit with minimal CSB effort.
• The graph shows the number of tickets with a last assigned group equal to DSC, DPT or SM&I. The DPT and Online breakdowns have been estimated by taking the number of DPT and online tickets as a percentage of total ticket volumes and applying this percentage to the subset of CSB-specific tickets.
1 CSB Data | ServiceNow Ticket Extract Received 29/4/2019 from CSB Manager, Reporting and Analysis, SM&I
* incl. OPS, Portal, Auto resolved (e.g. Amazon Connect), Email and OITS
Note that DPT and Online breakdowns are estimates only.
[Chart: Historical Volume of Total CSB Tickets Received (Opened)1 – monthly ticket volumes received by channel (DSC, DPT (estimated), Online* (estimated), Total CSB), by date when ticket opened, Jan-16 to Jan-19, scale 0 to 120,000]
[Chart: Cairns End User Device Fleet – devices per year, 2014-2015 to 2017-2018, scale 0 to 6,000]