
Sharing the evidence: clinical practice benchmarking to improve continuously the quality of care

Judith Ellis MBE MSc BSc(Hons) RGN RSCN PGCE RNT

Principal Lecturer, University of Central Lancashire, Preston, Lancashire

Correspondence: Judith Ellis, NHSE, Quarry House, Quarry Hill, Leeds LS2 7UE, England. E-mail: [email protected]

Accepted for publication 27 October 1999

ELLIS J. (2000) Journal of Advanced Nursing 32(1), 215–225

Sharing the evidence: clinical practice benchmarking to improve continuously

the quality of care

It is unacceptable for health care professionals to acquiesce quietly to incon-

sistencies in the quality of health care received by patients. In the United

Kingdom, the introduction of clinical governance has formalized the expecta-

tion that professionals' practice will meet recognized standards of care consis-

tently. It is being stated that all available evidence is being used to identify

national standards of excellence. This will inform professionals not only of

expected outcomes but also of the structures and processes that need to be in

place to support the attainment of such outcomes. Clinical practice bench-

marking is one continuous quality improvement approach, which is being used

by paediatric units in 27 National Health Service Trusts in the north-west of

England to promote the utilization of available evidence into practice. The

evidence base for benchmarks of best practice is considered continuously using

a hierarchy of evidence. This clarifies the different evidence available, upon

which benchmarks or standards of excellence can be based, but reinforces the

kudos awarded quantitative research evidence within health care. Once

benchmarks have been agreed, benchmarking activity supports practitioners in

a continuous cycle of comparison and sharing that is aimed at ensuring that

children and their families receive evidence-based care, wherever they are

admitted in the north-west of England.

Keywords: quality, improvement, patient focused, clinical practice

benchmarking, comparison, sharing good practice, evidence base,

consistent service provision, children's health services, paediatrics

INTRODUCTION

The north-west paediatric practice-benchmarking project

was established in the north-west of England in 1994.

Now, in 1999, membership includes paediatric nurses

from 27 National Health Service (NHS) Trusts and

academic staff from five universities.

The group was formed in response to members'

concerns that there appeared to be inconsistencies in the

quality of care being received by children and their

families across the north-west of England. In addition,

resources were being wasted through repetition of effort as

practitioners in all areas strived independently to ensure

delivery of evidence-based care in the same areas of

practice, e.g. paediatric pain control.

The origins of benchmarking are from within industry,

where it is now accepted as providing a structured

approach for the compilation of comparative data between


organizations, which can then support realistic develop-

ment (Codling 1992, Zairi & Leonard 1994). Within the

health service, benchmarking data gathered have also

tended to concentrate upon organizational issues, e.g. the

reduction of waiting times, staffing ratios, etc. (Mitchell

1996, Aspling & Lagoe 1996, Phillips 1995). This, however,

means that members are always copying the best actual

practice identified, which as Harrington (1996) highlights

can only ever make the copier second best.

The north-west group has adapted benchmarking

principles from industry to support the development of

clinical practice, but comparison is not limited to the

actual practice of benchmarking partners. In clinical

practice, the benchmark against which practice is

considered can be defined as professional consensus of

best possible achievable practice (Ellis & Morris 1997).

Rather than comparison with actual best practice, clin-

ical practice benchmarking ensures utilization of all

levels of evidence in the identification of standards of

excellence, with benchmarking activity supporting struc-

tured comparison and sharing (Ellis 1995, Ellis & Morris

1997).

Figure 1 illustrates the stages involved in a clinical

practice benchmarking cycle for continuous quality

improvement towards best possible practice.

The experiences of the paediatric benchmarking group

will be referred to in this paper to support consideration of

the possible value of clinical practice benchmarking

activity in relation to:

· Continuous development of quality health care.

· How best practice in structures and processes supports

the attainment of required patient-focused outcomes.

· Use of all sources of evidence in the compilation of

evidence-based benchmarks of best practice.

· Practitioner and multidisciplinary involvement at all

stages of benchmarking activity.

· Impact of benchmarking upon consistent quality health

care.

QUALITY HEALTH CARE

The National Health Service (NHS) in the United

Kingdom (UK) is being encouraged to ensure uniform provision of high quality health care (Department of Health, DOH 1997).

[Figure 1: Practice benchmarking cycle (Ellis 1999).]

The concept of quality as it relates to health care has seldom been defined and it has been

widely accepted that it is a complex multidimensional

concept in constant need of analysis and clarification

(Attree 1993, Gillies 1996). Undeterred, the modern

health service has extended the requirements for assured

quality care. For example, the recent introduction of the

notion of clinical governance (Department of Health

1998) suggests that quality can be identified, evaluated

and managed (Attree 1993, National Health Service

Executive, NHSE 1998). It is suggested that clinical

practice benchmarking may be one approach that provides quality assessment and continuous quality improvement, supporting the development of

quality care (see Figure 1) (Ellis 1995, Ellis & Morris

1997).

QUALITY ASSESSMENT AND IMPROVEMENT

Quality assessment can be used to prove the level of

service being delivered but it is of no benefit unless

undertaken in conjunction with quality improvement.

For example, national indicators (e.g. national compar-

ative data in the United Kingdom available from the

National Institute for Clinical Excellence (NICE) and in

National Service Frameworks (Department of Health

1998)) tend to concentrate upon mortality and morbidity

figures, the measurable outcomes. The utilization of

such assessment information to develop quality practice

is vital but seldom forthcoming (Gillies 1996). As

Nightingale demonstrated in the Crimean war, knowing

about abysmal outcomes for patients is only useful if it

is used to improve the structures and processes which

transform the outcome for the patients (Woodham-

Smith 1950).

Clinical practice benchmarking acknowledges the

need in health care to consider best practice not only

in attaining a required outcome, but also as suggested

originally by Donabedian (1966) and later by Bond &

Thomas (1993), within the structures and processes that

need to be in place. It is suggested that clinical practice

benchmarking activity allows structured comparison of

these practices (see Figure 1). The overall patient-

focused outcome for each considered area of practice

is agreed. Structures and processes are then identified as

factors that would support the attainment of the

outcome. Through consideration of all available

evidence, the best possible practice within each factor

is then arrived at and accepted as the benchmark of best

practice. Unlike organizational benchmarks where best

practice refers to actual systems and processes (Codling

1992, Zairi & Leonard 1994), clinical practice bench-

marks include external consideration of what the stan-

dard of excellence consists of.

STANDARD OF EXCELLENCE/BENCHMARK

The Department of Health Document A First Class Service

refers to national standards of excellence (Department of

Health 1998), a term ascribed to clinical practice bench-

marks (Ellis & Morris 1997). It is, however, extremely

difficult to identify the level of performance required for a

standard of excellence (Attree 1993).

Excellence in a health service may be considered as

fully meeting the needs of patients. Indeed, patient-

focused outcomes are frequently referred to as the aim in

Department of Health documents in the UK, particularly

A First Class Service (Department of Health 1998).

However, if patient-focused care is accepted as the goal

of excellent health care, then each individual served may

arrive at their own definition of excellence, with value

judgements made (Attree 1993). This utopia is impracti-

cable for a national health service with limited resources,

and in the United Kingdom (UK) such a view of quality is

not strenuously pursued. For example, patient views

appear in the evaluative phase of clinical governance

(Department of Health 1998). It could be suggested that to

close the quality cycle such patient-led requirements,

which may be concerned with the process of health care,

should indeed inform the standards of excellence sought

and not just be part of the evaluation stage.

In clinical practice benchmarking, patients are not

directly involved in compilation meetings where bench-

marks for structures and processes are agreed but are

represented by appropriate consumer group members at

the overall outcome stage. Rather than personal views,

they can offer a wider patient perspective upon what

constitutes best practice, accepting the utilitarian view

that ®nite resources may have to be used in such a way as

to ensure the greatest good for the greatest number.

Aneurin Bevan at the inception of the NHS in the UK

recognized the rationalization required in a publicly

funded health service. He stated in 1948, referring to staff

and equipment availability, `… we shall never have all we

need. If we are short it is all the more reason why we

should intelligently use what we have got…' (Cole 1998

p. 4). Intelligent use has recently been translated in the UK

into viewing quality as ensuring that whenever possible

practice is based upon available evidence of effectiveness

with `… clinical interventions… doing what they are

intended to do…' (NHSE 1996 p. 45).

EVIDENCE-BASED PRACTICE

Where definitive evidence of the care required to achieve a

stated outcome exists, e.g. a particular pharmaceutical

intervention with measurable physiological effect, it can

be suggested that practice expected may be stated in an

uncompromising diktat. The outcome of action is so clear

that there is no room for discussion or compromise


(Grimshaw & Russel 1993) or comparative benchmarking

activity. Professionals must comply. This is the basis for

the introduction of clinical governance in the UK (Depart-

ment of Health 1998). For example, National Service

Frameworks are being presented as offering `… clear

quality standards which all parts of the NHS will be

expected to meet…' (Department of Health 1998 p. 14).

It would appear that in spite of professional bodies

releasing guidelines and locally adapted protocols to

assist practitioners in making clinical decisions (Long

1994), a professional peer review system is no longer

considered sufficient to monitor and evaluate practice.

Even where care or a particular course of action's

effectiveness is dictated by high level evidence, national

guidelines are apparently seen by some professions, for

example medicine, as unacceptable restrictions upon

professional autonomous practice (Haines & Feder 1992,

Walby & Greenwell 1994). McKenna (1995) suggests that

doctors take individual responsibility for their action.

They expect total freedom of clinical decision-making

which is potentially dangerous for both patients and

practitioners. Parkin (1995), however, states that nurses

act as an autonomous group accepting peer review of

professional standards.

The need for clinical governance powers was recog-

nized in the United States of America (USA) in 1992 when

the Joint Commission on Accreditation of Health Care

Organizations was established with powers to peer review

organizations, and the UK is following their lead. The

National Institute for Clinical Excellence and Commission

for Health Improvement (Department of Health 1998) are

being established with a remit to set and monitor stan-

dards.

In the complex world of health care, however, the

number of such stringent diktats is limited. Even if the

evidence base for practice appears undeniable, it must

always be remembered that the contexts of care vary, as do

the actual needs and desires of individual clients. In the

provision of a customer-led service, many variables are

beyond control. Definitive outcomes cannot therefore be

categorically prescribed. It is in the process and where

indeterminate outcomes exist, e.g. patient satisfaction and

levels of partnership, that benchmarking activity may be

of particular value within practice development. It is also

in these areas that the availability of indisputable

evidence upon which to base a benchmark of best practice

remains problematic, particularly when it is the `softer'

elements of care (Wright 1994) that are under consider-

ation.

In clinical practice benchmarking it is accepted that the

identification of benchmarks of best practice should

preferably be based upon, but not limited by, the avail-

ability of a scientific evidence base. Evidence is therefore

considered from any level of a hierarchy of evidence.

Figure 2 presents the classification of evidence used by

the north-west paediatric benchmarking group. This is a

simplified version of the NHS Centre for Reviews and

Dissemination's hierarchy (NHSCRD). This grades the

validity of available evidence to ensure that the value

attributed to the evidence and the conclusions drawn are

appropriate to the reliability of the results being used

(Oxman 1994).

The higher levels are reserved for reviews, for example

systematic reviews (Droogan & Song 1996) and meta-

analysis (Greener & Grimshaw 1996) like those presented

by the Cochrane Centre or NHSCRD. These higher level

reviews are, however, extremely limited in number and

take time to produce even when topics have been iden-

tified. Meanwhile, patients are receiving care. In addition,

in some areas of practice the evidence considered is itself

poor. So, although reviewed, the final report just further

highlights the weak evidence base available, with findings

again classified according to a hierarchy, rather than

giving definitive evidence around which benchmarks can

be built.

After reviews subsequent hierarchical levels continue to

concentrate upon quantitative methods of inquiry, e.g.

randomized controlled trials emphasizing the value of

quantifiable aspects of care (Oxman 1994). The kudos still

attributed to quantitative research methodologies (Bassett

1992) raises concerns that a dearth of such evidence could

prevent best practice being sought.

Using lower level evidence is only accepted in the

absence of more empirical, higher level evidence. If

viewed as a weighted hierarchy, qualitative studies and

indeed evidence that supports the provision of a human-

istic health service, are to be found at the bottom. The

lowest levels include the opinions and experience of

respected authorities based on clinical experience,

descriptive studies and reports and indeed the views of those the health service actually serves, the patients (Oxman 1994).

[Figure 2: Classification of evidence.]

As previously stated, in the identification of bench-

marks of best practice, all evidence is considered. The

benchmarking group have not used a hierarchy to categ-

orize evidence but have approached the classification of

evidence without overt value judgements (see Figure 2),

still recognizing that empirical evidence should be

actively sought and used wherever available. This is

where clinical practice benchmarking differs from organ-

izational benchmarking where comparison is against

norms in practice (Codling 1992, Phillips 1995). The use

of all available evidence ensures that clinical practice

benchmarking allows comparison against best possible

practice with wide external focus, particularly for those

benchmarks based upon lower level evidence.

Using the classification of evidence shown in Figure 2,

the evidence base that supports the benchmark statements

arrived at for 34 factors within five areas of paediatric

practice is shown in Figure 3. This clearly demonstrates

that the identification of an evidence base for best practice

for structures and processes involved in achieving patient-

focused outcomes, is most reliant upon opinions and

experiences of professionals or is identified from national

reports or the published views of consumer groups.
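
To make the classification exercise behind Figures 2 and 3 concrete, the following sketch (Python, with invented factor names, evidence labels and counts rather than the group's actual classification or data) records the strongest class of evidence located for each factor benchmark and tallies how many factors rest on each class:

```python
# Illustrative sketch only: the evidence classes, factors and evidence found are
# invented, not the north-west group's actual classification or results.

# Classes of evidence, listed here from strongest to weakest for the purpose of
# picking the strongest class available to a factor (the group's own Figure 2
# classification is deliberately presented without overt value judgements).
EVIDENCE_CLASSES = [
    "systematic review / meta-analysis",
    "randomized controlled trial",
    "other quantitative study",
    "qualitative or descriptive study",
    "expert opinion, national reports, consumer group views",
]

def strongest_class(evidence_found):
    """Return the strongest class of evidence located for one factor benchmark."""
    for cls in EVIDENCE_CLASSES:                 # scan from strongest to weakest
        if cls in evidence_found:
            return cls
    raise ValueError("no recognised evidence class supplied")

# Hypothetical factors from one area of practice (e.g. paediatric pain control).
factors = {
    "pain assessment tool in use": ["randomized controlled trial",
                                    "expert opinion, national reports, consumer group views"],
    "parental involvement in assessment": ["qualitative or descriptive study"],
    "written pain management plan": ["expert opinion, national reports, consumer group views"],
}

# Tally how many factor benchmarks rest on each class of evidence (cf. Figure 3).
tally = {cls: 0 for cls in EVIDENCE_CLASSES}
for factor, evidence in factors.items():
    tally[strongest_class(evidence)] += 1

for cls, count in tally.items():
    print(f"{count} factor(s) based on: {cls}")
```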

RESPECTED AUTHORITIES/EXPERTS

If the evidence base available represents the views of

respected authorities, for the compilation of each bench-

mark the membership of the meeting has to vary, to ensure

involvement of experts in the particular field. In addition,

written support and evidence is sought from key stake-

holders. The national picture is ascertained for the north-

west group through literature reviews undertaken by

representatives from six universities and through an

external review panel of national paediatric experts. The

external focus is as wide as possible to ensure that best

practice is identi®ed.

It is important to involve all leading experts in the field,

taking into account geographical strengths and limita-

tions. For example the north-west paediatric bench-

marking group membership includes three paediatric

lead hospitals, specialist centres and many district general

hospital units. All trusts are invited to send a represen-

tative to these meetings. Where possible these will be
acknowledged expert practitioners who possess a special-

ized body of knowledge and have extensive expertise in

the area of practice under consideration, a definition of

experts that is supported by English (1993) and Eraut

(1985).

The involvement of experts is important for two

reasons. First it ensures that consideration of how to

develop quality health care services is not limited or

delayed by the lack of research findings. Expert consensus

from across 27 Trusts is accepted in the identi®cation of

factors and in arriving at benchmarks of best practice. In

addition, involving experts allows the `…untapped
resource…' (Meerabeau 1992 p. 108) of experts' tacit

knowledge to be extracted so that best practice bench-

marks benefit from experts' holistic views and artistry and

do not merely become a tick list of skills.

Experts not only attend compilation meetings but also

are encouraged to publicly verbalize and share the nature

of expert practice. This assists practitioners in compiling

action plans that will enable them to develop holistic best

practice. Experts involved in identifying benchmarks and

assisting with action planning can come from many

disciplines, depending upon the area under consider-

ation.

NURSING-LED INTERDISCIPLINARY WORKING?

The health care team includes members from many

different professional groups. Unless all are unified in

seeking to achieve a mutual cause in achieving patient-

focused outcomes, interactions will be protective of

professional self-interest and communication poor (Covey

1989). All will approach effective delivery of quality care

from a different professional perspective (Walby & Green-

well 1994). Initiatives such as clinical practice bench-

marking require an agreed vision that reminds all that the

central tenet of health care services is to provide high

quality care to patients. If such a vision is shared it should

prevent divisive fragmentation of health care services and

repetition of effort caused by power games and profes-

sional protectionism (Poulton & West 1993, Walby & Greenwell 1994), and thus help to promote multidisciplinary collaboration.

[Figure 3: Evidence bases for factor benchmark statements (according to the classification in Figure 2). Total number of factors considered = 34.]

Benchmarking in the north-west of England is a

nursing-led initiative. Although members of the multidis-

ciplinary team are invited to meetings, the attendance and
input remain inconsistent, with greater enthusiasm from

the professions allied to medicine and support services

than the medical profession. The compilation of patient-

focused benchmarks may therefore, according to attend-

ance, require members to problem-solve beyond the

confines of their particular knowledge base (Diller 1990).

Multidisciplinary effort may have to occur outside the

formal benchmarking working groups. It could be

surmised that the playing of the doctor–nurse game (Stein

et al. 1990, Sweet & Norman 1995) continues in imple-

menting changes and completing the quality improvement

benchmarking cycle (see Figure 1).

If effective channels of communication did not exist

previously, they have to be established under the guise of

practice development. The resultant improved relation-

ships, shared knowledge and expertise appear to lead to

mutual recognition of merit. It is, however, interesting to

note that the experience of the north-west group is, in

areas where improvement in practice is reliant upon

collaborative action with medical colleagues, that bench-

marking scores appear to suggest that practice has

worsened. For example, when the organization of dedi-

cated paediatric theatre lists was identified as best prac-

tice within the paediatric elective surgery benchmark, the

pre-benchmarking activity score was better than the score

attained after 12 months of benchmarking activity. This is

the only factor where practice did not develop in the area

of elective surgery and was also the only factor where

development was dependent upon collaborative multidis-

ciplinary working.

PRACTITIONER INVOLVEMENT

Clinical practice benchmarking focuses upon practice and

therefore must involve practitioners at the patient inter-

face at all stages of the cycle (see Figure 1). Practitioners

should identify the structures and processes that underpin

good practice, score and comment upon actual practice,

and then use the comparative scores to share develop-

ments and innovations. Practitioners provide and receive

practical support, preventing unnecessary repetition of

effort (Ellis 1995, Ellis & Morris 1997). This is a notable

difference from organizational and clinical benchmarks,

where managers and external experts (Phillips 1995)

arrive at the benchmarks.

In patient-focused practice benchmarking, managers

formally commit themselves to providing practical

support for practitioners and assist in the identification

of areas for consideration. The practitioners, however,

actually agree and score the benchmarks (Ellis 1995, Ellis

& Morris 1997). Their involvement heightens their moti-

vation to achieve (Mears 1995) and this is supported by

ensuring that the scoring is not unnecessarily complex or

time consuming (Locke 1968). The benchmarks are seen as

realistic goals of achievable best practice which will,

according to Locke's (1968) motivational change theory,

promote change.

SCORING

Clinical practice benchmarking continuum statements for

scoring are made as objective as possible to ensure

reliability and validity of scoring. They are also reviewed

by the external review panel. However, due to the nature

of the outcomes, and the evidence upon which they are

based, it is accepted that there is an element of subjectivity

in benchmark scoring.

Benchmarking continuums cannot be considered

continuous variables. Comments made to explain the

scores awarded are therefore probably of more value than

the actual numeric scores. The comments are used to

consider parity of scoring and objectivity of benchmarking

continuum statements, but most importantly, they support

the actual sharing and compilation of action plans.

It could be suggested that for some practitioners consis-

tently low scoring might de-motivate or create an

unhealthy competitive spirit. However, as Codling (1992

p. 19) states, dissatisfaction may be useful as it stimulates

a desire for change. In addition, it is important to

acknowledge the effect of health professionals' commit-

ment to patients. Deming, an international figure in

quality improvement, stated when he was asked to look

at quality improvement in health care that those entering

health care professions are eager to perform well (Deming

1982). Indeed being able to identify how current practice

compares with others appears to be a motivator for

developments. The professionalism of practitioners is

supported by the apparent honesty of scores that practi-

tioners award practice. Scores obtained are randomly

validated through cross-scoring with nursing students

who are on clinical placements and qualified nursing

colleagues from the same practice area.

Practitioners appear to accept the need to score honestly

so that they can use the comparison results directly to

develop practice and to improve quality through moti-

vating and influencing others.

SHARING AND NETWORKING

Sharing and networking can occur through the circulation

of comparative data to all member units or at specific topic

meetings. Practitioners and experts from all member units

attend these meetings and top scorers are asked to share

directly their good practice. Unlike organizational bench-

marking, in clinical practice benchmarking scores are not


anonymized, which facilitates openness and sharing. It is

this sharing and networking that differentiates clinical

practice benchmarking from other quality improvement

initiatives. Without it clinical practice benchmarking may

be considered a quality assessment or audit activity,

showing that a unit is `…doing right things right…' (NHSE

1998 p. 3) but not providing realistic and practical support

in the development of best practice.
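
As a rough illustration of this comparison-and-sharing step, the sketch below (Python, with invented unit names, factors and scores rather than the project's data) collects the non-anonymized scores awarded for each factor and reports the top-scoring unit that might be asked to share the practice behind its score:

```python
# Illustrative sketch with invented unit names and scores; in the project the
# scores are self-awarded by practitioners against benchmark continuums,
# with 10 signifying attainment of the benchmark of best practice.

scores = {
    # unit: {factor: score out of 10}
    "Unit A": {"F1 adolescent-dedicated area": 4, "F2 flexible visiting": 9},
    "Unit B": {"F1 adolescent-dedicated area": 8, "F2 flexible visiting": 6},
    "Unit C": {"F1 adolescent-dedicated area": 6, "F2 flexible visiting": 7},
}

# Collect every factor scored by any unit.
factors = sorted({factor for unit_scores in scores.values() for factor in unit_scores})

for factor in factors:
    # Scores are deliberately not anonymized, so the top scorer can be
    # approached directly to share the practice behind the score.
    top_unit = max(scores, key=lambda unit: scores[unit].get(factor, 0))
    top_score = scores[top_unit][factor]
    print(f"{factor}: ask {top_unit} (score {top_score}) to share its practice")
```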

Practitioners can demonstrate their acceptance of their

professional duty to ensure that through all possible

avenues of action their patients receive the best possible

care (UKCC 1992). Indeed clinical practice benchmarking

provides a quality improvement approach that realisti-

cally meets the aims stated in the UK's A First Class

Service document which demands that `…evidence based

practice is supported and applied routinely in everyday

practice…' (Department of Health 1998 p. 36).

INCONSISTENCIES IN SERVICE

If this aim is fully achieved it could be argued that there

should be no inconsistencies or geographical variance in

the standard of care. As Frank Dobson (former UK Secre-

tary of State for Health) stated, the delivery of high quality

care `…should not depend upon the geographical accident

of where…' patients `…happen to live' (Department of

Health 1998 p. 2).

Prior to benchmarking activity in the north-west, prac-

titioners were, as Lipsky (1992) suggests, apparently

ignorant of developments outside their own area of

practice or sphere of influence. This had led to unneces-

sary repetition of dissemination effort and wasteful dupli-

cation of actual research activity. For example, at the first

north-west meeting, practitioners from paediatric units

across 27 NHS Trusts identified that current isolated

development efforts in each trust were concentrated

around the same six areas of practice. Indeed researchers

appeared to be failing to accept their responsibilities in

ensuring dissemination of research findings (Luker &

Kenrick 1995). This was true even where an evidence

base for practice was identi®able and effectiveness of

practice application had been clearly demonstrated. There

was no consistency in the evidence base in use or the

quality of health care.

Figure 4 shows practice-benchmarking scores obtained

from NHS paediatric units around the north-west of the

UK, before quality improvement benchmarking activity.

[Figure 4: Range of scores awarded by practitioners, pre-benchmarking activity (10 = best practice).]

Each chart relates to a particular area of practice, e.g.

adolescent care. Along the x axes are the different struc-

tures and processes that professional consensus identified

as essential for the attainment of patient-focused outcomes

in that area of practice. They appear as Factor 1 (F1),

Factor 2 (F2), etc. The y axes relate to the scores awarded

for that factor. Practitioners were asked for each factor to

compare their actual practice against a continuum of

practice descriptors with a 10 score signifying attainment

of the benchmark of best practice. Scores for the number of

Trusts stated have been collated, to show for each factor

the range of scores, self-awarded by practitioners across

the north-west.

The graphs (Figure 4) highlight the inequalities in

practice at the commencement of clinical practice bench-

marking and do not promote confidence in the equality of

National Health Service provision in the UK. Figure 5

compares these initial scores with scores awarded after

approximately 24 months of benchmarking quality

improvement activity.

The y axes again show the scores awarded and the x

axes relate to the structures and processes, the factors.

This time, however, the `pre' column relates to scores

prior to any actual quality improvement activity, and the

`post' column, relates to scores after 24 months of quality

improvement activity. The cycle in Figure 1 has been

completed and the inner update circle commenced with

benchmarks re-scored. The dotted line divides the

different factors.

Figure 5 shows that in most factors the range is closing.

This indicates that after 24 months of clinical practice

benchmarking activity, there is apparently less variance in

the benchmarking scores awarded by practitioners, which

may suggest greater consistency in practice in the partic-

ular areas considered. In addition, for some factors, the

median scores are also improving which suggests that the

quality of care may also be improving. However, the same

hospitals were included in the sample for both `pre' and

`post' results and due to non-receipt of some scores,

comparable sample numbers are small. Therefore, results

have to be approached with some scepticism. In addition,

there is difficulty in ascertaining whether these improve-

ments can be directly attributed to benchmarking and

further evaluation is required.
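
To show the kind of comparison summarized in Figure 5 in concrete terms, a minimal sketch (Python, with hypothetical scores rather than the project's data) computes the range and median of the scores awarded for each factor before and after a period of benchmarking activity; a narrowing range suggests more consistent practice and a rising median suggests improving quality:

```python
# Hypothetical pre- and post-benchmarking scores (out of 10) for three factors
# across four matched units; not the north-west project's actual results.
from statistics import median

pre  = {"F1": [2, 4, 5, 9], "F2": [1, 3, 6, 8], "F3": [5, 6, 7, 9]}
post = {"F1": [6, 7, 8, 9], "F2": [4, 5, 7, 8], "F3": [6, 7, 8, 9]}

for factor in pre:
    for label, factor_scores in (("pre", pre[factor]), ("post", post[factor])):
        spread = max(factor_scores) - min(factor_scores)  # range of self-awarded scores
        mid = median(factor_scores)                       # median score for the factor
        print(f"{factor} {label:<4}: range = {spread}, median = {mid}")
```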

Comparison of re-score results is, however, only one

measure of the effects of clinical practice benchmarking

activity. It is important also to acknowledge that the

bringing together of practitioners has a far wider impact

upon practice. Networking promotes general exchange of

information and also creates a wider supportive culture, especially important in areas of specialist practice where practitioners can feel isolated and ill-informed.

[Figure 5: Benchmark scores comparing pre- and post-benchmarking activity in a specified area of practice, showing the range of scores and the median scores for each factor (pre- and post-benchmarking median scores marked separately; dashed lines divide the factors).]

NATION-WIDE DISSEMINATION

Having identified benchmarks of best practice, these could,

as suggested earlier, provide invaluable standards of

excellence to be achieved nation-wide (Department of

Health 1998). However, if practice benchmarks are based

on lower level evidence (Oxman 1994) they may reflect

only local needs and provision. The greater the specialist

knowledge and the higher the level of evidence upon

which a benchmark is based, the wider the applicability of

a clinical practice benchmark will be.

Another concern with widening the use of set bench-

marks is that if practitioners' ownership is lost, the

motivation effect upon the practitioners may be reduced

(Mears 1995). One possible solution could be to use

national benchmarks for scoring but to ensure then that

benchmarking activity is undertaken locally with local

networks for comparison of scores and the sharing and

supporting of developments in practice. With this

scenario, practitioners may accept national benchmarks

as helping them to identify areas for practice development

effort, with high scoring colleagues locally providing

practical guidance for the development of best practice.

The national quest to meet the identified benchmarks

would help to ensure support from managers and

employers.

IMPACT OF BENCHMARKING

Benchmarking provides all those involved with a `…real-
istic achievable picture of the desired future…' (Codling

1992 p. 19). Clinical practice benchmarking has a greater

impact as it not only promotes the copying of examples of

best practice, but through the use of all available evidence

advances the continuous quest for best possible practice.

The requirement to attain set outcomes may for many

practitioners be considered beyond their sphere of influ-

ence and expertise. Through benchmarking activity prac-

titioners can themselves identify the structures and

processes that will support them in attaining required

outcomes and may indeed enable them to surpass expec-

tations.

The perceived impact of benchmarking over other

quality initiatives is summarized in Table 1.

Practitioners are in control. They not only participate in

identifying the goals to be attained but are mutually

supportive in achieving developments in practice that

have a positive effect upon patient care. Benchmarking

activity is not only about auditing practice to ensure

practice is achieving required measurable outcomes

(Gillies 1996, NHSE 1998) but supports open comparison

and sharing to allow continuous improvement and devel-

opment. Benchmarking pushes the boundaries of best

practice ever onwards. Practitioners, aware of develop-

ments elsewhere, can develop practice with minimal

effort, concentrating resources on new areas for practice

development.

CONCLUSION

The central tenets of a clinical practice benchmarking

group are that:

· The focus is the provision of best possible care for a

patient.

· Benchmarks are not dependent upon the existence of

high level research but utilize all available evidence.

· Factors considered include not only desired outcomes

but also the structures and processes that support the

attainment of patient-focused outcomes.

· Practitioners lead the benchmarking activity.

· Managers support practitioners' efforts.

· Honesty, openness and willingness to share are essen-

tial.

Table 1 Perceived impact of benchmarking over other quality initiatives

Quality initiatives → Benchmarking
Fit for purpose → Best possible practice
Traditional practice → Evidence-based practice
Internal focus → External focus
Professional fragmentation of care → Patient-focused care
Internal efficiency → Recognized excellence
Management-led → Practitioner-led with management support
Pockets of good practice → Dissemination of good practice
Outcome measurement → Process improvement to outcome improvement
What is done → How it is done
Achieving agreed standards → Continuous improvement
Repetition of effort → Sharing
Competitive protectionism → Open comparison and sharing


Clinical practice benchmarking activity is increasingly

being recognized as a valuable practice development and

quality improvement initiative with groups appearing in

many specialities throughout the United Kingdom. The

paediatric-benchmarking group in the north-west was

established 5 years ago with support from the chief

nurses' office and the Department of Health. Initial

scores demonstrated the wide variation in the quality of

care children and families could have expected to

receive in the north-west. Benchmarks are re-scored

every year and initial evaluation of the north-west

project suggests not only that practice is developing,

but also that by working together, sharing developments

and innovations, practitioners are helping to ensure that

wherever they are cared for, patients can expect a

similar high standard of care.

Clinical practice benchmarks may be considered to be

an aspiration for gold standards for practice, formalizing

the utilization of all levels of available evidence. However,

developments in practice are reliant upon how the

benchmarks and the comparative data obtained are

utilized. It is suggested that the effectiveness of bench-

marking activity is reliant upon practitioner commitment

and openness.

Professionals employed in caring professions should

not approach developments in patient-focused best prac-

tice as competitive. All professionals involved in health

care are under a duty of care, which involves ensuring the

uniform provision of a high quality health service. Clinical

practice benchmarking is one continuous quality improve-

ment initiative that professionals may wish to consider to

support the sharing of evidence-based practice.

References

Attree M. (1993) An analysis of the concept `quality' as it relates to contemporary nursing care. International Journal of Nursing Studies 30, 355–369.
Aspling D.L. & Lagoe R. (1996) Benchmarking for clinical pathways in hospitals: a summary of sources. Nurse Economist 14, 92–97.
Bassett C. (1992) The integration of research in the clinical setting: obstacles and solutions. A review of the literature. Nursing Practice 6, 4–8.
Bond S. & Thomas L. (1993) Outcomes of Nursing. A Workshop Report. 7, 35–36.
Codling S. (1992) Best Practice Benchmarking. Gower, Cambridge.
Cole A. (1998) Pages of history. Nursing Times 94, 4–6.
Covey S.R. (1989) The Seven Habits of Highly Effective People. Simon & Schuster, New York.
Department of Health (1997) The New NHS: Modern and Dependable. Department of Health, Stationery Office, London.
Department of Health (1998) A First Class Service. Department of Health, Stationery Office, London.
Diller L. (1990) Fostering the interdisciplinary team: fostering research in a society in transition. Archives of Physical and Medical Rehabilitation 71, 275–278.
Donabedian A. (1966) Some issues in evaluating the quality of nursing care. American Journal of Public Health 59, 1833.
Droogan J. & Song F. (1996) The process and importance of systematic reviews. Nurse Researcher 4, 15–26.
Ellis J.M. (1995) Using benchmarking to improve practice. Nursing Standard 9, 25–28.
Ellis J.M. & Morris A. (1997) Paediatric benchmarking: a review of its development. Nursing Standard 12, 43–46.
English I. (1993) Intuition as a function of the expert nurse: a critique of Benner's novice to expert model. Journal of Advanced Nursing 17, 448–456.
Eraut M. (1985) Knowledge creation and knowledge use in professional contexts. Studies in Higher Education 10, 117–133.
Gillies A. (1996) Improving the Quality of Patient Care. Wiley, Chichester.
Greener J. & Grimshaw J. (1996) Using meta-analysis to summarise evidence with systematic reviews. Nurse Researcher 4, 27–38.
Grimshaw J. & Russel I. (1993) Achieving health gain through clinical guidelines. I: Developing scientifically valid guidelines. Quality in Health Care 2, 243–248.
Haines A. & Feder G. (1992) Guidance on guidelines (editorial). British Medical Journal 305, 785–786.
Harrington H.J. (1996) The Complete Benchmarking Implementation Guide: Total Benchmarking Management. McGraw-Hill, New York.
Lipsky J.G. (1992) Commentary on how to steal the best ideas around. Nursing Scan in Administration 8, 3.
Long A. (1994) Guidelines, protocols and outcomes. International Journal of Health Care Quality Assurance 5, 4–7.
Locke E.A. (1968) Towards a theory of task motivation and incentives. Organisational Behavior and Human Performance 3, 157–189.
Luker K. & Kenrick M. (1995) Towards knowledge-based practice: an evaluation of a method of dissemination. International Journal of Nursing Studies 32, 59–67.
McKenna H.P. (1995) A multidisciplinary approach to audit. Nursing Standard 46, 32–35.
Mears P. (1995) Quality Improvement Tools and Techniques. McGraw-Hill, New York.
Meerabeau L. (1992) Tacit nursing knowledge: an untapped resource or a methodological headache? Journal of Advanced Nursing 17, 108–111.
Mitchell L. (1996) Benchmarking, benchmarks, or best. Best Practices and Benchmarking in Healthcare 1, 70–74.
National Health Service Executive (1998) Achieving Effective Practice: A Clinical Effectiveness and Research Information Pack for Nurses, Midwives and Health Visitors. NHSE/Nursing Times, London.
NHSE (1996) Promoting Clinical Effectiveness. Department of Health, Stationery Office, London.
Oxman A.D. (1994) The Cochrane Collaboration Handbook. Cochrane Collaboration, Oxford.
Parkin P.A.C. (1995) Nursing the future: a re-examination of the professionalization thesis in the light of some recent developments. Journal of Advanced Nursing 21, 561–567.
Phillips S. (1995) Benchmarking: providing the direction for excellence. British Journal of Health Care Management 1, 705–707.
Poulton B.C. & West M. (1993) Effective multidisciplinary teamwork in primary health care. Journal of Advanced Nursing 18, 918–925.
Stein S.J., Watts D.T. & Howell T. (1990) The doctor–nurse game revisited. Nursing Outlook 38, 264–266.
Sweet S.J. & Norman I.J. (1995) The nurse–doctor relationship: a selective literature review. Journal of Advanced Nursing 22, 165–170.
UKCC (1992) Code of Conduct for Nurses. UKCC, London.
Walby S. & Greenwell J. (1994) Medicine and Nursing: Professions in a Changing Health Service. Sage, London.
Woodham-Smith C. (1950) Florence Nightingale. Constable, London.
Wright S. (1994) Defining nurses' and doctors' duty of care. Nursing Standard 9, 12–14.
Zairi M. & Leonard P. (1994) Practical Benchmarking: The Complete Guide. Chapman & Hall, London.
