

Evaluation and change management: rhetoric and reality

Denise Skinner, Oxford Brookes University, UK

Human Resource Management Journal, Vol 14, no 3, 2004, pages 5-19

Despite its inclusion in prescriptions that are offered for successful change management and the benefits this could bring, it is widely recognised that systematic, planned evaluation of initiatives rarely takes place. On the basis of the findings from qualitative case study research undertaken in the public sector, this article explores both the rhetoric, as represented by the literature, and the reality of evaluation in the context of three change initiatives. What emerges is the importance of informal, personal evaluation which appears both to negate the need and to act as a replacement for systematic planned evaluation for the management group. Equally significant is the evidence of informal evaluations occurring at every level of the organisation that were not recognised by management as important – and which were being neither captured nor shared, other than in a very restricted sense. Consequently, decisions were being made on the basis of an assumed reality that did not necessarily reflect the experience of those affected by the change. Rather than emphasising the need for planned, systematic evaluation processes for change initiatives, it is suggested that inclusion of approaches that facilitate recognition and sharing of perception and experience across group boundaries may be more acceptable and productive.

Contact: Denise Skinner, Business School, Oxford Brookes University, Wheatley, Oxford OX33 1HX, UK. Email: denise.skinner@brookes.ac.uk

The vast majority of prescriptions for the successful management of change have expressed the need for the inclusion of reviews of progress and/or an assessment of results or outcomes at some point (for examples, see Buchanan and Boddy, 1992; Carnall, 1995; Hayes, 2002; Kirkpatrick, 1985). Some have specifically articulated the importance of evaluation as part of the implementation of the change process (for examples, see Hollinshead and Leat, 1995; Owen, 1993; Salaman and Butler, 1994; Thornhill et al, 2000). Yet, despite most change initiatives having failed to achieve their desired outcomes, few organisations have sought to understand why such efforts have been unsuccessful (Preskill and Torres, 1999a). Change initiatives have continued to follow one after the other with little evaluation of their impact being made before organisations progress to the next (Doyle et al, 2000; Torraco, 1997).

Using evidence from public sector case studies, this article considers the issue of evaluating change in the context of three HRM initiatives, and explores the reasons for the differences between the rhetoric, as represented by the advice and guidance in the literature, and the reality as it occurred in the context of these initiatives. The article begins by outlining the place of evaluation in the change process as represented by the management literature. This is followed by details of the research design and the case studies. The article then contrasts the rhetoric with the evidence from the case studies, the reality being that any evaluation that took place did so on a personal, informal basis rather than in a planned and systematic way. On the basis of the findings, it is then suggested that the informal evaluation of the organisational culture and context made by those involved in the change may offer an explanation for the absence of any planned evaluation, while informal evaluation of the change initiative by the management group itself serves as a surrogate.

EVALUATION – THE RHETORIC

As debate and practice in the field of evaluation have developed, notably in education, health and social policy, various approaches to evaluation have arisen. These range from the experimental and solely quantitative, in which the evaluator's role is that of scientist, through to the responsive and purely qualitative, wherein the evaluator adopts a facilitative, counselling role (Stecher and Davis, 1987). Patton (1997: 192) identifies 57 types of evaluation as 'illustrative' of the many types available. Key issues in differentiating between evaluations include: at what point during implementation they occur, what is being evaluated, the basic evaluation approach adopted and who carries them out. Although different definitions take slightly different views, important commonalities include evaluation being conducted as a planned and purposeful activity rather than as an afterthought.

The process of planned evaluation has been variously defined as an activity for providing information for decision-making (Alkin, 1969), an activity focused on assessing the achievement of objectives (Eisner, 1979; Guba and Lincoln, 1981) or an activity focused on assessing actual effects and outcomes regardless of stated goals (Scriven, 1972). Patton (1997) highlights the contribution to improvement, describing evaluation as any effort to increase human effectiveness through systematic data-based inquiry; Russ-Eft and Preskill (2001) draw these ideas together, arguing that evaluation should be a systematic process for enhancing knowledge and decision-making which involves the collection of data.

As Doyle et al (2000) note, planned evaluation is a common element in prescriptions for the implementation of change found within the management literature (for examples, see Buchanan and Boddy, 1992; Carnall, 1995; Hayes, 2002). Nelson (2003) asserts that the management of change should incorporate the regular review of progress, and that strategy should change in response to feedback. Other authors specifically identify important contributions that the inclusion of a planned process of evaluation can make to the successful management of change. Love (1991), for example, outlines the role of effective evaluation in improving management decision-making through provision of information and understanding. Kirkpatrick (1985) highlights the importance of feedback in gaining acceptance and commitment to change initiatives, while Carnall (1995) suggests that people need information to understand new systems and their place in them. Preskill and Torres (1999a) argue that evaluative inquiry helps organisation members reduce uncertainty, clarify direction, build community and ensure that learning is part of everyone's job. The sharing of information, they argue, is essential if new insights and mutual understanding are to be created, for, as Calder (1994) notes, we can make an evaluation only on the basis of the information to which we have access. Indeed, 'the conclusions that we reach will be limited by the quality of that information, its comprehensiveness, relevance, up-to-dateness and accuracy' (Calder, 1994: 16).

Patrickson et al (1995: 6) argue that evaluation is a necessary precursor to more change 'in a cycle of continuous improvement', a pivotal point that provides an opportunity for analysis and reflection before making adjustments to the course of change. Bruce (1998: 56) suggests that evaluation can provide closure to a project, ensuring that, across the organisation, 'learning takes place and motivates people to be willing to participate actively in the future'. Doyle et al (2000) pose the question: if the change is not monitored, how can the experience contribute to organisational learning? Hendry (1996) and Mellander (2001), among others, also argue that any change process must fundamentally be about learning. Hendry's stance is underpinned by the emphasis on the importance of feedback and its effect in either changing or reinforcing people's perceptions and behaviour. Similarly, Patton (1997) maintains that an evaluation process is, in itself, a benefit, due to the learning that occurs among those involved in it, not least because evaluation both depends on, and facilitates, clear communication.

The theory and good practice as defined in the literature position a planned, systematic and rigorous evaluation process as a key part of successful change management. The inclusion of evaluation in implementation plans, linked with clear criteria for success, it is argued, can convey positive messages about both the importance and the intent of an initiative. A structured evaluation process can provide a conduit for communication, a means of involving interested parties and creating shared understanding. It can also contribute to continuous improvement, all of which are among the key activities widely identified as necessary for successful change management. However, despite the arguments about the need for, and benefits of, an explicit evaluation process, the reality appears to be that change initiatives in organisations continue to resist systematic planned evaluation (Doyle et al, 2000). This allows valuable knowledge to escape, condemning both individuals and organisations to repeat the, often unsuccessful, past (Garvin, 1993).

RESEARCH DESIGN

Sample

In order to explore the reality of evaluation in the context of change initiatives, qualitative case study research was undertaken in three public sector organisations, given the names 'University', 'Agency' and 'College' in the article. Public sector organisations were chosen in the belief that the changes aimed at improving efficiency and effectiveness that have been made since the early 1990s (Ghobadian and Ashworth, 1994; Kouzmin et al, 1999) might reasonably be expected to have raised awareness of process and measurement issues. The introduction of performance targets, measurement and league tables have led to a greater emphasis on the need for objective setting, monitoring, control and accountability (Driscoll and Morris, 2001; Hyndman and Eden, 2000) in pursuit of individual and organisational improvement. Kouzmin et al (1999) argue that this has shifted the emphasis in the public sector to means rather than to ends. On this basis, public sector organisations might reasonably be expected to have an increased awareness of process and outcome issues relating to the implementation and assessment of change initiatives.

When the research was undertaken, University employed approximately 3,000 staff and had responsibility for the development of policy, regulations, the creation of materials and the delivery of services to its consumers. Agency was one of a number of ex-civil service departments in the UK to be given Agency status under the Next Steps programme in 1991. At the time of the research, Agency had around 65,000 staff based in a national network of offices which provided a variety of informational and support services to their client groups. In both cases, the focus for the research was an HRM initiative, concerning open recruitment and empowerment respectively, which had been implemented for a five-year period without any explicit, planned evaluation taking place. The third organisation, College, employed 450 people across three sites within the same Midlands town, and was involved in designing, delivering and administering further education courses. The focus for the research in this case was a mentoring initiative, more limited in terms of scope, timing and impact than in the other case studies, but implementation also began without any formal plan for evaluation.

Data collection

In each case, data was collected from a variety of sources (interviews, focus groups, observations, internal and published documents) over a six-month period, details of which are given in Table 1.

Initially, key stakeholders in the change initiative were approached, and further decisions about respondents were made using stakeholder analysis (Burgoyne, 1994) to ensure that all useful contributors were included and that 'completeness' would be achieved (Rubin and Rubin, 1995).

Data analysis

In each case, the data collected from multiple sources was analysed before emerging themes were tested against both the literature and the data from the other cases. The approach used to analyse the data is captured in Boyatzis' (1998) thematic analysis in which the raw data is used to generate themes or patterns that, at a minimum, describe and organise observations; and, at a maximum, interpret aspects of the phenomenon. Thematic analysis begins with 'sensing themes' (Boyatzis, 1998: 11) within the data, progressing through the development of codes to the interpretation of the information and themes in order to contribute to the development of knowledge.

The data collection began in University, followed by Agency and then College. The categories derived from the data in each case were considered in the light of the findings from previous data collection for both commonality and inconsistency. In the second (Agency) and third (College) cases, the data was analysed and coded within the case before being considered in the light of the findings from the previous cases. The intention was to maintain an openness to what was contained within the data, thereby allowing new thoughts to emerge, which might not have been the case if the data had simply been considered within the confines of a framework created by the previous findings. Emergent themes and patterns were also compared with the literature in order to refine categories and to look for relationships that might be expected to exist. This also served as a means of reflecting on the emerging findings to encourage deeper insight, which in turn served to reinforce the credibility of the findings (Eisenhardt, 1989).


TABLE 1 Details of those interviewed in each case study

CASE STUDY ONE – UNIVERSITY

● Project director (four interviews)
● Director of diversity unit
● Two project team members (one interviewed three times; the other once)
● Two trainers
● Head of training
● Two consultants (two interviews each)
● Manager and recruiter
● Two recruitees
● Observation of three meetings of project team with consultants and two focus groups (one with recruiters and one with recruitees)

CASE STUDY TWO – AGENCY

● Senior management development consultant (three interviews)
● The first CE, original champion of empowerment
● Chair of the management development group, area director
● Three area directors
● Director of personnel (retired 1995)
● Head of personnel branch
● Two senior managers
● Nine middle managers
● Six junior management focus groups, each group from a different location
● Six clerical staff-grade focus groups, each group from a different location

CASE STUDY THREE – COLLEGE

● Staff training and development manager, champion of the project (four interviews)
● Personnel manager
● Chief executive
● Three lecturer/mentors
● Lecturer and mentee
● Manager and mentee
● Administrator and mentee
● Manager
● Observation of two group meetings


EVALUATION – THE REALITY

The evidence from the case study organisations was that managers could articulate, and would express support for, the arguments found in the literature about the importance and potential contribution of evaluation in the context of change initiatives. There was recognition that 'we should reflect on what we do and the end product and say, was it worth it?' (project team member, University). The chief executive (CE) of College felt that:

Evaluation is important because you actually learn from it. It closes a loop, it gives you feedback … I think it's a major part of your learning experience … you should never, ever do anything without having evaluated what you're doing and why you're doing it … It's actually a way of making the next group hopefully learn and be aware of the experiences that other people have had so they perhaps don't fall into the same trap. CE, College

In the context of the empowerment initiative in Agency:

The organisation needs to measure the effect of what people have been doing about empowerment because we've been pouring huge amounts of time and money into it, and I think most people think that it's been successful, but I think there's also a body of opinion within the organisation that remains to be convinced and will only be convinced by what I call a manufactured process to produce what they consider is objective evidence … I think in order for people, for some people in particular, the kinds of people that are in our organisation, to buy into that kind of belief, they want what they would describe as some kind of objective evidence. Senior development consultant, Agency

The CE of College also noted that 'we should, at the beginning, give it quite a lot of thought', yet this had not happened in these three cases. In each, plans for the implementation of the change initiatives considered did not include any explicit assessment of either the implementation or the impact of that initiative on the organisation. In the context of each of these change initiatives, senior management's interest and involvement were focused on the creation of the initiative rather than on the specifics of the initiative and implementation. In Agency, for example:

If you haven't got a Board pronouncement of it and there hasn't been something gone out in writing and it hasn't found its way onto their agendas then there is no stimulus to actually monitor progress in a structured way or evaluate it in a structured way … nobody came to me and said are you going to measure it, nobody in any other function as far as I know embarked on their own measurement.

Senior development consultant, Agency

The consequences of the senior management focus included the failure at the outset of each change initiative to define objectives and success criteria, other than in the broadest terms, or to designate specific responsibility for monitoring progress. As a consequence, neither the information nor the resources necessary for a systematic, data-based evaluation were readily available. The recruitment training initiative 'has gone on for five years and no one's checked that any learning has gone on' (consultant, University). 'We didn't have a framework of accountability' (former CE, Agency). 'I would have hoped they would have stood back and looked at how it impacted on people but I mean literally from listening to other people, I don't think that happens at all' (lecturer/mentor, Agency). This was not unusual. Respondents below senior management level perceived little historical evidence within these organisations of effective evaluation of change, particularly in the context of 'soft' initiatives. Indeed, in each of the three organisations, respondents' experience and observation suggested that 'we don't tend to evaluate touchy-feely stuff' (area director, Agency); 'reviews of training and development in the past have been done on the nod' (project team member, University); 'no, I haven't seen it [evaluation of non-curriculum-related change] happen before, and within the changes that have already occurred, there has been literally no discussion about it anywhere' (lecturer/mentor, College).

A further consequence was that those implementing the initiatives perceived that senior management did not place any value on planned, explicit evaluation. As the head of training in University said, 'if your line manager isn't interested in it, it's not going to be in your objectives, and if there's nothing formal about it …', the clear implication being that it just will not happen because 'managers are not encouraged to think in evaluative terms' (project director, University); 'there is no stimulus to evaluate in a structured way' (staff training and development manager, Agency); 'you think it's just another chore and you think, well, what is the point?' (lecturer/mentee, College). This perception, combined with performance pressure (Preskill and Torres, 1999a), meant time for reflection was a luxury that could be ill afforded:

[We are] in a climate where everyone is rushing on to the next thing. They are always rushing on to something else and it's questionable whether we ever sit down and really think things through properly.

Director, University

It was easier and quicker to keep doing what had been done in the past, even if it was not best or most effective, for the evidence suggested that measurement was perceived as difficult, and that the literature offered very limited help to managers:

There are some people that say it [empowerment] can't be measured because it's about people's views and feelings and that's not objective data. Staff training and development manager, Agency

People felt the outcomes weren't looked at, in fact they ignored the outcomes, they also ignored the process because they didn't really understand the process that was going on, so, in fact, you've got here the worst of both worlds, you have no real measure.

CE, College, referring to previous evaluations

I think little of the literature suggests any kind of systematic approach. … I mean, I've worked in a number of soft change areas and I think it's particularly hard to pin down evaluation techniques that are useful for those areas. Manager, University


This was compounded by the expectation, based on previous experience and observation, that planned, explicit evaluation would result in criticism and the apportioning of responsibility for failure. In each case, there was evidence that individuals perceived themselves as operating in a blame culture in which the use of previous evaluation had been negative and divisive rather than a vehicle for positive improvement and shared learning:

How can we evaluate what we've been doing while still retaining a sense of being a team, because some of the evaluation is bound to go back in and say, well I think you were wrong there, or that was wrong there, or we should have done it better, but I'm also suggesting that you could have done it better. Project director, University

I work in a blame culture, increasingly so … partly because times are getting tougher, there are less resources, if you read the bits of paper, strategies to manage change, it's all about covering backs, a 'hand washing job'. District manager, Agency

I would say that the problem is: what does evaluation mean to most people? They are expecting evaluation to mean, 'I will evaluate you and tell you [that] you are awful,' and there is huge fear around evaluation.

Chief executive, College

Evaluation is about openness, it needs confidence and a degree of courage. You have to acknowledge the fact that you might be wrong.

Staff training and development manager, College

Inevitably, on this basis, defensive reasoning and routines at both an individual and an organisational level (Argyris, 1994) are unlikely to encourage pursuit of planned, explicit evaluation. For, as Tyson (1999) notes, managers are more than passive bystanders when it comes to the importation of new ideas, often selecting, reinterpreting and giving relative emphasis to ideas according to their own agendas. For these reasons, they become the obvious targets in a blame culture.

The absence of any planned evaluation did not, however, mean that evaluation of the initiative was not taking place. In each of the three cases, informal evaluation (that which takes place at an individual level based on personal experience and observation) was an ongoing activity that occurred at all levels. Informal evaluation had had an impact from the beginning of each of these change initiatives. There was no evidence of a detailed or considered assessment of either the organisational need or the appropriateness of these particular strategies before they were introduced. Senior managers responsible for initiating the process began from the premise, resulting from their personal assessment, that benefits would inevitably result from their introduction of the change. In each case, there appeared to be a firm belief in the inherent value of the initiative in question (Brunsson and Olsen, 1998), despite authors such as Asch and Salaman (2002) cautioning that 'fashionable' ideas may not always be good, while change that fails to identify, or takes a simplistic view of, the origins and nature of organisational difficulties may simply replace one set of problems with another.

The CE in Agency believed that empowerment was 'perfect for that organisation'; the CE of College 'always thought that mentoring schemes were good'; and the director of diversity unit, University, felt that the 'open recruitment' approach was congruent with the dominant, unquestioned, cultural values of University which incorporated 'the mainstreaming of equal opportunities'. Senior management in these cases perceived the initiative selected as a means of addressing change issues, both potential and actual, which related to the desired, new state. Within University, the concept of equality of opportunity was a widely shared and deeply held organisational value, expressed in the Recruiters' Focus Group as 'something we've been doing for 20 years anyway' and, referring to discrimination, 'we don't go in for that sort of thing, even without the training'. It was assumed that the 'open recruitment' training programmes would ensure that there continued to be equity in the selection processes. In Agency, empowerment would give 'staff and colleagues more space to use their initiative, to take decisions so they can respond more quickly to client need, innovate' (former CE), thus enabling the organisation to become more efficient, to offer a better service to clients and increased job satisfaction to staff. In College, mentoring would not only socialise staff more quickly but also improve communication, increase motivation and reduce turnover:

I think that one of the reasons why (a) you have demotivated staff, and (b) the reasons why we don't have the quality that I would like to see [of staff] here is because we don't have an effective mentoring system and people don't see why things are done in a particular way. CE, College

The power and impact of such assessments is widely recognised in the literature. Writing about the measurement of business excellence, Kanji (2002: 7) observes that 'too many management decisions are based on "gut feeling" rather than measurement and fact. They are made on the basis of unverified information, experience, instinct or the opinion of the most influential people rather than on information correctly extracted from reliable data' (Conti, 1997). Easterby-Smith (1994) also notes the preference of managers, particularly at senior levels, for information received via their own informal information channels, and observes that this information tends to be far more influential than that produced via more formal channels. Clark and Salaman's (1998) ideology of management reinforces the belief of managers in the value of their own judgements and disposes them towards pre-reflective reasoning (King and Kitchener, 1994), making them unlikely to question their own interpretations or to acknowledge the limits of their own understanding.

The unquestioned belief that there would be a benefit to the organisation from introducing these particular change initiatives reduced any perceived need to incorporate planned evaluation – those responsible for their initiation already 'knew' that their effect would be positive, particularly in situations where the successful actions of others were being imitated (Brunsson and Olsen, 1998). This tendency was not confined to senior managers. Managers involved in the implementation of the three initiatives also made informal evaluations, believing that data provided by their own personal assessments were sufficient in themselves. In University, for example, evaluation of training had traditionally been 'informal, sitting-in or hearsay' (trainer, University). In Agency, 'if it was left to me I wouldn't evaluate it [empowerment] at all, because I think it's observable' (senior development consultant).


I did not attend the evaluation meeting, number one because I was too busy, number two because I think I knew enough about it, I knew enough about people's views … yeah, we will evaluate it, but we think we know anyway what it was. Personnel manager, College

Yet, while accepting the power of their own informal evaluation processes, the managers in the case studies did not consider, or were unaware of, the potential effect of similar informal evaluation processes among other staff. Preskill and Torres (1999b: 51) observe that:

At any one time, most individuals in an organisation will have considered issues and solutions for the dilemmas facing their organisation – just as a matter of their own daily observations and reflections.

It was clear that all who were involved in/affected by the initiative were making their own personal informal evaluations, not always at a conscious level, of the process, its impact and those involved; these were often shared within peer groups or communities-of-practice (Hendry, 1996). On the basis of this individual experience and observation, people were reaching their own, sometimes conflicting, conclusions. For example, in University, some felt that the organisation was being too critical of itself as it was clearly practising 'open recruitment' (even though there was no concrete evidence to support this) and, therefore, any formal or planned evaluation was unnecessary. Others, however, felt uncertainty about how successful the organisation truly was in its intent to be open and equal:

I think I work in quite a male-oriented [environment] … I think it's a subconscious bias because everybody knows that if they did something obviously sexist they'd get the book for it, so it becomes more subtle.

Recruitee, University

For most people in Agency, the informal evaluation based on personal experience and the knowledge shared within their own communities-of-practice was that the move to Agency status and empowerment had been positive. For the most part, they no longer 'felt hindered and shackled compared to years gone by' (middle manager); there was 'more openness', 'freedom to manage', 'more consultation' and more 'accountability'. Without exception, respondents said that a return to the 'old ways' would be strongly resisted, yet some observed that:

There is some feeling that, at higher levels, things are going backwards, laying down the way that things should be done rather than acting on the advice and input from lower levels. There's a feeling that they're paying lip service to the idea of empowerment and involvement.

Junior manager, Agency

Individuals, particularly in the middle manager grades and below, did, however, recognise that their evaluation was largely limited to personal experience; and there was a desire to know what had been done, either successfully or unsuccessfully, in other places. Many wished to have the 'big picture' that personal experience on its own was unlikely to provide.


In College, although there was a general consensus that the change initiative was useful, there were different views about the role of mentors within the programme which had led to disappointed expectations for some:

I think it was fairly clear what the organisation saw as a mentor programme … my understanding of the term 'mentor' comes from elsewhere, was a professional mentor and so I found it slightly strange.

Personnel manager/mentee

I don't think they were selected properly. You see, I think the mentor's more of a champion, more as a mentor ie the individual looks up and they set good examples and the mentor represents the culture of the organisation. Lecturer/mentor

This had left some individuals with a sense of the organisation having missed an opportunity by introducing a very limited scheme. In contrast, there appeared to be a widespread acceptance that any form of mentoring was a good thing, and no attempt was made to question or test its value in an organisational context. This absence had, however, been noted by some:

It was more the system that we're looking at rather than what's mentoring all about, what's this initiative supposed to achieve and has it achieved it? I think they've not asked the right questions.

Lecturer/mentor

CONSEQUENCES

Buchanan and Badham (1999) suggest that the management of change equates to the management of meaning and attempts to establish the legitimacy and credibility with other people of particular definitions of problems and solutions. Similarly, Reichers et al (1997), in their consideration of the defensive role of cynicism and the expectation of failure based on past experience as a foundation for resistance, suggest that people need to understand not only the reasons for change but also its ongoing progress and its results. Such understanding is, however, inevitably dictated by the information that is available.

Patton (1997: 26) observes that 'what is perceived as real is real in its consequences' and that being in touch with reality is not something that can be assumed. Weick (1995: 190) also cautions that 'what is real is more up for grabs' than practitioners realise, and that realities cannot be taken for granted or assumed to be obvious to anyone else. Yet, this appeared to be precisely what the management of these organisations were doing in the context of change.

Many of the benefits argued for planned evaluation, and the problems identified in current change practices, relate to the common themes of information gathering and sharing. Yet, in relation to the implementation of these initiatives, there was no recognition of the potential, or need, for shared learning or interpretation, particularly outside management circles. The focus was solely on the needs of the management agenda, with little acknowledgement of the needs or interests of other stakeholder groups. However, in each of these case studies, individuals at all levels were engaged in making their own assessments and constructing their own 'reality' in relation to their experience of the change initiative. While, in some cases, these views were shared and tested with peers, this was on a limited basis, and much remained tacit rather than explicit. Consequently, future actions were being influenced by assessments based on personal perception and subjective evaluations that resulted from relatively narrow perspectives.

Kuipers and Richardson (1999) observe that each process of change is unique and can be understood only from the experience of the participants, but this needs to happen within a more general analytical framework. Without a planned process of evaluation, there was no apparent mechanism for capturing individual learning that had occurred and/or for sharing it across the organisation. As a result, valuable knowledge was being allowed to 'dissipate to nothing' (Anderson and Boocock, 2002); there was no sense of closure to the experiential learning cycle (Hendry, 1996); and the inability to learn from experience increased the likelihood of repeated mistakes (Gustafson et al, 2003).

CONCLUSIONS

It is undeniable that explicit, planned evaluation can serve a variety of valuable purposes, many of which could address common change problems. However, the findings from the three case studies highlight the influence and importance of the informal, individual evaluation that takes place, both in assessing the impact of an initiative and in undermining any perceived need for a systematic, explicit evaluation process. From inception, these initiatives were based on a personal, informal assessment by senior management that such change would inevitably benefit the organisation. The assessment of those managers involved in the implementation, based on personal experience and observation in relation to previous evaluation activity, led to the conclusion that systematic, planned processes were both high risk and low value within their organisational context. Consequently, implementation plans did not include explicit evaluation activity. The nature of these factors, embedded as they are in experience, culture and context, means that it would be unrealistic to suggest that they can be easily overcome or removed, particularly in the short term. Neither would senior management interest or 'top-down' directives be sufficient in themselves to change these perceptions. Easterby-Smith (1994) observes that any evaluation process is a complex one that cannot be divorced from issues of power, politics and value judgements. It should not be surprising, therefore, if change initiatives continue to resist systematic evaluation.

An alternative approach to the problem may lie in the processes of informal evaluation that occur continually at every level of the organisation. A key factor in any move to shape processes and culture, to build cohesiveness or to develop and promote shared values, must lie in the sense-making processes of the individuals affected (Weick, 1995). What is needed is an increased awareness and acceptance of the potential of these experiences, assessments and conclusions to contribute positively to the creation of shared understanding and continuous improvement. This needs to be linked with the recognition that all parties actively involved in a change process have an interest in the process and outcomes, and a contribution to make in establishing exactly what they were in reality – and that this is not solely the preserve of the management group. If sense-making and learning are social as well as individual processes (Weick, 1995), and dialogue is seen as critical in effective change interventions (Weick and Quinn, 1999), then a shift of focus in relation to evaluation may be called for. Rather than emphasising systematic, planned measurement as part of change management, the key may lie in focusing on the incorporation of processes that facilitate and encourage dialogue across group boundaries, the pooling of experiences, and informal assessment that has the potential to lead to shared learning.

REFERENCES

Alkin, M. (1969). 'Evaluation theory development'. Evaluation Comment, 2, 2-7. Cited in Nevo, D. (1986). 'Conceptualisation of educational evaluation' in New Directions in Educational Evaluation. E. House (ed). London: Falmer Press.

Anderson, V. and Boocock, G. (2002). 'Small firms and internationalisation: learning to manage and managing to learn'. Human Resource Management Journal, 12: 3, 5-24.

Argyris, C. (1994). 'Good communication that blocks learning'. Harvard Business Review, 72: 4, 77-85.

Asch, D. and Salaman, G. (2002). 'The challenge of change'. European Business Journal, 14: 3, 133-143.

Boyatzis, R. (1998). Transforming Qualitative Information: Thematic Analysis and Code Development, Thousand Oaks: Sage.

Bruce, A. (1998). 'Aiming for change? Stay on target'. Professional Manager, 7: 5, 24-25.

Brunsson, N. and Olsen, J.P. (1998). 'Reform as routine' in Strategic Human Resource Management, C. Mabey, G. Salaman and J. Storey (eds). London: Sage.

Buchanan, D. and Badham, R. (1999). Power, Politics and Organisational Change, London: Sage.

Buchanan, D. and Boddy, D. (1992). The Expertise of the Change Agent: Public Performance and Backstage Activity, New York: Prentice Hall.

Burgoyne, J. (1994). 'Stakeholder analysis' in Qualitative Methods in Organisational Research. C. Cassell and C. Symon (eds). London: Sage.

Calder, J. (1994). Programme Evaluation and Quality, London: Kogan Page.

Carnall, C. (1995). Managing Change in Organizations, London: Prentice Hall.

Clark, T. and Salaman, G. (1998). 'Telling tales: management gurus' narratives and the construction of managerial identity'. Journal of Management Studies, 35: 2, 137-61.

Conti, T. (1997). Organizational Self-Assessment, London: Chapman Hall.

Doyle, M., Claydon, T. and Buchanan, D. (2000). 'Mixed results, lousy process: the management experience of organizational change'. British Journal of Management, 11: 3 (special issue), S59-S80.

Driscoll, A. and Morris, J. (2001). 'Stepping out: rhetorical devices and culture change management in the UK Civil Service'. Public Administration, 79: 4, 803-824.

Easterby-Smith, M. (1994). Evaluating Management Development, Training and Education, Aldershot: Gower.

Eisenhardt, K. (1989). 'Building theories from case study research'. Academy of Management Review, 14: 4, 532-550.

Eisner, E. (1979). The Educational Imagination, New York: MacMillan.

Garvin, D. (1993). 'Building a learning organization'. Harvard Business Review, 71: 4, 78-91.

Ghobadian, A. and Ashworth, J. (1994). 'Performance measurement in local government – concepts in practice'. International Journal of Operations and Product Management, 14: 5, 35-51.

Guba, E. and Lincoln, Y. (1981). Effective Evaluation: Improving the Usefulness of Evaluation Results Through Responsive and Naturalistic Approaches, San Francisco: Jossey Bass.

Gustafson, D.H., Sainfort, F., Eichler, M., Adams, L., Bisognano, M. and Steudel, H. (2003). 'Developing and testing a model to predict outcomes of organizational change'. HSR: Health Services Research, 38: 2, 751-776.

Hayes, J. (2002). The Theory and Practice of Change Management, Basingstoke: Palgrave.

Hendry, C. (1996). 'Understanding and creating whole organisational change through learning theory'. Human Relations, 49: 5, 621-41.

Hollinshead, G. and Leat, M. (1995). Human Resource Management: an International and Comparative Perspective, London: Pitman.

Hyndman, N. and Eden, R. (2000). 'A study of the coordination of mission, objectives and targets in UK executive agencies'. Management Accounting Research, 11: 2, 175-191.

Kanji, G. (2002). Measuring Business Excellence, London: Routledge.

King, P.M. and Kitchener, K.S. (1994). Developing Reflective Judgement, San Francisco: Jossey-Bass.

Kirkpatrick, D.L. (1985). How To Manage Change Effectively, San Francisco: Jossey Bass.

Kouzmin, A., Loffler, E., Klages, H. and Korac-Kakabadse, N. (1999). 'Benchmarking and performance measures in public sectors: towards learning for agency effectiveness'. International Journal of Public Sector Management, 12: 2, 121-144.

Kuipers, H. and Richardson, R. (1999). 'Active qualitative evaluation: core elements and procedures'. Evaluation, 5: 1, 61-79.

Love, A. (1991). Internal Evaluation, Newbury Park: Sage.

Mellander, K. (2001). 'Engaging the human spirit: a knowledge evolution demands the right conditions for learning'. Journal of Intellectual Capital, 2: 2, 165-171.

Nelson, L. (2003). 'A case study in organisational change: implications for theory'. The Learning Organisation, 10: 1, 18-30.

Owen, J.M. (1993). Program Evaluation: Forms and Approaches, Sydney: Allen and Unwin.

Patrickson, M., Bamber, V. and Bamber, G. (1995). Organisational Change Strategies, Melbourne: Longman.

Patton, M.Q. (1997). Utilization-focused Evaluation: the New Century Text, Thousand Oaks: Sage.

Preskill, H. and Torres, R. (1999a). Evaluative Inquiry for Learning in Organizations, Thousand Oaks: Sage.

Preskill, H. and Torres, R. (1999b). 'Building capacity for organisational learning through evaluative inquiry'. Evaluation, 5: 1, 42-60.

Reichers, A.E., Wanous, J.P. and Austin, J.T. (1997). 'Understanding and managing cynicism about organizational change'. Academy of Management Executive, 11: 1, 48-59.

Rubin, H. and Rubin, I. (1995). Qualitative Interviewing: the Art of Hearing Data, Thousand Oaks: Sage.

Russ-Eft, D. and Preskill, H. (2001). Evaluations in Organizations, Cambridge, Mass: Perseus Publishing.

Salaman, G. and Butler, M. (1994). 'Why managers won't learn' in Managing Learning, C. Mabey and B. Mayon-White (eds), London: Routledge.

Scriven, M. (1972). 'Pros and cons about goal free evaluation'. Evaluation Comment: The Journal of Educational Evaluation, 3: 4, 1-7.

Stecher, B. and Davis, A. (1987). How to Focus an Evaluation, Newbury Park: Sage.

Thornhill, A., Lewis, P., Millmore, M. and Saunders, M.N.K. (2000). Managing Change, Harlow: Financial Times Prentice Hall.

Torraco, R.J. (1997). 'Theory building research methods' in HRD Research Handbook. R. Swanson and E. Holton III (eds), San Francisco: Berrett-Koehler.

Tyson, S. (1999). 'How HR knowledge contributes to organisational performance'. Human Resource Management Journal, 9: 3, 42-52.

Weick, K. (1995). Sensemaking in Organizations, Thousand Oaks: Sage.

Weick, K. and Quinn, R.E. (1999). 'Organizational change and development'. Annual Review of Psychology, 50: 1, 361-386.
