
Evaluation and Program Planning, Vol. 8, pp. 261-269, 1985. Printed in the USA. All rights reserved.

0149-7189/85 $3.00 + .00 Copyright © 1985 Pergamon Press Ltd

MANAGING THE IMPLEMENTATION OF INNOVATIONS

CYNTHIA ROBERTS-GRAY

The BDM Corporation

ABSTRACT

Data from a field survey of acceptance and implementation of a new training system are used to assess the validity and utility of a conceptual model of implementation processes. Results demonstrate the model's applicability, provide evidence of its explanatory power, and suggest that the model can be usefully applied to guide implementation planning and monitoring. Such applications are expected to improve the return realized on investment in new technologies, policies, and programs.

In the last decade it has become clear that implementation is a missing link in the process of innovation and planned change. Too many potentially useful innovations - for example, new instructional technologies, organizational and policy changes, management decision aids - end up in the closet or file cabinet rather than being aggressively integrated into actual practice. Faulty implementation reduces the return realized on investments made in new programs and technologies. Its cost is counted not only in money wasted on development or acquisition of the innovation, but also in lost opportunity for achieving promised benefits and in the erosion of confidence in innovation as a process for solving problems and achieving real improvements.

Recognizing that implementation would be critical in determining the return realized on a multimillion dollar investment in a new training system in the U.S. Army, the system's training developers took extraordinary steps to ensure that implementation would be a high quality effort. One of those steps was to include implementation monitoring as one element of the support package that was fielded with the system. The purpose of monitoring was to detect emergent problems and make recommendations to help integrate the training system into routine practices in operational units.

A preliminary step in designing the monitoring program for this training system was the development of a conceptual model of implementation processes. Although there is a relatively large literature concerned with implementation issues, review of the literature showed that there were few models that explore the internal dynamics of implementation as a process (Gray, 1981; Tornatzky et al., 1983). Of the few process models that were available - for example, Zaltman and Duncan's (1977) models of planned change - none appeared to provide sufficient conceptual integration to serve as guides or aids in planning for and evaluating the implementation phase of innovation and planned change. A model was developed, therefore, for the specific purpose of guiding implementation monitoring. The model is a conceptual integration of elements and relationships documented in the implementation literature.

The purpose of this paper is to describe the model and explore its utility as a guide to implementation monitoring. The paper is presented in two sections. In the first section, the model is described and arguments are made in support of its content validity. In the second section, data collected during a case of implementation monitoring are used to evaluate the model's criterion validity.

This article grew out of work conducted under contract to the U.S. Army Research Institute for the Behavioral and Social Sciences (ARI Contracts 1070-80-7(4) and 903-82-C-0656). The views expressed here are the author's own and do not necessarily reflect the position of the U.S. Army Research Institute or the Department of the Army.

Dr. Thomas Gray, Ms. Judith J. Nichols, and Dr. James Banks made thoughtful contributions during the preparation of this paper. Their assistance is very much appreciated.

Requests for reprints should be sent to the author at 7600 Woodhollow 1313, Austin, TX 78731.




A MODEL TO GUIDE IMPLEMENTATION PLANNING AND MONITORING

The model shown in Figure 1 is based on the assumption that successful implementation is fostered when the developer of an innovation follows through with support for implementing actions (Beyer & Trice, 1982). In most cases the developer is not in a position to intervene directly in user practices (i.e., cannot directly influence implementing actions or utilization behavior). The model shows, therefore, that features of the innovation and characteristics of the user are the proximal objectives of the developer's implementation support program. Strategies of implementation support can be applied to adapt features of the innovation so that they can be more easily assimilated in the user environment. Other strategies can be applied to facilitate changes in user characteristics so that features of the innovation are accommodated. These changes establish the fit that is needed between innovation and user so that the desired degree of implementation will be realized. The innovation then is afforded the opportunity to produce its intended benefits. The ultimate outcome should be a maximum return on investment in the innovation.

At the top level of analysis, the model describes an implementation system in which the output is the desired degree of implementation. The input is features of the innovation and characteristics of the user that influence successful implementation. The conversion process is a program of implementation support designed to foster a good fit between innovation and user. This top-level systems model was generated to provide a framework for organizing the multitude of variables that are relevant during the implementation phase of innovation and planned change.

A review of the research literature confirmed that the number of variables to be integrated into the model was large. Zaltman, Duncan, and Holbek (1973), for example, identified 19 variables as determinants of planned change. Shields (1976) described 51 factors affecting training technology transfer in the military. Rothman (1974) cited more than 200 variables affecting implementation of social change!

A separate notecard was prepared for each variable that was discussed in the literature as a factor influencing successful implementation of innovations. These cards literally were sorted into boxes representing the output, input, and process of an implementation system. "Survival of promotion of key personnel" (Yin, 1979) and "accuracy" of implementation (Hall & Loucks, 1977), for example, were placed in the output box because they were judged to describe something about the degree of implementation. Another example is "user participation in the adoption decision" (Kotter & Schlesinger, 1979). This variable was sorted into the implementation support box because it describes a strategy for converting a potentially resistant user into one who is committed to successful implementation of the innovation. Individual commitment (resistance) was in its turn sorted into the user characteristics box.
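The card-sorting procedure just described can be expressed as a small lookup table. The variable names come from the text above; the data structure and function are illustrative assumptions, not the author's actual instrument.

```python
# Illustrative sketch of the card-sorting step: each variable drawn
# from the implementation literature is assigned to one component of
# the top-level implementation system. The variable names and their
# destinations are taken from the article; the lookup itself is an
# assumption for illustration only.

CARD_SORT = {
    "survival of promotion of key personnel": "output (degree of implementation)",
    "accuracy of implementation": "output (degree of implementation)",
    "user participation in the adoption decision": "process (implementation support)",
    "individual commitment": "input (user characteristics)",
}

def sort_card(variable: str) -> str:
    """Return the system component ('box') a variable was sorted into."""
    return CARD_SORT[variable]

for var, box in CARD_SORT.items():
    print(f"{var!r} -> {box}")
```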

Some of the variables were more difficult to sort than others. "Complexity" (Fullan & Pomfret, 1977), for example, clearly belongs in the input class and, at first thought, seemed to be a feature of the innovation. But complexity was found to be redundant with user characteristics such as the "skills and knowledge" (Davis, 1973) needed to operate and maintain the innovation. Because skill and knowledge are easier concepts to define and measure, complexity was submerged in these variables and sorted into the user characteristics box.

Despite a number of reductions of this type, the variables in any given box were still sufficiently numerous to require additional integration. Submodels were developed, therefore, to provide integrating structures that could be usefully applied to guide implementation monitoring and feedback. The results of this second-level modelling effort are shown in Figure 1 and discussed in the paragraphs that follow.

Degree of Implementation

The substructure for the component labeled degree of implementation is relatively simple. The three subelements represent a succession of implementation goals. As Lewin (1947) pointed out, change does not happen all at once. Instead, it develops gradually in a series of steps or stages. Lewin identified three stages in the development of social change: periods of "unfreezing," "moving," and "refreezing." The succession of implementation goals shown in Figure 1 corresponds to these three stages. The first goal is development of a local implementation plan. This plan helps unfreeze the user organization. Such planning is necessary, according to Karmas and Jacko (1977), to avoid failure after initially enthusiastic expectations for the innovation. Variables that describe characteristics of the user's implementation plan, such as its "trialability, divisibility, or reversibility" (Glaser, 1973), were sorted into this subcategory.

A fair trial in the user environment is the second in the succession of goals for achieving the desired degree of implementation. Things get moving during this period of initial use of the innovation. The user can evaluate at first hand the demands and benefits of using the innovation and, given a favorable evaluation, make whatever adjustments are necessary to improve or maintain effective implementation. Variables that were sorted into this subcategory describe the "fidelity" or "accuracy" of implementation (Fullan & Pomfret, 1977; Hall & Loucks, 1977).

Finally, there is a period of refreezing during which the innovation is integrated into routine practices and “disappears” into the user organization (Yin, 1979).


[Figure 1. A Systems Model of the Implementation Phase of Innovation and Planned Change. The figure links Features of the Innovation (1: need, availability; 2: advantage, maintainability; 3: effectiveness, reliability) and Characteristics of the User (capability and control dimensions: Know-How, Resources, Commitment, Policies; 1: awareness, role perception, endorsement; 2: skill, facilities, motivation; 3: experience, procedures, legitimation, enforcement) to the Degree of Implementation.]



This final step, labeled routine in the model shown in Figure 1, has been called "routinization" (Yin, 1979), "incorporation" (Lambright, 1977), and "continuation" (Zaltman et al., 1973). The variables that were sorted into this subcategory are those that describe the survival of the innovation - for example, "survival of equipment turnover" (Yin, 1979).

The three subelements of degree of implementation divide the implementation phase of innovation and planned change into three stages. This division provides a set of benchmarks or milestone events against which to evaluate implementation progress. It also serves to structure the problem to facilitate intervention in stages - a characteristic that Lippitt (1973) identifies with "good" models of change.

Features of the Innovation and Characteristics of the User

The model of implementation depicted in Figure 1 shows that degree of implementation is directly influenced by features of the innovation and by characteristics of the user. To develop substructures for these sets of variables, two organizing principles were applied. First, the variables were sorted into categories representing capabilities and controls. Capabilities are the "raw materials" needed to produce the desired output of the system (see Beckhard, 1975; Rousseau, 1979). Controls are mechanisms or features that govern the availability and distribution of the raw materials (Rousseau, 1979). The raw material the innovation puts into the system is its promised benefit or utility. The extent to which this raw material will be available to the user, however, depends on the degree to which the innovation is practicable in the user environment. Variables within the subcategory labeled practicability include the reliability and maintainability of the innovation.

The second organizing principle applied to substructure the two sets of input variables was the developmental principle that had been applied to the output variables. The model was constructed to reflect the hypothesis that different input variables are important at different stages of implementation. Availability of the innovation and match with user needs were identified as subcategories of variables that would affect implementation planning. Maintainability of the innovation and relative advantage - that is, the extent to which the innovation offers advantages beyond the immediate need and beyond those offered by other innovations - were identified as features of the innovation that affect the likelihood of a fair trial for the innovation in the user environment. Finally, the operational cost-effectiveness of the innovation and its reliability were identified as features of the innovation that affect integration of the innovation into routine practices.

Figure 1 shows that two kinds of capability and two kinds of control were identified within the set of variables describing user characteristics. This division was made to acknowledge the fact that both the organization and its individual members are users of an innovation (see Roberts-Gray & Gray, 1983; Scheirer, 1982). The raw material (i.e., capabilities) the organization provides for producing the desired degree of implementation is its resources for supporting use of the innovation. Individual capabilities are represented in the model as know-how - people need to know how to use the innovation in order to support the desired degree of implementation. Organizational control is provided in the policies it establishes regarding implementation or use of the innovation. Individual control is reflected in people's commitment to the innovation for, as Lewin (1947) pointed out, change has both a cognitive (i.e., know-how) and an affective (i.e., commitment) component.

Figure 1 shows the three-stage developmental sequence that was identified for each of these four categories of user characteristics. First, individual users must be aware of and perceive the need for the innovation. The organization must endorse adoption of the innovation and designate implementation role responsibilities in order to generate an implementation plan and prepare to receive and use the innovation. Then, to ensure that the innovation gets a fair trial, individual users must have the skills and knowledge required to use the innovation and be motivated to do so. The user organization must be able to provide the facilities and supplies that are needed for initial or trial use of the innovation, and it must specify the rules for using and evaluating the innovation. Finally, to ensure survival of the innovation, individual members need to be experienced users who accept use of the innovation as a legitimate part of their routine practices. The organization must have standard procedures for operating and maintaining the innovation as well as for monitoring and enforcing utilization policies that are compatible with the rest of the organization's rules and regulations. Examples of variables included in these 12 categories of user characteristics are "cosmopolitanism" (Shields, 1976) and "transition from soft to hard funding" (Yin, 1979).

Implementation Support

The model shows that there are two groups of strategies that can be applied to shape features of the innovation and characteristics of the user so as to support the desired degree of implementation. Strategies of adaptation are applied to change features of the innovation so that they better fit the abilities and values of the user. Strategies of facilitation are applied to help change user characteristics so that features of the innovation are accommodated.

There are two kinds of adaptive implementation. The first occurs when the developer takes an active role in testing and modifying the innovation during its implementation phase. The U.S. Army's Life Cycle System Management procedures (1975), for example, make provisions for modifying hardware systems based on a series of developmental and operational tests. They also allow users to submit Modification Work Orders for further testing to support product improvement. The structure subordinate to the implementation support component of the implementation model illustrated in Figure 1 shows this sequence of testing and modification of the innovation.

The second kind of adaptive implementation occurs when the user initiates redesign or adaptation of the innovation. In the implementation literature, this process is often referred to as "reinvention" (see Lin & Zaltman, 1973; Rogers & Shoemaker, 1971). Reinvention is such a pervasive phenomenon that it is sometimes treated as a philosophy of innovation rather than as a strategy for implementation support (see, for example, papers treating the controversy between pro-fidelity and pro-adaptation researchers; e.g., Blakely, 1982). The implementation model described in this paper is based on the assumption that implementation can be managed to achieve maximum benefit from the innovation. It was therefore necessary to identify a strategy to anticipate and coordinate reinventions that enhance rather than dilute the effectiveness of the innovation.

Implementation monitoring is proposed as the strategy the developer can apply to support healthy reinventions and mitigate unacceptable adaptations. A continued feedback and information system has been described as "perhaps the most important single requirement" for increasing the likelihood of stable implementation (Beckhard, 1975; Kritek, 1976). Monitoring provides information needed to inform users about progress that is being made toward successful implementation. This kind of feedback is a powerful reinforcer of human behavior and can be usefully applied as a strategy of implementation support. Gray (1984) recommends a sequence of monitoring activities which corresponds to the three stages of implementation. Before the innovation is installed in the user environment, monitoring focuses on the adequacy of the implementation support program. During the trial use phase, sufficiency of actual implementation is investigated. Finally, the impact and operational cost-effectiveness of the innovation are examined and results submitted to facilitate decisions about integrating the innovation into routine practices. This sequence for implementation monitoring is shown in Figure 1 as part of the substructure of the implementation support component.

Four strategies of facilitation are identified in Figure 1. To Zaltman and Duncan's (1977) excellent discussions of these strategies is added the proposal that different strategies bring about different kinds of changes in the user. Education strategies provide information and training to ensure that individuals know how to use the innovation and integrate it into their routine practices. Assistance strategies provide technical or fiscal support for arranging organizational resources to accommodate receipt and use of the innovation. Persuasion strategies are applied to shape people's attitudes and values to foster personal commitment to the innovation. Power strategies are exercised to establish policy and sanctions to force the innovation into place and provide organizational control over its use.

An appropriate sequence was identified for applying different techniques within each of the four facilitation strategy groups. Providing information, expert consulting or counseling, demonstrations of the innovation, and networking or linkage between superordinate members of the developer and user organizations are techniques that facilitate timely development of a local implementation plan. User training, funds or a getting-started resource package, user participation in the adoption decision and implementation planning, and negotiation of implementation support agreements are techniques that can be applied to ensure that the innovation gets a fair trial in the user environment. Finally, the provision of user manuals or other procedures for coaching users as they become more experienced with the innovation, technical assistance and team-building activities, and the specification of terms and conditions for developer-supplied warranties are techniques that can be applied to help the user accommodate continued use of the innovation as part of routine practices.

Summary

The model identifies 21 critical issues for measurement and analysis (3 successive approximations of the desired degree of implementation, 6 features of the innovation, and 12 characteristics of the user that directly influence achievement of the desired degree of implementation). These 21 issues are organized into three stages so that the monitor can provide detailed attention to those seven issues that are most important before, during, or after installation of the innovation in the user environment.
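The bookkeeping in this summary (21 issues, seven per stage) can be made concrete with a short sketch. The per-stage structure - one implementation goal, two innovation features, and four user characteristics - follows the text; the specific category labels are reconstructed from the article's description of Figure 1 and should be read as approximations.

```python
# Sketch of the model's 21 measurement issues, grouped by the three
# implementation stages. Stage structure (1 goal + 2 innovation
# features + 4 user characteristics per stage) follows the article's
# summary; the individual labels are reconstructed approximations.

MODEL_ISSUES = {
    "planning (unfreezing)": {
        "implementation_goal": ["local implementation plan"],
        "innovation_features": ["need", "availability"],
        "user_characteristics": ["awareness", "role designation",
                                 "perceived need", "endorsement"],
    },
    "fair trial (moving)": {
        "implementation_goal": ["fair trial"],
        "innovation_features": ["relative advantage", "maintainability"],
        "user_characteristics": ["skill", "facilities",
                                 "motivation", "rules for use"],
    },
    "routine use (refreezing)": {
        "implementation_goal": ["routinization"],
        "innovation_features": ["cost-effectiveness", "reliability"],
        "user_characteristics": ["experience", "standard procedures",
                                 "legitimation", "enforcement"],
    },
}

# Seven issues per stage, 21 in total, matching the text's arithmetic.
stage_counts = {stage: sum(len(v) for v in groups.values())
                for stage, groups in MODEL_ISSUES.items()}
print(stage_counts)
print(sum(stage_counts.values()))
```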

The model also identifies 18 techniques the developer can apply to support implementation of the innovation. As was discussed in the foregoing text, the different strategies bring about different kinds of changes in the interface between user and innovation. This feature of the model can be exercised to help ensure that recommendations developed in the course of implementation monitoring follow directly from analysis of data that are collected to measure degree of implementation, features of the innovation, and characteristics of the user.

The rather lengthy discussion of the logic of the model and the frequent reference to other models and data bearing on the implementation problem are provided as evidence of the model's content validity. The model meets criteria for a "good" model of change (see Lippitt, 1973). It sorts out the elements of a complex phenomenon to provide a picture of the situation at hand. It shows the relative importance of various elements. It also designates milestone events that can be monitored to facilitate intervention in stages.

AN APPLICATION AND TEST OF CRITERION VALIDITY

As Lippitt (1973) has pointed out, models are validated against their applicability. During 1981 and 1982, the model was applied to guide implementation monitoring for an advanced technology training system in the U.S. Army. In addition to providing useful feedback to maximize the benefits obtained from the training system, the monitoring effort provided data with which to conduct a preliminary test of the model's criterion validity.

Method

A field survey was conducted in a sample of active duty Army units to monitor implementation of an advanced technology system for collective training. At the time that implementation monitoring was initiated, data were available from developmental tests to satisfy the training developers that the training system was needed and that its relative advantages would be evident to users. Another agency was charged with collecting data on availability, maintainability, and reliability of the system. Implementation monitoring did not, therefore, include measurement and analysis of features of the innovation. Interviews, questionnaires, and document requests were developed to collect data with which to assess characteristics of the user and to measure the degree of implementation.

Data Sources. The battalion was the primary unit of analysis for the survey because battalions run the training programs into which the training system had to fit. A battalion is a hierarchically organized Army unit with approximately 700 members. The commander of such a unit usually has had 15 or more years of service in the military.

A sample of 20 battalions was drawn from active duty Army divisions stationed in the continental United States and in Europe. Although the battalions were selected on the basis of convenience to the Army, the participating units represented a variety of geographical areas and unit types (i.e., armor, mechanized, and infantry units).

Interviews were conducted with the battalion commander, battalion operations staff officer, two company commanders, two each of their platoon leaders, and two each of the squad/crew leaders in the sampled platoons, for a total of 12 respondents per battalion. Interviews were also conducted with one or more brigade operations staff officers (brigade is the echelon immediately superior to the battalion) and one or more persons at the training equipment maintenance facility serving each of the installations visited. Training schedules, training policy documents, and records of training equipment use were also obtained from the sample battalions.

Procedure. Site visits were made to each of the battalions. Data collection teams of four to six individuals included behavioral scientists and military subject matter experts trained in interview techniques. Semistructured interviews were conducted one-to-one. Questionnaires were administered to individuals following interview sessions. When possible, photocopies of the documents requested were obtained. When photocopies were not available, written notes were taken from the original documents while the research team was on site.

The model of implementation processes shown in Figure 1 guided the development of a hierarchical pattern of analysis whose terminal nodes described the desired status of user characteristics and degree of implementation for the particular training system. Survey questions were drafted to obtain information about the actual status of variables at the terminal nodes of the hierarchy.

Data evaluation rules were developed to use the survey responses to compare actual status against desired status. The evaluation rules were constructed to apply an ordinal scale in which 3 = ideal (i.e., actual status is the desired status), 2 = potential problem, and 1 = problem (i.e., actual status is inadequate). These rules were developed by a panel of personnel with subject matter expertise both in Army training and in the particular training innovation. This approach is similar to that described by Hall and Loucks (1977) and by Tornatzky, Fergus, Avellar, and Fairweather (1980) as a technique for measuring program implementation. Although any individual data evaluation rule is subject to debate, the set of rules represents the considered opinion of the research team. Because the rules are explicit, they enabled systematic aggregation of many pieces of data to address issues at superordinate levels in the model.
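A minimal sketch of one such data evaluation rule follows. Only the 3/2/1 ordinal scale comes from the text; the survey item (quarterly training frequency) and the thresholds defining the desired status are hypothetical examples.

```python
# Sketch of the ordinal data-evaluation rules described in the text:
# a rule compares the actual status reported in a survey response
# against a desired status and returns 3 (ideal), 2 (potential
# problem), or 1 (problem). The example rule below is hypothetical;
# only the three-point scale is taken from the article.

IDEAL, POTENTIAL_PROBLEM, PROBLEM = 3, 2, 1

def evaluate_training_frequency(sessions_per_quarter: int) -> int:
    """Hypothetical rule: desired status is at least 4 sessions per quarter."""
    if sessions_per_quarter >= 4:
        return IDEAL
    if sessions_per_quarter >= 2:
        return POTENTIAL_PROBLEM
    return PROBLEM

# Explicit rules permit systematic aggregation of many data points
# to address issues higher in the model's hierarchy.
responses = [5, 3, 1, 4]
scores = [evaluate_training_frequency(r) for r in responses]
print(scores)  # [3, 2, 1, 3]
```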

The data were analyzed to test the hypothesis that the user characteristics specified in the model shown in Figure 1 directly influence degree of implementation of an innovation. The number of battalions implementing and not implementing the training system was cross-tabulated with status of user characteristics. The average scores on the user characteristics dimensions and the average implementation scores were computed for each battalion by calculating an arithmetic mean across elements at the terminal nodes in the pattern of analysis. If the mean was equal to or greater than 2.0, the battalion was assigned a score of satisfactory. If the mean was less than 2.0, the battalion was scored unsatisfactory. This rescaling was necessary because the sample size (n = 20) was too small to allow more than two scale values on any dimension included in the cross-tabulation.

TABLE 1
BATTALION SCORES

[Table 1 lists, for each of the 20 battalions, satisfactory (0) or unsatisfactory (•) status on Degree of Implementation and on the User Characteristics dimensions of Know-How, Resources, Commitment, and Policies, together with the aggregate user characteristics score. The individual rows are illegible in this copy.]
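The aggregation and rescaling step can be sketched as follows; the node scores in the usage lines are invented examples, not survey data.

```python
from statistics import mean

def battalion_status(node_scores, cutoff=2.0):
    """Aggregate terminal-node scores (1 = problem, 2 = potential problem,
    3 = ideal) into a two-value rating by taking the arithmetic mean
    and thresholding at 2.0, as the rescaling step describes."""
    return "satisfactory" if mean(node_scores) >= cutoff else "unsatisfactory"

print(battalion_status([3, 2, 3, 2]))  # mean 2.50 -> satisfactory
print(battalion_status([1, 2, 1, 3]))  # mean 1.75 -> unsatisfactory
```

Collapsing to two values keeps every cell of the subsequent 2 × 2 cross-tabulations adequately populated given only 20 battalions.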

Results

Table 1 shows the status of user characteristics and of degree of implementation computed for each of the battalions included in the survey. In 13 of the 20 battalions the aggregate user characteristics score was satisfactory. Seven of these 13 battalions had also achieved satisfactory implementation of the training innovation. None of the battalions in which the status of user characteristics was unsatisfactory could claim satisfactory status for degree of implementation.

Although 6 of the 20 battalions had satisfactory user characteristics scores but unsatisfactory implementation scores, user acceptance (i.e., satisfactory user characteristics) appeared to be a necessary condition of satisfactory implementation. In Table 2, degree of implementation is cross-tabulated with user characteristics. Analysis of the data in this table confirmed that user characteristics scores reliably discriminated units in which degree of implementation was satisfactory (Fisher's Exact Test, p < .05).
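The significance test on Table 2's counts can be reproduced from first principles. The sketch below implements a two-sided Fisher's Exact Test using only the standard library; for these counts it confirms p < .05.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's Exact Test for a 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of every table with the same
    margins that is no more probable than the observed table."""
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def prob(x):  # probability of a table with x in the top-left cell
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = prob(a)
    lo = max(0, col1 - row2)
    hi = min(col1, row1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

# Table 2: rows = degree of implementation (satisfactory, unsatisfactory),
# columns = aggregate user characteristics (satisfactory, unsatisfactory).
p = fisher_exact_two_sided(7, 0, 6, 7)
print(p < 0.05)  # True
```

Fisher's exact test is the appropriate choice here because several cells (including a zero) are too small for a chi-square approximation.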

Table 3 shows that when each of the four user characteristics dimensions was cross-tabulated with degree of implementation, the pattern of results was similar to that obtained with the aggregate user characteristics score. The commitment dimension was a notable exception. Because all battalions achieved a satisfactory score on this dimension, cross-tabulating it with degree of implementation provided no information about its contribution to the achievement of implementation goals. The finding is nonetheless interesting because it suggests that, although user commitment may be a necessary condition, it is not a sufficient condition of successful implementation.

TABLE 2
NUMBER OF BATTALIONS IMPLEMENTING THE TRAINING SYSTEM BY AGGREGATE OF USER CHARACTERISTICS

                             User Characteristics
Degree of Implementation   Satisfactory   Unsatisfactory
Satisfactory                    7               0
Unsatisfactory                  6               7

Fisher's Exact Test, p < .05.

TABLE 3
NUMBER OF BATTALIONS IMPLEMENTING THE TRAINING SYSTEM BY SUBCATEGORIES OF USER CHARACTERISTICS

                                  Know-How
Degree of Implementation   Satisfactory   Unsatisfactory
Satisfactory                    6               1
Unsatisfactory                  2              11

                                  Resources
                           Satisfactory   Unsatisfactory
Satisfactory                    5               2
Unsatisfactory                  3              10

                                 Commitment
                           Satisfactory   Unsatisfactory
Satisfactory                    7               0
Unsatisfactory                 13               0

                                  Policies
                           Satisfactory   Unsatisfactory
Satisfactory                    7               0
Unsatisfactory                  7               6

Discussion

The results support the hypothesis that the model identifies user characteristics that influence degree of implementation. All of the battalions that had achieved a satisfactory degree of implementation were battalions with satisfactory user characteristics. Conversely, none of the battalions in which user characteristics were unsatisfactory had achieved a satisfactory degree of implementation.

There are at least two explanations for the finding that 6 of the 20 battalions had apparently accepted the innovation but had not implemented it. The first possibility is that there are factors not included in the set of user characteristics specified in the model that nevertheless affect degree of implementation. For example, Shields (1976) proposes that a set of milieu factors affects training technology transfer in the military. These factors, which include such items as "crises and revolutions," are not part of user characteristics. Also, as noted earlier, this study did not include measures of features of the innovation. Anecdotal evidence suggested that these features may have played a role in determining degree of implementation at different user sites. The wet climate, for example, was reported to affect the durability of some of the training device components at some sites. This situation may have adversely affected user attitudes as well as actual device use at those sites. The available data do not allow exploration of this type of explanation.

The second and more appealing explanation is that favorable status for user characteristics is a prerequisite for successful implementation. In that case, the six battalions that appeared to have accepted the training innovation but were not yet using it would be projected as battalions that would soon achieve satisfactory implementation. This latter explanation is consistent with the systems sequence described in the model.

The fact that the status of user characteristics reliably discriminated users and non-users of the training system argues for the model's validity as a guide for determining critical issues for measurement and analysis during implementation monitoring. This result has important implications for practice. Because developers of new programs, policies, and technologies are rarely in a position to intervene directly in user practices, they cannot directly influence implementing actions. They can, however, provide education, assistance, persuasion, and guidance to shape user acceptance of the innovation. By making user characteristics the proximal objectives of a program to support implementation of an innovation, the developer increases the probability that real gain will be realized on investment in the innovation.

CONCLUSION

The systems model of implementation presented in this paper was developed to serve as a guide for implementation monitoring. The model is organized to aid in identifying critical issues for measurement and analysis. It suggests a sequence for implementation monitoring and thus supports intervention in stages. It also identifies and organizes a range of intervention strategies so that implementation support can be tailored to correct detected problems and foster successful implementation. In this way, the model helps to forge a logical link between planning and monitoring.

Arguments have been presented to support the model's content validity. Data from a case of implementation monitoring have been submitted as evidence of the model's criterion validity. Ultimately, however, the validity of this model will be tested against its applicability. The model has been successfully applied to guide implementation monitoring for a major training innovation in the U.S. Army. Other similar applications are urged. The net effect should be improved return on investment in innovative programs and new technologies.

REFERENCES

BECKHARD, R. (1975). Strategies for large system change. Sloan Management Review, 16, 43-55.

BEYER, J. M., & TRICE, H. M. (1982). The utilization process: A conceptual framework and synthesis of empirical findings. Administrative Science Quarterly, 27, 591-622.

BLAKELY, C. (1982). Organizational innovations: What have we learned? Paper presented to the American Psychological Association, Washington, DC.

DAVIS, H. R. (1973). Change and innovation. In S. Feldman (Ed.), Administration and mental health. Springfield, IL: Charles C Thomas.

FULLAN, M., & POMFRET, A. (1977). Research on curriculum instruction and implementation. Review of Educational Research, 47, 335-397.

GLASER, E. M. (1973). Knowledge transfer and institutional change. Professional Psychology, 4, 434-444.

GRAY, T. (1981). Implementing innovations: A systems approach to what is known. Journal of Technology Transfer, 6, 19-32.

GRAY, W. (1984). Implementation monitoring: A role for evaluators. Paper presented at Evaluation 84, San Francisco, CA.

HALL, G. E., & LOUCKS, S. F. (1977). A developmental model for determining whether the treatment is actually implemented. American Educational Research Journal, 14, 263-276.

KARMAS, J. S., & JACKO, M. (1977, October). Innovations: A note of caution. NASSP Bulletin, 47-56.

KOTTER, J. P., & SCHLESINGER, L. A. (1979). Choosing strategies for change. Harvard Business Review, 57, 106-114.

KRITEK, W. J. (1976). Lessons from the literature on implementation. Educational Administration Quarterly, 12, 86-102.

LAMBRIGHT, W. N. (1977). Adoption and utilization of urban technology: A decision-making study. Syracuse, NY: Syracuse Research Corporation.

LEWIN, K. (1947). Frontiers in group dynamics. Human Relations, 1, 5-41.

LIN, N., & ZALTMAN, G. (1973). Dimensions of innovations. In G. Zaltman (Ed.), Processes and phenomena of social change. New York: John Wiley & Sons.

LIPPITT, G. L. (1973). Visualizing change. La Jolla, CA: University Associates.

ROBERTS-GRAY, C., & GRAY, T. (1983). Implementing innovations: A model to bridge the gap between diffusion and utilization. Knowledge: Creation, Diffusion, Utilization, 5, 213-232.

ROGERS, E. M., & SHOEMAKER, F. F. (1971). Communication of innovations: A cross-cultural approach. New York: Free Press.

ROTHMAN, J. (1974). Planning and organizing for social change: Action principles from social science research. New York: Columbia University Press.

ROUSSEAU, D. M. (1979). Assessment of technology in organizations: Closed vs. open systems approaches. Academy of Management Review, 4, 531-542.

SCHEIRER, M. A. (1982). Program implementation: The organizational context. Beverly Hills, CA: Sage Publications.

SHIELDS, J. L. (1976). Training technology transfer: T3. Proceedings of the 15th Annual U.S. Army Operations Research Symposium (pp. 1053-1065). Washington, DC: TRADOC.

TORNATZKY, L. G., EVELAND, J. D., BOYLAN, M. G., HETZNER, W. A., JOHNSON, E., ROITMAN, D., & SCHNEIDER, J. (1983). The process of technological innovation: Reviewing the literature. Washington, DC: National Science Foundation.

TORNATZKY, L. G., FERGUS, E. O., AVELLAR, J. W., & FAIRWEATHER, G. W. (1980). Innovation and social process. New York: Pergamon Press.

U.S. ARMY. (1975). Life cycle system management model for Army systems (DA Pam 11-25). Washington, DC: Author.

YIN, R. K. (1979). Changing urban bureaucracies. Lexington, MA: Lexington Books.

ZALTMAN, G., DUNCAN, R., & HOLBECK, J. (1973). Innovations and organizations. New York: John Wiley & Sons.

ZALTMAN, G., & DUNCAN, R. (1977). Strategies for planned change. New York: John Wiley & Sons.