

Journal of Management Information Systems / Summer 2005, Vol. 22, No. 1, pp. 85–115.

© 2005 M.E. Sharpe, Inc.


Measuring the Performance of Information Systems: A Functional Scorecard

JERRY CHA-JAN CHANG AND WILLIAM R. KING

JERRY CHA-JAN CHANG is an Assistant Professor in the Department of MIS in the College of Business, University of Nevada, Las Vegas. He has a B.S. in Oceanography from National Ocean University, Taiwan, an M.S. in Computer Science from Central Michigan University, an MBA from Texas A&M University, and an M.S. in MoIS and a Ph.D. in MIS from the University of Pittsburgh. His research interests include performance measurement, IS strategy, management of IS, group support systems, human–computer interaction, organizational learning, and strategic planning. His work has appeared in Information & Management, Decision Support Systems, DATABASE, Communications of the ACM, and Journal of Computer Information Systems, and several major IS conference proceedings.

WILLIAM R. KING holds the title University Professor in the Katz Graduate School of Business at the University of Pittsburgh. He has published more than 300 papers and 15 books in the areas of Information Systems, Management Science, and Strategic Planning. He has served as Founding President of the Association for Information Systems (AIS), President of TIMS (now INFORMS), and Editor-in-Chief of MIS Quarterly. He was instrumental in the creation of INFORMS and of the Information Systems Research journal. He recently received the Leo Lifetime Exceptional Achievement Award by AIS.

ABSTRACT: This study develops an instrument that may be used as an information systems (IS) functional scorecard (ISFS). It is based on a theoretical input–output model of the IS function's role in supporting business process effectiveness and organizational performance. The research model consists of three system output dimensions—systems performance, information effectiveness, and service performance. The "updated paradigm" for instrument development was followed to develop and validate the ISFS instrument. Construct validation of the instrument was conducted using responses from 346 systems users in 149 organizations by a combination of exploratory factor analysis and structural equation modeling using LISREL. The process resulted in an instrument that measures 18 unidimensional factors within the three ISFS dimensions. Moreover, a sample of 120 matched-paired responses of separate CIO and user responses was used for nomological validation. The results showed that the ISFS measure reflected by the instrument was positively related to improvements in business processes effectiveness and organizational performance. Consequently, the instrument may be used for assessing IS performance, for guiding information technology investment and sourcing decisions, and as a basis for further research and instrument development.

KEY WORDS AND PHRASES: functional scorecard, information systems performance measurement, instrument development, structural equation modeling.


ASSESSING THE INFORMATION SYSTEM (IS) function's performance has long been an important issue to IS executives. This interest is evident from the prominence of this issue in the various IS "issue" studies [12, 13, 34, 49, 72] as well as the popularity of annual publications such as ComputerWorld Premier 100 and InformationWeek 500, which involve the use of surrogate metrics to assess overall IS functional performance (ISFP). Executives routinely seek evidence of returns on information technology (IT) investments and sourcing decisions—both types of choices that have become more substantial and a competitive necessity. As the unit that has major responsibilities for these decisions, the IS function is usually believed to be an integral part of achieving organizational success. Yet the overall performance of the IS function has proved to be difficult to conceptualize and to measure.

As the outsourcing of IS subfunctional areas such as data centers and "help desks" has grown into the outsourcing of the entire IS function, there is an ever-growing need for formal performance assessment [61]. This will permit the establishment of baseline measures to use in judging outsourcing success. So, the issue of an overall IS functional metric, which is, and has been, high on IS executives' priorities, is becoming even more important.

Although there has been a good deal of research on IS efficiency, effectiveness, and success at various levels of analysis, overall functional-level performance is one of the least discussed and studied. According to Seddon et al. [88], only 24 out of 186 studies between 1988 and 1996 can be classified as focusing on the IS functional level. Nelson and Cooprider's [70] work epitomizes this need.

Moreover, while there exist metrics and instruments to assess specific IS subfunctions and specific IS subareas, such as data center performance, productivity, and data quality, typically these measures cannot be aggregated in any meaningful way. This limits their usefulness as the bases for identifying the sources of overall performance improvements or degradations. As an anonymous reviewer of an earlier version of this paper said, "The critical issue is that the performance of the IS function is now under the microscope and decisions to insource/outsource and spend/not spend must be made in a structured context."

The objective of this research is to develop such an instrument—a "scorecard"—for evaluating overall ISFP.

The Theoretical Bases for the Study

THE DEFINITION OF THE "IS FUNCTION" that is used here includes "all IS groups and departments within the organization" [84]. This definition is broad enough to include various structures for the IS function, from centralized to distributed, yet specific enough to include only the formal IS function that can be readily identified.

Figure 1 shows the modified input–output (I/O) model that is the theoretical basis for the study. The model in Figure 1 has been utilized as a basis for other IS research studies [63, 113]. It incorporates a simple input–output structure wherein the IS function uses resources to produce IS performance, which in turn influences both business process effectiveness and organizational performance.

The resources utilized by the IS function are shown in Figure 1 to be hardware, software, human resources, and integrated managerial and technical capabilities [14, 15, 36]. The IS function is shown to produce systems, information, and services [56, 92], which collectively affect the organization in a fashion that is termed IS functional performance (ISFP), which is to be assessed through an IS functional scorecard (ISFS), the development of which is the objective of this study.

In the theoretical model, IS outputs are also shown as significant enablers and drivers of business process effectiveness, since IS are often the basis for business process operations and redesign [94, 113]. ISFP also is shown to influence business process effectiveness, and both influence overall organizational performance [113].

Although it is not the primary purpose of this study to directly address the business process effectiveness and organizational performance elements of Figure 1, data were collected on these elements of the model for purposes of nomological validation of the "scorecard" that is being developed.

The model of Figure 1 is based on streams of research in IS capabilities, IS effectiveness/success, IS service quality, IS functional evaluation, and IS subfunctional assessment.

IS Capabilities

IS capabilities are integrated sets of hardware, software, human skills, and management processes that serve to translate financial investments in IS into IS performance [17, 23, 42, 83, 99, 111, 113]. For instance, an IS strategic planning capability might consist of well-trained planners, computer-based planning models, knowledgeable technical people, adequate planning budgets, and a well-formulated and specified planning process.

Figure 1. Theoretical Input–Output Performance Model

IS Effectiveness/Success

DeLone and McLean [30] categorized over 100 IS "dependent variables" into six categories and developed an IS success model to describe the relationships between the categories. They concluded that IS success should be a multidimensional measure and recommended additional research to validate the model. Other researchers have since tested and expanded their model [7, 46, 79]. DeLone and McLean [31] have updated the model based on a review of research stemming from their original work. They concluded that their original model was valid and suggested that "service quality" be incorporated as an important dimension of IS success.

IS Service Quality

Recognizing the importance of the services provided by the IS function, the SERVQUAL measure, originally developed in marketing [74], has been adapted to measure IS service quality [75, 110]. However, the controversy over SERVQUAL in marketing [27] has carried over into IS [52, 104], suggesting that more research needs to be conducted to measure IS service quality. Proponents of this measure sometimes advocate its use as a proxy for ISFP. However, as depicted in Figure 1, it is directly applicable only to one of the three major outputs of the IS function.

IS Functional Evaluation

Only a few studies directly address the comprehensive evaluation of the performance of the IS function. No one has developed a validated metric. Wells [112] studied existing and recommended performance measures for the IS function and identified six important goals/issues. Saunders and Jones [84] developed and validated 11 IS function performance dimensions through a three-round Delphi study. They proposed an IS function performance evaluation model to help organizations select and prioritize IS performance dimensions and to determine assessments for each dimension. Both studies focused on top management's perspective of ISFP and did not offer any specific measures.

IS Subfunctional Assessment

Measuring IS subfunctional performance has been important to IS practitioners and academics, and such measures have been developed at a variety of levels using a number of different perspectives. For instance, measurements have been made of the effects of IS on users (e.g., [105]), learning outcomes (e.g., [1]), service (e.g., [52]), e-business (e.g., [96]), and other contexts using economic approaches (e.g., [16]), a financial perspective (e.g., [10]), a social science perspective (e.g., [80]), an "IT value" approach [24], a business process viewpoint (e.g., [98]), and probably others.

So, there has been no paucity of interest in IS assessment, or in the development of measures. However, there is a great need for a comprehensive measure of IS performance that will provide a configural, or "Gestalt" [41], view of an organization's formal IS activities and facilitate decision making and functional improvement.

The Methodological Basis for the Study

TO ENSURE THE APPROPRIATENESS OF THE STUDY at the IS functional level, it was designed according to guidelines from the organizational effectiveness literature. These guidelines were developed in response to problems plaguing organizational effectiveness research as described by Steers [95]. Cameron and Whetton [19] developed seven basic guidelines that are listed in the lefthand column of Table 1. Cameron [18] later demonstrated the usefulness of these guidelines in a study of 29 organizations. These guidelines have also been adopted by IS researchers to clarify conceptual developments in examining IS functional effectiveness [69, 88].

The implementations of Cameron and Whetton's [19] guidelines for this study are shown in the righthand column of Table 1. Thus, the ISFS developed here is defined as organizational IS users' perception of the performance for all of the aspects of the IS function that they have personally experienced. Organizational users of IS services and systems are the primary stakeholder for the IS function [92]. Although there are many other stakeholders for the IS function, users represent the largest group, and their efficacy in utilizing IS products and services directly affects the organization's bottom line. Therefore, the aggregated evaluation of individual users' assessments forms a quite comprehensive picture of the ISFP.

Table 1. Implementation of Cameron and Whetton's [19] Guidelines

1. From whose perspective is effectiveness being assessed? Organizational users of IS services and systems.
2. On what domain of activity is the assessment focused? Products and services provided by the IS function.
3. What level of analysis is being used? The IS function [84].
4. What is the purpose for judging effectiveness? Identify strengths and weaknesses; track overall effectiveness.
5. What time frame is being employed? Periodically, ranging from quarterly to annually.
6. What type of data are being used for judgments of effectiveness? Subjective; perceptual data from individuals.
7. What is the referent against which effectiveness is judged? Past performance measures.


Despite its focus on users, this approach is different from the popular "user satisfaction" measures [6, 8, 35], because it is designed to assess people's perceptions of the overall IS function rather than to capture users' attitudes toward a specific system.

The Domain and Operationalization of the IS Performance Construct

USERS' PERCEPTIONS OF IS ACTIVITIES derive from their use of the IS "products" and the services provided by the IS function. IS research has traditionally separated the effect of systems and information as two distinct constructs [30]. However, system and information quality are "attributes of applications, not of IS departments" [87, p. 244]. Therefore, they are not sufficient to reflect the effectiveness of the entire IS function.

Domain of the ISFS Construct

The domain of ISFP used in this study reflects the theory of Figure 1 and the models suggested by Pitt et al. [75] and DeLone and McLean [31]. The definitions of the three basic output-related dimensions are given below. A model of the ISFS construct, using LISREL notation, is presented in Figure 2.

• Systems performance: Assesses the quality aspects of systems such as reliability, response time, ease of use, and so on, and the various impacts that systems have on the user's work. "Systems" encompass all IS applications that the user regularly uses.

• Information effectiveness: Assesses the quality of information in terms of the design, operation, use, and value [108] provided by information as well as the effects of the information on the user's job. The information can be generated from any of the systems that the user makes use of.

• Service performance: Assesses the user's experience with services provided by the IS function in terms of quality and flexibility [38]. The services provided by the IS function include activities ranging from systems development to help desk to consulting.

In order to develop a measurement instrument with good psychometric properties, the "updated paradigm" that emphasizes establishing the unidimensionality of measurement scales [40, 89] was followed. A cross-section mail survey is appropriate to obtain a large sample for analysis and to ensure the generalizability of the resulting instrument.

Operationalization of Constructs

Two sets of constructs were operationalized in this study—the three-dimensional ISFS construct and the constructs related to the consequences of ISFP. These "consequences" constructs (business process effectiveness and organizational performance), as shown in Figure 1, were used to assess nomological validity. Whenever possible, previously developed items that had been empirically tested were used or adopted to enhance the validity and reliability of the instrument under development. Some new measures were also developed from reviews of both practitioner and research literatures to reflect developments that have occurred subsequent to the development of the measures from which most items were obtained (e.g., e-commerce, enterprise resource planning [ERP], etc.).

The three output dimensions of Figure 1 are the basis for three ISFS dimensions.

Systems Performance

Measures of systems performance assess the quality aspects of systems and the various effects that IS have on the user's work. Empirical studies listed under the categories "system quality" and "individual impact" in DeLone and McLean's [30] IS Success Model were reviewed to collect the measures used in those studies. In addition, instruments developed by Baroudi and Orlikowski [8], Doll and Torkzadeh [35], Davis [29], Kraemer et al. [59], Mirani and King [67], Goodhue and Thompson [43], Ryker and Nath [81], Saarinen [82], and Torkzadeh and Doll [103] were also reviewed and included for more updated measures published subsequent to DeLone and McLean's original review.

Figure 2. Three-Dimensional Model of ISFS

Information Effectiveness

Measures of information effectiveness assess the quality of the information provided by IS as well as the effects of the information on the user's job. Although DeLone and McLean's [30] "information quality" provided a good source for existing measures, Wang and Strong [109] developed a more comprehensive instrument that encompasses all measures mentioned in DeLone and McLean's review. Therefore, the 118 measures developed by Wang and Strong make up the majority of items in this dimension. However, since the focus of their instrument is on quality of information, in order to ensure coverage of measures on the effects of information on the user's job, some new items were developed.

Service Performance

Measures of service performance assess each user's experience with the services provided by the IS function in terms of the quality and flexibility of the services. The entire IS–SERVQUAL instrument is included in this dimension for comprehensiveness. New measures were also incorporated to augment the IS–SERVQUAL items based on the more comprehensive view of service performance proposed by Fitzgerald et al. [38]. In addition, literature on three areas of IS functional services that were not explicitly covered by the service quality literature—training [60, 65, 71], information centers [11, 44, 47, 66], and help desks [20]—was also reviewed and included to ensure the comprehensiveness of measures for this dimension.

In addition to utilizing existing items to measure these constructs, the emergence of innovations that have come into use since most of the prior instruments were developed prompted the inclusion of new items to measure the IS function's performance in seven new areas: ERP [51], knowledge management [45, 64, 97], electronic business [9, 55], customer relationship management [39], supply chain management [37], electronic commerce [22, 33, 102], and organizational learning [86, 100]. In total, 31 new items gleaned from the practitioner and research literatures to reflect potential user assessments of IS function's contribution to those areas in terms of systems, information, and services were incorporated to expand the item pools for each dimension.

Instrument Development

A total of 378 items were initially generated. Multiple rounds of Q-sorting and item categorization were conducted [29, 68] to reduce the number of items and to ensure the content validity of the ISFS instrument. The last round of Q-sort resulted in the identification of subconstructs with multiple items for each of the three dimensions. Table 2 shows the subconstructs for each dimension that resulted from the Q-sort process.


The Q-sorts resulted in an ISFS instrument that consists of 42, 36, and 32 items for the three dimensions, respectively. All items were measured using a Likert-type scale ranging from 1 (hardly at all) to 5 (to a great extent), with 0 denoting "not applicable." The final version of the instrument is in the Appendix.

Survey Design and Execution

A sample of 2,100 medium-to-large companies with annual sales over $250 million was randomly selected from Hoover's Online (www.hoovers.com) and InformationWeek 500.

To avoid common-source bias, data were collected from two types of respondents in each of the sampled organizations. Data for the ISFS instrument were collected from IS users, and organizational CIOs were asked to respond to a "Consequences of ISFP" survey, which was used as a basis for establishing nomological validity.

A packet consisting of one "Consequences of ISFP" instrument and three ISFS instruments was sent to the CIOs of these companies. The CIO was asked to respond to the "Consequences of ISFP" survey and to forward ISFS instruments to three IS users. The characteristics of desirable user–respondents in terms of various functional areas, familiarity with IS, and so on, were specified.

The CIO is deemed to be suitable for receiving the packet because the topic of this research would be of great interest to him or her, therefore increasing the potential for participation. The CIO is also an appropriate respondent to the "Consequences of ISFP" survey because he or she is at a high-enough position to provide meaningful responses concerning consequences. Although it is possible that the CIO might distribute the ISFS survey to "friendly" users and potentially bias the responses, it is unlikely that the users would be able to consciously bias the results due to the focus of the analysis (variance explanation) and the length and complexity of the ISFS instrument.

Table 2. Sub-ISFS Constructs from Q-Sort

Systems performance: Effect on job; Effect on external constituencies; Effect on internal processes; Effect on knowledge and learning; Systems features; Ease of use.
Information effectiveness: Intrinsic quality of information; Contextual quality of information; Presentational quality of information; Accessibility of information; Reliability of information; Flexibility of information; Usefulness of information.
Service performance: Responsiveness; Reliability; Service provider quality; Empathy; Training; Flexibility of services; Cost/benefit of services.


Two rounds of reminders were sent to initial nonrespondents to improve the response rate. In addition, where appropriate, letters were sent to the CIOs who returned the CIO survey soliciting additional user participation.

At the conclusion of data collection in 2001, 346 usable ISFS instruments and 130 "Consequences of ISFP" surveys were received, with 120 companies having responses from at least one IS user and the CIO. This resulted in a response rate of 7.2 percent for the CIO survey, 5.6 percent for the ISFS questionnaire, and 6.1 percent matched-pair responses.

Two analyses were conducted to assess possible nonresponse bias. t-tests of company size in terms of revenue, net income, and number of employees between responding and nonresponding companies showed no significant differences. t-tests of 30 items randomly selected from the three ISFS dimensions (10 items each) between the early (first third) and late (last third) respondents [4, 62] also showed no significant differences. Therefore, it can be concluded that there was no nonresponse bias in the sample and that the relatively low percentage response rate does not degrade the generalizability of the ISFS instrument [57].
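For readers who wish to replicate the early-versus-late comparison on their own survey data, a minimal sketch in Python is given below. The DataFrame layout, column names, and wave labels are illustrative assumptions, not artifacts of the study.

```python
# Sketch of an early/late respondent nonresponse-bias check, assuming a pandas
# DataFrame `responses` with item columns and a `wave` column labeled
# "early" or "late". Names and the comparison are illustrative only.
import pandas as pd
from scipy import stats

def early_vs_late_ttests(responses: pd.DataFrame, item_cols, wave_col="wave"):
    """Run an independent-samples t-test per item between early and late waves."""
    early = responses[responses[wave_col] == "early"]
    late = responses[responses[wave_col] == "late"]
    results = {}
    for col in item_cols:
        t_stat, p_value = stats.ttest_ind(
            early[col].dropna(), late[col].dropna(), equal_var=False
        )
        results[col] = (t_stat, p_value)
    return results

# Items whose p-values stay above the chosen threshold show no significant
# early/late difference, the pattern the authors report for all 30 sampled items.
```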

Sample Demographics

The participating companies represent more than 20 industries, with nearly a quarter of the companies in manufacturing (24.6 percent), followed by wholesale/retail (13.8 percent), banking/finance (10.8 percent), and medicine/health (7.7 percent). The range of annual sales was between $253 million and $45.352 billion, with an average of $4 billion for the sample. For the "Consequences of ISFP" surveys, 46.9 percent of the respondents hold the title of CIO. More than 80 percent of the respondents have titles that are at the upper-management level, indicating that the returned surveys were responded to by individuals at the desired level. For the ISFS instrument, 47.7 percent of the respondents are at the upper-management level and 39.9 percent are at the middle-management level. The respondents are distributed across all functional areas, with accounting and finance, sales and marketing, and manufacturing and operations being the top three.

Instrument Validation

INSTRUMENT VALIDATION REQUIRES THE EVALUATION of content validity, reliability, construct validity, and nomological validity. Following Segars's [89] process for instrument validation, we first use exploratory factor analysis to determine the number of factors, then use confirmatory factor analysis iteratively to eliminate items that loaded on multiple factors to establish unidimensionality.

Content Validity

Content validity refers to the extent to which the measurement items represent and cover the domain of the construct [54]. It is established by showing that the "items are a sample of a universe" of the investigator's interest and by "defining a universe of items and sampling systematically within this universe" [26, p. 58]. Churchill [25] recommended specifying the domain of the construct followed by generating a sample of items as the first two steps in instrument development to ensure content validity. Domain development should be based on existing theories, and sample items should come from existing instruments, with the development of new items when necessary.

In this study, domain development was guided by theories in both organizational effectiveness and IS research. Items from existing instruments formed the overwhelming majority of the item pool. The initial items were refined through a series of Q-sorts and a pilot test. These development procedures ensured the content validity of the instruments.

Unidimensionality and Convergent Validity

Unidimensionality requires that only a single trait or construct is being measured by a set of measures and "is the most critical and basic assumption of measurement theory" [50, p. 49]. Gerbing and Anderson suggest that "confirmatory factor analysis affords a stricter interpretation of unidimensionality" [40, p. 186] than other commonly used methods. Although the subconstructs of the three basic dimensions described earlier were identified during the Q-sort, those factors needed to be empirically tested. Therefore, exploratory factor analyses were first conducted for items within each dimension to determine the factors. This is acceptable, since the items for each dimension were clearly separated in the instrument into sections with opening statements describing the nature of the items in the sections.

Three separate exploratory factor analyses were conducted using principal components with varimax rotation as the extraction method. There were seven, seven, and five factors with eigenvalues greater than 1.0 that explained 70.8 percent, 68.6 percent, and 69.6 percent of variance for systems performance, information effectiveness, and service performance, respectively. Review of the items showed that most factors loaded very closely to the subconstructs identified by the Q-sort.
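The eigenvalue-greater-than-one retention rule used in this exploratory step can be illustrated with a short sketch computed directly from the item correlation matrix; the data layout is an assumption for illustration, and the varimax rotation applied for interpreting loadings is not reproduced here.

```python
# Sketch of the "eigenvalue > 1" rule for one ISFS dimension, assuming `items`
# is a respondents-by-items pandas DataFrame of survey responses (illustrative).
import numpy as np
import pandas as pd

def count_factors(items: pd.DataFrame):
    """Return the number of components with eigenvalue > 1 and the variance they explain."""
    corr = np.corrcoef(items.to_numpy(), rowvar=False)   # item correlation matrix
    eigenvalues = np.linalg.eigvalsh(corr)[::-1]         # sorted in descending order
    n_factors = int(np.sum(eigenvalues > 1.0))
    variance_explained = eigenvalues[:n_factors].sum() / eigenvalues.sum()
    return n_factors, variance_explained
```

Because the eigenvalues of a correlation matrix sum to the number of items, the ratio above corresponds to the percentage-of-variance figures reported in the text.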

To establish unidimensionality, the items that loaded on the same factor were then analyzed with confirmatory factor analysis using LISREL—with two exceptions. One factor in "systems performance" had only one item. Since it is one of the original "ease-of-use" items from Davis [29], it was included into the factor that contains the rest of the "ease-of-use" items. Another factor in "information effectiveness" had only three items. It would be "just identified" for confirmatory factor analysis and was only analyzed in conjunction with other factors in the same dimension. This process resulted in six factors for systems performance, six for information effectiveness, and five for service performance.

Segars and Grover suggest that "measured factors be modeled in isolation, then in pairs, and then as a collective network" [91, p. 148]. This method of analysis provides the fullest evidence of measurement efficiency and avoids problems caused by excessive error in measurement [2, 3, 53, 90]. In total, 17 measurement models were analyzed. Each model went through an iterative modification process to improve its model fit. First, items with standardized factor loading below 0.45 were eliminated [78] one at a time. Second, error terms between pairs of items were allowed to correlate based on a modification index. However, this modification was only implemented when theories suggested that the two items should be correlated. This process was conducted iteratively by making one modification at a time until either good model fit was achieved or no modification was suggested.

Following Segars and Grover's [90] procedure, after every measurement model completed its modification process, pairs of models within each dimension were tested iteratively to identify and eliminate items with cross-loadings. With all cross-loading items eliminated, all factors within the same dimension were tested in a full measurement model. Again, items with cross-loadings in the full model were dropped. After the full measurement models were purified, second-order models that reflect the subconstructs within each ISFS dimension were tested. The final, second-order measurement models for the three ISFS dimensions are presented in Figures 3, 4, and 5.

The chi-square and significant factor loadings provide direct statistical evidence of both convergent validity and unidimensionality [91]. With each of the three ISFS dimensions properly tested independently, all three dimensions were combined and tested for model fit (Figure 6). The complete ISFS model, shown in Figure 6, showed remarkably good fit for such high complexity.

Reliability

In assessing measures using confirmatory factor analysis, a composite reliability for each factor can be calculated [5, 93]. This composite reliability is "a measure of internal consistency of the construct indicators, depicting the degree to which they 'indicate' the common latent (unobserved) construct" [48, p. 612]. Another measure of reliability is the average variance extracted (AVE), which reflects the overall amount of variance that is captured by the construct in relation to the amount of variance due to measurement error [48, 89]. The value of AVE should exceed 0.5 to indicate that the variance explained by the construct is larger than measurement error. The construct reliability and AVE of all dimensions and subconstructs are presented in Table 3.
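The conventional computations of composite reliability and AVE from standardized factor loadings, as described in the sources cited above, are sketched below; the example loadings are illustrative values, not estimates from this study.

```python
# Composite reliability and AVE from standardized loadings, assuming
# standardized indicators so each error variance is 1 - loading^2.
import numpy as np

def composite_reliability(loadings):
    lam = np.asarray(loadings, dtype=float)
    theta = 1.0 - lam**2                      # indicator error variances
    return lam.sum()**2 / (lam.sum()**2 + theta.sum())

def average_variance_extracted(loadings):
    lam = np.asarray(loadings, dtype=float)
    return (lam**2).sum() / ((lam**2).sum() + (1.0 - lam**2).sum())

loadings = [0.78, 0.81, 0.69, 0.74]           # hypothetical standardized loadings
print(composite_reliability(loadings))        # values near or above 0.7 indicate a reliable scale
print(average_variance_extracted(loadings))   # should exceed 0.5, per the text
```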

Table 3 indicates that all subconstructs showed good composite reliability except the IS training scale. However, there are some scales with an AVE below 0.50. This suggests that even though all scales (except one) were reliable in measuring their respective constructs, some of them were less capable of providing good measures of their own construct. Despite the low AVE, those scales were retained to ensure the comprehensiveness of the ISFS instrument.

Figure 3. Full Second-Order Measurement Model for Systems Performance. Notes: χ2 = 618.62; d.f. = 411; p = 0.00; RMSEA (root mean square error of approximation) = 0.038; GFI (goodness-of-fit index) = 0.90; AGFI (adjusted goodness-of-fit index) = 0.87.

Figure 4. Full Second-Order Measurement Model for Information Effectiveness. Notes: χ2 = 216.20; d.f. = 156; p = 0.00; RMSEA = 0.033; GFI = 0.94; AGFI = 0.92.

Discriminant Validity

Discriminant validity refers to the ability of the items in a factor to differentiate themselves from items that are measuring other factors. In structural equation modeling (SEM), discriminant validity can be established by comparing the model fit of an unconstrained model that estimates the correlation between a pair of constructs and a constrained model that fixes the correlation between the constructs to unity. Discriminant validity is demonstrated when the unconstrained model has a significantly better fit than the constrained model. The difference in model fit is evaluated by the chi-square difference (with one degree of freedom) between the models. Tests of all possible pairs of subconstructs within each dimension were conducted; the results are presented in Table 4. As shown, all chi-square differences are significant at p < 0.001, indicating that each scale captures a construct that is significantly unique and independent of other constructs. This provides evidence of discriminant validity.
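A minimal sketch of evaluating such a chi-square difference is given below; the two fit statistics passed in are illustrative numbers, not values from Table 4.

```python
# Chi-square difference test for a constrained (correlation fixed to 1.0) versus
# unconstrained model, referred to a chi-square distribution with 1 degree of freedom.
from scipy.stats import chi2

def chi_square_difference_test(chi2_constrained, chi2_unconstrained, df_diff=1):
    delta = chi2_constrained - chi2_unconstrained
    p_value = chi2.sf(delta, df_diff)
    return delta, p_value

# Illustrative values: a significant difference (e.g., p < 0.001) favors the
# unconstrained model and supports discriminant validity for that pair of factors.
delta, p = chi_square_difference_test(chi2_constrained=152.4, chi2_unconstrained=112.8)
```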

Nomological Validity

A nomological network that specifies "probable (hypothetical) linkages between the construct of interest and measures of other constructs" [85, p. 14] further clarifies the ISFS construct and provides an additional basis for construct validation. An operationalization of the theoretical model of Figure 1 that considers the two important consequences of ISFP was used. This model consists of the rightmost portions of Figure 1 that relate ISFP to business process effectiveness and to organizational performance.

Organizational Performance

Although a positive relationship between IS effectiveness and business performance has been suggested, the evidence of such an effect has proved to be elusive [58]. Using "user information satisfaction" and "strategic impact of IS" as surrogate IS effectiveness measures and several perceptual measures as business performance, Chan et al. [21] empirically showed a significant positive relationship between IS and business performance. Since ISFS is posited as a more comprehensive measure of IS performance, the positive relationship should hold in this study.

Figure 5. Full Second-Order Measurement Model for Service Performance. Notes: χ2 = 139.09; d.f. = 94; p = 0.00; RMSEA = 0.037; GFI = 0.95; AGFI = 0.93.


This construct captures the IS function's contribution to the overall performance of the organization. The literature has focused on assessing the extent to which the IS improves the organization's return on investment (ROI), market share, operational efficiency, sales revenue, customer satisfaction, competitiveness, and customer relations [21, 77, 101, 106, 113]. Since subjective measures of those variables have been considered to be acceptable in the literature [32, 107], seven items that assess the CIO's perception of IS's contribution to improving the organization's performance in those areas were used.

Figure 6. The Complete ISFS Model. Notes: χ2 = 3,164.90; d.f. = 2,094; p = 0.00; RMSEA = 0.039; GFI = 0.79; AGFI = 0.77.

Business Processes Effectiveness

Aside from directly affecting organizational performance, the IS function should also have an effect on organizational performance through its impact on the effectiveness of business processes, as shown in Figure 1. IS have traditionally been implemented to improve the efficiencies of internal operations. This use of IT has more recently been applied in redesigning both intra- and interfunctional business processes [94].

Improvements to the value-chain activities through IT are captured in this construct. Based on Porter and Millar [76] and Davenport [28], Xia [113] developed a 39-item instrument to assess executives' perception of the extent to which IT improved the effectiveness of six value-chain activities. Data analysis resulted in six factors: production operations, product development, supplier relations, marketing services, management processes, and customer relations. Items representing those six factors were generated for this construct.

Table 3. Reliability of Measurement Factors in ISFS Dimensions

Factor names                                  Reliability   AVE
Systems performance                           0.92          0.66
  Impact on job                               0.95          0.68
  Impact on external constituencies           0.88          0.56
  Impact on internal processes                0.89          0.80
  Impact on knowledge and learning            0.89          0.62
  Systems usage characteristics               0.85          0.49
  Intrinsic systems quality                   0.79          0.56
Information effectiveness                     0.92          0.63
  Intrinsic quality of information            0.73          0.48
  Reliability of information                  0.79          0.66
  Contextual quality of information           0.85          0.75
  Presentational quality of information       0.87          0.77
  Accessibility of information                0.80          0.57
  Flexibility of information                  0.81          0.58
  Usefulness of information                   0.91          0.66
Service performance                           0.89          0.63
  Responsiveness of services                  0.88          0.79
  Intrinsic quality of service provider       0.84          0.56
  Interpersonal quality of service provider   0.93          0.82
  IS training                                 0.59          0.33
  Flexibility of services                     0.69          0.37

Validity and Reliability of the Measures Used in Nomological Analysis. Although all scales in the "Consequences of ISFP" survey were from previously tested instruments, since a different sample was used, tests were conducted to ensure the reliability and validity of those constructs. Reliability was evaluated using Cronbach's alpha. Items with low corrected item-total correlation, indicating low internal consistency for the items, were dropped. Construct validation was assessed by exploratory factor analysis using principal components with oblique rotation as the extraction method. Table 5 presents the results of the analyses.
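A minimal sketch of these reliability checks, assuming a respondents-by-items DataFrame for one scale, is shown below; the helper names and data layout are illustrative.

```python
# Cronbach's alpha for a scale and corrected item-total correlations
# (each item against the sum of the remaining items).
import pandas as pd

def cronbach_alpha(scale: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    k = scale.shape[1]
    item_var_sum = scale.var(axis=0, ddof=1).sum()
    total_var = scale.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

def corrected_item_total(scale: pd.DataFrame) -> pd.Series:
    # correlation of each item with the total of all other items
    return pd.Series(
        {col: scale[col].corr(scale.drop(columns=col).sum(axis=1)) for col in scale.columns}
    )

# Items with low corrected item-total correlations would be dropped before
# recomputing alpha, mirroring the screening described for the nomological scales.
```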

Although both constructs had two factors extracted, the two factors were significantly correlated in both cases. Therefore, all items within each construct were retained and used to create an overall score for the construct. Items in the final measurement models were used to create an overall score for the ISFS construct. The average of all items for each construct was used to avoid problems that may occur due to differences in measurement scales. Table 6 shows the correlation among the constructs.

As shown in Table 6, there were significant positive correlations between the ISFS construct and the two consequence constructs. There was also significant positive correlation between business processes effectiveness and organizational performance. Although correlation is not sufficient to establish causal relationships, the purpose here is to demonstrate the expected association between the constructs. Therefore, as shown in Tables 5 and 6, the nomological network was supported.

Table 4. Chi-Square Differences Between Factors

Systems performance
            Factor 1    Factor 2    Factor 3    Factor 4    Factor 5
Factor 2    39.61***
Factor 3    31.33***    37.00***
Factor 4    26.31***    38.67***    28.55***
Factor 5    66.45***    78.42***    59.97***    65.97***
Factor 6    49.47***    64.66***    55.23***    58.68***    70.76***

Information effectiveness
            Factor 1    Factor 2    Factor 3    Factor 4    Factor 5    Factor 6
Factor 2    63.61***
Factor 3    80.96***    73.22***
Factor 4    64.16***    48.21***    75.03***
Factor 5    65.65***    40.60***    65.60***    47.34***
Factor 6    62.68***    52.31***    78.87***    60.04***    48.66***
Factor 7    67.31***    47.84***    67.80***    57.82***    47.27***    47.36***

Service performance
            Factor 1    Factor 2    Factor 3    Factor 4
Factor 2    23.84***
Factor 3    51.52***    52.93***
Factor 4    46.31***    48.52***    73.42***
Factor 5    30.69***    46.89***    75.73***    53.93***

*** p < 0.001.


Results, Limitations, and Managerial Uses

THE ISFS INSTRUMENT IS A COMPREHENSIVE ONE that has been designed to measure the performance of the entire IS function. The instrument consists of three major dimensions: systems performance, information effectiveness, and service performance. Each dimension contains several unidimensional subconstructs, each of which is measured by at least two items. All scales have high reliability. Evidence from discriminant validity analyses showed that each scale is measuring a construct that is different from the other constructs.

Of course, some limitations to the instrument need to be pointed out. The sample size, while large, especially for "matched-pair" survey studies, is "borderline" for the number of variables relative to the number of observations that are involved in the SEM analysis. Thus, some caution should be taken until it is revalidated. The nonresponse bias analysis was conducted by comparing early responders to late responders and in terms of organizational size–related variables for responders and nonresponders. Although this is common practice, other variables might have been analyzed [57]. We also note that two subconstructs—"IS training" and "flexibility of services"—were borderline with respect to reliability. Despite this, these items were retained for comprehensiveness or theoretical soundness. Further studies will need to explore and improve these items. The ISFS may therefore be thought of as a preliminary step that can guide future research and enhance practice in a significant, but limited, way.

Table 5. Reliability and Validity of Nomological Constructs

Constructs                         Factors extracted   Items retained   Variance explained (percent)   Cronbach's alpha
Business processes effectiveness   2                   6                63.48                          0.759
Organizational performance         2                   7                70.04                          0.860

Table 6. Correlation of Constructs in the Nomological Network

Constructs                   Business processes   Organizational performance
Organizational performance   0.750**
ISFS                         0.214**              0.205**

** Correlation is significant at the 0.01 level (two-tailed).

The ISFS integrates aspects of various philosophical approaches that have been taken to developing IT metrics (e.g., [10, 16, 24, 80, 98]) as well as various subfunctional "levels" that have previously been measured (e.g., [31, 75, 84]). The comprehensiveness of the ISFS instrument was demonstrated by its consideration of "all" IS activities as reflected in the hierarchical structure of 18 unidimensional subconstructs within the three dimensions. This allows IS managers to use this instrument to assess their strengths and weaknesses in those subconstructs and dimensions. Of course, since the dimensions are not independent, the fact that one dimension may be "low" does not tell the whole story. Thus, such an indication must be further assessed in light of the overall "Gestalt" of dimensions. In this sense, the ISFS also allows the IS function to pinpoint specific areas that need improvements and to track both these areas and overall performance over time, thus providing the basis for "continuous improvement" [73].

When used in large organizations with decentralized IS functions, the ISFS instrument can offer valuable insights for internal benchmarking. Comparing the results of ISFS instruments from different divisions or areas would help identify areas of IS excellence and facilitate the transfer of knowledge to other areas. It has already been used in each of these ways in a number of organizations that participated in the study.

In addition, with the intensifying scrutiny on IT investment, analysis, and outsourcing, the ISFS instrument can be very useful to establish a baseline on the current status of ISFP. Comparing postinvestment or outsourcing IS performance to the baseline would provide a more objective evaluation of the efficacy of the actions taken. This, in turn, will allow the organization to develop follow-up actions to maximize IS performance and, ultimately, to improve organizational performance.

Thus, the instrument can be used in various ways—as an overall evaluative tool, as a "Gestalt" of areas that may be tracked over time, or in evaluating specific subareas. At this latter level, it also provides means of identifying the specific performance areas, as represented by the subconstructs, that may need improvement.

Because of the use of data from a cross-sectional field survey for validation, the ISFS instrument is applicable to a variety of industries. When used within an organization, the instrument should be administered to a range of systems users, in terms of both functional areas and organizational levels. This would ensure appropriate representations of the diverse users in the organization. The average scores for each subconstruct or dimension are the indicators of the IS function's performance for the specific subarea or dimension. To be effective, the ISFS instrument should be administered repeatedly at a fixed interval between quarterly and annually. The results of later assessments should be compared to earlier evaluations to detect changes that would indicate improvements or degradation in the IS function's performance and in specific performance areas. One additional caveat may be useful. The nomological validation was performed in terms of the overall ISFS score. As a result, there is no assurance that the "subscores" have the same degree of nomological validity. Since such an analysis is beyond the scope of this study, we leave it to others who may wish to concern themselves with this issue.
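An illustrative scoring sketch along these lines is given below. The item codes and the item-to-subconstruct mapping are hypothetical stand-ins for the actual assignments in the Appendix, and 0 ("not applicable") responses are treated as missing before averaging.

```python
# Sketch of scoring administered ISFS questionnaires: per-respondent averages
# for each subconstruct and dimension on the 1-5 scale.
import pandas as pd

# Hypothetical item codes and groupings; the real assignments follow the Appendix.
subconstructs = {
    "impact_on_job": ["sp01", "sp02", "sp03"],
    "ease_of_use": ["sp10", "sp11"],
}
dimensions = {"systems_performance": ["impact_on_job", "ease_of_use"]}

def score_isfs(responses: pd.DataFrame):
    """Return per-respondent subconstruct and dimension averages."""
    rated = responses.where(responses != 0)       # 0 = "not applicable" -> missing
    sub_scores = pd.DataFrame(
        {name: rated[items].mean(axis=1) for name, items in subconstructs.items()}
    )
    dim_scores = pd.DataFrame(
        {name: sub_scores[cols].mean(axis=1) for name, cols in dimensions.items()}
    )
    return sub_scores, dim_scores
```

Averaged over respondents, these subconstruct and dimension scores are the indicators that would be tracked from one administration to the next.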

Overall, the goal of developing a measure to assess the performance of the IS function was successfully achieved in this study. The resulting instrument is not only comprehensive enough to cover all aspects of ISFP but also sensitive enough to pinpoint specific areas that need attention. The ISFS instrument should be a useful tool for organizations to use in continuously monitoring the performance of their IS function and for researchers to use in studies that require ISFP as a dependent or independent construct, as well as in studies that seek to complement the ISFS through other analyses.

REFERENCES

1. Alavi, M.; Marakas, G.M.; and Yoo, Y. A comparative study of distributed learning environments on learning outcomes. Information Systems Research, 13, 4 (December 2002), 404–415.
2. Anderson, J.C. An approach for confirmatory measurement and structural equation modeling of organizational properties. Management Science, 33, 4 (April 1987), 525–541.
3. Anderson, J.C., and Gerbing, D.W. Structural equation modeling in practice: A review and recommended two-step approach. Psychological Bulletin, 103, 3 (May 1988), 411–423.
4. Armstrong, J.S., and Overton, T.S. Estimating nonresponse bias in mail surveys. Journal of Marketing Research, 14, 3 (August 1977), 396–402.
5. Bagozzi, R.P. An examination of the validity of two models of attitude. Multivariate Behavioral Research, 16, 3 (July 1981), 323–359.
6. Bailey, J.E., and Pearson, S.W. Development of a tool for measuring and analyzing computer user satisfaction. Management Science, 29, 5 (May 1983), 530–545.
7. Ballantine, J.; Bonner, M.; Levy, M.; Martin, A.; Monro, I.; and Powell, P.L. Developing a 3-D model of information systems success. In E.J. Garrity and G.L. Sanders (eds.), Information Systems Success Measurement. Hershey, PA: Idea Group, 1998, pp. 46–59.
8. Baroudi, J.J., and Orlikowski, W.J. A short-form measure of user information satisfaction: A psychometric evaluation and notes on use. Journal of Management Information Systems, 4, 4 (Spring 1988), 44–59.
9. Basu, A., and Kumar, A. Workflow management issues in e-business. Information Systems Research, 13, 1 (March 2002), 1–14.
10. Benaroch, M. Managing information technology investment risk: A real options perspective. Journal of Management Information Systems, 19, 2 (Fall 2002), 43–84.
11. Bergeron, F.; Rivard, S.; and De Serre, L. Investigating the support role of the information center. MIS Quarterly, 14, 3 (September 1990), 247–260.
12. Brancheau, J.C., and Wetherbe, J.C. Key issues in information systems management. MIS Quarterly, 11, 1 (March 1987), 23–45.
13. Brancheau, J.C.; Janz, B.D.; and Wetherbe, J.C. Key issues in information systems management: 1994–95 SIM Delphi results. MIS Quarterly, 20, 2 (June 1996), 225–242.
14. Broadbent, M., and Weill, P. Management by Maxim: How business and IT managers can create IT infrastructures. Sloan Management Review, 38, 3 (Spring 1997), 77–92.
15. Broadbent, M.; Weill, P.; O'Brien, T.; and Neo, B.N. Firm context and patterns of IT infrastructure capability. In J.I. DeGross, S.L. Jarvenpaa, and A. Srinivasan (eds.), Proceedings of the Seventeenth International Conference on Information Systems. Atlanta: Association for Information Systems, 1996, pp. 174–194.
16. Brynjolfsson, E. The productivity paradox of information technology. Communications of the ACM, 36, 12 (December 1993), 67–77.
17. Byrd, T.A., and Turner, D.E. Measuring the flexibility of information technology infrastructure: Exploratory analysis of a construct. Journal of Management Information Systems, 17, 1 (Summer 2000), 167–208.
18. Cameron, K.S. A study of organizational effectiveness and its predictors. Management Science, 32, 1 (January 1986), 87–112.
19. Cameron, K.S., and Whetton, D.A. Some conclusions about organizational effectiveness. In K.S. Cameron and D.A. Whetton (eds.), Organizational Effectiveness: A Comparison of Multiple Models. New York: Academic Press, 1983, pp. 261–277.
20. Carr, C.L. Managing service quality at the IS help desk: Toward the development and testing of TECH-QUAL, a model of IS technical support service quality. Ph.D. dissertation, University of Minnesota, Minneapolis, 1999.


21. Chan, Y.E.; Huff, S.L.; Barclay, D.W.; and Copeland, D.G. Business strategic orientation, information systems strategic orientation, and strategic alignment. Information Systems Research, 8, 2 (June 1997), 125–150.
22. Chatterjee, D.; Grewal, R.; and Sambamurthy, V. Shaping up for e-commerce: Institutional enablers of the organizational assimilation of Web technologies. MIS Quarterly, 26, 2 (June 2002), 65–90.
23. Chatterjee, D.; Pacini, C.; and Sambamurthy, V. The shareholder-wealth and trading-volume effects of information technology infrastructure investments. Journal of Management Information Systems, 19, 2 (Fall 2002), 7–42.
24. Chircu, A.M., and Kauffman, R.J. Limits to value in electronic commerce–related IT investments. Journal of Management Information Systems, 17, 2 (Fall 2000), 59–80.
25. Churchill, G.A. A paradigm for developing better measures of marketing constructs. Journal of Marketing Research, 16, 1 (February 1979), 64–73.
26. Cronbach, L.J., and Meehl, P.E. Construct validity in psychological tests. In D.M. Jackson and S. Messick (eds.), Problems in Human Assessment. New York: McGraw-Hill, 1967, pp. 57–77.
27. Cronin, J.J.J., and Taylor, S.A. SERVPERF versus SERVQUAL: Reconciling performance-based and perceptions-minus-expectations measurement of service quality. Journal of Marketing, 58, 1 (January 1994), 125–131.
28. Davenport, T.H. Process Innovation: Reengineering Work Through Information Technology. Boston: Harvard Business School Press, 1993.
29. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13, 3 (September 1989), 319–340.
30. DeLone, W.H., and McLean, E.R. Information systems success: The quest for the dependent variable. Information Systems Research, 3, 1 (March 1992), 60–95.
31. DeLone, W.H., and McLean, E.R. The DeLone and McLean model of information systems success: A ten-year update. Journal of Management Information Systems, 19, 4 (Spring 2003), 9–30.
32. Dess, G.G., and Robinson, R.B.J. Measuring organizational performance in the absence of objective measures: The case of the privately-held firm and conglomerate business unit. Strategic Management Journal, 5, 3 (July–September 1984), 265–273.
33. Devaraj, S.; Fan, M.; and Kohli, R. Antecedents of B2C channel satisfaction and preference: Validating e-commerce metrics. Information Systems Research, 13, 3 (September 2002), 316–333.
34. Dickson, G.W.; Leitheiser, R.L.; Nechis, M.; and Wetherbe, J.C. Key information systems issues for the 1980s. MIS Quarterly, 8, 3 (September 1984), 135–148.
35. Doll, W.J., and Torkzadeh, G. The measurement of end-user computing satisfaction. MIS Quarterly, 12, 2 (June 1988), 259–274.
36. Duncan, N.B. Capturing flexibility of information technology infrastructure: A study of resource characteristics and their measure. Journal of Management Information Systems, 12, 2 (Fall 1995), 37–57.
37. Fan, M.; Stallaert, J.; and Whinston, A.B. Decentralized mechanism design for supply chain organizations using an auction market. Information Systems Research, 14, 1 (March 2003), 1–22.
38. Fitzgerald, L.; Johnston, R.; Brignall, S.; Silvestro, R.; and Voss, C. Performance Measurement in Service Businesses. London: Chartered Institute of Management Accountants, 1993.
39. Gefen, D., and Ridings, C.M. Implementation team responsiveness and user evaluation of customer relationship management: A quasi-experimental design study of social exchange theory. Journal of Management Information Systems, 19, 1 (Summer 2002), 47–70.
40. Gerbing, D.W., and Anderson, J.C. An updated paradigm for scale development incorporating unidimensionality and its assessment. Journal of Marketing Research, 25, 2 (May 1988), 186–192.
41. Glazer, R. Measuring the knower: Towards a theory of knowledge equity. California Management Review, 40, 3 (Spring 1998), 175–194.
42. Gold, A.H.; Malhotra, A.; and Segars, A.H. Knowledge management: An organizational capabilities perspective. Journal of Management Information Systems, 18, 1 (Summer 2001), 185–214.

Appendix. ISFS Instrument

What Is the IS Function?

THIS QUESTIONNAIRE IS DESIGNED TO ASSESS the performance of the information systems (IS) function in your organization. The IS function includes all IS individuals, groups, and departments within the organization with whom you interact regularly. As a user of some information systems/technology, you have your own definition of what the IS function means to you, and it is the performance of “your” IS function that should be addressed here.

Effectiveness of Information

The following statements ask you to assess the general characteristics of the information that IS provides to you. Please try to focus on the data and information itself in giving the response that best represents your evaluation of each statement. If a statement is not applicable to you, circle 0.

Response scale for each item: 1 = hardly at all, 5 = to a great extent, 0 = N/A.

The extent that the information is:
Interpretable
Understandable
Complete
Clear
Concise
Accurate
Secure
Important
Relevant
Usable
Well organized
Well defined
Available
Accessible
Up-to-date
Received in a timely manner
Reliable
Verifiable
Believable
Unbiased

Response scale for each item: 1 = hardly at all, 5 = to a great extent, 0 = N/A.

The extent that the information:
Can be easily compared to past information.
Can be easily maintained.
Can be easily changed.
Can be easily integrated.
Can be easily updated.
Can be used for multiple purposes.
Meets all your requirements.

The following statements ask you to assess the outcome of using the information that IS provides to you.

Response scale for each item: 1 = hardly at all, 5 = to a great extent, 0 = N/A.

The extent that:
The amount of information is adequate.
It is easy to identify errors in information.
It helps you discover new opportunities to serve customers.
It is useful for defining problems.
It is useful for making decisions.
It improves your efficiency.
It improves your effectiveness.
It gives your company a competitive edge.
It is useful for identifying problems.

IS Service Performance

The following statements ask you to assess the performance of services provided by the IS department or function. Please circle the number that best represents your evaluation of each statement. If a statement is not applicable to you, circle the number 0.

Response scale for each item: 1 = hardly at all, 5 = to a great extent, 0 = N/A.

The extent that the:
Training programs offered by the IS function are useful.
Variety of training programs offered by the IS function is sufficient.
IS function’s services are cost-effective.
Training programs offered by the IS function are cost-effective.
IS function’s services are valuable.
IS function’s services are helpful.

Response scale for each item: 1 = hardly at all, 5 = to a great extent, 0 = N/A.

The extent that the IS function:
Responds to your service requests in a timely manner.
Completes its services in a timely manner.
Is dependable in providing services.
Has your best interest at heart.
Gives you individual attention.
Has sufficient capacity to serve all its users.
Can provide emergency services.
Provides a sufficient variety of services.
Has sufficient people to provide services.
Extends its systems/services to your customers/suppliers.

Response scale for each item: 1 = hardly at all, 5 = to a great extent, 0 = N/A.

The extent that IS people:
Provide services for you promptly.
Are dependable.
Are efficient in performing their services.
Are effective in performing their services.
Have the knowledge and skill to do their job well.
Are reliable.
Are polite.
Are sincere.
Show respect to you.
Are pleasant to work with.
Instill confidence in you.
Are helpful to you.
Solve your problems as if they were their own.
Understand your specific needs.
Are willing to help you.
Help to make you a more knowledgeable computer user.

Systems Performance

The following statements ask you to assess the extent that systems produce various outcomes for you. The term systems does not refer to the information itself. Rather, it refers to the capability to access, produce, manipulate, and present information to you (e.g., to access data bases, or to develop a spreadsheet). Please circle the number that best represents your evaluation of each statement. If a statement is not applicable to you, circle 0.

Response scale for each item: 1 = hardly at all, 5 = to a great extent, 0 = N/A.

The extent that systems:
Make it easier to do your job.
Improve your job performance.
Improve your decisions.
Give you confidence to accomplish your job.
Increase your productivity.
Increase your participation in decisions.
Increase your awareness of job-related information.
Improve the quality of your work product.
Enhance your problem-solving ability.
Help you manage relationships with external business partners.
Improve customer satisfaction.
Improve customer service.
Enhance information sharing with your customers/suppliers.
Help retain valued customers.
Help you select and qualify desired suppliers.
Speed product delivery.
Help you manage inbound logistics.
Improve management control.
Streamline work processes.
Reduce process costs.
Reduce cycle times.
Provide you information from other areas in the organization.
Facilitate collaborative problem solving.
Facilitate collective group decision making.
Facilitate your learning.
Facilitate collective group learning.
Facilitate knowledge transfer.
Contribute to innovation.
Facilitate knowledge utilization.

The following statements ask you to assess general characteristics of the information systems that you use regularly. Please circle the number that best represents your evaluation of each statement. If a statement is not applicable to you, circle 0.

Response scale for each item: 1 = hardly at all, 5 = to a great extent, 0 = N/A.

The extent that:
Systems have fast response time.
System downtime is minimal.
Systems are well integrated.
Systems are reliable.
Systems are accessible.
Systems meet your expectation.
Systems are cost-effective.
Systems are responsive to meet your changing needs.
Systems are flexible.
Systems are easy to use.
System use is easy to learn.
Your company’s intranet is easy to navigate.
It is easy to become skillful in using systems.
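
The instrument itself does not prescribe how item responses are turned into scores. As a minimal illustration only, the Python sketch below shows one plausible convention for summarizing a completed questionnaire: average the 1–5 ratings within each part of the instrument and treat items marked 0 (N/A) as missing. The part names, the sample responses, and the unweighted-averaging rule are assumptions made for this example; they are not part of the published instrument or the authors' validation procedure.

# Illustrative sketch only: one way a completed ISFS questionnaire might be summarized.
# Ratings run from 1 ("hardly at all") to 5 ("to a great extent"); 0 means the
# respondent marked the item as not applicable and is treated here as missing.
from statistics import mean

def area_score(ratings):
    """Average the applicable (non-zero) ratings for one part of the instrument.

    Returns None when every item in the part was marked N/A.
    """
    applicable = [r for r in ratings if r != 0]
    return mean(applicable) if applicable else None

# Hypothetical responses from a single user (values invented for illustration).
responses = {
    "information effectiveness": [4, 5, 3, 0, 4, 4, 5],
    "service performance": [5, 4, 4, 4, 3, 5],
    "systems performance": [3, 3, 4, 2, 0, 4],
}

for area, ratings in responses.items():
    score = area_score(ratings)
    if score is None:
        print(f"{area}: all items N/A")
    else:
        print(f"{area}: {score:.2f}")

Unweighted averaging is only one choice; scores could just as well be computed per factor, or weighted according to the factor structure reported in the paper.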
