http://www.diva-portal.org
Preprint
This is the submitted version of a paper presented at the Nordic Evaluation Conference, Ålborg, 2012-06-12 – 2012-06-14.
Citation for the original published paper:
Fagerström Kareld, B. (2012)
Use of Evaluation Results in IT Evaluation: a Pre Study.
In: Flemming Larsen, Evert Vedung (ed.), Nordic Evaluation Conference 2012
N.B. When citing this work, cite the original published paper.
Permanent link to this version: http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-22460
Use of Evaluation Results in IT Evaluation
– a Pre Study
Birgitta Fagerström Kareld
Linnaeus University
Abstract: Expectations of investments in information technology (IT) may include financial
returns, organizational effectiveness, personal efficiency, new ways to organize work, process
improvements and more. IS research has therefore focused on evaluation of IT investments for
decades and there is an ongoing debate on how to utilize IT assets. One theme in this
discussion is betterment based on evaluation results. In this study a small sample of articles
was investigated with focus on "use of evaluation results" and on "use of general evaluation
concepts". The study shows that betterment and learning via use of evaluation results are the
most common suggestions and that use of general evaluation concepts and terms was absent.
Keywords: IS/IT evaluation, use of evaluation results, use of general evaluation concepts
1 Introduction
Information technology (IT)1 is typically launched as a facilitator for different kinds of
improved conditions in society, in organizations and also in our private lives. These
conditions may relate to financial returns, organizational efficiency and effectiveness,
personal efficiency, new ways to organize work, process improvements and more. The
listed enhancements are desired effects of an intervention in the form of an IT investment,
often an IT-system. The most common way to evaluate investments in IT is to evaluate the
artifact, the IT-system. Utilization of this kind of data is relevant when the IT system itself
needs to be adjusted. A more problematic picture appears when an IT-system is evaluated
and the realization of the expected improvements requires a broader and more far reaching
implementation and evaluation approach. A mismatch between a business process and its
IT-system might appear in an IT-system focused evaluation and only disappear as a result
of actions in some kind of broader IT-system context. It is therefore important to know how
evaluation results are to be utilized when designing an IT-evaluation.
Evaluation of IT investments is regarded as a promising opportunity and the number of
suggestions and approaches from research within the area of information systems is
extensive. Evaluation of IS/IT investments can be classified into four categories, where the
terms strategic and formative evaluation denote evaluations conducted before the
1 The terms information system (IS), information technology (IT), IT-system, application and IT-artefact
will be used interchangeably.
IT-system has been implemented. Summative evaluation and post-mortem analysis take
place after implementation. Each category is intended for use in a specific situation or context
(Beynon-Davies et al., 2004). Results of IT evaluations have long been
suggested as tools for organizational learning and for improving the use of and
benefits from IT (Kumar, 1990; Symons and Walsham, 1988; Symons, 1991; Beynon-
Davies et al., 2004). One way to overcome the limitations of a single-system evaluation
approach is to adopt a technology acceptance model as the theoretical framework for
evaluation. Venkatesh et al. (2003) developed the model shown below, which presents
aspects verified by research on user acceptance of information technology.
Figure 1: Unified Theory of Acceptance and Use of Technology (Venkatesh et al., 2003)
A multi-system evaluation perspective, with focus on work tasks and a current set of IT-
systems (the IT work environment for individuals), is also an alternative to the single-
system approach (Fagerström Kareld, 2008). Use of this multi-system approach assumes
continuous evaluations and management's use of evaluation results to improve benefits
from IT. The article presenting this multi-system approach argued for a pragmatic view on
IT evaluations, as long as the aspects in figure 2 were incorporated in the arguments for a
specific evaluation approach.
Figure 2: Important aspects in any evaluation design (Fagerström Kareld, 2008)
The purpose of this paper is to explore if and how use of evaluation results is discussed in a
small sample of IS/IT-evaluation models and approaches published in journals within the
area of information systems.
2 Method
The process of finding relevant articles started with a structured search in a "one-search"
engine using different combinations of search terms such as "IS", "IT", "evaluation", "use of
evaluation results", "dissemination", "method" and "model". This search method returned
too many results, many of them outside the target area, so the strategy was abandoned in
favor of a selection process guided by intuition and experience. Five articles were finally
selected2. The selected articles were investigated with focus on "use of evaluation results",
using a template for article reading presented in section four. Suggestions on use of
evaluation results from evaluation research finally served as a tool for analyzing the findings.
3 Related literature
Vedung (2009) identifies four views on utilization of evaluation results. The instrumental
view considers evaluations useful if they are adopted by decision makers and also permitted
to influence decision-making and actions. Conceptual use denotes situations where users
think about and receive new insights from an evaluation but transformation into action does
not take place. In interactive utilization there is a give-and-take between evaluators and
practitioners; dialogues and deliberative exchange serve as means for this interaction, and
the focus is on learning. These three views can be regarded as legitimate uses
of evaluation. Less reasonable use exists when evaluations are used for legitimizing or
tactical reasons.
There are also strategies for enhancing use of evaluation results. The diffusion-oriented
strategy seeks to deliver evaluation results as efficiently as possible. The evaluation results
must reach the recipients so they can use them; it is therefore important to write readable
and user-friendly reports for the target group(s). Another method within this strategy is to
promote dissemination by opening up sustainable channels to practitioner groups. This
linkage method may use some agent, like an advisory commission or opinion leader, to
connect evaluator and recipients. Using a product-oriented strategy means striving for
quality assurance of the product, the evaluation report. This can be done by using a
competent reviewer. A third approach, the user-oriented strategy, aims to prepare
evaluation clients for utilization. This can for instance be achieved by client education in
evaluation or by institutionalization, incorporation of the evaluation into the management
system as an ongoing affair. Adjustment of the policy formation process is mentioned as a
possible fourth strategy. This intervention-oriented way is described as a stepwise process
to ensure relevant alternatives are selected (Vedung, 2009).
Leeuw and Furubo (2008) outline four criteria that characterize a system of
evaluation. The first criterion is that there should be some agreement among the actors
involved about what they actually are doing and why. The second criterion deals with
2 Impact factors ranging from X-Y (values will be supplemented)
organizational responsibility and states that there must be an organizational demand for the
evaluation results. The third criterion specifies that there should be a certain permanence in
the evaluation activities: that they are part of something ongoing. The fourth
criterion focuses on the intended use of evaluation results: the information from the
evaluative activities is supposed to be linked to decisions and processes.
Lawrenz et al. (2007) extend and concretize opportunities for enhancing use of evaluation
results. The authors use a national evaluation of a multisite National Science
Foundation program as the setting for describing the evolution of, and strategic planning for,
effectively disseminating evaluation findings and recommendations. Dissemination
techniques were outlined for two modes of delivery, web-based surveys and site visits. The
techniques used are listed in table 1.
Web-Based Surveys                    Site Visits
Full Report                          Full Set of Issue Papers
Executive Summary                    Site Visit Handbook
PowerPoint Overview                  Brochure
Fact Sheet                           Trifold Brochure
Online Preliminary Stat Sheet        Videoconference
Online Analytical Processing Cube    Electronic Synthesis
Table 1: Dissemination techniques (from Lawrenz et al., 2007)
A set of nine variables described each technique: audience, rationale, content, purpose/use,
timing, development, special issues/constraints, distribution and special concern. No
evaluation data was collected about the effects of using each technique.
4 Article investigation
This section presents the result of the article reading. The template for documenting the
reading is shown below.
Aspect                             Content
Content                            Why this article was selected and a short summary
                                   of content and results
Use of evaluation results          Any sign of ideas, concerns or suggestions on use
                                   of evaluation results
Use of general evaluation terms    For instance: meta-level positioning of the
                                   evaluation concept? Evaluator discussion?
                                   Evaluation questions discussed? Evaluation models?
Table 2: Template for documenting article reading
Information systems evaluation: navigating through the problem domain3
Content
A promising title and an article testing conjectures from the normative IS evaluation
literature. An application-specific evaluation model for a manufacturing information system
used in an intraorganizational context is applied in a case study with an interpretative
approach. The focus is on three core factors: (1) different types of justification processes
(concept- and financially based), (2) limitations inherent in traditional appraisal approaches
and (3) life cycle evaluation.
Figure 3: Concept justification stakeholders (Irani, 2002)
The case shows that the IT-system was abandoned due to technical problems and lack of
employee support. The author concluded (among other things) that there is a relationship
between the justification of an IS concept to operational stakeholders and their increased
level of commitment to project success. He also stated that IT/IS benefits can be classified
into strategic, tactical and operational benefits and that "there is a variety of interacting
social and technical factors that complicate the evaluation process" (p. 22).
Use of evaluation results
The concept used can be expanded and used to communicate technology adoption issues to
stakeholders or even to a larger population within the organization.
Use of general evaluation terms
No.
Information systems evaluation and the information systems development
process4
Content
This article focuses on learning. The purpose was to review existing literature on IS
evaluation and to present a preliminary model that integrates approaches to IS evaluation
into the life cycle of the information systems development process and also incorporates
3 Irani, Z. (2002), "Information systems evaluation: navigating through the problem domain",
Information & Management, 40, pp. 11-24.
4 Beynon-Davies, P., Owens, I. and Williams, M. (2004), "Information systems evaluation and the
information systems development process", The Journal of Enterprise Information Management,
Vol. 17, pp. 276-282.
failure assessment into the evaluation process. The purpose was also to build feedback
loops into the model to promote organizational learning. The result is a (theoretically
derived) model in line with the purpose of the paper, shown in figure 4.
Figure 4: IS evaluation and organizational learning (Beynon-Davies, Owens & Williams, 2004)
Use of evaluation results
The model contains feedback loops from summative evaluation to strategic evaluation
(the management go/no-go decision) and from summative evaluation to information systems
maintenance. There is also a feedback loop from the activity "Production of new ISD
Practices" to the IS development process. Post-mortem analysis is followed by
dissemination of findings. The conclusion is that effective evaluation leads to effective
maintenance. Post-mortem information should be made public in order to promote
competence development. Use of evaluation results is also mentioned in connection with
formative evaluation, in a "what" manner, and in connection with post-mortem analysis,
with a learning focus.
Use of general evaluation terms
A suggestion that a senior executive, not involved in the project, should conduct the
postmortem analysis.
Information systems evaluation as an organizational institution – experience
from a case study5
Content
This article announced a focus on organizational change and key stakeholders, themes
connected to learning and use of evaluation results. A matrix built on two dimensions of
evaluation context was developed and used to analyze a case of IS evaluation in a UK
insurance organization.
5 Serafeimidis, V. and Smithson, S. (2003), “Information systems evaluation as an organizational
institution – experience from a case study”, Information Systems Journal, Vol. 13 pp. 251-274.
Figure 5: Four orientations for information systems evaluation (Serafeimidis & Smithson, 2003)
In the model, control evaluation refers to cases where the expected outcomes are fairly certain
and there is an organizational consensus around them. This kind of evaluation needs to be
incorporated in harmony with other organizational processes. This orientation is described as
the most commonly used. Sense making refers to situations where the objectives of the
investments are not clear but the relations between actions and impact are predictable. This
orientation is relevant when there are disagreements concerning objectives. Social learning
contributes to decreasing the uncertainty of strategic changes. This orientation is significant
when objectives are clear but the results of actions are hard to predict. The exploratory
orientation is relevant when the situation is characterized by uncertainty. The case study
included two roles, "strategist for evaluation" and "evaluators". The principles for managing
the linkage between IS evaluation and organizational change are in fact the four orientations
for IS evaluation.
Use of evaluation results
Formal evaluation methods can become a starting point for discussion and dissemination of
information in order to reach consensus. Evaluation promotes learning in terms of business
values.
"Evaluators need to take into consideration the knowledge available and further diffuse
it. Learning is strongly facilitated by communication actions. Evaluators are thus the actors
who should develop knowledge through interaction" (p. 266)
"Exploratory evaluation is likely to be particularly vulnerable to the balance of
organizational power… it can also be used to legitimize decisions and justify
actions" (p. 267)
Use of general evaluation terms
No.
A Holistic Framework on Information Systems Evaluation with a Case
Analysis6
Content
This approach strives to integrate IS evaluation into business development and information
systems development processes. The aim of the paper was to provide an instrument for
understanding IS evaluation in its broader context. The instrument shown below was
developed and used in a case study.
6 Hallikainen, P. and Chen, L. (2006), "A Holistic Framework on Information Systems Evaluation
with a Case Analysis", Electronic Journal of Information Systems Evaluation, Vol. 9, issue 2, pp. 57-64.
Figure 5: A holistic framework on IS evaluation (Hallikainen & Chen, 2006)
One conclusion from the case study was that information from informal evaluation
processes was equally important as (or even more important than) information from the
formal evaluation methods advocated in the literature on IS evaluation. Another comment was
that the researchers found no established practices for learning about IS evaluation itself.
Use of evaluation results
This paper came up with several suggestions on use of evaluation results:
o The instrument is expected to be of value for both researchers and practitioners.
o Make tacit evaluation knowledge more explicit so it can be exploited in future projects.
o The evaluation process should identify and control the critical areas of an IS development
project.
o Performance evaluation of the IS development process can facilitate learning for future projects.
o Results from the evaluation should be delivered to each person in the IS project so the
information can be employed in the decision-making phase.
o As a tool for experience learning: "feed-back from the evaluation process should lead to
corrective actions if necessary" (p. 60).
Use of general evaluation terms
No.
The DeLone and McLean Model of Information Systems Success:
A Ten-Year Update7
Content
As the title indicates, this is an influential article on measures for IS success. The authors
discuss "many of the important IS success research contributions of the last decade,
focusing especially on research efforts that apply, validate, challenge, and propose
enhancements to our original model" (p. 9). The purpose of the paper was to update the
original D&M IS Success Model (the model creators' abbreviation) and evaluate it. The
original model (shown in figure 6 below) stated that "System Quality" and "Information
Quality" affect "Use" and "User Satisfaction", which in turn affect "Individual Impact",
which influences "Organizational Impact". The article explains that the six success categories
are based on a process model of IS and that the categories are interrelated.
Figure 6: D&M IS Success Model (DeLone and McLean, 2003)
Research representing additional or alternative aspects of the D&M IS Success Model over
the past ten years was displayed and analyzed. The result was an updated model with one
additional aspect, "Service Quality", and one aspect, "Net Benefits", merged from "Individual
Impact" and "Organizational Impact". Eight conclusions concerning further utilization and
development of the D&M IS Success Model ended the article.
Use of evaluation results
Yes, but in this article the concept "use of evaluation results" refers to use within the
research community. Researchers are encouraged to verify and improve the model.
Use of general evaluation terms
No.
7 DeLone, W. and McLean, E. (2003), "The DeLone and McLean Model of Information Systems
Success: A Ten-Year Update", Journal of Management Information Systems, Vol. 19, pp. 9-30.
5 Analysis
The experience from the article reading is that articles on IS evaluation show a fondness for
models to illustrate the evaluation situation. It is also common to verify (or test) a model in
a case study. The overall impression regarding "use of evaluation results" is that organizational
learning and betterment are the most common ways to approach the subject.
The article reading resulted in the following suggestions on use of evaluation results8:
1. The concept can be expanded and used to communicate technology adoption issues to
stakeholders or even to a larger population within the organization.
2. A model containing feedback loops to other organizational processes and functions, and
post-mortem analysis with a learning focus.
3. Formal evaluation methods can become a starting point for discussion and dissemination
of information in order to reach consensus. Evaluation promotes learning in terms of
business values: "develop knowledge through interaction" but "vulnerable to the balance
of organizational power… it can also be used to legitimize decisions and justify actions".
4. Make tacit evaluation knowledge more explicit so it can be exploited in future projects;
the evaluation process should identify and control the critical areas of an IS
development project; performance evaluation of the IS development process can facilitate
learning for future projects; results from the evaluation should be delivered to each
person in the IS project so the information can be employed in the decision-making phase;
as a tool for experience learning, "feed-back from the evaluation process should lead to
corrective actions if necessary".
5. The concept "use of evaluation results" refers to use within the research
community. Researchers are encouraged to verify and improve the model.
A mapping of the suggestions above to the concepts and views on utilization gives the
following result. In the instrumental view, evaluations are useful if they are adopted by
decision makers and also permitted to influence decision-making and actions. This
approach is most obviously present in paragraphs two and four. Researchers' model
improvement, paragraph five, is also connected to the instrumental view. Conceptual use
denotes situations where users think about and receive new insights but transformation into
action does not take place. This is the case in paragraphs one and three. In interactive
utilization there is a give-and-take between evaluators and practitioners; dialogues,
deliberative exchange and a focus on learning are important. This aspect is most obvious in
paragraph four. The legitimizing and tactical utilization of evaluation results is also
illustrated, in paragraph three.
The strategies for enhancing use of evaluation results through diffusion-, product-, user- or
intervention-oriented strategies (Vedung, 2009) could not be found in the sample of
investigated articles. Nor could the "system of evaluation" criteria (Leeuw & Furubo, 2008)
or the dissemination techniques described in Lawrenz et al. (2007).
The search for use of general evaluation terms in the selected articles was almost fruitless.
One "evaluator discussion", suggesting an evaluator outside the IS project, was found.
8 Same order as presented in section 4
6 Discussion and reflections
The purpose of this paper was to explore discussions on use of evaluation results in a small
sample of IS/IT-evaluation models and approaches. The result of the article investigation
is that researchers on IS evaluation are concerned about use of evaluation results. The most
common suggestion is use for organizational learning and betterment. The directives on
how to achieve this are however scanty, although one article suggested delivery of results to
the persons in IS projects. No use of general evaluation concepts and terms could be found
in the articles. A conclusion from this is that IS evaluation researchers operate within an IS
impact and assessment paradigm inside the IS research community. Maybe a step outside this
paradigm and into a more general evaluation context would result in learning and
development of IS evaluation praxis.
The evaluation community, as represented in the section "related literature", is considerably
aware of the difficulties involved in dissemination of evaluation results. They also seem to
take for granted that the evaluator has an important role to play in the dissemination
process. My reflection on this is that clients and management in client organizations have
the main responsibility for this dissemination, since no actions can take place without a
mandate from top management. Management also has a responsibility for the money spent
on evaluations.
The reasoning in this article is based on a very small sample of articles. The results and
conclusions are therefore (of course) not generalizable to the total population of research
articles on IS evaluation. It is my opinion that the topic "use of evaluation results" needs
further attention, and my suggestion for further research is a systematic literature review
covering a reasonable sample of articles and an extended framework for analysis.
References
Beynon-Davies, P., Owens, I. and Williams, M. (2004), "Information systems evaluation and the
information systems development process", The Journal of Enterprise Information
Management, Vol. 17, pp. 276-282.
DeLone, W. and McLean, E. (2003), "The DeLone and McLean Model of Information Systems
Success: A Ten-Year Update", Journal of Management Information Systems, Vol. 19, pp. 9-
30.
Fagerström Kareld, B. (2008), "Evaluate IT Where Effects Occur and get Directives for
Managerial Action", European Conference on Information Management, 2008.
Hallikainen, P. and Chen, L. (2006), "A Holistic Framework on Information Systems Evaluation
with a Case Analysis", Electronic Journal of Information Systems Evaluation, Vol. 9, issue 2,
pp. 57-64.
Irani, Z. (2002), "Information systems evaluation: navigating through the problem domain",
Information & Management, 40, pp. 11-24.
Kumar, K. (1990) “Post implementation evaluation of computer-based information systems:
current practices”, CACM, Vol. 33 No.2, pp.236-52.
Lagsten, J. and Goldkuhl, G. (2008), "Interpretative IS evaluation: Results and Uses",
Electronic Journal of Information Systems Evaluation, Vol. 11.
Lawrenz, F., Gullickson, A. and Toal, S. (2007), "Dissemination: Handmaiden to Evaluation Use",
American Journal of Evaluation, Vol. 28.
Leeuw, F. L. and Furubo, J. (2008) “Evaluation Systems: What Are They and Why Study Them” ,
Evaluation, Vol. 14(2), pp. 157-169.
Loveman, G. W. (1988), “An assessment of the Productivity Impact of Information Technologies”,
Massachusetts Institute of Technology.
Serafeimidis, V. and Smithson, S. (2003), “Information systems evaluation as an organizational
institution – experience from a case study”, Information Systems Journal, Vol. 13 pp. 251-
274.
Symons, V. and Walsham, G. (1988) “The evaluation of information systems: A critique”, Journal
of Applied Systems Analysis, Vol. 15.
Symons, V. J. (1991), “A review of information systems evaluation: content, context and process”,
European Journal of Information Systems, Vol. 1.
Venkatesh, V., Morris, M., Davis, G. and Davis, F. (2003), "User Acceptance of Information
Technology: Toward a Unified View", MIS Quarterly, Vol. 27, No. 3, pp. 425-478.
Vedung, E. (2009), "Evaluation research", Uppsala University, Institute for Housing and Urban
Research.