
A Critical Analysis of HRD Evaluation Models from a Decision-Making Perspective

Elwood F. Holton III, Sharon Naquin

HRD evaluation models are recommended for use by organizations to improve decisions made about HRD interventions. However, the organizational decision-making literature has been virtually ignored by evaluation researchers. In this article, we review the organizational decision-making literature and critically review HRD evaluation research through the decision-making lens. The review shows that most HRD evaluation models fit within the rational-economic framework. However, decision-making research shows that the rational-economic models do not work in practice and offers rich alternative models. This research offers compelling explanations for the disturbing phenomenon that HRD evaluation models are not widely used in practice. Some radically new directions for HRD evaluation research are discussed.

For at least forty years, HRD experts have advocated the use of formal, systematic evaluation as one key to more effective use of HRD resources in organizations. Yet a puzzling paradox has emerged that suggests a new perspective is needed on evaluation models in HRD. Twitchell, Holton, and Trott (2000) summarized and analyzed trends from all of the surveys conducted on evaluation use over the previous forty years and concluded:

Popular press and business leaders all discuss the need to increase the rate of growth in productivity in the face of ever increasing competition. Furthermore, there is increasing research evidence that human resource practices contribute significantly to organizational outcomes. The training literature presents evaluation as a necessary component in providing training that can help organizations increase these outcomes. There are numerous case studies of effective evaluation. Even estimating financial return, which is often presumed to be the hardest part of evaluation, has been widely demonstrated to be very feasible.


Yet, all the literature on how much evaluation is used by business and industry suggests that only about half of the training programs are evaluated for objective performance outcomes. Additionally, less than one-third of training programs are evaluated in any way that measures changes in organizational goals or profitability. After at least 40 years of bemoaning the lack of evaluation, promoting the value of evaluation, developing methods of evaluation, and pushing the evaluation cause, there appears to have been only modest changes in the amount, types, or quality of evaluation in business and industry. Even instructional designers, who arguably should be among the most sophisticated training practitioners with regard to evaluation, do not use evaluation to any greater extent than anyone else.

The persistent low levels of training evaluation, particularly at the more sophisticated levels three and four, raise serious questions about the state of training evaluation research as well as whether currently used models will enable trainers to achieve higher levels of program evaluation. Simply, if forty years of promoting its use has not changed the overall picture, something else must be needed. It seems safe to now say that we know that it does not matter if you examine practitioners more highly trained in evaluation (instructional designers), those that have more well-defined outcomes (technical trainers, health care), those in more sophisticated organizations (ASTD benchmarking group), or simply average training practitioners; the level of evaluation use is about the same and is not changing much [p. 104].

Traditionally, experts have used this information as evidence of inadequate HRD practice. That is, their position has been that the best decision model for training investments is to conduct formal evaluation and determine the impact on performance.

Although that position has merit, there is a puzzling contradiction in the data. Consider this: in 2003, publicly traded organizations with more than one hundred employees spent $62 billion on formal training. Although we do not have data from forty years ago, even if spending has averaged only half the current level (in 2003 dollars), that would mean that over $1.2 trillion has been spent on formal training in the past forty years. It is hard to imagine that organizations would invest this much money without a reasonable decision-making model. But they have chosen not to use current evaluation models most of the time. Either many organizations have been incredibly naive in spending huge amounts of money, or they are using different decision-making processes.

Thus, an alternate view of the apparent low levels of evaluation practice is that evaluation models are not good decision-making models for HRD in organizations. Survey data (Twitchell et al., 2000) clearly show that the use of evaluation for HRD interventions in organizations has changed only modestly in the past forty years, particularly at the higher levels of measuring performance and organizational result outcomes. Yet decisions about which HRD interventions to conduct and how well they work continue to be made every day in organizations. Thus, the central problem evaluation experts continue to struggle with is how to explain the lack of evaluation use in organizations. Evaluators tend to believe that evaluation is the optimal way to make those decisions, but the usage data suggest that decision makers in organizations do not agree.

Fundamentally, all evaluation models and processes have decision making as their core output (Russ-Eft & Preskill, 2001; Swanson & Holton, 1999). Because evaluation methodologies have their roots in education, not in organizational science, HRD is not accustomed to framing them as decision-making models. Yet when used within organizational or institutional boundaries, the primary purpose of evaluation is to contribute to better decisions about the value of HRD investments. This is especially true of summative evaluation as opposed to formative evaluation.

This is a critical problem for the field of HRD. At the most basic level, the lack of use of evaluation models for HRD decision making has dogged the field for over forty years. If evaluation is to be a critical process for HRD in organizations, then models must be developed that practitioners can embrace and use. On a broader level, effective decisions about HRD interventions are necessary to maximize the value of human capital in organizations. Researchers must thoroughly understand the decision-making processes used for HRD interventions in order to make them more effective.

The purpose of this article is to shed new light on this problem. To do so, we examine evaluation in HRD from a completely different perspective: the organizational decision-making perspective. If evaluation is viewed as a decision-making process (Russ-Eft & Preskill, 2001), then it is only logical that the decision-making literature should be explored to see how it can inform evaluation in HRD and whether it helps explain the lack of evaluation use.

Evolution of Decision-Making Models

Decision making continues to be a heavily researched topic within a myriad of disciplines such as economics, applied statistics, organizational behavior, and industrial/organizational psychology. This review is in no way exhaustive of the research but rather includes only the highlights necessary for this analysis. Readers needing a more complete review of the decision-making literature should consult Beach (1997), Hoch and Kunreuther (2001), Hastie and Dawes (2001), or Gigerenzer and Selten (2002a).

The literature on decision theory and research is generally discussed in three categories: normative (or prescriptive), behavioral, or naturalistic (descriptive) (Beach, 1997). Normative or prescriptive decision theory presents ideal models of decision-making processes that are believed to lead to optimal decisions. Generally, normative theories assume that decision makers strive to do what is best while providing the optimal payoff (maximum benefits or minimal loss) for themselves or their organization. Much of the work in this category has roots in economics.

Behavioral decision theory also uses normative decision-making models, but in a different way. The primary question of interest for behavioral decision research is how closely decision makers' practices come to the normative models' prescriptions. Thus, they study decision-making behaviors in comparison to the ideal models. They tend to focus on how individuals process information in order to provide meaning to decision making (March & Sevon, 1988): behavioral aspects of the individual decision maker (personality, perception, risk attitudes, managerial influence) and groups (size, communication, managerial influence) (Gilligan, Neale, & Murray, 1983). Behavioral decision theory also focuses on subjective probability and utility.

Still another category, naturalistic decision theory, focuses strictly on observations of what decision makers actually do ("naturally") as opposed to theorizing about what they should do. The need for practical knowledge about real-world decision making serves as the driver in naturalistic theory (Beach, 1997). Similar to the behavioral researchers, naturalistic researchers are interested in real-world decision-making practices, but unlike the behavioral researchers, they are not interested in comparing the practices to normative models. Thus, there is some conceptual overlap between the naturalistic and behavioral theories (Beach, 1997).

Normative Decision-Making Theory. Normative, or prescriptive, decision theories aim to help decision makers make better decisions by prescribing what the decision process should be and advising how people should make decisions. These prescriptive theories tend to be rational decision theories, rooted in neoclassical economics and ultimately in gambling; they assume that a decision maker is completely rational and maximizes his or her economic rewards when making a decision. Decisions are assumed to be made through a linear and logical sequence of steps: identification of the problem or issue requiring a decision; collecting and sorting information about potential solutions; comparing each solution alternative against predetermined criteria; ranking possible solutions; and finally, selecting the optimal alternative. The goal is to maximize rewards and minimize costs simultaneously.
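To make that sequence concrete, here is a minimal sketch of the rational-economic choice procedure in Python. It is purely illustrative: the alternatives, criteria, scores, and weights are hypothetical, not drawn from the article.

```python
# Illustrative sketch of the rational-economic decision sequence:
# score every alternative against predetermined criteria, rank, pick the optimum.
# All names, scores, and weights below are hypothetical.

from typing import Dict

# Candidate HRD interventions scored (0-10) on predetermined criteria.
alternatives: Dict[str, Dict[str, float]] = {
    "classroom_course": {"expected_benefit": 7, "cost_fit": 4, "speed": 3},
    "e_learning":       {"expected_benefit": 5, "cost_fit": 8, "speed": 9},
    "coaching":         {"expected_benefit": 9, "cost_fit": 3, "speed": 5},
}

# Predetermined criterion weights (sum to 1.0).
weights = {"expected_benefit": 0.5, "cost_fit": 0.3, "speed": 0.2}

def utility(scores: Dict[str, float]) -> float:
    """Weighted sum of criterion scores: the alternative's overall utility."""
    return sum(weights[c] * s for c, s in scores.items())

# Rank all possible solutions, then select the optimal alternative.
ranked = sorted(alternatives, key=lambda a: utility(alternatives[a]), reverse=True)
print("ranking:", ranked)     # every alternative is compared...
print("optimal:", ranked[0])  # ...and the single utility-maximizing one is chosen
```

Note that the procedure presumes exactly what the next paragraph questions: complete information, agreed criteria, and the capacity to score every alternative.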

The model assumes that the decision maker has access to complete information regarding the issue and that a single conceptualization of the problem or decision to be made can be determined. The model further assumes that after assessing the advantages and disadvantages of alternatives and evaluating the consequences of either choosing or not choosing each alternative, the decision maker selects the alternative that provides the maximum utility (that is, the optimal choice). When decisions are made that do not conform to the rational model, the normative view is that the problem lies with the human mind; that is, it represents an inadequacy on the part of the decision maker. Normative theorists believe that rationality is achievable, so anything short of rationality is considered irrationality.

Rational-economic models are concerned with the expected utility of a choice. Utility is "that property in any object, whereby it tends to produce benefits, advantage, pleasure, good, or happiness or to prevent the happening of mischief, pain, evil, or unhappiness in the party whose interest is considered" (Bentham, 1968, p. 3). Utility theory is an economic theory that started to emerge as early mathematicians (such as Pascal) began to assert that individuals usually accept the gamble yielding the highest expected return given the possible outcomes. They reasoned that individuals use mathematical probabilities rather than subjective perception. The problem, however, is that this premise fails to allow for individual preferences or differences that lie outside the expected outcomes of the gamble, mainly the cost to the individual taking the gamble. It is also important to note that these utility functions seemingly convey strength of preference under certainty.

Classic economic theory, introduced by von Neumann and Morgenstern (1944), centers on expected-utility theory and adds the role of incorporating uncertainty into preference situations. Using a theory of games, von Neumann and Morgenstern addressed social exchange issues when maximizing utility in economics. They assumed that a consumer seeks to maximize utility or satisfaction and the entrepreneur seeks to maximize profits. The neoclassical economic theory preceding their work accounted for mathematical decisions only to maximize an individual's or institution's rewards, which assumes that individuals and institutions are completely rational. Any decisions outside this computational realm were unmanageable. As a result, von Neumann and Morgenstern developed game theory based on the fact that games involve more than one person, so the outcome of one's rewards also depends on the strategies the others choose. Bernard (1984) contends that von Neumann and Morgenstern's theory is the norm for rational behavior.
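In symbols, the expected-utility rule they formalized can be stated as follows (a standard textbook rendering, not a formula reproduced from this article):

\[
EU(A) = \sum_{i} p_i \, u(x_i), \qquad A^{*} = \arg\max_{A} EU(A),
\]

where the x_i are the possible outcomes of alternative A, the p_i are their probabilities, u is the decision maker's utility function, and A* is the chosen alternative.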

The rational decision-making model has been widely criticized. In fact, there has been a steady attack on rationalism for failing to explain the seemingly irrational yet viable element of decision making: characteristics of decision making that arise from our cognitive activity and are not captured by an economic point of view. Bass (1983) tells us that utility theory is fundamental to using closed systems. Other researchers (Barnard, 1938; Simon, 1955; Cyert, Simon, & Trow, 1956; Cyert & March, 1963) have argued that organizational decision making does not revolve around the closed, rational systems of economic theory but instead around open systems of variables. Whereas the rational model assumes that there are no intrinsic biases to the decision-making process (Lyles & Thomas, 1988), optimization may be somewhat unrealistic since decision makers bring their own perceptions and mental models into such a situation. As a result, intrinsic biases are inevitable and should be addressed.


Rational-Economic Evaluation Models in HRD. As Russ-Eft and Preskill (2001) point out, definitions of evaluation have several common characteristics. First, they say, evaluation is a systematic process. Second, evaluation involves collecting data about questions or issues. Third, evaluation is seen as a process for enhancing knowledge and decision making. In each of these decisions, there is some judgment about merit, worth, or value. Finally, in each definition, there is some notion of evaluation use. Thus, evaluation is defined as an inherently rational, data-driven process to aid decision making.

Popular evaluation models in HRD are consistent with these definitions. Six models are representative of the rational models employed in HRD: Kirkpatrick (1998), Hamblin (1974), Brinkerhoff (1989), Phillips (1997), Swanson and Holton (1999), and Cascio (1999). The Kirkpatrick model is the oldest and most widely known evaluation model in HRD, consisting of four levels: reactions, learning, behavior, and results. Hamblin's (1974) and Phillips's (1997) models add return on investment (ROI) or economic value as a fifth level to Kirkpatrick's model. Brinkerhoff's model (1989) adds two preliminary stages to Kirkpatrick's four levels to provide formative evaluation of training needs and the training design. Swanson and Holton (1999) developed the results assessment system to improve on the Kirkpatrick model, though it retains some similarity. It assesses outcomes in six areas: supervisor-manager perceptions, participant perceptions, knowledge learning, expertise learning, performance results, and financial performance. Cascio's utility analysis model (1999) has no real linkage to the Kirkpatrick model. It offers a method for measuring performance change and estimating the dollar value of that performance change.
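For readers unfamiliar with utility analysis, one widely cited form of the training utility equation in this tradition is sketched below. This is our hedged paraphrase of the Brogden-Cronbach-Gleser approach that Cascio's work popularized; the article itself does not reproduce the formula:

\[
\Delta U = N \, T \, d_t \, SD_y - C,
\]

where N is the number of employees trained, T is the number of years the training effect persists, d_t is the effect size of training on job performance, SD_y is the dollar value of one standard deviation of job performance, and C is the total cost of the program.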

Regardless of the model, these six all advocate a careful, systematic collection of data that is designed to result in a rational analysis of intervention outcomes. Kirkpatrick's model does not explicitly include an economic analysis in its fourth level but clearly indicates that organizational outcomes be evaluated to assess the worth of the intervention. Swanson and Holton's model advocates including a financial cost-benefit analysis, while Hamblin's and Phillips's models add a fifth level to Kirkpatrick's to calculate the ROI of the intervention. Cascio's utility analysis is specifically designed to calculate the dollar value of outcomes from performance improvement through training.

Behavioral Decision Theory and Bounded Rationality. Diverging from the rational viewpoints, behavioral decision theory emerged as a more descriptive account of decision making, one that embraces the psychological dimension of decision making and more closely describes how people actually make decisions. More specifically, behavioral decision theory addresses the complexities of risk-related individual decision making. Furthermore, its proponents seek to explain the puzzling paradox that people seem not to maximize expected utility when making decisions and may make decisions that seem completely irrational (Hastie & Dawes, 2001).


Behavioral decision theory began as the study of the degree to which unaided human decision making conforms to the processes and output of prescriptive decision theory (Beach, 1997). But it has developed from rational, prescriptive decision theories to a more descriptive view of how decision makers make decisions. The descriptive view is explanatory and takes into account the judgments of the decision maker rather than relying solely on formal, mathematical scrutiny.

Behavioral decision theory focuses on the subjective probability (probabilities) and utility (worth) of decision making and typically takes some form of bounded rationality. From the bounded rationality viewpoint, the fact that decision makers rarely use the rational model is completely understandable, given the cognitive limitations of the human mind. However, bounded rationality theorists do not view this as irrational. Rather, they maintain that the search for optimization is an unrealistic goal for decision makers, and hence they seek to build models and theories that remain normative in one sense but are realistic in that they more closely fit the behavior of decision makers. Their models are considered optimization-under-constraints models (Gigerenzer & Selten, 2001b).

Simon (1945) was one of the earliest researchers to highlight the limitations of the rational decision-making model. His premise was that decision makers are unable to operate under conditions of perfect rationality. He contends that it is unlikely that the "rational economic man" will or can calculate the marginal costs and returns of all probabilities or alternatives. There is likely to be uncertainty regarding the issue for decision: that is, only a limited amount of information regarding the possible alternatives may be available or accurately interpreted, evaluation criteria may not be certain or agreed on, and there may be time constraints prohibiting the decision makers from pursuing a maximizing outcome. The decision maker forms an aspiration, a realm of boundaries indicating how good the alternative should be, and as soon as that aspiration is met, the decision maker "satisfices" by terminating the search and choosing that alternative. The result is a decision that satisfices rather than optimizes. Simon thus proposed the concept of bounded rationality: decision makers may intend to be rational and base their decisions on reasoned logic, but it is unrealistic to expect that there are no limitations on the degree of rationality that an individual can employ.
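The contrast between optimizing and satisficing is easy to see in code. The following minimal sketch is ours, with invented options, utilities, and aspiration level:

```python
# Sketch contrasting rational optimizing with Simon's satisficing.
# Options, utility values, and the aspiration level are all hypothetical.

options = [("A", 6.1), ("B", 7.4), ("C", 8.9), ("D", 7.0)]  # (name, utility)

def optimize(opts):
    """Rational model: evaluate every alternative, return the maximum."""
    return max(opts, key=lambda o: o[1])

def satisfice(opts, aspiration):
    """Bounded rationality: accept the first alternative that meets the
    aspiration level, terminating the (costly) search early."""
    for name, u in opts:
        if u >= aspiration:
            return (name, u)
    return None  # nothing met the aspiration; the search would continue

print(optimize(options))        # ('C', 8.9) -- requires scoring all options
print(satisfice(options, 7.0))  # ('B', 7.4) -- stops as soon as 'good enough'
```

The satisficer never learns that option C existed; it simply did not need to.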

Another principal contributor to behavioral decision theory research was Edwards (1954), who proposed describing the less-than-optimal behavior of the decision maker by incorporating a psychological framework. He conceived of subjective value or utility for cases in which the probabilities are not known to the "gambler," who must then rely on other determinants for making a decision, that is, the subjective probability of winning or losing. Subjective expected utility (SEU) is a generalized measure of (un)desirability (Fischhoff, Goitein, & Shapira, 1983). In using SEU as a model for making decisions, the focus is on identifying a set of probabilities or utilities and assigning weights to those probabilities. As Fischhoff et al. (1983) point out, any descriptive failure by an SEU model can be explained by the value or probability judgments of assigning weights to those models. The linear, objective function of the SEU model has been shown to be an effective tool in laboratory studies but not necessarily in field research (Kahneman & Tversky, 1979; Slovic, Fischhoff, & Lichtenstein, 1982). However, other models, such as heuristics, arguably produce probabilistic judgments.

Other researchers have extended Simon's work (Tversky & Kahneman, 1974; Kahneman, Slovic, & Tversky, 1982; Janis & Mann, 1977; Nisbett & Ross, 1980; Jungermann, 1983). Most notably, Tversky and Kahneman (1974) provided insights about specified systematic biases that affect judgment, leading to the notion of decision-making heuristics. Heuristics, or rules of thumb, are "principles which reduce the complex tasks of assessing probabilities and predicting values to simpler judgmental operations" (Tversky & Kahneman, 1974, p. 1124). Tversky and Kahneman recognize three effective heuristics that are employed in making judgments under uncertainty: (1) degree of representativeness of one situation to another, (2) availability of a similar occurrence through recall, and (3) adjustment and anchoring of an initial value or starting point of a previously made decision. They recognize the systematic and predictable errors that can occur from relying on heuristics for decision making, but also acknowledge that understanding and using these heuristics could lead to improved judgments and decisions under uncertainty, as well as economic rewards and time savings.

Baumol and Quandt (1964) further described rules of thumb: (1) the variables employed in the decision criteria are objectively measurable; (2) the decision criteria are objectively communicable, and the decision does not depend on the judgment of individual decision makers; (3) as a corollary to the second point, every logically possible configuration of variables corresponds to a (usually unique) determinate decision; and (4) the calculation of the appropriate decision is simple, inexpensive, and well suited for frequent repetition and for spot checking by management in higher echelons. They explain that decision processes are in themselves expensive tools and that using rules of thumb is justified especially when a decision is not of crucial importance.

With respect to optimizing returns in organizational decision making, Klein (2001) notes that suboptimization is more typical, with decisions being made from a local or departmental point of view. These suboptimal decisions may not be optimal for the entire organization, which is affected by the totality of these effects. McCain (1981) reminds us that the idea of optimization may mean that something other than profits is being maximized as an organizational goal.

Bounded Rationality Evaluation Models in HRD. Bounded rationality evaluation models in HRD fall into two categories. One major direction has been the use of metrics that provide heuristics and surrogate indicators of HRD effectiveness in organizations, such as training as a percentage of payroll. Fitz-enz (2000), Phillips, Stone, and Phillips (2001), Holton and Naquin (2000), Edvinsson and Malone (1997), and Becker, Huselid, and Ulrich (2001) are among those who have worked to construct balanced-scorecard-type metrics of HRD.

Heuristics are decision rules that simplify decision making (Todd, 2001). Thus, practitioners may use shortcuts to infer the value of HRD interventions. One major thrust in this direction has been the creation of HR and HRD metrics strategies. Metrics are not precise or specific measures of economic returns. However, they are believed to approximate some valued output or outcome of interventions, to be simple enough to be easily understood and collected, and to lead to better HR and HRD decisions when used in a balanced-scorecard-type approach. For example, a metric such as "percentage of key positions in a company with at least one other person prepared to assume the position" is not a precise measure of HRD effectiveness, but it may be one effective heuristic for programmatic effectiveness. Advocates of this approach argue that since HRD evaluation is not widely conducted, better decisions will result if metrics are combined into a "dashboard," even though the data are less accurate. Said differently, having satisfactory data all the time is better than having optimal data just some of the time.
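As an illustration of how such a heuristic metric might be computed, consider the bench-strength example just quoted. The data and field names below are hypothetical:

```python
# Illustrative computation of the metric "percentage of key positions with
# at least one other person prepared to assume the position."
# All positions and successor counts are invented for this sketch.

key_positions = [
    {"title": "Plant Manager",  "ready_successors": 2},
    {"title": "Controller",     "ready_successors": 0},
    {"title": "Sales Director", "ready_successors": 1},
    {"title": "HR Director",    "ready_successors": 0},
]

covered = sum(1 for p in key_positions if p["ready_successors"] >= 1)
bench_strength = 100.0 * covered / len(key_positions)

# A dashboard would track this alongside other metrics (training spend as a
# percentage of payroll, for example) rather than a single ROI figure.
print(f"bench strength: {bench_strength:.0f}%")  # -> 50%
```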

Other bounded rationality approaches are Brinkerhoff's success case method (2003) and Swanson and Holton's critical outcome technique (1999). Both focus on outcomes only, dropping the requirement that interventions have clearly defined and agreed-on goals that are known beforehand (a requirement for rationality; Klein, 2001). In the evaluation literature, these would be labeled goal-free evaluations (Scriven, 1973). Brinkerhoff's technique relies heavily on stories of success, a naturalistic decision technique that researchers have identified as commonly used to make organizational decisions (Beach, 1996; Klein, 1996). The critical outcome technique is more measurement oriented but still relies on success stories to identify outcomes.

From the decision-making perspective, these approaches can be seen as adopting a bounded rationality perspective. Although no author has explicitly acknowledged such a link, all of them share the foundational assumption that decision makers have neither the time nor the resources to conduct complex ROI-type evaluations. In general, bounded rationality assumes that individuals use limited pieces of information to find a satisfactory solution rather than an optimal decision. Bounded rationality is not irrational, though it can be seen by advocates of rational models as being less rational than desired.

The bounded rationality perspective challenges the notion that HRD decision makers have the cognitive and organizational resources to make optimal decisions using a rational-economic model. In fairness, HRD evaluation models have not been clear as to whether a person should select an optimal ROI from among various alternatives or simply ensure that interventions return a suitable ROI that exceeds some hurdle rate. Nonetheless, the rational-economic models heavily emphasize complex cognitive processes that require considerable energy, effort, and resources.


Naturalistic Decision Theories. Naturalistic decision theory was developed by researchers who found normative and utility theories of decision making inadequate and who needed practical, real-world knowledge. Simon's Administrative Behavior (1945, 1976) emphasizes that the behavior of a person in an organization is constrained by the position he or she holds in that organization, meaning that decision making in organizations is strongly influenced by the structure and norms of the organization and that decision makers do not entertain the full array of options that an outsider might consider available (Beach, 1997). Moreover, Simon reminds us that individuals within the organization, not the organization itself, are making decisions.

Beach (1997) presents some of the latest ideas about "how decision making takes place naturally," challenging the normative view of decision making. Recognizing main ideas and smaller descriptive theories, he divides the traditional naturalistic models into four categories: recognition models, narrative models, incremental models, and moral or ethical models.

Recognition models require that the context of the current decision be taken into account as the decision maker pulls from his or her repertoire of previous similar situations for reference. Recognition models also require the use of standard operating procedures and programmed responses (Simon, 1979) in order for the decision maker to recognize previous or existing processes. Klein (1989) formulated the recognition-primed decision model, as explained by Beach (1997), requiring that the situation be familiar, understood, evaluated, and simulated. If it is familiar and understood, then the decision process reflects the previous decision process or set of actions, but with further cognitive deduction of appropriateness to the current problem. If a situation is not familiar, the decision maker must gather more information.

Narrative models, such as the scenario model (Jungermann & Thuring, 1987), the story model (Pennington & Hastie, 1986), and the argument model (Lipshitz, 1993), acknowledge the importance of thinking about solutions to decision problems.

The scenario model describes how a plausible narrative can be constructed to generate forecasts in four steps. In the first step, the frame and goal of the decision maker are used as probes to retrieve relevant knowledge in the form of relational, if-then information. The second step uses these relational propositions to develop a causal model. The third step assigns plausible values to the "if" portion of each proposition, where each set of unique values represents a scenario. In the fourth step, the model is run by working through the logical implications of each set of values (the scenario). Different scenarios can be compared and the differences assessed (Beach, 1997).
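A toy rendering of those four steps might look like this; the if-then relationship and all the numbers are invented purely to make the steps concrete:

```python
# Toy sketch of the scenario model's four steps. The causal relationship
# and all values are hypothetical illustrations, not empirical claims.

# Steps 1-2: retrieved if-then knowledge assembled into a causal model.
def forecast(training_hours: float, manager_support: float) -> float:
    """If training hours and manager support rise, then performance rises."""
    return 0.4 * training_hours + 0.6 * manager_support

# Step 3: plausible values for the "if" side; each value set is a scenario.
scenarios = {
    "optimistic":  {"training_hours": 40, "manager_support": 80},
    "pessimistic": {"training_hours": 10, "manager_support": 20},
}

# Step 4: run the model for each scenario and compare the forecasts.
for name, values in scenarios.items():
    print(name, forecast(**values))  # optimistic 64.0, pessimistic 16.0
```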

The story model uses the construction of stories as a method for understanding the past. It relies heavily on interpretation and evaluation of human behavior as a logical consequence of past events. Argument-driven models also use stories or scenarios as part of the decision process, but they also consider arguments for and against a potential course of action. This allows potential outcomes and uncertainty about the situation to be considered. Once arguments are formulated, an action is reassessed and reshaped. Therefore, decisions are substantiated by supporting arguments. Uncertainty is attacked through the decision maker's motivation to modify the action.

Incremental models are based on the work of Lindblom (1959), Braybrooke and Lindblom (1963), and Connolly and Wagner (1988), who focused on thought rather than action. Lindblom's work on incremental evaluation "presumes that the decision maker implements the option with the most attractive marginal increments, obtains feedback, and either proceeds or alters the policy in light of the feedback, but does not provide much detail about how this happens" (Beach, 1997, p. 155). He uses the conditions surrounding public policy decisions to illustrate that, given the multiplicity of goals, the extensiveness of outcomes, and the uncertainty of information and consequences, knowledge of all plausible options is impossible; decision makers under these conditions adopt a strategy that is realistic and gets the job done (Beach, 1997).

Finally, Beach describes moral and ethical models as elements of naturalistic decision theory. He contends that "in general, morals, ethics, ideologies, beliefs, and values all influence the decision processes" (p. 159), influences that do not lend themselves to normative or utility theories. Etzioni (1967) observes that most humans are rooted in a social context and therefore argues that decisions are not influenced solely by the pleasure and gain of choices. He proposes three influential sources of decision making: utility, social (codes of behavior and group culture), and individual moral and ethical issues. These moral and ethical issues, or deontological views, play a role in decision making in that utilitarian and social issues are subordinate to them, and all three must be considered. Moreover, he suggests that in light of this deontological view, decision makers "use their emotions and value judgments to reject courses of action that violate their moral or ethical codes or select courses of action that are compatible with, or prescribed by, those codes" (Beach, 1997). He also explains that we make our decisions within our social contexts, guided by our moral and ethical principles, which have in some part been formed by those social connections. In response to these views, he proposes three questions: (1) What is the decision maker trying to do? (2) How does the decision maker choose the means for doing what he or she is trying to do? and (3) Who makes the decisions?

More recently, image theory (Beach, 1990) has emerged as a new category of naturalistic decision making. It is a descriptive theory of decision making in which unacceptable options are screened out and the decision maker chooses the best option from among the survivors of the screening. Image theory has been supported as a model of decision making (Beach, 1996; Potter & Beach, 1994; Beach & Strom, 1989), and other research has supported its descriptive sufficiency (Beach, Smith, Lundell, & Mitchell, 1988).

Images represent the standards that a decision maker brings to bear when making intuitive and automatic decisions about each alternative of interest. As Mitchell and Beach (1990) explain, these images can be the decision maker's principles (values image), existing agenda of goals (trajectory image), or plans to achieve the goals (strategic image). If a decision alternative is incompatible with these images, it is rejected; if not, it is accepted as either the final decision or a member of the set of multiple alternatives from which the best will be selected later as the final decision. In framing a decision through image theory, a previous decision can be recognized and used as an ad hoc frame. The outcome of this previous decision, whether success or failure, can serve as a policy or a previously successful plan, or suggest what not to do, in which case the decision maker must return to the beginning of the process.
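A minimal sketch of that two-stage process, with hypothetical candidate programs and image-compatibility flags of our own invention, is shown below:

```python
# Sketch of image theory's two stages: screen out alternatives incompatible
# with the decision maker's images, then choose among the survivors.
# Candidates, compatibility flags, and scores are all hypothetical.

candidates = [
    {"name": "outsourced_bootcamp", "fits_values": False, "fits_goals": True, "score": 9},
    {"name": "internal_academy",    "fits_values": True,  "fits_goals": True, "score": 7},
    {"name": "mentoring_scheme",    "fits_values": True,  "fits_goals": True, "score": 6},
]

# Stage 1: compatibility screening against the values and goal images.
survivors = [c for c in candidates if c["fits_values"] and c["fits_goals"]]

# Stage 2: a comparative choice is made only among the survivors.
choice = max(survivors, key=lambda c: c["score"])

print(choice["name"])  # internal_academy: the top raw scorer was screened out
```

Note that the highest-scoring option never reaches the comparison stage, which is precisely what distinguishes screening from a purely compensatory rational model.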

Although image theory is still young, it has been shown in numerous studies to be a viable concept for decision making. It bears a striking resemblance to the multilens approaches to understanding organizations put forth by Bolman and Deal (1997) and Morgan (1996). Image theory suggests that decision makers frame their decisions through different images or lenses, which dictate the choices made. It highlights the multidimensional nature of organizational behaviors.

Khatri and Ng (2000) suggest that rational decision-making processes be used in conjunction with intuitive processes. Prietula and Simon (1989) define intuition as a sophisticated form of reasoning based on "chunking" that an expert hones over years of job experience. Simon (1987) describes intuition as reliance on patterns developed through continual exposure to actual situations. Intuition, although not widely researched in decision making, has been found to be a useful approach to decision making, most notably when information is absent, insufficient, or uncertain (Burke & Miller, 1999). In other words, intuition is more appropriate in an unstable than in a stable environment (Burke & Miller, 1999; Khatri & Ng, 2000). Khatri and Ng (2000) go on to argue that intuitive synthesis is more appropriate for strategic (or nonroutine) decisions than for operational (routine) decisions. Because of the uncertain nature of strategic decision making, and because intuition does not require the use of quantitative data or equations, which may be absent in the path of strategic decision processes, intuitive synthesis is a more viable tool for strategic decisions. Mintzberg (1975) found that managers rely more on intuition or intuitive judgment when making decisions, avoiding systematic and analytical data.

Naturalistic Evaluation Models in HRD. Naturalistic evaluation models have not been prevalent in the HRD literature. One exception is the Evaluative Inquiry for Learning in Organizations (EILO) model advocated by Preskill and Torres (1999). As Preskill (2004) points out, this model is grounded in the belief that "evaluation is most effective when it is conducted using collaborative, participatory and learning-oriented approaches" (p. 346). She goes on to point out that the EILO model has several unique characteristics:

• Evaluation should be ongoing, reflexive, and embedded in organizational practice.

• Evaluators should work with stakeholders to apply their learning from the evaluation processes and findings.

• Learning from the evaluation process is an important goal.

• EILO acknowledges that evaluation occurs within a complex system and is influenced by the organization's infrastructure [pp. 348–350].

The EILO model is not explicitly linked to naturalistic decision-making theory, but it is easy to see that it is quite likely to embrace the natural decision-making processes within an organization. Because it is grounded in the belief that evaluation should include organizational members as participants, be conducted in collaboration with them, and lead to the organization's learning from the outcomes and using that knowledge, it is most likely to lead to a decision-making process that is natural to the organization. By definition, EILO engages the organization in a manner that fits its culture and the decision-making practices commonly used in that organization. Because of its collaborative nature, the evaluation process can be expected to change from organization to organization depending on the unique needs and demands of each. Thus, of the evaluation models commonly advocated in HRD, EILO would be most likely to fit within a naturalistic decision-making framework.

Critique of HRD Evaluation Models

Russ-Eft and Preskill (2001) offer this critique of HRD evaluation models: "Evaluation of training over the last forty years has focused on a limited number of questions using only a few tools and methods. This pattern has led to a constricted view of what evaluation can offer organizations. Training evaluation has been parochial in its approach and, as a result, has failed to show its contribution or value to organizations" (p. 96). They conclude that "we should look to other disciplines to expand our notion of evaluation theory and practice" (p. 96). We suggest here that the decision-making literature is one such discipline that can expand our thinking about evaluation in HRD. If evaluation is necessary for organizations to make optimal decisions about HRD investments, then it must be connected to decision-making research. Otherwise, organizations will continue to struggle to see the value or contribution of evaluation.

It should be clear that most evaluation models used in HRD would be considered rational-economic decision-making models. Yet decision-making researchers long ago realized that people do not use rational-economic decision models in real-world contexts. Furthermore, they have moved well beyond such models to create more sophisticated models of decision making. Unfortunately, HRD has not.

The implication of this is enormous. It suggests that the quest to embed rational-economic evaluation models as HRD decision-making processes in organizations may be futile, or at least may have reached its limit. Humans simply do not naturally make decisions in this manner. That would offer a new explanation as to why progress has been so slow in creating rational-economic organizational evaluation processes. Not only is the predominant Kirkpatrick model flawed (Holton, 1996; Swanson & Holton, 1999), but the central premise of rational-economic decision making also requires a fundamental and difficult change in human decision-making processes.

There are two fundamental flaws in the rational-economic model. First, decision-making research has consistently shown that rational decision making is not a realistic model. Bounded rationality approaches and other naturalistic frameworks are widely accepted as more realistic and reflective of human capacity. These are not new conclusions but ones well established in the decision-making literature.

Second, consider the "economic" portion of the model. HRD experts have stressed economic criteria vigorously in an effort to put human capital investments on a par with other capital investments in business organizations. This effort, along with changes in the global economy, has been somewhat successful, as human capital appears to be more widely regarded as a competitive advantage. However, decision-making researchers have moved to a subjective expected utility model rather than an economic one.

Focusing on utility has two broad and important implications. First, it broadens the field of possible outcomes from just economic ones to incorporate any possible outcome. While it can be argued that any organizational outcome ultimately has some economic impact on the organization, in practice decision makers view some outcomes as noneconomic because the economic payoffs are so distal to the HRD intervention. For example, decisions might get made on cultural fit rather than economic return because the economic returns are so difficult to forecast or calculate. Thus, utility to the organization is a broader and more realistic view of potential outcomes.

Second, broadening the field to include subjective utility recognizes that the outcomes from HRD interventions are uncertain, so the decision maker has to make subjective estimates of the probability of outcomes' being achieved. Evaluation models used in HRD have virtually ignored the probabilistic nature of outcomes' occurring. In utility terms, the expected utility is a function of both the possible returns and the probability of those returns occurring. Subjective expected utility acknowledges that in organizations, those probabilities are rarely known, so they must be estimated subjectively by the decision maker. For example, given the relatively poor track record of learning interventions transferring into performance improvement, decision makers might view investments in training as risky even though the possible returns are large. In organizations with a history of poor outcomes from HRD interventions, the subjective expected utility could be very low despite high forecasted ROIs.
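A hedged numeric illustration of this last point (the figures are invented): suppose an intervention's forecasted payoff is $500,000 against a $100,000 cost, but the decision maker's subjective probability that learning will transfer into performance is only 0.2. Then

\[
SEU = 0.2 \times \$500{,}000 - \$100{,}000 = \$0,
\]

so despite a forecasted ROI of 400 percent, the subjectively expected net return is zero, and the investment looks unattractive.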

New Directions for Evaluation Research

If HRD wants organizational decision makers to make better decisions about HRD interventions and investments, then it is mandatory that the decision-making literature be used to critically analyze future directions for research. Furthermore, the decision-making literature is ripe with possibilities for understanding and improving HRD decision-making and evaluation processes. Virtually every decision model discussed has direct application in HRD and provides a richer theoretical base than most current evaluation research in HRD.

Our review suggests some potentially radical new directions for HRD evaluation research. (Table 1 summarizes this review.) The table is divided into two segments (column 1) that represent fundamental beliefs about the validity of our current evaluation models. Column 2 expands these into five possible normative positions for researchers to take, which correspond to five suggested future research directions (column 3). Column 4 highlights current HRD evaluation research that fits within each category, and column 5 relates the categories to the decision-making literature.

Assumption: Our Normative Models Are Correct. The first directions for research and practice begin with the assumption that our normative models are indeed the right way to make decisions. There are three directions within this group (numbers 1, 2, and 3 in Table 1):

1. Our evaluation models are the right normative decision-making models. This is the traditional position of evaluation advocates who recommend that HRD practitioners adopt a rational-economic approach grounded in precise measurement (Kirkpatrick, 1998; Brinkerhoff, 1989; Hamblin, 1974; Phillips, 1997; Swanson & Holton, 1999). Decision-making researchers call this prescriptive theory. In this case, the prescription is for HRD decisions to be made in such a way that benefits (mostly economic ones) exceed costs, with hopes that it will be by a substantial amount. The optimal decision is viewed as one that maximizes the economic return to the organization. Those who do not use the rational-economic approach to HRD decisions are viewed as likely to make suboptimal decisions due to poor decision-making practices.

Most of the research conducted on evaluation practices falls into the category of behavioral decision-making research in that it attempts to assess the extent to which behavior conforms to these prescriptive models. It is that body of research, noted in the introduction, that has shown that the rational prescriptive models are used only to a limited extent. It is important to note that none of this research tests whether the prescriptive models are the right models, just whether behavior in practice fits the rational model. This is a huge gap in HRD evaluation research that must be rectified.

If we believe that our normative models are the right way to do things, then the future direction is to continue to sell them to the profession. However, this approach appears severely limited from two perspectives. First, it has not been successful; that is, practitioners continue to resist this approach, suggesting that it may not be useful to them. Second, decision-making research explains this resistance with its considerable evidence that rational models do not work well in practice. These two research bases combined suggest that this option is not a reasonable direction for future research.

Table 1. Directions for Future Evaluation Research

Assumption: Our normative models are correct.

1. HRD's normative position: We are right, and our normative decision-making theory, which is a rational-economic one, is the right approach.
   Suggested future strategy: Continue to "sell" this to the profession.
   HRD examples: Kirkpatrick (1998), Phillips (1997), Swanson and Holton (1999), Cascio (1999).
   Decision-making perspective: Rational-economic.

2. HRD's normative position: Our normative decision-making models are correct but unrealistic, given the decision-making research showing that humans are unable to consistently use such an approach for decision making.
   Suggested future strategy: Search for simpler heuristics that approximate or estimate more complex approaches.
   HRD examples: Human resource and intellectual capital metrics (Becker, Huselid, & Ulrich, 2001; Fitz-enz, 2000; Holton & Naquin, 2000; Phillips, Stone, & Phillips, 2001; Edvinsson & Malone, 1997); success case method (Brinkerhoff, 2003); critical outcome technique (Swanson & Holton, 1999).
   Decision-making perspective: Bounded rationality.

3. HRD's normative position: Our normative decision-making models are correct, but only for a restricted range of programs or situations.
   Suggested future strategy: Conduct research to determine when a rational-economic approach is the best decision-making model, and develop other decision-making models for other decision criteria and situations.
   HRD examples: None.
   Decision-making perspective: Subjective expected utility.

Assumption: Our normative models are not correct.

4. HRD's normative position: Our normative decision-making model is wrong, but practitioners are smarter than we give them credit for and have decision-making models that work but that we do not understand well.
   Suggested future strategy: Use grounded theory-building research to understand how HRD decisions are made and look for ways to improve them.
   HRD examples: Preskill and Torres (1999).
   Decision-making perspective: Naturalistic.

5. HRD's normative position: Our normative decision-making model is wrong, and practitioners are making a lot of bad HRD decisions.
   Suggested future strategy: Develop a new HRD decision-making theory that employs decision-making research more fully.
   HRD examples: Holton (1996).
   Decision-making perspective: New multiattribute normative models.

2. Our evaluation models are the right normative decision-making models, but they are unrealistic, so simpler models are needed. This option builds on the logic of the first option in that the assumption is that the underlying rational-economic normative model is correct but unrealistic to implement in practice because it is too complex. Researchers in this category have explored human resource metrics and goal-free-type outcome evaluations to simplify measurement and decision making. There is ample evidence from the decision-making literature to suggest this is typical of rational models. That is, the rational models may be theoretically sound, but simpler methods have to be developed for practice. If we accept this as our position, then our emphasis should be on creating simpler approaches that approximate the underlying rational normative model but are more accessible and user friendly. While evaluators may prefer a more exact and theoretically valid approach, this approach would account for practical realities in order to expand evaluation use in practice. That is, trade-offs are needed between accuracy and validity on the one hand and practitioner utility for decision making on the other.

3. Our normative models are correct but incomplete. Perspective 3 in Table 1 suggests the possibility that the rational normative models are right, but only partially so. The chief limitation of these evaluation models is that they require choices about interventions to be made strictly on rational or economic criteria. However, this defies the organizational reality that decisions are shaped by other factors. Bolman and Deal (1997) and Morgan (1996) employ a lens metaphor to identify the multiple frames within which organizational life, including decision making, occurs. Decision researchers have taken a similar path by creating the lens model to describe the process by which individuals attend to cues in their environment and then make decisions based on those cues (Hastie & Dawes, 2001).

Using Bolman and Deal's (1997) four lenses as an example, consider how an HRD decision might vary when the decision maker operates predominantly from a human resource lens versus a structural, political, or cultural lens. Each lens would place greater value on different cues, or even see different cues in the organization, potentially leading to different decisions about HRD. It is hard to imagine how political, cultural, and structural factors do not become part of HRD decisions. Even the purest of business decisions (new products or buying a company, for example) involve more than economic factors. For example, boardroom politics always play a huge role in decisions about acquiring a new company.

The challenge is how to incorporate multiple perspectives into an HRD decision model. Decision researchers have traditionally turned to utility models to accommodate a broader range of outcomes and to subjective expected utility models to incorporate the uncertainty surrounding uncertain investments. HRD evaluation research needs to do the same.

Assumption: Our Normative Models Are Not Correct. We turn now to the more radical possibility that our normative decision models are not correct. That is, the underlying rational-economic decision model is flawed in theory and practice.

It is startling to realize that little or no naturalistic decision research has occurred in HRD. The predominant evaluation model, the Kirkpatrick four-level model, was created in 1959–1960 as a prescriptive framework. Although it was surely built somewhat on the experience of the author, it was not created through any systematic research process. To be fair, HRD and management research were in their early infancy at the time. Nonetheless, the evaluation agenda for the past forty years has been dominated by a model that was not created through research and has been relatively uncritically embraced by a profession, yet all the data show it is not widely used. Furthermore, it has not been subjected to any degree of scrutiny through naturalistic decision-making research. Naturalistic decision researchers seek to understand how decisions are actually made rather than testing whether a particular model is being employed.

To recap, the four-level model has largely failed the test of behavioral decision research (it is not used), and no naturalistic research has been conducted to confirm that decision makers make decisions this way. This suggests to us that we must carefully consider the possibility that it and its close cousins are the wrong normative models. That is, there is some other logic or rational process by which organizations decide to invest what is now over $60 billion a year (training expenses for publicly traded companies with over one hundred employees). It is easy and tempting to continue to blame practitioners for not using the model and label them as making irrational decisions. But what if the problem is not the practitioners but us: HRD researchers? What if there is a different framework that is being used, perhaps even quite well, that we simply do not understand?

4. Our normative models are wrong, but practitioners have effective decision-making models that we do not understand very well. This conclusion demands new evaluation theory. However, because practitioners are presumed to be using models that work but that we do not understand, it calls for grounded theory building, or what decision researchers would call a naturalistic study. The objective of grounded theory building is to build new theory by observing what is done in practice and generalizing from it.

The logic for this conclusion was stated in the introduction to this article: so much money has been spent on HRD since the four-level model was introduced in 1959–1960 that it is hard to imagine that organizations do not have an effective way to make decisions on how to invest it. Clearly the four-level model explains only part of it, because it is used in its entirety by only a minority of organizations. Is it really possible that they have been spending that much money poorly for so long? Our experience in organizations suggests not.

At a minimum, it seems reasonable to conclude that HRD needs to step outside the bounds of the four-level or other rational-economic models and carefully research decision models in use. If we assume that good decisions are being made using some other framework, then the charge to researchers is to conduct naturalistic research to discern how decisions are actually being made.

5. Our normative models are wrong, and practitioners are making many bad decisions as a result. This is perhaps the most challenging of all the possibilities because it demands new deductive theory building. If our normative models are wrong, then new normative models are needed. And if there are not a lot of best practices to study, then those models have to be built deductively. That is, researchers will have to develop new theory based on prior research and theory and then test its application in organizations to determine its value.

Several researchers have embarked on this journey. Holton (1996) proposed a new multiattribute evaluation model that is more comprehensive than most others. Unfortunately, it is probably too complex for immediate use by practitioners, but its promise is that it is a multifactor model capturing a broader range of factors with an impact on HRD decisions. Wang, Dou, and Li (2002) use economic theory to construct a new production function for the firm that captures the effects of HRD and other factors in measuring HRD effectiveness at the organizational level. It is important to acknowledge that both are rational-economic approaches and suffer from some of the same limitations as all other rational models. However, it can be argued that it is important to have clear, rational, normative frameworks before creating simpler models in the bounded rationality tradition.
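
As a rough illustration of the production-function idea, and only as a stylized Cobb-Douglas-type form offered for exposition rather than Wang, Dou, and Li's actual specification, firm output Q might be written as

    Q = A · K^α · L^β · H^γ

where K is capital, L is labor, H is HRD investment, A is a technology constant, and the exponents are output elasticities. In such a form, γ summarizes HRD effectiveness at the organizational level: holding K and L constant, a 1 percent increase in HRD investment is associated with roughly a γ percent increase in output, and γ can in principle be estimated from firm data.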

What both of these models begin to do is redefine the core normative decision framework. They are examples of research that departs from the four-level tradition (Wang more than Holton) to create new evaluation and decision theory. If we adopt this normative position about HRD decision making, then researchers need to engage in more theory building like this.

Conclusion

The purpose of this article was to critically examine the HRD evaluation literature through the organizational decision-making lens and, in doing so, to raise more questions than it answered. The decision-making literature, which had not previously been connected to evaluation, provides provocative insights into potential shortcomings in the evaluation literature. One fundamental step was to reconceptualize evaluation as organizational decision making, which takes it out of the educational evaluation domain and places it in the organizational decision-making domain. We believe this fundamental step may unlock new doors to finding evaluation models that really get used to make HRD decisions.

Given the five possible conclusions we discussed in the previous section, we suggest the following as our hunches about the right new directions for HRD research:

• Researchers need to conduct naturalistic decision research to understand how HRD decisions are really made in organizations. It is a serious oversight that more has not been done in this area. To repeat, research examining the extent to which the four-level model is used in organizations, which has been widely conducted, is not naturalistic; it is behavioral. Naturalistic research would abandon the rational framework as an analytical framework and study decision-making processes in use.

• We do not believe that conclusion 1 in Table 1 (continuing to sell the current models to practitioners) is right. Too much decision research suggests the rational-economic approach does not work, is not used, and is too narrowly defined for organizational reality.

• The literature on bounded rationality is compelling. It seems clear that whatever model emerges as the right normative model will have to be translated into a bounded model to be useful in practice. The approaches currently appearing in the literature are promising and should be pursued in the short term. However, it must be remembered that they are built from the rational-economic model, which may well be changed in the long run.

• Because knowledge and learning are such value-laden and socially constructed phenomena, our hunch is that some type of multifactor model will emerge from naturalistic research. Our experience tells us that economic factors are but one of numerous legitimate criteria for HRD interventions. New theory should focus the field on how best to account for those factors rather than ignore them and champion only economic criteria (a minimal sketch of such a multifactor model follows this list).
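
To illustrate what such a multifactor model might look like in its simplest bounded form, the sketch below scores a proposed intervention against the four frames discussed earlier. The factors, weights, and ratings are invented for illustration; a model grounded in naturalistic research would derive them from how decision makers actually weigh these cues.

    # Sketch of a simple multifactor (weighted-criteria) HRD decision model.
    # Factors, weights, and ratings are hypothetical illustrations only.

    weights = {              # relative importance of each decision frame
        "economic":   0.40,
        "political":  0.25,  # e.g., sponsor support, boardroom dynamics
        "cultural":   0.20,  # fit with organizational values
        "structural": 0.15,  # fit with roles, systems, and processes
    }

    ratings = {              # 1-10 ratings of one proposed intervention
        "economic": 6, "political": 9, "cultural": 7, "structural": 5,
    }

    score = sum(weights[f] * ratings[f] for f in weights)
    print(f"Composite decision score: {score:.2f} of 10")  # 6.80 here

An intervention that is economically middling but politically and culturally well supported can outscore a purely economic favorite, which matches our experience of how such decisions actually get made.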

Our crystal ball is sometimes clouded, but these are our best hunches at this time. What seems clear is that it is time to reconceptualize evaluation in HRD and to find evaluation and decision models that work and will be used.

References

Barnard, C. I. (1938). The functions of the executive. Cambridge, MA: Harvard University Press.
Bass, B. M. (1983). Organizational decision making. Homewood, IL: Irwin.
Baumol, W. J., & Quandt, R. E. (1964). Rules of thumb and optimally imperfect decisions. American Economic Review, 54(2), 23–46.
Beach, L. R. (1990). Image theory: Decision making in personal and organizational contexts. New York: Wiley.
Beach, L. R. (1996). Decision making in the workplace: A unified perspective. Mahwah, NJ: Erlbaum.
Beach, L. R. (1997). The psychology of decision making: People in organizations. Thousand Oaks, CA: Sage.
Beach, L. R., Smith, B., Lundell, J., & Mitchell, T. R. (1988). Image theory: Descriptive sufficiency of a simple rule for the compatibility test. Journal of Behavioral Decision Making, 1, 17–28.
Beach, L. R., & Strom, E. (1989). A toadstool among the mushrooms: Screening decisions and image theory's compatibility test. Acta Psychologica, 72, 1–12.
Becker, B. E., Huselid, M. A., & Ulrich, D. (2001). The HR scorecard: Linking people, strategy, and performance. Boston: Harvard Business School Press.
Bentham, J. (1968). An introduction to the principles of morals and legislation. In A. N. Page (Ed.), Utility theory: A book of readings (pp. 3–29). New York: Wiley. (Original work published 1853)
Bernard, G. (1984). Utility and risk preference functions. In O. Hagen & F. Wenstop (Eds.), Progress in utility and risk theory (pp. 135–143). Dordrecht: D. Reidel.
Bolman, L. G., & Deal, T. E. (1997). Reframing organizations: Artistry, choice, and leadership. San Francisco: Jossey-Bass.
Braybrooke, D., & Lindblom, C. E. (1963). A strategy of decision: Policy evaluation as a social process. New York: Free Press.
Brinkerhoff, R. O. (1989). Achieving results from training. San Francisco: Jossey-Bass.
Brinkerhoff, R. O. (2003). The success case method: Find out quickly what's working and what's not. San Francisco: Berrett-Koehler.
Burke, L. A., & Miller, M. K. (1999). Taking the mystery out of intuitive decision making. Academy of Management Executive, 13(4), 91–99.
Cascio, W. F. (1999). Costing human resources (4th ed.). Mason, OH: Southwestern Publishing.
Connolly, T., & Wagner, W. G. (1988). Decision cycles. Advances in Information Processing in Organizations, 3, 183–205.
Cyert, R. M., & March, J. G. (1963). Behavioral theory of the firm. Upper Saddle River, NJ: Prentice Hall.
Cyert, R. M., Simon, H. A., & Trow, D. B. (1956). Observation of a business decision. Journal of Business, 29, 237–248.
Edvinsson, L., & Malone, M. S. (1997). Intellectual capital: Realizing your company's true value by finding its hidden brainpower. New York: HarperCollins.
Edwards, W. (1954). The theory of decision making. Psychological Bulletin, 51, 380–417.
Etzioni, A. (1967). Mixed-scanning: A "third" approach to decision making. Public Administration Review, 27, 385–392.
Fischhoff, B., Goitein, B., & Shapira, Z. (1983). In R. W. Scholz (Ed.), Decision making under uncertainty (pp. 63–86). New York: North-Holland.
Fitz-Enz, J. (2000). The ROI of human capital: Measuring the economic value of employee performance. New York: AMACOM.
Galvin, T. (2003). 2003 industry report. Training, 40(9), 21–38.
Gigerenzer, G., & Selten, R. (Eds.). (2002a). Bounded rationality: The adaptive toolbox. Cambridge, MA: MIT Press.
Gigerenzer, G., & Selten, R. (2002b). Rethinking rationality. In G. Gigerenzer & R. Selten (Eds.), Bounded rationality: The adaptive toolbox (pp. 1–12). Cambridge, MA: MIT Press.
Gilligan, C., Neale, B., & Murray, D. (1983). Business decision making. Oxford: Philip Allan.
Hamblin, A. C. (1974). Evaluation and control of training. New York: McGraw-Hill.
Hastie, R., & Dawes, R. M. (2001). Rational choice in an uncertain world. Thousand Oaks, CA: Sage.
Hoch, S. J., & Kunreuther, H. C. (2001). Wharton on making decisions. New York: Wiley.
Holton, E. F. III. (1996). The flawed four-level evaluation model. Human Resource Development Quarterly, 7, 5–21.
Holton, E. F. III, & Naquin, S. S. (2000). Employee development and retention metrics. In Staffing Report 2000. Willow Grove, PA: Staffing.org.
Janis, I. L., & Mann, L. (1977). Decision making: A psychological analysis of conflict, choice, and commitment. New York: Free Press.
Jungermann, H. (1983). The two camps on rationality. In R. W. Scholz (Ed.), Decision making under uncertainty (pp. 63–86). New York: North-Holland.
Jungermann, H., & Thuring, J. (1987). The use of causal knowledge for inferential reasoning. In J. L. Mumpower, L. D. Phillips, O. Renn, & V.R.R. Uppularia (Eds.), Expert judgment and expert systems (pp. 131–146). New York: Springer.
Kahneman, D., Slovic, P., & Tversky, A. (1982). Judgment under uncertainty: Heuristics and biases. Cambridge: Cambridge University Press.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–292.
Khatri, N., & Ng, H. A. (2000). The role of intuition in strategic decision making. Human Relations, 53(1), 57–86.
Kirkpatrick, D. L. (1998). Evaluating training programs: The four levels. San Francisco: Berrett-Koehler.
Klein, G. A. (1989). Recognition-primed decisions. Advances in Man-Machine Systems Research, 5, 47–92.
Klein, G. A. (1996). Sources of power: How people make decisions. Cambridge, MA: MIT Press.
Klein, G. (2001). The fiction of optimization. In G. Gigerenzer & R. Selten (Eds.), Bounded rationality: The adaptive toolbox (pp. 103–122). Cambridge, MA: MIT Press.
Lindblom, C. E. (1959). The science of "muddling through." Public Administration Review, 19, 79–88.
Lipshitz, R. (1993). Decision making as argument-driven action. In G. A. Klein, J. Orasanu, R. Calderwood, & C. E. Zsambok (Eds.), Decision making in action: Models and methods (pp. 172–181). Norwood, NJ: Ablex.
Lyles, M., & Thomas, H. (1988). Strategic problem formulation: Biases and assumptions embedded in alternative decision making models. Journal of Management Studies, 25, 131–146.
March, J. G., & Sevon, G. (1988). Gossip, information, and decision making. In J. G. March (Ed.), Decisions and organizations (pp. 429–442). Oxford: Blackwell.
McCain, R. A. (1981). Markets, decisions, and organizations: Intermediate microeconomic theory. Upper Saddle River, NJ: Prentice Hall.
Mintzberg, H. (1975). The manager's job: Folklore and fact. Harvard Business Review, 53(1), 49–61.
Mitchell, T. R., & Beach, L. R. (1990). ". . . Do I love thee? Let me count . . ." Toward an understanding of intuitive and automatic decision making. Organizational Behavior and Human Decision Processes, 47, 1–20.
Morgan, G. (1996). Images of organization. Thousand Oaks, CA: Sage.
Nisbett, R., & Ross, L. (1980). Human inference: Strategies and shortcomings of social judgment. Upper Saddle River, NJ: Prentice Hall.
Pennington, N., & Hastie, R. (1986). Evidence evaluation in complex decision making. Journal of Personality and Social Psychology, 51, 242–258.
Phillips, J. J. (1997). Handbook of training evaluation and measurement methods (3rd ed.). Oxford: Butterworth-Heinemann.
Phillips, J. J., Stone, R. D., & Phillips, P. P. (2001). The human resources scorecard. Oxford: Butterworth-Heinemann.
Potter, R. E., & Beach, L. R. (1994). Imperfect information in pre-choice screening of options. Organizational Behavior and Human Decision Processes, 59, 313–329.
Preskill, H. (2004). The transformational power of evaluation: Passion, purpose and practice. In M. C. Alkin (Ed.), Evaluation roots: Tracing theorists' views and influences. Thousand Oaks, CA: Sage.
Preskill, H., & Torres, R. T. (1999). Evaluative inquiry for learning in organizations. Thousand Oaks, CA: Sage.
Prietula, M. J., & Simon, H. A. (1989). The experts in your midst. Harvard Business Review, 67(1), 120–124.
Russ-Eft, D., & Preskill, H. (2001). Evaluation in organizations: A systematic approach to enhancing learning, performance and change. Thousand Oaks, CA: Sage.
Scholz, R. W. (Ed.). (1983). Decision making under uncertainty. New York: Elsevier.
Scriven, M. (1973). Goal-free evaluation. In E. R. House (Ed.), School evaluation. Berkeley, CA: McCutchan.
Simon, H. A. (1945). Administrative behavior. New York: Free Press.
Simon, H. A. (1955). A behavioral model of rational choice. Quarterly Journal of Economics, 69(1), 99–118.
Simon, H. A. (1976). Administrative behavior: A study of decision making processes in administrative organization (3rd ed.). New York: Free Press.
Simon, H. A. (1979). Rational decision making in business organizations. American Economic Review, 69(4), 493–513.
Simon, H. A. (1987). Making management decisions: The role of intuition and emotion. Academy of Management Executive, 1, 57–66.
Slovic, P., Fischhoff, B., & Lichtenstein, S. (1982). Facts versus fears: Understanding perceived risk. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases. Cambridge: Cambridge University Press.
Swanson, R. A., & Holton, E. F. III. (1999). Results: How to measure performance, learning, and satisfaction outcomes in organizations. San Francisco: Berrett-Koehler.
Todd, P. M. (2001). Fast and frugal heuristics for environmentally bounded minds. In G. Gigerenzer & R. Selten (Eds.), Bounded rationality: The adaptive toolbox (pp. 51–70). Cambridge, MA: MIT Press.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124–1131.
Twitchell, S., Holton, E. F. III, & Trott, J. (2000). Technical training evaluation practices in the United States. Performance Improvement Quarterly, 13(3), 84–110.
von Neumann, J., & Morgenstern, O. (1944). Theory of games and economic behavior. Princeton, NJ: Princeton University Press.
Wang, G., Dou, Z., & Li, N. (2002). A systems approach to measuring return on investment in HRD. Human Resource Development Quarterly, 13, 203–224.

Elwood F. Holton III is the Jones S. Davis Distinguished Professor of Human Resource, Leadership and Organization Development in the Louisiana State University School of Human Resource Education and Workforce Development.

Sharon Naquin is the founder and director of the Louisiana State University Division of Workforce Development. She also serves on the faculty of the LSU School of Human Resource Education and Workforce Development.
