
Mathematical Programming 42 (1988) 85-97, North-Holland

MODEL ASSESSMENT OBJECTIVES IN OR

Alex ORDEN

Graduate School of Business, University of Chicago, IL 60637, USA

Received December 1987

Key words: Assessment of models, management systems, modeling, operational research, validation of models.

Preface

Shakespeare knew nought about operations research, but he did know about one of its most important aspects. He warns in Henry The Fourth Part 2 that planning models which do not correspond closely to reality are dangerous. In Act I, Scene 3 the Archbishop of York and his henchmen are deliberating on whether or not to lead a force of 25 000 men into battle in order to attempt to overthrow King Henry the Fourth. Lord Bardolph says:

When we mean to build
We first survey the plot, then draw the model;
And when we see the figure of the house,
Then must we rate the cost;
Which if we find outweighs ability,
What do we then but draw anew the model
In fewer offices; or at last desist
To build at all? Much more in this great work,
Which is almost to pluck a kingdom down
And set another up, should we survey
The plot of situation and the model,
Consent upon a sure foundation,
Question surveyors, know our own estate,
How able such a work to undergo,
To weigh against his opposite; or else
We fortify in paper, and in figures,
Using the names of men instead of men:
Like one who draws the model of a house
Beyond his powers to build it; who half through,
Gives o'er, and leaves his part-created cost
A naked subject to the weeping clouds,
And waste for churlish winter's tyranny.


Of all of the pioneers and subsequent leaders in research on mathematical programming, Martin Beale was the most active in the practice of operations research. I believe he would have encouraged my effort in this paper to examine the nature of the assessment of real-world models for use in managerial planning and decision-making.

1. Issues

Operations Research (OR) became an identified subject during the 1940s and 1950s and was declared to be the introduction of science into decision-making. It was felt that OR models can in principle, as in the natural sciences, be either satisfactorily validated and accepted, or else falsified and rejected. It was clear from the start that adequate validations are very difficult, and cannot ordinarily be accomplished; but that was considered to be a matter of practicality which does not detract from the underlying scientific nature of the field. That perspective has remained largely unchallenged.

OR model developers seem for the most part to view adequate model validation in the sense of the natural sciences to be a proper idealization which, in practice, cannot be achieved. Some might add that they consider that outlook simplistic, but see little merit in attempting to revise the "philosophy" of the field.

I shall in this paper take the contrary position that probing into the nature of the assessment of OR models is in order. I view it as a very serious problem that work on OR models regularly takes place in the ambiguous situation where it is believed on the one hand that the most fundamental aspect of model assessment is rigorous model validation in the sense of the natural sciences, and on the other hand that not only is such validation impractical, but the very meaning of validation in OR is unclear.

I conjecture that this confusing situation has been seriously detrimental to the OR field. To dramatize that concern I submit the observation that clarification of the role of model validation in the natural sciences was a very difficult process which occupied several centuries and turned out to be a crucial conceptual development in human history. OR does not have to cope with obstructions such as the Inquisition, but it may be that if it is ultimately going to have quite a large role in human affairs, it must be unshackled from ill-structured underlying concepts.

The kinds of issues which I believe should be addressed - which this paper is an effort on my part to explore - are:

- If it is not satisfactory to treat the model validation concept of the natural sciences as the basic aspect of model assessment in OR, what should replace it?

- The concept of model validation by independent experimentation and/or observation in the natural sciences is strikingly simple. Might an analogous concept of the proper nature of model assessment in OR also be reasonably simple?


- Might such a concept put the conduct of OR model development and use on firmer foundations than it now has, and thereby strengthen the OR field?

Several further questions and observations will perhaps make these abstract questions more tangible.

It is taken for granted that in practice every OR model is partially validated in various ways, and that other aspects of model assessment such as (rough) cost-benefit assessments on development and use of the model occur. However, when models are presented in journals or at conferences, model assessment is practically completely ignored. That raises the question:

- Is model assessment sufficiently straightforward that there is little need to present it?; or
- Is model assessment very elusive, and difficult to address satisfactorily?; or
- Is assessment so unique to each modeling situation that it is of no interest to anyone other than the participants?

Whatever the case may be, the situation in OR is that
- When OR models are implemented substantial efforts go into various aspects of model validation and other aspects of model assessment, but when models are presented in public it is very unusual for any significant information to be provided on either the procedures used to assess the model or on the outcomes of those procedures.

Contrast that if you will with the situation in the natural sciences, where dissemination of knowledge on the validation of models occupies a pivotal role in the very nature of science. Granted that OR model assessment activities take place within the confines of the organizations in which the models are developed, I believe one should nevertheless ask

- Is the suppressed state of dissemination of knowledge about model validation/assessment a factor which bears significantly on the spread of use of OR in management decision-making?

- Without dissemination of information on model assessment, what basis is there for anyone other than the direct participants to judge OR models?

The OR community for the most part accepts what may be called a default position - that of believing without appreciable evidence other than verbal testimonials that the developers of OR models have done a reasonable job of model validation/assessment. Possibly so, but I believe that this passive outlook should be challenged. It is all too commonly evident to the participants in model implementations, and to close observers, that there are large differences within and among model developers and model users on how use of the model benefits the organization, and ambiguities in how managers incorporate model-based information into decision-making. Under the circumstances which prevail one must ask: If the services of a model to an organization are unclear, how definite can the assessment have been on whether the model is doing what it was intended to do?

A possible response to this last question is that the validation of OR models can be largely independent of the uses to which they are put in support of management decision-making. I am sceptical of that position. Reflecting the views which I have expressed, the theme of this paper is:

The doctrine that models (and hypotheses in other forms) must be validated by independent experiments and/or observations - which is universally considered to be fundamental to the natural sciences - has little merit as a basic concept for OR. Remote though it may currently appear, perhaps a well-defined model assessment doctrine which is intrinsic to OR will gradually emerge and become widely accepted. I wish to explore lines of thought which might lead to such a doctrine. Whether any such doctrine will ever exist or not, I believe it is worthwhile to challenge the existing dogma.

To explore such matters requires venturing into ill-defined, uncharted territories. In the present situation emergence of a doctrine of model assessment which would put the OR field on stronger foundations seems unlikely. It may be contingent on whether major advances will occur in understanding the nature of management decision-making. I believe nevertheless that the OR community should concern itself actively with issues of this kind. During the past 10 years or so the OR field has been in what may be called a "recession". If and when the field regains the momentum which it had from the 50s to the 70s, mature understanding of its underlying concepts will, I believe, be important.

The next two sections offer an analysis of the nature of OR model assessment: Section 2 - headed "Orientations" - fixes several perspectives, and proceeding from that, Section 3 structures the objectives of model assessment.

2. Orientations

(1) The issues in the assessment of OR models which are developed in real-world situations for actual use by managers differ in important respects from the issues of generic and theoretical models whose purpose is improved general understanding of management problems/solutions, and possible subsequent adaptation to specific situations. This paper deals only with the former.

The literature in the OR field refers in the main to real-world cases as "applications" - implying thereby that the fundamentals lie in hypothetical model forms or in algorithms. In order to avoid this somewhat invidious phrasing, and to emphasize that the discourse addresses models which are developed for actual use, I will refer to such models throughout the paper as "substantive-OR" models.

(2) The paper is oriented toward "traditional OR" - the development and use of fairly large, computer-based models for dealing with management problems of the kind with which Martin Beale was mainly concerned. I include under this heading not only models which make use of mathematical programming, stochastic process forms, and other mathematical structures, but also substantial simulations, heuristics, and techniques such as project scheduling networks (PERT/CPM) and "material requirements planning" (MRP) which can be expressed equally well in common sense form as mathematically. However, I will consider the current movements under headings such as "decision support systems" (DSS) and the linkage of OR to artificial intelligence/expert systems (AI/ES) - whose relations to traditional OR are not yet clear - to be outside the domain of discourse.

(3) Whatever the best principles for the assessment of substantive-OR models may be, I believe they must deal not only with how well a model represents a pertinent real-world situation, but also with how worthwhile - in financial terms or otherwise - it is for the organization to develop and use the model. On that basis a succinct preliminary indication of the nature of the assessment of substantive-OR models which I espouse is

Assessment = Legitimization + Valuation

where legitimization means determination that a model corresponds acceptably to reality, and valuation means assessment of advantages/disadvantages of development and use of the model in comparison with no model or other models. Loosely, legitimization may be associated primarily with model developers, and valuation primarily with model users.

(Note that if the assessment of generic/theoretical OR models were being considered, the issue of value to an organization - as opposed, say, to explanatory power of the model - might or might not come into consideration.)
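[Editorial illustration.] As a minimal sketch of the decomposition - in Python, with every class and field name a hypothetical choice of this illustration, not anything prescribed by the paper - an assessment record split into legitimization and valuation components might look like:

    from dataclasses import dataclass, field

    @dataclass
    class Legitimization:
        # Determination that the model corresponds acceptably to reality:
        # parts validated by measurement ("hard") vs. parts accepted on
        # judgemental credibility ("soft"), in the paper's later terms.
        validated_parts: list = field(default_factory=list)
        credible_parts: list = field(default_factory=list)

    @dataclass
    class Valuation:
        # Advantages/disadvantages of developing and using the model,
        # in comparison with no model or with alternative models.
        benefits: dict = field(default_factory=dict)  # e.g. {"reduced inventory": 120000.0}
        costs: dict = field(default_factory=dict)     # e.g. {"development": 80000.0}

    @dataclass
    class Assessment:
        # Assessment = Legitimization + Valuation
        legitimization: Legitimization
        valuation: Valuation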

(4) Enlarging the domain of issues which might be brought into consideration, consider:

(a) Model legitimization,
(b) Model valuation,
(c) Legitimization of valuation,
(d) Participant characterization.

(c) refers to whether valuations of models can be validated by experiments, observations, or perhaps other means. (d) refers to analysis of attitudinal, behavioral, cognitive, or adversarial characteristics of model users, developers and sponsors.

In these terms my present view/judgement is:
- Re (c): that valuations of advantages/disadvantages of substantive-OR models are usually market-like indications of consumer preferences; that such valuations generally cannot be validated in any terms which can be considered scientific; that legitimization-of-valuation of OR models is for the most part a futile line of thought.
- Re (d): that personal characteristics of participants are best viewed as environmental aspects of models - on which the development and use of models are contingent - but which are not intrinsic to the models.
- Accordingly, whatever is intrinsic to model assessment per se - and not to the "marketplaces" for models nor to their "environments" - lies in (a) and (b).

(5) (Replicability) I use the terms validity and validation throughout the paper to refer to measurement-based determination that a model overall, or components of a model, are acceptably accurate representations of real matters; and I consider replicability of experiments and/or conditions of observation to be necessary for validation. On that basis the most fundamental difference between model assessment in the natural sciences and that in OR is that
- The natural sciences are oriented toward validation of models as a whole.
- In substantive-OR satisfactory replication of all pertinent conditions - including the human factors which I have called "environmental" - is rarely if ever possible. Usually there are parts of a model which can be fairly well validated. The legitimacy of other parts, and of the model as a whole, comes down to judgemental credibility.

3. Model assessment objectives

In the assessment of models in the natural sciences there is a single, fundamental objective: To validate or falsify models overall by means of independent experimentation and observation. That simple principle cannot be carried over intact to OR. Some difficulties are:
- There are multiple assessment objectives.
- It is difficult to identify the objectives unambiguously.
- As in any domain in which there are multiple objectives, there are uncertainties/ambiguities concerning their relative importance.

In this Section I will propose a set of objectives - which may be viewed as an extension of the single objective in the natural sciences - which seems to characterize the assessment of substantive-OR models. The assessment objectives will be associated with a set of functions/purposes of OR models.

Chart 1 is a classification structure for the cognitive functions of substantive-OR models. Three of the functions (labeled L1, L2, L3) call for model legitimization, and three (labeled V1, V2, V3) call for model valuation.

Chart 1
Cognitive functions of OR models

Part 1. Functions which call for legitimization
Symbolic representation of:
- Replicable processes and relationships (L1)
- Nonreplicable processes and relationships (L2)
- Real-world objectives/subobjectives (L3)

Part 2. Functions which call for valuation
- Systematization of managerial procedures (V1)
- Enhancement of the quality of work done by managers (V2)
- Achievement of evident economic and/or operational benefits to the organization by means of the model (V3)


The term "cognitive" suits the first five but not the last of the six categories. For simplicity I will refer to the entire set as cognitive functions.

The categories are not entirely independent, but each is sometimes the most prominent function of a model. Their relative importance varies from case to case.

Viewed from some perspective other than that of model assessment this form of classification of the functions of models might not serve any useful purpose, but model assessment is important, and on that score each type of function can be associated with

- An idealization of what the function should accomplish.
- A model assessment objective.

Chart 2 adjoins an idealization and an assessment objective to each of the cognitive functions. Chart 3 is a digest of Chart 2. I will discuss the above "system" from several points of view. The topics are: (1) Validation of the "nucleus" of a model, (2) Credibility, (3) Systematization of managerial procedures, (4) "Environments".

(1) Validation of the "nucleus" of a model

OR models range widely from those which represent situations which are largely, but not entirely, fairly replicable, to those which represent situations which are largely nonreplicable. Modeling the replicable aspects of a real situation commonly serves as a "nucleus" to which representations of nonreplicable matters are adjoined. With due effort the nucleus of the model is validated along lines which are comparable to (but not as rigorous as) scientific experimentation/observation. Thus model assessment ordinarily includes validation of a nucleus (and sometimes other aspects) of the model (L1 in the charts). This seems to be the continuing basis of the tradition that OR models are "scientific." In actuality, in comparison with other assessment objectives, validation of a model's nucleus - although not to be ignored - is often relatively unimportant.
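[Editorial illustration.] As a minimal sketch of what such nucleus validation might look like in practice - the function name, the data, and the tolerance below are all hypothetical, chosen for this illustration only - one can compare a replicable model part's predictions against repeated measurements:

    def validate_nucleus(predictions, observations, tolerance):
        """Measurement-based check of a replicable model part: accept the
        part if its mean absolute prediction error, over repeated
        observations of the same conditions, falls within tolerance."""
        assert len(predictions) == len(observations) > 0
        errors = [abs(p - o) for p, o in zip(predictions, observations)]
        return sum(errors) / len(errors) <= tolerance

    # Hypothetical example: predicted vs. measured daily throughput.
    accepted = validate_nucleus([102.0, 98.5, 101.2],
                                [100.0, 99.0, 100.5], tolerance=2.0)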

(2) Credibility

Substantive-OR models consist in part of structural features and parameter values, and commonly of representations of management objectives, whose legitimacy is based on judgemental credibility. "Credibility" (in this context) is an ambiguous concept which - as far as I can see - has no firm meaning other than coalescence of judgement, bolstered at times by statistical evidence. Problematic as that is, credibility issues often predominate over validity issues. (Establishment of validity of parts of models, and credibility otherwise, could be called "hard" and "soft" model legitimization.)


Chart 2
Model assessment idealizations and objectives

Legitimacy assessments

L1. Cognitive function: To represent replicable aspects of the real situation.
Idealization: "Tight" predictive correspondence of parts of the model to reality.
Assessment objective: Determination of predictive validity of parts of models via measurements and/or applicability of tight controls.

L2. Cognitive function: To represent nonreplicable aspects of the real situation.
Idealization: Excellent (prophetic?) judgement-based representation of nonreplicable real matters.
Assessment objective: Appraisal of credibility of the structural and parameter aspects of the model which represent nonreplicable real matters; and appraisal of trackability of those matters.

L3. Cognitive function: To represent real-world objectives and subobjectives.
Idealization: Complete "harmony" among executives, model developers, and model users on the real objectives; and on correspondence of the modeling of objectives to the real objectives.
Assessment objective: Appraisal of congruence among the participants with regard to the real objectives and their model representations (taking acceptable tolerances into account).

Value assessments

V1. Cognitive function: To systematize managerial procedures.
Idealization: (a) Production of good decisions by the model; formation of numerous alternatives, and selection among them by managers, is reduced or eliminated. (b) Identification of what data are relevant and what are irrelevant in specified decision-making activities.
Assessment objective: Appraisal of the efficiency of managers in making decisions, with vs. without the model.

V2. Cognitive function: To enhance the quality of the work done by managers.
Idealization: Specific determination of types and degrees of contribution of use of the model to such aspects of managerial cognition as: ability to look ahead; elimination of biases; understanding of operating factors in adequate detail; ability to optimize or satisfice.
Assessment objective: Identification and evaluation of ways in which the model facilitates effective decision-making.

V3. Cognitive function: To yield economic or operational value to the organization.
Idealization: Optimality of the investment of the organization in developing the model.
Assessment objective: Cost vs. benefit valuation (largely judgemental) of the development and use of the model.


Chart 3
Cognitive function: Assessment objective

Legitimization objectives
L1. To represent replicable matters: Validate predictive reliability of parts of models.
L2. To represent nonreplicable matters: Appraise credibility and trackability.
L3. To represent management objectives: Appraise congruence of participants.

Valuation objectives
V1. To systematize managerial procedures: Evaluate effects on managerial efficiency.
V2. To enhance cognitive qualities of managers: Identify and evaluate the kinds of cognitive support.
V3. To yield evident benefits to the organization: Estimate model implementation benefits and costs.
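[Editorial illustration.] Chart 3's mapping is explicit enough to be written down directly; as a minimal, purely illustrative sketch in Python:

    from enum import Enum

    class CognitiveFunction(Enum):
        L1 = "Represent replicable matters"
        L2 = "Represent nonreplicable matters"
        L3 = "Represent management objectives"
        V1 = "Systematize managerial procedures"
        V2 = "Enhance cognitive qualities of managers"
        V3 = "Yield evident benefits to the organization"

    # Digest of Chart 3: each cognitive function determines an
    # assessment objective.
    ASSESSMENT_OBJECTIVE = {
        CognitiveFunction.L1: "Validate predictive reliability of parts of models",
        CognitiveFunction.L2: "Appraise credibility and trackability",
        CognitiveFunction.L3: "Appraise congruence of participants",
        CognitiveFunction.V1: "Evaluate effects on managerial efficiency",
        CognitiveFunction.V2: "Identify and evaluate the kinds of cognitive support",
        CognitiveFunction.V3: "Estimate model implementation benefits and costs",
    }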

(3) Systematization of managerial procedures

It is customary to presuppose that the systematization of managerial procedures (V1) which usually accompanies the use of an OR model is incidental to the value of the model in heightening the abilities of managers (V2) or benefiting the organization as a whole (V3). On the contrary, it can easily be observed that frequently the main value of a model lies in the discipline which it imposes on management procedures. Procedural systematization of decision-making reduces "noise" in managerial interactions, reduces time lags, organizes co-ordination, etc. For example, users of project scheduling network models (PERT/CPM) often freely state that their value lies, not in any expectation of improved accuracy or approach to optimality of their schedules, but rather in the routinized, systematic procedures for schedule decision-making which result. In general, when forecasts are highly uncertain, and when the decisions to be made have wide tolerances (e.g., mid to long term budgets), procedural systematization is likely to be an important aspect of model valuation.

(4) Environments

The structure of model assessment formulated in the charts deals with the issues which I judge to be intrinsic to OR models. It does not deal with matters, such as attitudes of participants, which (as previously indicated in Section 2) seem to me to be "environmental." Work by others on those environmental issues will be brought up in Section 4.

At some level a distinction along the lines of intrinsic versus environmental must be made. No matter how this is done, it may be that the greatest barrier to satisfactory understanding of model assessment in OR is - as opposed to the high degree to which many natural science phenomena can be decoupled from their environments - that in general in the OR domain, no matter how one attempts to differentiate the subject of analysis per se (a model and that which is modeled) from the relevant environment, the interactions remain so "dense" that the separation has little value. Whether that is the case in substantive-OR is unknown.

4. Other explorations

I turn now to the question:
- In what ways, if any, have there been substantial, cohesive efforts in the OR community to advance understanding of model assessment, or related aspects of the epistemology of substantive-OR, beyond the "broad-brush", largely simplistic treatments which pervade the literature in the field?

There have been two developments of that kind which I consider relevant to this paper. Although they address issues which differ in some ways from those with which I have dealt, they are areas of exploratory work which, like my own, are motivated by the belief that better understanding of the conceptual framework in which substantive-OR takes place is badly needed.

The lines of effort to which I refer are: (1) OR "implementation research", (2) Perspectives on the assessment of energy policy models.

(1) Implementation research

The term "implementat ion research" has been used fairly widely in OR circles to refer to theoretical and observational research on the behavior of managers with regard to the development and use of OR models. The research has been along lines such as:

- Effects of cognitive styles of managers on their use of OR. -Analys is of effects of personality differences between managers and OR

specialists.

-Adap ta t i on to OR of psychological frameworks for understanding behavioral change in individuals and organizations.

I suggested earlier that such matters be put under the heading, the environments(s) of OR models; and on that basis implementation research is complementary to the set of (intrinsic-to-OR) model assessment objectives which I have formulated.

During the 1970s about 200 papers appeared in the implementation research area [1]. Some of that literature dealt distinctly with OR, and some of it made little or no distinction between OR and management information systems. During the 80s research of that kind which focuses specifically on OR has diminished, but implementation research on blends of OR and computer information systems has continued under the banner, "decision support systems" (DSS).

Since implementation research contends with the difficulties of understanding human behavior in dealing with OR, and since it does not as yet seem to have had a noticeable influence on OR modelers or on the teaching of OR, I consider it - like my analysis in Section 3 of assessment objectives - to be an exploratory area.


(2) Assessment of energy policy models

The most intensive efforts in the history of OR to perform and to understand model assessment took place as part of the widespread development of models for analysis of proposed American oil and energy policies during the 1970s. In linking those efforts to the aims of this paper several qualifications should be noted:

(a) The models were structured in regional, national, and international terms, and were intended for analysis of energy issues at societal and governmental levels. The activity aggregations, time horizons, and model purposes were outside the realm of ordinary OR in its usual settings in industrial and other organizations.

(b) The modeling activities (of government agencies, universities, consulting firms, and business associations) were funded primarily, but not entirely, by the Federal Government. In order to limit politicization of the models, the U.S. Congress mandated that there be independent model assessment activities. Accordingly, appropriate government agencies organized and provided funds for extensive model assessment programs which were separate from model development and use.

(c) The methods used in developing the models were drawn from econometrics as well as from OR. Some of the basic concepts originated in the large scale linear programming models of the oil industry. On the whole, the modeling concepts seem to have been oriented somewhat more toward OR than toward econometrics.

Two aspects of the model assessment activities which occurred seem to me to have been particularly noteworthy:

(1) It was an unusual opportunity in the history of OR to distinguish model assessment from model development and use. Various forms of independent assessment were explored; among them the establishment and operation of independent model assessment "laboratories".

(2) A number of conferences were held (from 1975 to 1980) on model assessment, and proceedings were published. There were numerous other publications. The volume of literature on model assessment which appeared in conjunction with the energy policy models of the 70s probably exceeds all other literature on OR model assessment which has ever appeared otherwise [2, 3]. (Very little of it went into the primary professional journals in OR, but it is accessible.)

I will comment in just one way on all those efforts: As noted above, one line of exploration of means of model assessment was the establishment of model assessment centers which would operate independently of model developers and users. Nominally that represents applying to OR models the spirit of independent validation which is fundamental in the natural sciences. That may seem attractive in principle, and it is viewed somewhat favorably in the literature to which I have referred; however, my own view (partly on the basis of that same literature) is that it is unrealistic to expect independent assessment centers to become a significant aspect of policy model assessment. The models are too complex, and their purposes are too diverse.

The considerable efforts on advancing understanding of model assessment which occurred in the energy policy modeling area do not seem to have led to any widely accepted conclusions. As in the case of implementation research, my main aim has been to inform you of a significant body of exploratory work on model assessment.

5. Hypotheses

This paper began by asking whether any relatively simple concept can have a role in substantive-OR which is analogous to the doctrine of model validation in the natural sciences. While model validations in the natural sciences are extremely diverse, and may be very complex, they are uniformly governed by the simple underlying concept that models must be tested for predictive reliability by independent experimentation/observation. In substantive-OR not only are model assessments very diverse, and generally complex, but the assessment objectives are also diverse and complex. The possibility of a guiding concept which, although simple, has a vital role lies, I believe, not in model assessment objectives per se, but rather in calling for an additional step: deliberate realistic selection (or weighting) among recognized cognitive (and/or economic or operational) functions of models, thereby determining the appropriate assessment objectives. I hypothesize that treating this "simple" step as a fundamental requirement could in a sense be analogous to requiring experimental/observational validation in the natural sciences. The assessment of OR models would begin to "make sense".
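[Editorial illustration.] As a minimal sketch of the step hypothesized here - restating Chart 3's digest with plain string keys, and with the weights, threshold, and worked case all hypothetical choices of this illustration - the selection/weighting step might look like:

    # Digest of Chart 3: cognitive function -> assessment objective.
    ASSESSMENT_OBJECTIVES = {
        "L1": "Validate predictive reliability of parts of models",
        "L2": "Appraise credibility and trackability",
        "L3": "Appraise congruence of participants",
        "V1": "Evaluate effects on managerial efficiency",
        "V2": "Identify and evaluate the kinds of cognitive support",
        "V3": "Estimate model implementation benefits and costs",
    }

    def select_objectives(weights, threshold=0.15):
        """Deliberate selection/weighting step: given judgemental weights
        over the six cognitive functions, return the assessment objectives
        of the functions that matter in this case, most important first."""
        chosen = sorted((w, f) for f, w in weights.items() if w >= threshold)
        return [ASSESSMENT_OBJECTIVES[f] for w, f in reversed(chosen)]

    # Hypothetical case: a scheduling model valued mainly for procedural
    # systematization (V1), with a fairly well validated nucleus (L1).
    objectives = select_objectives(
        {"L1": 0.30, "L2": 0.05, "L3": 0.05, "V1": 0.45, "V2": 0.15, "V3": 0.00})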

The trouble lies in the absence of recognized cognitive/economic/operational benefit functions of OR models. It is normal at present:
(a) To treat attainment of evident economic benefit as the primary function whether or not that is actually the case.
(b) When other "functions" of OR models are considered - e.g., along lines suggested in Section 3 - for such assessments to be entirely ad hoc. There is no discernible conceptual framework.

The source of the exploratory set of cognitive functions of models and the associated assessment objectives in Section 3 is my belief that if a random sample of OR models were carefully investigated, one would find each of those functions most significant in a number of cases, and that one would conclude that in general no one of them has a more prominent role in OR than the others.

I have little doubt, however, that if some other OR professionals would undertake, similarly, to devise a classification structure for cognitive/economic functions of models and associated assessment objectives, the collective result would be a messy collection of ideas with little or no consensual structure. If there is merit in the hypothesis that explicit selection among cognitive functions of models should be a basic step which would determine the assessment objectives, then under present conditions nothing can be done, since there have been no efforts to go beyond unstructured, ad hoc identification of the cognitive/economic functions of OR models.

While the primary aim of Section 3 was to delineate my views on model assessment objectives, whether or not that version has merit, its purpose is also to indicate by illustration that efforts are needed to establish a generally recognized conceptual structure for OR model assessment.

I said earlier that during the 70s - at least in comparison with the two previous decades - substantive-OR entered a "recession". However, research on algorithms, computation systems, and underlying mathematics continued with unabated vigor. Drawing on that continuing research and development, I believe substantive-OR modeling will regain momentum. It currently seems to be doing so in the financial market industries. I believe a renewed thrust will be more successful and more stable if the OR community will seek and find better understanding of the model assessment issues which I have raised than if it continues to drift aimlessly on those matters.

References

[1] R. Doktor, R.L. Schultz and D.P. Slevin (eds.), The Implementation of Management Science (North-Holland Publishing Co., Amsterdam, 1979).

[2] S.I. Gass (ed.), Validation and Assessment of Energy Models, Proceedings of a Workshop at the U.S. National Bureau of Standards, January 1979, NBS Special Publication 569.

[3] S.I. Gass, "Decision-aiding models: Validation, assessment, and related issues for policy analysis," Operations Research 31 (1983) 603-631.