
DYNAMIC PROGRAMMING AND

GEOGRAPHICAL SYSTEMS

Ross D. MacKinnon

Research Report No. 13

Environment Study Under a grant from

BELL CANADA LTD.

To be presented at the International Geographical Union's Commission on Quantitative Methods, Ann Arbor Invitational Conference, August 8-10, 1969.

Component Study No. 8 Transportation Systems

Department of Geography and

Centre for Urban and Community Studies University of Toronto

July, 1969.


Preface

This is Report No. 13 in the series on the Environment Study prepared in the

Department of Geography and the Centre for Urban and Community Studies under a grant

from Bell Canada, the first to be released under Component Study No. 8. It is a

thorough review of the literature on the analysis of geographical systems within

dynamic programming frameworks.

The report considers dynamic planning and control processes and their geographical

implications. Thus it complements the more descriptive forecasting approaches

proposed by Professor Curry in Research Report No. 12. It represents a portion of

the technical and theoretical framework within which the dynamics of geographical

processes of Eastern Canada are to be studied. With normative models such as

dynamic programming, objectives must be specified, but the sensitivity of the

resulting patterns to alternative goal structures may be tested. It is hoped that

these and other dynamic frameworks will be applied to rural and urban as well as

transportation processes.

Initially, the dynamic programming approach is outlined, followed by a detailed

discussion of significant geographical applications of the techniques and finally

an evaluation of practical advantages and limitations.


Table of Contents

1. Introduction

2. Basic Concepts of Dynamic Programming

2.1 The Systems Approach
2.2 Deterministic Models
2.3 Stochastic Models
2.4 Adaptive Models
2.5 Nonserial Systems

3. Applications of Dynamic Programming

3.1 General Applications
3.2 Transportation Systems
3.210 Optimal Path Problems
3.211 The Shortest Path Problem
3.212 Generalized Euler Paths
3.213 Travelling Salesman Problem
3.22 Transportation Flow Problems
3.23 Network Construction Problems
3.231 Optimal Staging of Transportation Construction
3.232 Location of a Routeway Connecting Two Points
3.3 Regional and Locational Allocation Problems
3.4 Water Resource Management
3.5 Agricultural Economics

4. Some Fundamental Difficulties in the Application of Dynamic Programming

4.1 Computational Difficulties 4.2 Informational Requirements

5. Significance of the Dynamic Programming Approach for Geographic Problems

Bibliography


DYNAMIC PROGRAMMING AND

GEOGRAPHICAL SYSTEMS

1. Introduction

Although dynamic programming can no longer be characterized as a "new"

approach to systems optimization, it is not widely known even to mathematically

oriented geographers. One reason for this undoubtedly lies in the fact that

geographers have traditionally avoided normative frameworks, preferring instead

to describe selected aspects of past, current, and, on occasion, future

worlds, unencumbered by any explicit goal orientations. Even the better

known technique of linear programming has been utilized only sparingly by

geographers in spite of its origins in an essentially geographical problem.

This study then ignores the apparent bias of geography against normative

models. The dynamic programming approach is first outlined in its various

formulations. Secondly, some of the significant geographical applications

of dynamic programming are discussed in some detail. Finally some of the

advantages and limitations of the approach are briefly considered.

In this review, emphasis is placed on the substantive applications of

dynamic programming. Computational difficulties are frequently mentioned,

but strategies by which these can be overcome are not discussed in detail.

Only discrete time problems are considered. Thus, the dynamics of all the

problems are expressed in terms of simple difference equations rather than

differential-difference equations. The relationships between dynamic

programming and other control-theory models are not discussed.


2. Basic Concepts of Dynamic Programming

2.1 The Systems Approach

In recent years, there has been a growing movement in geography and other

disciplines towards the development of common frameworks which might stimulate

research having broad applicability in the study of a wide variety of phenomena.

Increasing emphasis is being placed on models which may describe the behaviour

of many otherwise unrelated processes. This search for theory or theoretical

frameworks common to a wide range of phenomena is one of the characteristics

of the systems approach which has become increasingly fashionable in the past

few years. Important aspects of the systems approach include feedback, feed-

forward, control, information, entropy, goal-seeking and multidimensional

dynamic relationships. Although an increasing number of geographers use these

and other systems concepts, very few have explicitly adopted mathematical

systems approaches in their research. Among the simplest of such approaches

is dynamic programming.

In both its formulation and solution procedures, dynamic programming is

markedly different from other types of mathematical programming. On the one

hand, it is extremely general so that a wide variety of problems can be form-

ulated as dynamic programming problems. On the other hand, there are no

computer programming packages which can be used to solve all or most of the

problems so formulated.*

As Nemhauser (28) states, "Multistage analysis is a problem solving

approach rather than a technique." The researcher must translate his problem

*The program of Bellmore, Howard, and Nemhauser (9) is perhaps the most useful program currently available.


into a dynamic programming format, and even then, it is within his discretion

to specify the optimization technique which is to be used for each stage of

the process. This technique may be complete enumeration, linear or nonlinear

programming, Fibonacci or some other search technique. In summary, dynamic

programming is an approach which specifies a general procedure whereby some

complex and/or dynamic control problems may be solved sequentially, combining

optimal sub-problems in such a way that an optimum solution to the total

problem is obtained.

2.2 Deterministic Models

A more thorough discussion of the various formulations of dynamic

programming problems can be found in the ever increasing number of fine text-

books; for example, Bellman (3), Bellman and Dreyfus (6), Beckmann (2), Jacobs

(21), Nemhauser (28) and White (30). The following all too brief summaries

are presented to make the subsequent review of applications more meaningful.

The dynamic programming approach solves a decision-making problem in a

series of stages. In its simplest discrete deterministic form, the following

aspects of the process are known:

(i) the initial state X of the system or process (this is a numerical descriptor which may be either a scalar or a vector);

(ii) the set of possible decisions d which may be taken at each stage of the process;

(iii) the transfer function T which maps a state-decision pair into new system states;

(iv) the reward (or cost) function r which summarizes the immediate payoff or cost resulting from a given transformation;

(v) a criterion function f, a composition of all of the individual stage rewards, which is to be maximized or minimized;


(vi) N, the number of stages in the process.

Wherever possible, the above notation is used in the remainder of this paper.

Figure 1. A serial multistage decision process: at stage t the system processor receives the input state X(t) and the decision d(t), and produces the stage reward r(t) and the output state X(t-1)*. The process runs from stage t = N down to t = 1.

*Note that this conforms with the convention of numbering the stages in reverse order, i.e., X(t) is t stages from the end of the process.

The problem as summarized in Figure 1 is to select the sequence of feasible

d(t), t = 1, 2, ..., N, such that the criterion function f is maximized or

minimized. Such a sequence of decisions is called the optimal policy.

The solution procedure depends upon Bellman's Principle of Optimality (3)

which states that "an optimal set of decisions has the property that whatever

the first decision is, the remaining decisions must be optimal with respect

to the outcome which results from the first decision." Thus, with only one

stage remaining, the problem becomes a single stage optimization problem,

the solution of which is in the form of a function of the input state X(1).

Decision d(1) is chosen such that r_1(X(1), d(1)) is maximized. With two

stages remaining, d(2) is chosen as a function of X(2) so that the composition

of the return from that stage and the subsequent stage is maximized.


In general then, the recursion equations as adapted from Nemhauser (28)

are the following:

(1)  f_t(X(t)) = max_{d(t)} [r_t(X(t), d(t)) o f_{t-1}(T_t(X(t), d(t)))],  t = 2, 3, ..., N

     f_1(X(1)) = max_{d(1)} [r_1(X(1), d(1))],  t = 1

where "o" is a composition operator (generally addition, multiplication, or

selecting the maximum or minimum of (r_t, f_{t-1})).

In the continuous case, the transfer function takes the form of a system

of differential equations; in the discrete, analytic case, it is in the form

of first order difference equations. For example, X(t) = X(t+1) - d(t+1). The

form of the recurrence relation should be interpreted much more generally,

however. Note that the transfer function T and the reward function r are

both subscripted. This implies that neither of these functions need be

invariant throughout the entire process. Indeed, the relationships may be

in the form of tabulated data. Thus, many systems which cannot be completely

described analytically by differential and/or difference equations may be

optimized using a dynamic programming approach.

Note that the solution to equation set (1) yields a sequence of d(t) for

a given value of X(N). Moreover, once the equations have been solved, different

values of X(N) can be postulated to determine the sensitivity of the optimal

policy and criterion function to different initial states, budget levels for

example. By solving for a particular initial state, we can obtain with little

additional effort the solution for the same system in all feasible initial

states.
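The backward recursion (1) can be sketched in a few lines of modern code. The example below is entirely hypothetical: a budget X(N) is allocated over N stages, the decision at each stage is the integer amount committed, and a square-root stage reward (diminishing returns) stands in for any tabulated r_t:

```python
import math

def solve(N, budget, rewards):
    """Backward recursion f_t(x) = max_d [r_t(x, d) + f_{t-1}(x - d)]
    over integer states 0..budget; returns value tables and best decisions."""
    f = [[0.0] * (budget + 1) for _ in range(N + 1)]   # f[t][x]
    best = [[0] * (budget + 1) for _ in range(N + 1)]  # optimal d at (t, x)
    for t in range(1, N + 1):
        for x in range(budget + 1):
            for d in range(x + 1):                      # feasible decisions
                v = rewards(t, x, d) + f[t - 1][x - d]  # composition "o" is addition
                if v > f[t][x]:
                    f[t][x], best[t][x] = v, d
    return f, best

# Hypothetical stage reward: diminishing returns on the amount committed.
f, best = solve(N=3, budget=6, rewards=lambda t, x, d: math.sqrt(d))

# Recover the optimal policy by tracing forward from the initial state X(N).
x, policy = 6, []
for t in range(3, 0, -1):
    policy.append(best[t][x])
    x -= best[t][x]
```

Note that, exactly as in the text, the tables f[t][x] hold the answer for every feasible initial state at once; tracing forward from a different budget requires no further optimization.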


2.3 Stochastic Models

In its simplest form the dynamic programming problem under risk is very

similar to deterministic formulations.* In addition to the state and decision

variables, a set of random variables s(t), t = 1, ..., N with independent and

known probability distributions is introduced. These variables enter into

the transfer and reward functions and the objective is modified so that the

expected value of the criterion function is to be maximized or minimized.

Thus the recurrence relations are now the following:

(2)  f_t(X(t)) = max_{d(t)} Σ_{s(t)} p_t(s(t)) [r_t(X(t), d(t), s(t)) o f_{t-1}(T_t(X(t), d(t), s(t)))],  t = 2, ..., N

     f_1(X(1)) = max_{d(1)} Σ_{s(1)} p_1(s(1)) [r_1(X(1), d(1), s(1))],  t = 1

where p_t(s(t)) is the probability of the random variable taking on value s(t)

in stage t and "o" is a composition operator (addition or multiplication).

Note that the solution to equation set (2) is in the form of (a) total

expected rewards and (b) conditional decisions. Only the initial decision d(N)

is determined since only the initial state X(N) is known with certainty. The

optimal policy is thus not a rigid plan, but rather a sequence of "if-then"

statements which allow the planner to respond to the future states of the

system as they become apparent (or indeed allow such responses to be completely

automated).
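Recursion (2) can be sketched in the same style; the demand distribution, payoff, and transfer function below are invented for illustration, with the expectation over s(t) taken inside the maximization:

```python
# All numbers here are hypothetical: a two-point demand distribution,
# a linear payoff, and an integer stock state capped at CAP.
P = {0: 0.4, 1: 0.6}           # known probabilities of the random demand s(t)
N, CAP = 3, 4                  # number of stages and maximum stock level

def reward(x, d, s):
    """Hypothetical stage payoff: revenue on satisfied demand minus order cost."""
    return 5 * min(x + d, s) - 2 * d

def transfer(x, d, s):
    """Next state: stock remaining after ordering d and meeting demand s."""
    return max(0, x + d - s)

f = {x: 0.0 for x in range(CAP + 1)}            # f_0 = 0 for every state
policy = {}
for t in range(1, N + 1):
    g = {}
    for x in range(CAP + 1):
        # expected value of each feasible decision (orders limited by capacity)
        ev = {d: sum(p * (reward(x, d, s) + f[transfer(x, d, s)])
                     for s, p in P.items())
              for d in range(CAP - x + 1)}
        d_star = max(ev, key=ev.get)
        g[x], policy[(t, x)] = ev[d_star], d_star
    f = g                                        # f now holds f_t
```

As the text observes, the solution is a table of conditional decisions policy[(t, x)], not a rigid plan: only the first decision is fixed, and later moves are looked up as states become apparent.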

A special case of stochastic dynamic programming may be described as

*It is interesting to note that Bellman (5) admits that he first formulated dynamic programming as a stochastic problem. Only later did he discover its deterministic form and its relation to the calculus of variations.


Markovian decision processes. The decision maker chooses the probabilistic

transfer function (a Markov chain transition probability matrix) at each

stage of the process in such a way that the total expected rewards are

minimized. Howard (19) developed an ingenious alternative method of solution

to this class of problems. His famous illustrative example of the taxicab

problem is essentially a locational decision-making problem of some interest

to geographers. Marble (46) suggests that some aspects of individual travel

behaviour could be described using this framework.
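A finite-horizon version of such a process can be sketched as follows; the two states, the two alternatives per state, and their probabilities and rewards are invented for illustration (they are not Howard's taxicab data):

```python
# Each state offers a choice of transition-probability rows, each with an
# expected immediate reward; the recursion picks the best alternative per stage.
actions = {
    # state -> list of (transition probabilities over states, expected reward)
    0: [((0.5, 0.5), 4.0), ((0.2, 0.8), 6.0)],
    1: [((0.7, 0.3), 3.0), ((0.4, 0.6), 5.0)],
}

def value_iteration(N):
    """f_t(i) = max over a of [ r(i, a) + sum_j p_ij(a) * f_{t-1}(j) ]."""
    f = [0.0, 0.0]
    plan = []                                   # chosen alternative per stage/state
    for _ in range(N):
        vals = [[r + sum(p * fj for p, fj in zip(probs, f))
                 for probs, r in actions[i]] for i in (0, 1)]
        plan.append([v.index(max(v)) for v in vals])
        f = [max(v) for v in vals]
    return f, plan

f, plan = value_iteration(3)   # total expected reward over a three-stage horizon
```

Howard's own policy-iteration method solves the infinite-horizon version of this problem without stage-by-stage enumeration; the sketch above is only the direct dynamic programming recursion.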

2.4 "Adaptive" Models

Some processes are characterized by uncertainty rather than risk, i.e.,

the true probabilities or the parameters of the probability distribution

are not known. In some of these cases, it is possible and potentially use-

ful to adopt a dynamic programming approach (4,109). An initial decision

is made on the basis of a priori probabilities. That is, the problem is

assumed to be a stochastic dynamic programming problem. These estimates

are then revised on the basis of the results of that stage. Yet another

decision is made, the results monitored, and estimates revised. By continually

updating parameter estimates on the basis of working with the system, the

planner or controller gradually transforms the problem from one of making

decisions under uncertainty to one of decision making under risk.

This framework is intuitively appealing and one might argue very close

to implicit decision frameworks which are actually employed by planners and

controllers. Dynamic programming describes the problem in a general formalized

manner. The operational applicability of this approach to real decision

problems has been severely limited, however, since each unknown parameter or


probability adds another state variable, and thus the limits of computational

feasibility are quickly encountered. Computational problems are briefly

discussed in a later section.

2.5 Nonserial Systems

All of the previous and most of the subsequent discussion assumes a

purely sequential process. The outputs of one stage become the inputs of

the following stage. This assumption ignores important processes in which

two sequential systems converge or diverge at a given stage, or systems

in which an output of stage t initiates a parallel process which is fed-back

or forward to become an input to the main process at stage t+k or t-k, where

k > 1.

These more complex multistage decision problems are now amenable to

solution (28). Meier and Beightler (82), have described and optimized

branching multiple stage water resource systems using these relatively

recent techniques of nonserial dynamic programming.

3. Applications of Dynamic Programming

3.1 General Applications

Because of the inherent generality of the dynamic programming approach,

a wide variety of decision processes have been formulated as dynamic

programming problems. Many of the references listed in Part A of the

bibliography give some idea of the vast number of systems which have been

so described. Aris (1), Beckmann (2), Bellman (4,6,8), Hadley (17), Jacobs

(21), Kaufmann (23), Kaufmann and Cruon (24), and Nemhauser (28) are

especially notable in this respect.


Inventory control models in which current stock levels are the state

variables, quantities ordered are the decision variables, and sales levels

are random variables are particularly suitable for formulation as dynamic

programming problems (95, 100, 102). In addition, however, the approach

has been used to describe mathematically the following decision problems:

component replacement, allocation of resources between alternative sub-

systems or over time, bottleneck situations, control of competitive processes,

curve fitting, control of economic trends, the knapsack problem, missile

trajectory problems, and many others.
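Of the problems just listed, the knapsack problem shows the approach at its simplest: stages are items, the state is the remaining capacity, and the decision is whether to include the current item. The weights and values below are invented:

```python
def knapsack(weights, values, capacity):
    """Recursion f_t(c) = max( f_{t-1}(c), v_t + f_{t-1}(c - w_t) ),
    with one state variable c (remaining capacity)."""
    f = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        for c in range(capacity, w - 1, -1):   # descend so each item is used once
            f[c] = max(f[c], v + f[c - w])
    return f[capacity]

# Hypothetical items: the best choice here is the second and third.
best = knapsack([3, 4, 2], [30, 50, 15], capacity=6)
```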

It is possible that several of these topics may in certain cases have

some geographically interesting implications. For the purpose of this paper,

however, only those problems which relate directly to the spatial, regional,

and/or the man-environment traditions of geography have been considered.

3.2 Transportation Systems

Transportation problems have been among the most intensively studied

in operations research or management science. It is therefore not surprising

that many transportation problems have been studied within the dynamic

programming framework. The following discussion considers three somewhat

arbitrary categories of transportation topics relating to paths, construction,

and flows respectively.

3.210 Optimal Path Problems

3.211 The Shortest Path Problem

The familiar problem of determining the shortest path through a network

(or the kth shortest path) can be formulated and solved using the dynamic


programming approach. For the deterministic case, the recurrence equation

is

f_i = min_j [d_ij + f_j],   i = 1, ..., N-1;   f_N = 0,   d_ii = 0

where f_i is the distance of the shortest path between nodes i and N. Dreyfus (37) notes, however, that much more efficient methods have been developed to solve this problem. Only in cases where negative values of d_ij are permitted should the dynamic programming formulation be employed.
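The deterministic recurrence can be solved by Bellman's successive approximations; the small directed network below is invented:

```python
# Link lengths d_ij for a hypothetical four-node directed network.
INF = float("inf")
d = {(1, 2): 4, (1, 3): 1, (3, 2): 1, (2, 4): 2, (3, 4): 6}
N = 4

# f_N = 0; all other f_i start at infinity and relax toward the fixed point
# of f_i = min_j [d_ij + f_j].  N-1 sweeps suffice for an N-node network.
f = {i: (0 if i == N else INF) for i in range(1, N + 1)}
for _ in range(N - 1):
    for (i, j), dij in d.items():
        f[i] = min(f[i], dij + f[j])

# f[1] is now the length of the shortest path from node 1 to node N.
```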

Dynamic programming has the additional advantage, however, that it can be readily extended to the stochastic case. In one of the more interesting extensions, Kalaba (43) formulates such a problem so that the criterion function is the probability of reaching a destination within a specified time period; the recurrence relation is

f_i(t) = max_j ∫_0^t p_ij(t-s) f_j(s) ds,   i = 1, 2, ..., N-1

f_N(t) = 1

where f_i(t) is the probability of reaching destination N in t time units or less, given the process is initiated in i and an optimal policy is adopted, and p_ij(s) is the probability density function of moving from state i to state j in s time units.

Note that the solution of these equations would yield an optimal feedback control policy of which only the first move would be deterministically specified. Subsequent moves would depend upon the random outcomes of actual travel times. Such a framework could conceivably have practical applications in the automated routing of commodity and passenger vehicle systems.

3.212 Generalized Euler Paths

The first paper on graph theory, written more than two centuries ago,

considers a problem which is essentially geographic in nature. Given a river,

islands and a set of bridges connecting the islands and main river banks, the

problem is to describe a route starting at any point which passes over each


bridge exactly once and returns to the initial point. Either a feasible

solution to the problem exists or one does not exist. All feasible solutions

are optimal.

Bellman and Cooke (34) have generalized the problem using dynamic

programming so that the objective is to devise a cyclic route which passes

over each bridge (i.e. link or edge) at least once so that the number of

repetitions is minimized. The state vector is defined to be a list (Q,E)

where Q is the node at which the tracing point currently lies and E is

composed of the set of edges remaining to be traversed. The decision is

of course the node at which the tracing point will be at the next stage.

The recurrence relation is then

f(Q, E) = min [1 + f(Q_1, E), f(Q_2, E_2)]

where (1) Q_1 and Q_2 are nodes directly connected to Q

(2) link QQ_1 ∉ E

(3) link QQ_2 ∈ E

and thus (4) E_2 = E - {QQ_2}.

The authors outline their adaptation of the basic dynamic programming

algorithm which could be used to solve this problem. They admit, however,

that the procedure is currently computationally infeasible for graphs of high

complexity because of the vast number of possible combinations and permutations

of edges and nodes.

3.213 Travelling Salesman Problem

Among the most famous problems in network analysis, as well as one of


the most resistant to adequate solution, is the travelling salesman problem.

Given a set of cities (points) and the distance between each pair, the

problem consists of constructing the cyclic graph of minimum length which

passes through every city. This problem has stimulated a vast amount of

research and for large numbers of points, it is still computationally

infeasible. (See Bellmore and Nemhauser (35) for review of the many

approaches to this problem).

Bellman (33) and Gonzalez (39) offer dynamic programming approaches as

solution procedures for the travelling salesman problem. For more than

fifteen points, however, Gonzalez found the number of computations and

storage requirements to be excessive. This approach, while certainly more

effective than exhaustive enumeration, is clearly dominated by other

techniques (35).
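Bellman's formulation takes as its state the pair (set of cities visited, current city), which is what makes the storage requirements grow so quickly. A sketch for a small invented distance matrix:

```python
def tsp(dist):
    """State (S, j): S the set of visited cities (a bitmask), j the current
    city; f(S, j) = min over i in S - {j} of [ f(S - {j}, i) + dist[i][j] ]."""
    n = len(dist)
    f = {(1, 0): 0}                           # the tour starts at city 0
    for S in range(3, 1 << n):                # every state must contain city 0
        if not S & 1:
            continue
        for j in range(1, n):
            if not S & (1 << j):
                continue
            prev = S ^ (1 << j)
            f[(S, j)] = min(f[(prev, i)] + dist[i][j]
                            for i in range(n)
                            if prev & (1 << i) and (prev, i) in f)
    full = (1 << n) - 1                       # close the tour back to city 0
    return min(f[(full, j)] + dist[j][0] for j in range(1, n))

# Hypothetical symmetric distances between four cities.
dist = [[0, 1, 3, 4],
        [1, 0, 2, 5],
        [3, 2, 0, 6],
        [4, 5, 6, 0]]
tour = tsp(dist)
```

The table f holds on the order of n·2^n entries, which illustrates concretely why Gonzalez found the approach infeasible beyond about fifteen points.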

3.22 Transportation Flow Problems

The most widely known and used transportation model is of course the

Hitchcock-Koopmans Transportation Problem which determines those shipments

Xij which minimize the total cost of transportation subject to the constraints

that all resources are used and all demands are met.

That is,

MIN C = Σ_i Σ_j c_ij x_ij

subject to

Σ_{j=1}^{N} x_ij = X_i,   i = 1, ..., n

Σ_{i=1}^{n} x_ij = Y_j,   j = 1, ..., N

x_ij ≥ 0.


Bellman (32) has shown that this and related problems can be readily

formulated as a dynamic programming problem. Using the Principle of

Optimality the demands of the Nth destination are determined as a function

of the resources at the various supply points; the demands of destination

N-1 are then determined as a function of the remaining resources at the n

supply points; etc. The state variables are the resources currently available at each of the supply points, i.e., X_1(t), X_2(t), ..., X_n(t). The dynamics of the process are simply

X_i(t) = X_i(t+1) - x_{i,t+1},   i = 1, 2, ..., n;   t = 1, 2, ..., N-1.

The recurrence relation is thus

f_t(X_1(t), ..., X_n(t)) = MIN over x_1t, ..., x_nt of [ Σ_{i=1}^{n} c_it x_it + f_{t-1}(X_1(t) - x_1t, X_2(t) - x_2t, ..., X_n(t) - x_nt) ]

As Bellman notes, the computational feasibility of such a problem depends

almost entirely upon the number of sources since computation increases only

linearly with the number of stages (i.e. number of destinations). Moreover, the number of state variables can be reduced by one since Σ_{i=1}^{n} X_i = Σ_{j=1}^{N} Y_j.

Thus a problem with 4 or 5 supply points and a very large number of

destinations can be solved. The advantage of this formulation is of course

that it is no longer necessary to assume proportional costs.
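A sketch of this formulation for two supply points, so that the state is the pair of remaining supplies; the concave shipping costs below are invented precisely to illustrate the nonproportional-cost advantage:

```python
import math
from functools import lru_cache

# Hypothetical data: supplies exactly cover the three destination demands.
supplies = (4, 3)                    # resources at the two supply points
demands = (2, 3, 2)                  # demand at each destination (= stage)

def cost(i, q):
    """Hypothetical concave shipping cost from supply point i for quantity q
    (economies of scale, which linear programming could not represent)."""
    return (2 + i) * math.sqrt(q)

@lru_cache(maxsize=None)
def f(t, x1, x2):
    """Minimum cost of meeting the last t demands with remaining supplies (x1, x2)."""
    if t == 0:
        return 0.0
    y = demands[t - 1]
    # Feasible splits of demand y between the two supply points.
    return min(cost(0, q1) + cost(1, y - q1) + f(t - 1, x1 - q1, x2 - (y - q1))
               for q1 in range(max(0, y - x2), min(x1, y) + 1))

total = f(len(demands), *supplies)
```

Because the costs are concave, the optimum concentrates each destination's demand on a single supply point wherever the state permits, which a proportional-cost model would never reveal.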

Bellman (32) considers two elaborations on this basic problem. The

first optimizes a process where the tth set of destinations becomes the

original set for the next set of demand points. The second considers problems


which explicitly take network structure into account and thus imposes

capacity constraints on links and/or nodes.

Midler (47) has developed a dynamic programming model which determines

the optimal flow of different commodities through a multimodal transportation

system with stochastically variable demands. The model determines conditionally the combination of modes to be used, the assignment of commodity classes

to modes, the supply points which should serve each destination and the

rerouting of carriers from destinations to sources. The criterion function

is a quadratic user cost function.

The model is essentially an augmented inventory control model which

uses a moderately sophisticated matrix algebraic formulation. The precise

formulation is much too complex to discuss here in any detail. It does

demonstrate very clearly, however, the flexibility of the dynamic programming

approach in that there are fewer limitations on the form of relationships

than with other mathematical programming models. Computational difficulties,

however, limit the size of the problem. Midler states, for example, that a

problem with four origins and destinations, two modes and six commodity

classes would under certain circumstances be susceptible to solution.

A natural gas network flow problem is considered by Wong and Larson (55).

The problem is to determine the optimum suction and discharge pressure for

each compression station such that total compressor horsepower is minimized

subject to specified steady state flow and pressure constraints. The simple

single pipeline case is readily formulated and solved using straightforward

serial dynamic programming. Single junction and multiple junction networks

are optimized using non-serial techniques described in Nemhauser (28). At


any junction the number of state variables of the process increases according

to the number of pipelines emanating from that point.

Nemhauser (50), in recent years one of the most frequent contributors

to both the theory and applications of dynamic programming, uses a dynamic

programming model to determine an optimal scheduling policy for local and

express transit service. Net revenues as determined from schedule-dependent

usage equations and operating costs are maximized. The model assumes among

other things that the relation between usage and required waiting times is

known precisely. This of course is an important characteristic of dynamic

programming and mathematical programming approaches in general--the functional

relationships must be known and specified precisely. Mathematical programming

formulations can thus be used as heuristic devices which suggest areas in

which valuable research is needed. In dynamic programming, the dynamics

(real or artificial) and reward structure of the process must be known.

3.23 Network Construction Problems

3.231 Optimal Staging of Transportation Construction

Roberts (52) and Roberts and Funk (53) have suggested that a combination

of dynamic and linear programming approaches be used to formulate and solve

the problem of when and where to add links to an existing transportation

network. Morlok (48) has made a similar suggestion and is currently opera-

tionalizing a mixed integer problem in which dynamic programming is utilized

for the choice of binary developmental variables while linear programming

methods are used to select the best operational policy for each possible

configuration. Because of its relative accessibility and simplicity, however,

only the study of Funk and Tillman (38) is considered here in detail.


Funk and Tillman have demonstrated the potential usefulness of dynamic

programming in scheduling the sequence of links to be added to an existing

highway network. The highway planning problem is viewed not simply as a

choice between a finite number of alternative network configurations, but

rather as a choice between alternative permutations as well as combinations

of links to be added.

The state of the system is identified by the links which have already

been added to the network. Associated with each state is a set of feasible

decisions, i.e. those links which can still be added. Each state-decision

pair is mapped into an immediate cost (amortized construction, maintenance

and travel). These relationships are summarized by a set of hypothetical

numerical data. Two four-stage problems are solved for the simple numerical

example so that total system costs are minimized subject to the constraint

that at most, and then exactly, one link is to be added in each stage of the

plan implementation process.

Several comments can be made about this illustrative problem which also

apply to many of the other examples considered in this paper:

(1) Rarely can all additions to a system be made simultaneously; thus some means to discover the optimal spatio-temporal ordering of transportation links or other planning actions is a potentially useful planning tool (38).

(2) The final solution is in the form of a sequence of planning actions, but in many cases a firm commitment need be made only to the first k stages. While those decisions are being implemented, more accurate and additional information may be forthcoming so that cost and/or reward functions can be revised. The remaining N-k stage problem could then be optimized using these revisions (45).

(3) Suppose as in (2) a firm commitment need be made only for the first k stages. Moreover, assume that there are many alternative, uncertain future environments, each of which implies a different cost/reward structure. The dynamic programming model is applied to each of these


alternatives. We can say then that plan selection is concerned with the identification of "optimal" sequences whose first k actions are "similar." By assumption, only the first k actions must be selected at stage N. During the first k stages, decisions about the following set of actions can be made in a similar manner. A sequence of first k decisions which is not common to many plans may be excluded if the criterion function is not very sensitive to its substitution by another sequence having greater commonality (45).

(4) The final physical configuration may be significantly different depending upon whether a static minimum cost solution or a sequential decision-making framework is adopted (38).

(5) Computational difficulties abound because not only the different combinations of actions but also the different permutations must be considered; thus large transportation network and other planning problems tend to be unmanageable if a direct dynamic programming approach is used.

Gulbrandsen (41) considers a somewhat different problem of optimally

allocating resources to 77 "independent" groups of highway projects over four

five-year periods. Independence in this case implies that investment in

one project will not influence the efficiency of investment in any other project.

Using Lagrangian multipliers and dynamic programming, an allocation of resources

to projects over time is calculated. It is interesting to note, however, that

in order to make the problem feasible, the stages of the problem consist of the

77 projects, the decision variables d(t) are the ordered 4-tuples of resources

allocated to the tth project in each of the four time periods, and the state

variables X(t) are the total resource budgets of each of the time periods after

N-t projects have been considered. The problem could not be solved if the

decision vector consisted of the resources committed to the 77 projects in

the tth time period.
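Gulbrandsen's device of treating the projects (rather than the time periods) as stages can be sketched in miniature. The returns table below is invented, and a single budget stands in for his 4-tuple state, so this is only a sketch of the stage/state reversal, not his model.

```python
from functools import lru_cache

# Invented data: RETURN[p][x] is the benefit of allocating x units of
# the budget to project p (one period stands in for the four periods).
RETURN = [
    (0, 3, 5, 6),   # project 0
    (0, 2, 4, 7),   # project 1
    (0, 4, 6, 7),   # project 2
]
BUDGET = 4

@lru_cache(maxsize=None)
def f(p, budget):
    """Max return from projects p onward: the stage is the project
    index and the state is the budget still unallocated."""
    if p == len(RETURN):
        return 0
    return max(RETURN[p][x] + f(p + 1, budget - x)
               for x in range(min(budget, len(RETURN[p]) - 1) + 1))

best = f(0, BUDGET)
```

The point of the reversal is that one scalar budget per period is carried as state, instead of one allocation per project.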

3.232 Location of a Routeway Connecting Two Points

Many problems which are not intrinsically dynamic can be artificially


assumed to be sequential in order to utilize the dynamic programming approach.

For example, Werner's (54) multivariate refraction problem of connecting two

cities, located in a region where costs are inhomogeneous, such that the

joint flow and construction costs are minimized would seem, conceptually

at least, to be a dynamic programming problem.*

Kaufmann (23) and others have considered a discrete version of the above

problem as a special case of the shortest path problem, and therefore sus-

ceptible to solution by dynamic programming. An interesting variation on

this problem is considered by Groboillot and Gallas (40). The objective

is to connect two cities so that total amortized investment, operating and

maintenance costs are minimized subject to maximum curvature and gradient

constraints. The problem is viewed as a special case of the shortest path

problem so that the recurrence relation is

f_k = Min [f_j + c_jk]
      j∈E_ik

where f_j is the total cost associated with the optimal route from the initial

point to some intermediate point j,

E_ik is the set of points in section i from which point k can be reached, and

c_jk is the cost of reaching point k from point j.

The curvature constraints are achieved simply by limiting the possible

edges in the graph of the decision tree. Similarly, using a three dimensional

graph (the third dimension being elevation), the gradient constraint is ensured

by not permitting large changes in altitude from one section (stage) to

another.
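A minimal sketch of the sectioned shortest-path recurrence, with invented points and costs; omitting an edge from the feasible set is exactly how the curvature and gradient constraints are imposed.

```python
# Candidate points grouped into sections (stages); EDGES[(j, k)] is the
# cost c_jk of building from point j to point k.  All numbers invented.
SECTIONS = [["s"], ["a", "b"], ["c", "d"], ["t"]]
EDGES = {  # an absent edge encodes a curvature/gradient restriction
    ("s", "a"): 2, ("s", "b"): 3,
    ("a", "c"): 4, ("a", "d"): 1, ("b", "d"): 2,
    ("c", "t"): 1, ("d", "t"): 3,
}

f = {"s": 0}                       # f_k: cheapest route cost to point k
for prev, cur in zip(SECTIONS, SECTIONS[1:]):
    for k in cur:
        feasible = [f[j] + EDGES[(j, k)]
                    for j in prev if (j, k) in EDGES and j in f]
        if feasible:
            f[k] = min(feasible)

best = f["t"]
```

Each section is processed once, so the work is proportional to the number of feasible edges rather than the number of complete routes.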

The authors have used this method with some success in planning the

*This was suggested in conversation by A. J. Scott and is mentioned in Scott (106). Each cost region is a stage of the process. The locational coordin­ates of the intersection of the routeway and regional boundaries are the system states. The angles of refraction are the decision variables.


location of roadways. In spite of (and perhaps because of) this experience

with this approach, they are fully cognizant of the severe operational

limitations arising from excessive storage and computational requirements.

Apparently they have not utilized any other shortest path solution procedures.

3.3 Regional and Locational Allocation Problems

The assignment of people or things to a set of regions and the location

of a set of service facilities so that some objective function is optimized

are two of the central problems in normative geography. These are significant

problems that increasingly are occupying certain economists, operations

researchers, city planners, and geographers.

The regional assignment problem in its simplest form, where there is

no spatial dependence of returns, is readily formulated as a dynamic

programming problem. Given a fixed quantity Q of a resource (water,

capital, personnel, voting power, etc.) and a return function for each

region, what is the optimal allocation of that resource to the N different

regions? The problem is to

MAX  Σ r_i(d_i),  the sum over i = 1, ..., N,

subject to

Σ d_i ≤ Q,   d_i ≥ 0,   i = 1, 2, ..., N.

Using dynamic programming, the "dynamics" of the allocation process are

simply,

X(t) = X(t+1) - d(t+1)

X(N) = Q


and the recursion relation is

f_t(X(t)) = MAX [r_t(d(t)) + f_{t-1}(X(t-1))]
            0 ≤ d(t) ≤ X(t)

f_1(X(1)) = MAX [r_1(d(1))]
            0 ≤ d(1) ≤ X(1)

The basic model could be used to determine the optimal assignment of salesmen

to sales regions (Nemhauser (28)), capital to water resource development

sites (Hall and Buras (74)) and many other simple regional assignment problems.
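The regional assignment recursion can be sketched directly; the three regional return functions and the budget Q = 4 below are invented for illustration.

```python
from functools import lru_cache

# r_i(d): invented regional return functions for allocating d units of
# the resource to region i.
RETURNS = [
    lambda d: 3 * d,
    lambda d: 5 * d - d * d,
    lambda d: 4 * d - d * d // 2,
]
Q = 4

@lru_cache(maxsize=None)
def f(t, x):
    """f_t(X(t)) = max over 0 <= d <= x of [r_t(d) + f_{t-1}(x - d)],
    where x is the quantity of the resource still unallocated."""
    if t < 0:
        return 0
    return max(RETURNS[t](d) + f(t - 1, x - d) for d in range(x + 1))

best = f(len(RETURNS) - 1, Q)
```

A useful by-product, noted later in the review, is that the table of f values gives the optimal return for every initial quantity from 0 to Q, not just for Q itself.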

Hall (70, 71) uses a model with slightly different dynamic equations which

reflect first order spatial dependence to allocate water to linear regions

along a water supply canal. Burt and Harris (57) adapt this basic model

in order to assign voters to U. S. Congressional districts so that a measure

of equal representation is optimized subject to the constraint that districts

are as compact as possible.

An important dynamic regional investment problem which takes interregional

dependencies into account is considered by Erlenkotter (58). The

problem is simply to determine the regional allocation of plant investment

so that all demands are satisfied and the present value of shipping and

capital costs is minimized given the constant rates of regional demand increase,

interregional production-shipment costs, and plant investment cost functions

for each region. The dynamics of the process simply observe that current

excess capacity (possibly negative) is equal to previous excess capacity

plus plant investment in the previous time period minus the regional growth

in demand. The author notes that for more than two producing areas, the

straightforward dynamic programming approach would become infeasible. By


redefining the state vector and assuming a concave investment cost function,

the problem may be reformulated so that a two-region problem becomes a

one-dimensional dynamic programming problem. The author concludes that a four

region problem is the largest which is computationally feasible.

In recent years there seems to be a growing recognition that the

location-allocation problem is of significant practical as well as

theoretical importance. Much of the recent concern has been with devising

efficient exact or approximation algorithms for computing solutions to

location-allocation problems (59). Among these is the presentation of

Bellman (56) which formulates the problem within a dynamic programming

framework.

Of perhaps greater interest are dynamic location-allocation problems,

i.e. situations in which account must be taken of growth and/or changing

patterns of demand and resource availability. The possibility of adding

new facilities to the system in response to such changes should be considered.

Teitz (60) discusses some of the conceptual considerations involved in this

dynamic planning problem. The initial location decision cannot be made in

isolation from predicted future system states and inputs and possible

subsequent decisions.

Consider a problem which is apparently similar in nature to Funk and

Tillman's network link addition problem (38) in which point facilities are

to be added to a current system over a series of stages so that total

discounted travel costs are to be minimized over the entire length of the

process. The problem is different in a number of respects, the most important

of which is that the solution space is not finite. In the static case this


does not present a major problem, but in the dynamic case account must be

taken of the possibility that one consumer may be assigned to one facility

in one stage and another one in a subsequent stage; thus the respective

travel times must be weighted by their durations. At this stage, it is

not clear how a dynamic programming approach could be used to resolve this

perplexing and significant problem.

3.4 Water Resource Management

Of all the areas in which dynamic programming has been applied to

geographically related topics, water resource management problems are certainly

the most numerous. This arises in part from the fact that many aspects of

these systems can be readily specified in terms of simple difference equations

in which at least one of the components is a decision or control variable.

Rivers in particular may be considered as one-directional and one-dimensional

spatial systems. Moreover, the processes which operate on these systems

(weather and man-initiated controls) may be assumed to be one-directional

lag-one processes.

In water resource systems, the relationships between exogenous inputs

such as streamflow, rainfall, and evaporation rates, decision variables such

as water releases and transfers, and outputs or consequences such as new

water levels can often be approximated by systems of first order difference

equations usually linear and often stochastic. Exogenous inputs are

probabilistically predictable because of the long time series which are often

available for particular streams and rivers.

The difference equations usually are simply mass balance equations such

as the following:


X(t) = X(t+1) - d(t+1) - ε(t+1) + λ(t+1) + μ(t+1)

where X = water in reservoir (state variable)

d = water released (decision variable)

ε = water loss by evaporation (exogenous, stochastic variable)

λ = streamflow into reservoir (exogenous, stochastic variable)

μ = precipitation (exogenous, stochastic variable)

and the probability density functions for ε, λ and μ are known. The objective

then is to find a conditional sequence of releases which maximizes the annual expected

net return subject to certain physical and perhaps socio-economic constraints.
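A minimal stochastic version can be sketched with an invented two-point inflow distribution and return function; for brevity evaporation and precipitation are folded into the single inflow term, so this is only a sketch of the expected-value recursion.

```python
from functools import lru_cache

CAP = 3                               # reservoir capacity (invented units)
INFLOW = ((1, 0.5), (2, 0.5))         # assumed (amount, probability) pairs

def reward(d):
    """Invented net return from releasing d units in one period."""
    return 2 * d - 0.1 * d * d

@lru_cache(maxsize=None)
def f(t, x):
    """Expected return of an optimal release policy over t remaining
    periods with x units in storage.  Mass balance: next storage is
    x - d + inflow, with spill above capacity lost."""
    if t == 0:
        return 0.0
    best = float("-inf")
    for d in range(x + 1):            # feasible releases this period
        expected = sum(p * f(t - 1, min(CAP, x - d + a)) for a, p in INFLOW)
        best = max(best, reward(d) + expected)
    return best

value = f(2, CAP)
```

The policy is conditional in exactly the sense used above: the release chosen at each stage depends on the storage level actually observed, not on a fixed schedule.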

This general framework with modifications is used by Hall and Howell (77),

Buras (65), Burt (66), Burt (67), Young (86), Sweig and Cole (83), Hall,

Butcher and Esogbue (76), and Butcher (69).

Buras (65) uses an interesting variation of such a framework in modelling

the optimal joint operating policy for aquifers and reservoirs. This model has

the advantage that it can be readily understood, yet still provides some

insight into the systems aspects of the dynamic programming approach. There

are three state variables in the process and therefore three difference

equations:

X1(t) = X1(t+1) + λ1(t+1) - d1(t+1) - d2(t+1)

X2(t) = X2(t+1) + X3(t+1) - d3(t+1)

X3(t) = d1(t+1) + λ2(t+1)

where X1 = water in surface reservoir

X2 = water in aquifer

X3 = water in recharge facility

λ1 = streamflow

λ2 = natural inflow to aquifer


d1 = water release from surface reservoir for groundwater recharge

d2 = water release from reservoir to irrigate land A_s

d3 = water pumpage from aquifer to irrigate land A_g

The recurrence relation is

f_t(X1(t), X2(t), X3(t)) = MAX [φ_t(X2, X3) + β f_{t-1}(X1(t-1), X2(t-1), X3(t-1))]

(the maximum being taken over the feasible decisions d_j, j = 1, 2, 3)

where φ_t(X2, X3) is the tth stage return from irrigation

and β is an appropriate discount factor.

In addition to the optimal timing of resource utilization, dynamic

programming has been used in the allocation problem discussed in a previous

section. Hall and Buras (74) for example consider the problem of selecting

resource development sites from a finite number of possibilities and the

extent to which those sites should be developed. Hall (70, 71) optimally

allocates water to regions along a water supply canal. Hall (73) uses the

approach to determine the optimal allocation of water to different uses.

There seems to be no reason why these models which have been developed

to optimize water resource systems could not be modified and used in the

modelling of other resource management problems. Burt (66) states that

these approaches are applicable to any temporal resource allocation problem

in which the resource is either fixed in supply or partially renewable, thus

allowing a difference equation model formulation.

3.5 Agricultural Economics

Agricultural economics would seem to be of interest to geographers on

at least two counts. First, agricultural topics have been among the most


popular in economic geography, perhaps because the interaction between man's

activities and his physical environment is of obvious importance in farming.

Secondly, agricultural economics has mixes and levels of theoretical,

empirical and technical orientation which appear to be of particular

relevance in geography's current stage of development.

Dynamic programming models are relatively few in agricultural economics,

but there seems to be a growing awareness of the potential significance of

control theoretic approaches in agricultural studies (92). Burt and Allison

(89) consider the problem of crop rotation as a Markovian decision process.

The objective is to maximize discounted expected returns by choosing an

appropriate conditional policy of planting wheat or leaving the land in fallow.

The states of the process are different moisture levels (i = 1, 2, ..., M). By

selecting one of the two values of the decision variable d, a choice of a

corresponding transition probability matrix P^d(ij) is also made, associated

with each of which is a matrix of rewards r^d(ij) - the immediate returns

arising from a movement from the ith to the jth soil moisture level. From

these two matrices, the expected immediate returns r_i^d can be obtained for

each state-decision pair. The recurrence relation is thus

f_t(i) = MAX [r_i^d + β Σ_j P^d(ij) f_{t-1}(j)],  the sum over j = 1, ..., M,
          d

where β is an appropriate discount factor.

In most cases, a constant policy is optimal for large values of N, and the

expected present value can be approximated by

lim F(N) = (I - βP)^-1 R
N→∞

which is readily obtained by solving M simultaneous linear equations.
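The Markovian decision calculation can be sketched with two invented moisture states and two decisions; the transition matrices and returns below are hypothetical, not Burt and Allison's data. Iterating the recurrence long enough exhibits the constant optimal policy, whose infinite-horizon value could equally be obtained by solving the M simultaneous linear equations.

```python
# Two moisture states, two decisions (plant wheat / leave fallow).
# P[d] is the transition matrix P^d and R[d] the expected immediate
# returns r_i^d -- all numbers invented for illustration.
BETA = 0.9
P = {"plant":  ((0.7, 0.3), (0.4, 0.6)),
     "fallow": ((0.2, 0.8), (0.1, 0.9))}
R = {"plant": (6.0, 2.0), "fallow": (0.0, 0.0)}

def value_iterate(n):
    """Apply f_t(i) = max_d [ r_i^d + beta * sum_j P^d(ij) f_{t-1}(j) ]
    n times, starting from f_0 = 0."""
    f = [0.0, 0.0]
    for _ in range(n):
        f = [max(R[d][i] + BETA * sum(P[d][i][j] * f[j] for j in range(2))
                 for d in P)
             for i in range(2)]
    return f

f = value_iterate(500)
```

With β < 1 the iteration is a contraction, so for large n the values settle at the expected present value of the constant optimal policy (here, planting in both states).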


In a recent paper, Burt (1969) gives a macro-economic policy application

of dynamic programming in which the decision variable is P^f(t), the government-

controlled price of fluid milk. The dynamic aspect of the problem arises

from the distributed lag form of the supply equation:

q_s(t) = b_0 + b_1 P^f(t+1) + b_2 P^m(t+1) + b_3 q_s(t+1)

where q_s(t) is the total amount of milk supplied

and P^m(t) is the price of milk for manufacturing.

The state variables are P^f(t+1) and q_s(t+1); thus the difference equations

are the one given above and S(t) = P^f(t+1), where S(t) is a dummy variable.

Burt subsequently modifies his model so that a measure of social value is

maximized subject to some minimal farmer income constraint.

Burt (87) summarizes some of the other actual and potential applications

of dynamic programming in agricultural economics including decisions about

farm expansion and the replacement of livestock, machinery and other assets.

In currently developing research areas, there is often much confusion

and ambiguity concerning terminology. The dynamic programming literature

itself is remarkably free of such ambiguities. There is some apparent

confusion, however, in the discussion of related programming approaches. For

example, Loftsgard and Heady (93), Day (90) and Day and Tinney (91) use linear

programming in a recursive manner. These studies are distinguishable from

dynamic programming since they do not utilize Bellman's Principle of Optimality

and the process is not optimized over its entire duration by composing the

individual stage rewards.


4. Some Fundamental Difficulties in the Application of Dynamic Programming

4.1 Computational Difficulties

Frequently in the above discussion the problem of computational feasibility

has been mentioned. A considerable amount of research has been undertaken

in order to modify straightforward dynamic programming methods so that

increasingly complex problems have become susceptible to solution. A

detailed discussion of these methods is not undertaken here; this review

has sought to illustrate the basic dynamic programming approach with simple

geographical planning and control problems, rather than to provide an

exhaustive summary of all aspects of dynamic programming methodology.

Dynamic programming does not a priori determine the method whereby

the optimal decision function for any stage is to be attained. This optimum

may be derived using the calculus (taking partial derivatives of the

criterion function with respect to the decision variable(s)), by a mathematical

programming method, or by exhaustive enumeration and comparison of all the

possible stage decisions. Where many decisions are possible, the latter

alternative may be unwieldy. In such cases optimal search techniques such

as Fibonacci search may be employed (6, 28).

The most severe limitation of dynamic programming is imposed by the

number of state variables in the process. In most cases, three or four

state variables are the most which can be handled computationally by

dynamic programming. Bellman (5) suggests that this number will increase

to fifteen or twenty as computers become larger and more sophisticated.

Lagrangian multipliers can be utilized to eliminate one of the state

variables. Gulbrandsen (41) for example has done this in his highway


investment problem.

In the Hitchcock-Koopmans transportation problem with n supply points,

the dimension of the state vector can be reduced to n-1 because by

assumption,

x_n = Σ_{j=1}^{N} y_j - Σ_{i=1}^{n-1} x_i.

Yet another way to reduce the problem of dimensionality is to increase

the grid size, i.e. to reduce the range of values which the state variables

may assume.

These and other computational refinements are discussed in several of

the references in Part A of the bibliography. Of particular interest in

this regard are Bellman and Dreyfus (6) and Nemhauser (28).

The "curse of dimensionality" is particularly severe in dynamic

geographical problems since the typical situation involves many spatially

dependent units, each with at least one state variable which changes over

time. One possible alternative is to redefine the problem so that each

spatial unit constitutes a stage while there is a component of the state

vector for each time period (41).

4.2 Informational Requirements

Dynamic programming demands a considerable awareness on the part of

the researcher of the nature of the process he is attempting to optimize.

The dynamics of the process should be specifiable in terms of difference

or differential-difference functional equations in which the state of the

process at any stage is dependent upon the preceding state, the preceding

decision, and perhaps some exogenous (and stochastically predictable)


variables. Moreover the relationships between system states, decisions, and

rewards must be well known.

Very few geographers have studied processes in a format which is amenable

to optimization within a dynamic programming framework. This review has

demonstrated, however, that many geographic processes can be so formulated.

5. Significance of the Dynamic Programming Approach for Geographic Problems

Dynamic programming offers both a set of very general computational

procedures and a theoretical framework in which the control of dynamic

processes can be studied. As a computational procedure, it has certain

advantages and disadvantages over other optimization methods. It is not

restricted to the optimization of sets of linear equations. It is admirably

appropriate for certain kinds of sensitivity analyses since it gives the

optimal policy for the entire set of initial states. Computational effort

and storage requirements, while very sensitive to the number of state

variables, are extremely tolerant with respect to the number of stages. Highly

constrained problems in general have smaller computational times. Finally,

note that the dynamics of the process need not be formulated in terms of

mathematically analytic transfer functions. Many of the applications of

dynamic programming use data which are in tabular form.

More important perhaps than the computational aspects of the approach

are the potential theoretical implications. In order to utilize dynamic

programming, the researcher must think of problems in terms of rigorously

defined sequential feedback processes. In recent years there has been an

increasing amount of discussion about systems, goals and dynamics in geography


and in science in general. This review has shown that dynamic programming

provides a relatively simple framework within which to study the dynamics

of certain geographic, goal-oriented systems.


BIBLIOGRAPHY

PART A

GENERAL REFERENCES

(1) Aris, R., Discrete Dynamic Programming (New York: Blaisdell, 1964).

(2) Beckmann, M.J., Dynamic Programming of Economic Decisions (New York: Springer-Verlag, 1968).

(3) Bellman, R.E., Dynamic Programming (Princeton, N.J.: Princeton University Press, 1957).

(4) Bellman, R., Adaptive Control Processes: A Guided Tour (Princeton, N.J.: Princeton University Press, 1961).

(5) Bellman, R., Some Vistas of Modern Mathematics (Lexington, Ky: University of Kentucky Press, 1968).

(6) Bellman, R. and S. E. Dreyfus, Applied Dynamic Programming, (Princeton, N.J.: Princeton University Press, 1962).

(7) Bellman, R. and R. Kalaba, Dynamic Programming and Modern Control Theory (New York: Academic Press, 1965).

(8) Bellman, R. and R. Karush, "Dynamic Programming: A Bibliography of Theory and Application," Rand Corp. Memorandum RM-3951-PR, 1964.

(9) Bellmore, M., G. Howard, and G. L. Nemhauser, "Dynamic Programming Computer Model 4", Dept. of Operations Research & Industrial Engineering, The Johns Hopkins University, Baltimore, Md., July, 1966.

(10) Blackwell, D., "Discounted Dynamic Programming," Annals of Mathematical Statistics, Vol. XXXVI (1965), pp. 226-235.

(11) Blackwell, D., "Discrete Dynamic Programming" Annals of Mathematical Statistics, Vol. XXXIII (1962), pp. 719-726.

(12) Deledicq, A., "Programmation Dynamique Discrète," Revue Française d'Informatique et de Recherche Opérationnelle, Vol. II, No. 11 (1968), pp. 13-32.

(13) Denardo, E.V. and L.G. Mitten, "Elements of Sequential Decision Processes," Journal of Industrial Engineering, Vol. XVIII (1967), pp. 106-112.

(14) Derman, C., "Markovian Decision Processes--Average Cost Criterion," in G.B. Dantzig and A.F. Veinott, Jr. (eds.), Mathematics of the Decision Sciences Part 2 (Providence, R.I.: American Mathematical Society, 1968), pp. 139-148.


(15) Dreyfus, S., "Dynamic Programming," in R.L. Ackoff (ed.), Progress in Operations Research, Vol. I (New York: Wiley, 1961), pp. 211-242.

(16) Dreyfus, S.E., Dynamic Programming and the Calculus of Variations, (New York: Academic Press, 1965).

(17) Hadley, G., Nonlinear and Dynamic Programming, (Reading, Mass.: Addison-Wesley, 1964).

(18) Hillier, F.S. and G.J. Lieberman, Introduction to Operations Research, (San Francisco: Holden-Day, 1967).

(19) Howard R.A., Dynamic Programming and Markov Processes, (Cambridge, Mass.: M.I.T. Press, 1960).

(20) Howard, R.A. "Dynamic Programming," Management Science, Vol. XII (1966), pp. 317-348.

(21) Jacobs, O.L.R., An Introduction to Dynamic Programming, (London: Chapman and Hall, 1967).

(22) Karlin, S. "The Structure of Dynamic Programming Models," Naval Research Logistics Quarterly, Vol. II (1955), pp. 285-294.

(23) Kaufmann, A., Graphs, Dynamic Programming and Finite Games, (New York: Academic Press, 1967).

(24) Kaufmann, A. and R. Cruon, Dynamic Programming: Sequential Scientific Management, (New York: Academic Press, 1967).

(25) Lanery, E., "Étude Asymptotique des Systèmes Markoviens à Commande," Revue Française d'Informatique et de Recherche Opérationnelle, Vol. I, No. 5 (1967), pp. 3-56.

(26) Larson, R.E., State Increment Dynamic Programming, (New York: Elsevier, 1968).

(27) Mitten, L.G., "Composition Principles for Synthesis of Optimal Multistage Processes," Operations Research, Vol. XII (1964), pp. 610-619.

(28) Nemhauser, G.L., Introduction to Dynamic Programming, (New York: Wiley, 1966).

(29) Roberts, S.M., Dynamic Programming in Chemical Engineering and Process Control, (New York: Academic Press, 1964).

(30) White, D.J., Dynamic Programming, (San Francisco: Holden-Day, 1969).


(31) Wilde, D.J. and C.S. Beightler, Foundations of Optimization, (Englewood Cliffs: Prentice-Hall, 1967).

PART B

TRANSPORTATION

(32) Bellman, R., "Notes on the Theory of Dynamic Programming: Transportation Models," Management Science, Vol. IV (1958), pp. 191-195.

(33) Bellman, R., "Dynamic Programming Treatment of the Travelling Salesman Problem," Journal of the Association for Computing Machinery, Vol. IX (1962), pp. 61-63.

(34) Bellman, R. and K.L. Cooke, "The Königsberg Bridges Problem Generalized," Journal of Mathematical Analysis and Applications, Vol. XXV (1969), pp. 1-7.

(35) Bellmore, M. and G.L. Nemhauser, "The Travelling Salesman Problem: A Survey," Operations Research, Vol. XVI (1968), pp. 538-558.

(36) Cooke, K.L. and E. Halsey, "The Shortest Route Through a Network with Time-Dependent Internodal Transit Times," Journal of Mathematical Analysis and Applications, Vol. XIV (1966), pp. 493-498.

(37) Dreyfus, S.E., "An Appraisal of Some Shortest Path Algorithms," Operations Research, Vol. XVII (1969), pp. 395-412.

(38) Funk, M.L. and F.A. Tillman, "Optimal Construction Staging by Dynamic Programming," ASCE Journal of the Highway Division, Vol. XCIV

(Nov. 1968), pp. 255-265.

(39) Gonzalez, R.H., "Solution of the Travelling Salesman Problem by Dynamic Programming on the Hypercube," Technical Report No. 18, (Cambridge, Mass.: MIT Operations Research Center, 1962).

(40) Groboillot, J.L. and L. Gallas, "Optimalisation d'un Projet Routier par Recherche du Plus Court Chemin dans un Graphe à Trois Dimensions," Revue Française d'Informatique et de Recherche Opérationnelle, Vol. I, No. 2 (196 ), pp. 99-121.

(41) Gulbrandsen, O., "Optimal Priority Rating of Resources-Allocation by Dynamic Programming," Transportation Science, Vol. I (1967), pp. 251-260.

(42) Joksch, H.C., "The Shortest Route Problem with Constraints," Journal of Mathematical Analysis and Applications, Vol. XIV (1966), pp. 191-199.


(43) Kalaba, R., "Graph Theory and Automatic Control," in E.F. Beckenbach (ed.), Applied Combinatorial Mathematics, (New York: Wiley, 1964), pp. 237-252.

(44) Kumar, S., "Optimal Location of Recovery Points for Vehicular Traffic Subject to Two Types of Failures," Canadian Operational Research Society Journal, Vol. VI (1968), pp. 38-43.

(45) MacKinnon, R.D., "System Flexibility within a Transportation Context," Unpublished Ph.D. Dissertation, Department of Geography, Northwestern University, Evanston, Illinois, 1968.

(46) Marble, D.F., "A Theoretical Exploration of Individual Travel Behavior," in W.L. Garrison and D.F. Marble (eds.), Quantitative Geography Part I: Economic and Cultural Topics, (Evanston, Illinois: Department of Geography, Northwestern University, 1967), pp. 33-53.

(47) Midler, J.L., "A Stochastic Multiperiod Multimode Transportation Model," Transportation Science, Vol. III (1969), pp. 8-29.

(48) Morlok, E.K., "A Goal-Directed Transportation Planning Model," Research Report, Transportation Center, Northwestern University, Evanston, Illinois, Jan. 1969.

(49) Morlok, E.K. and R.F. Sullivan, "The Optimal Fixed Network Development Model," Research Report, Transportation Center, Northwestern University, Evanston, Ill., April 1969.

(50) Nemhauser, G.L., "Scheduling Local and Express Service," Transportation Science, Vol. III (1969), pp. 164-175.

(51) Pollack, M., "Solutions of the kth Best Route Through a Network--A Review," Journal of Mathematical Analysis and Applications, Vol. III (1961), pp. 547-559.

(52) Roberts, P.O., "Transportation Planning: Models for Developing Countries," Unpublished Ph.D. Dissertation, Department of Civil Engineering, Northwestern University, Evanston, Illinois, 1966.

(53) Roberts, P.A. and M.L. Funk, "Toward Optimum Methods of Link Addition in Transportation Networks," M.I.T. Monograph (Sept. 1964).

(54) Werner, C., "The Law of Refraction in Transportation Geography: Its Multivariate Extension," The Canadian Geographer, Vol. XII (1968), pp. 28-40.

(55) Wong, P.J. and R.E. Larson, "Optimization of Tree-Structured Natural-Gas Transmission Networks," Journal of Mathematical Analysis and Applications, Vol. XXIV (1968), pp. 613-626.

PART C

REGIONAL AND LOCATION-ALLOCATION PROBLEMS

(56) Bellman, R., "An Application of Dynamic Programming to Location-Allocation Problems," Society for Industrial and Applied Mathematics Review, Vol. VII (1965), pp. 126-128.

(57) Burt, O. and C. Harris, Jr., "Apportionment of the U.S. House of Representatives: A Minimum Range, Integer Solution, Allocation Problem," Operations Research, Vol. XI (1963), pp. 648-652.

(58) Erlenkotter, D., "Two Producing Areas--Dynamic Programming Solutions," Chapter XIII in A.S. Manne (ed.), Investments for Capacity Expansion, (London: George Allen and Unwin, 1967), pp. 210-227.

(59) Scott, A.J., "Location-Allocation Systems," Geographical Analysis, (forthcoming).

(60) Teitz, M.B., "Toward a Theory of Urban Public Facility Location," Papers of the Regional Science Association, Vol. XXI (1968), pp. 35-52.

PART D

NATURAL RESOURCE MANAGEMENT

(61) Arimizu, T., "Working Group Matrix in Dynamic Model of Forest Management," Journal of Japanese Forestry Society, Vol. XL (1958), p. 185.

(62) Bellman, R. and R. Kalaba, "Some Mathematical Aspects of Optimal Predation in Ecology and Boviculture," Proceedings of the National Academy of Sciences (U.S.), Vol. XLVI (1960).

(63) Beard, L.R., "Optimization Techniques for Hydrologic Engineering," Water Resources Research, Vol. III (1967), pp. 809-815.

(64) Boughton, W.C., "Optimizing the Gradients of Channels by Dynamic Programming," Journal of the Institute of Engineers, Vol. XXXVIII (1966), pp. 303-306.

(65) Buras, N., "Conjunctive Operation of Dams and Aquifers," ASCE Journal of the Hydraulics Division, Vol. LXXXIX (Nov. 1963).

(66) Burt, O.R., "Optimal Resource Use Over Time with an Application to Ground Water," Management Science, Vol. XI, No. 1 (1964), pp. 80-93.

(67) Burt, O.R., "Economic Control of Groundwater Reserves," Journal of Farm Economics, Vol. XLVIII (1966), pp. 632-647.

(68) Butcher, W.S., "Stochastic Dynamic Programming and the Assessment of Risk," Proceedings of National Symposium on the Analysis of Water Resource Systems, Denver, Colorado, 1968.

(69) Butcher, W.S., "Mathematical Models for Optimizing the Allocation of Stored Water," in The Use of Analog and Digital Computers in Hydrology, Symposium of International Association of Scientific Hydrology, Tucson, Arizona, Dec. 1968.

(70) Hall, W.A., "Aqueduct Capacity Under Optimum Benefit Policy," ASCE Journal of Irrigation and Drainage Division, Vol. LXXXVII (1961), pp. 1-11.

(71) Hall, W.A., "Aqueduct Capacity Under an Optimum Benefit Policy," (with discussion) Transactions, American Society of Civil Engineers, Vol. CXXVIII (1963), pp. 162-172.

(72) Hall, W.A., "A Method for Allocating Costs of a Water Supply Canal," Journal of Farm Economics, Vol. XLV (1963), pp. 713-720.

(73) Hall, W.A., "Optimum Design of a Multiple-Purpose Reservoir," ASCE Journal of the Hydraulics Division, Vol. XC (July 1964), pp. 141-149.

(74) Hall, W.A. and N. Buras, "The Dynamic Programming Approach to Water Resources Development," Journal of Geophysical Research, Vol. LXVI (1961), pp. 517-520.

(75) Hall, W.A. and W.S. Butcher, "Optimal Timing of Irrigation," ASCE Journal of Irrigation and Drainage Division, Vol. XCIV (June 1968), pp. 267-274.

(76) Hall, W.A., W.S. Butcher and A. Esogbue, "Optimization of the Operation of a Multipurpose Reservoir by Dynamic Programming," Water Resources Research, Vol. IV (1968), pp. 471-477.

(77) Hall, W.A. and D.T. Howell, "The Optimization of Single Purpose Reservoir Design with the Application of Dynamic Programming," Journal of Hydrology, Vol. I (1963), pp. 355-363.

(78) Hall, W.A. and T.G. Roefs, "Hydropower Project Output Optimization," ASCE Journal of the Power Division, Vol. XCII (Jan. 1966), pp. 67-79.

(79) Liebman, J.C. and W.R. Lynn, "The Optimal Allocation of Stream Dissolved Oxygen," Water Resources Research, Vol. II (1966), pp. 581-591.

(80) Little, J.D.C., "The Use of Storage Water in a Hydroelectric System", Operations Research, Vol. III (1955), pp. 187-197.

(81) Loucks, D.P., "A Comment on Optimization Methods for Branching Multi­stage Water Resource Systems," Water Resources Research, Vol. IV (1968), pp. 447-450.

(82) Meier, W.L. and C.S. Beightler, "An Optimization Method for Branching Multistage Water Resource Systems," Water Resources Research, Vol. III (1967), pp. 645-652.

(83) Schweig, Z. and J.A. Cole, "Optimal Control of Linked Reservoirs," Water Resources Research, Vol. IV (1968), pp. 479-497.

(84) Watt, K.E.F., "Dynamic Programming, 'Look-Ahead Programming,' and the Strategy of Insect Pest Control," The Canadian Entomologist, Vol. XCV (1963), pp. 525-536.

(85) Watt, K.E.F., Ecology and Resource Management, (New York: McGraw-Hill, 1968).

(86) Young, G.K., Jr., "Finding Reservoir Operating Rules," ASCE Journal of the Hydraulics Division, Vol. XCIII (Nov. 1967), pp. 297-321.

PART E

AGRICULTURAL ECONOMICS

(87) Burt, O.R., "Operations Research Techniques in Farm Management: Potential Contributions," Journal of Farm Economics, Vol. XLVII (1965), pp. 1418-1426.

(88) Burt, O.R., "Control Theory for Agricultural Policy: Methods and Problems in Operational Models," American Journal of Agricultural Economics, Vol. LI (1969), pp. 394-404.

(89) Burt, O.R. and J.R. Allison, "Farm Management Decisions with Dynamic Programming," Journal of Farm Economics, Vol. XLV (1963), pp. 121-136.

(90) Day, R.H., Recursive Programming and Production Response, (Amsterdam: North Holland Publishing Co., 1963).

(91) Day, R.H. and E.H. Tinney, "A Dynamic Von Thünen Model," Geographical Analysis, Vol. I (1969), pp. 137-151.

(92) Fox, K.A. (Chairman) "The Potential Role of Control Theory in Policy Formulation for the U.S. Agricultural Industry," American Journal of Agricultural Economics, Vol. LI (1969), pp. 383-409.

(93) Loftsgard, L.D. and E.O. Heady, "Application of Dynamic Programming Models for Optimal Farm and Home Plans," Journal of Farm Economics, Vol. XLI (1959), pp. 51-62.

(94) Tintner, G., "What Does Control Theory Have To Offer?" American Journal of Agricultural Economics, Vol. LI (1969), pp. 383-393.

PART F

MISCELLANEOUS REFERENCES

(95) Arrow, K.J., S. Karlin and H. Scarf, Studies in the Mathematical Theory of Inventory and Production, (Stanford, California: Stanford University Press, 1963).

(96) Bellman, R.E., "Bottleneck Problems, Functional Equations, and Dynamic Programming," Econometrica, Vol. XXIII (1955), pp. 73-87.

(97) Bellman, R. and R. Kalaba, "On kth Best Policies," Journal of the Society for Industrial and Applied Mathematics, Vol. VIII (1960), pp. 582-588.

(98) Dorfman, R., "An Economic Interpretation of Optimal Control Theory," Discussion Paper No. 54, Harvard Institute of Economic Research, Harvard University, Cambridge, Mass., November 1968.

(99) Emerson, M.J., "Dynamic Programming and Export Base Theory," Paper presented at the Eighth Annual Meeting, Western Regional Science Association, Feb., 1969.

(100) Iglehart, D.L., "Recent Results in Inventory Theory," Journal of Industrial Engineering, Vol. XVIII (1967), pp. 48-51.

(101) Murphy, Roy E., Adaptive Processes in Economic Systems, (New York: Academic Press, 1965).

(102) Scarf, H., D. Gilford and M. Shelly (eds.) Multistage Inventory Models and Techniques, (Stanford, California: Stanford University Press, 1963).

(103) Schlager, K.J., "A Land Use Plan Design Model," Journal of the American Institute of Planners, Vol. XXXI (1965), pp.

(104) Schlager, K.J., "A Recursive Programming Theory of the Residential Land Development Process," Highway Research Record No. 126, (Washington D.C.: Highway Research Board, 1966).

(105) Schlager, K.J., "Land-Use Planning Design Models," ASCE Journal of the Highway Division, Vol. XCIII, (1967), pp. 135-142.

(106) Scott, A.J., Combinatorial Programming, Spatial Analysis and Planning, (New York: Wiley, forthcoming).

(107) Southeastern Wisconsin Regional Planning Commission, "A Mathematical Approach to Urban Design," SWRPC Technical Report No. 3, Waukesha, Wisconsin, 1966.

(108) White, D.J., "Forecasts and Decisionmaking," Journal of Mathematical Analysis and Applications, Vol. XIV (1966), pp. 163-173.

(109) Ying, C. C., "Learning by Doing--An Adaptive Approach to Multiperiod Decisions," Operations Research, Vol. XV (1967), pp. 797-812.

REPORTS IN THIS SERIES

No.

1. L. S. Bourne and A. M. Baker, Urban Development in Ontario and Quebec: Outline and Overview, Sept. 1968 (Component Study 3).

2. L. S. Bourne and J. B. Davies, Behaviour of the Ontario-Quebec Urban System: City-Size Regularities, Sept. 1968 (Component Study 3).

3. T. Bunting and A. M. Baker, Structural Characteristics of the Ontario-Quebec Urban System, Sept. 1968 (Component Study 3).

4. S. Golant and L. S. Bourne, Growth Characteristics of the Ontario-Quebec Urban System, Sept. 1968 (Component Study 3).

5. L. S. Bourne, Trends in Urban Redevelopment, August, 1968 (Component Study 3).

App. Statistical Appendix, List of Cities and Urban Development Variables (L. S. Bourne) August, 1968 (Component Study 3).

6. J. W. Simmons, Flows in an Urban Area: A Synthesis, November, 1968 (Component Study 3b).

7. E. B. MacDougall, Farm Numbers in Ontario and Quebec: Analyses and Preliminary Forecasts, Sept. 1968 (Component Study 5).

8. G. T. McDonald, Trend Surface Analysis of Farm Size Patterns in Ontario and Quebec 1951 - 1961, Sept. 1968 (Component Study 5).

9. Gerald Hodge, Comparisons of Structure and Growth of Urban Areas in Canada and the U.S.A. February, 1969 (Component Study 4).

10. C. A. Maher and L. S. Bourne, Land Use Structure and City Size: An Ontario Example, January, 1969 (Component Study 3).

11. Günter Gad and Alan Baker, A Cartographic Summary of the Growth and Structure of the Cities of Central Canada, March, 1969 (Component Study 3).

12. Leslie Curry, Univariate Spatial Forecasting, July, 1969 (Component Study 3c).

13. Ross D. MacKinnon, Dynamic Programming and Geographical Systems: A Review, July, 1969 (Component Study 8).

14. L. S. Bourne, Forecasting Land Occupancy Changes Through Markovian Probability Matrices: A Central City Example, August, 1969 (Component Study 3).