Modelling, Simulation and Optimisation
of Network Planning Methods
A. SADEGHEIH
Department of Industrial Engineering
University of Yazd, P.O.Box: 89195-741
IRAN, YAZD
E-mail: [email protected]
Abstract: In this paper, three iterative improvement methods, simulated annealing, genetic algorithm and tabu search, are proposed to search for solutions to network planning problems.
Then, a brief summary of the heuristic methods, single-stage optimization methods, time-phased optimization methods, artificial intelligence techniques and iterative improvement methods is presented. Finally, the performance of each of the techniques is studied and the results compared
with those from conventional methods. Also, some of the important characteristics of network
planning methods and their strengths and weaknesses are identified and compared.
Key-words: Heuristic Methods, Tabu Search, Simulated Annealing, Genetic Algorithms, Artificial Intelligence Techniques, Iterative Improvement Methods.
1 Introduction
An iterative improvement method is a search
method that starts with an initial solution
and tries to improve this solution by ‘local
modification’.
In general, a characteristic of heuristic
techniques is that strictly speaking an
optimal solution is not sought, instead the
goal is a ‘good’ solution. Whilst this may be
seen as an advantage from the practical
point of view, it is a distinct disadvantage if
there are good alternative techniques that
target the optimal solution.
Heuristic methods are based on intuitive
analysis so they are relatively close to the
way that engineers think. They can give a
good design scheme based on experience
and analysis. However, they are not strict
mathematical optimization methods. In
contrast to mathematical methods, heuristic
methods can be considered to be custom-
made. Some of them help to simulate the
way a system planner employs analytical
tools such as load-flow programs and
reliability analysis involving simulations of
the planning process through automated
design logic. Heuristics are employed in two
basic situations: 1. when a problem does not
have an exact solution because of inherent
ambiguities in the problem statement or
available data, medical diagnosis is an
example of this; 2. when a problem has an
exact solution, but the computational cost of
finding it may be prohibitive, such as in
chess and production scheduling.
Unfortunately, like all rules of discovery and invention, heuristics are fallible. A heuristic is only an informed guess, often based on experience or intuition, at the next step to be taken in solving a problem.
When there is only limited information
heuristic search is often the only practical
answer. Heuristic search problems (HSP)
are often not easily described in a form that
leads to immediate mathematical derivations
of an optimal solution. Often heuristics are
developed by trial and error, in conjunction
with a number of reasonable
approximations, simplifications, reasonable
guesses, or domain-specific problem
knowledge.
Single-stage optimization methods can be
used for determining the optimal network
expansion from one stage to the next. But
they do not give the timing of the expansion.
The mathematical programming techniques used in single-stage optimization methods include linear programming, integer programming and non-linear programming.
Proceedings of the 8th WSEAS Int. Conference on Automatic Control, Modeling and Simulation, Prague, Czech Republic, March 12-14, 2006 (pp281-286)
A time-phased optimization method can
include inflation and interest rates, etc. in
the comparison of various network
expansion plans. Both integer programming
and dynamic programming optimization
methods have been used to solve the time-
phased network expansion models. Integer
programming has been applied by dividing a
given time horizon into numerous annual
sub-periods. Consequently, the objective
function in terms of present worth of a cost
function is minimized in order to determine
the capacity, location, and timing of new
facilities subject to defined constraints.
Artificial Intelligence is a way of making a
computer behave ‘intelligently’. This can be
accomplished by studying how people think
when they are trying to make decisions and
solve problems, breaking those thought
processes down into basic steps, and
designing a computer program that solves
problems using those same steps. AI thereby
provides a simple, structured approach to
designing complex decision making
programs. The goal of an AI system is to
analyze human behavior in the fields of
perception, comprehension, and decision
making in the ultimate hope of reproducing
the behavior on a machine, namely a
computer [1-4].
2 Linear, Non-Linear, Integer and Dynamic Programming
Mathematical programming (MP), and especially linear programming, is one of the best-developed and most widely used branches of operations research. It concerns the optimal
allocation of limited resources among
competing activities, under a set of
constraints imposed by the nature of the
problem being studied. These constraints
can reflect financial, technological,
marketing, organizational, or many other
considerations. In broad terms, mathematical
programming can be defined as a
mathematical representation aimed at
programming or planning the best possible
allocation of scarce resources. When the
mathematical representation uses linear
functions exclusively, it has a linear
programming model.
Linear programming has been used
successfully in the solution of problems
concerned with the assignment of personnel,
distribution and transportation, power
engineering, banking, education, petroleum,
social problems, etc.
Two categories of integer programming methods are reviewed here: 1. search methods; 2. cutting methods.
The most important search method is the
branch and bound technique which applies
directly to both the pure and mixed
problems. The general idea of the method is
first to solve the problem as a continuous
model.
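This idea of bounding each subproblem by its continuous relaxation can be sketched on a small 0/1 knapsack instance. The code below is an illustrative sketch, not from the paper; the greedy fractional bound plays the role of the continuous model.

```python
def knapsack_bb(values, weights, capacity):
    """Branch and bound for a 0/1 knapsack: each node is bounded by the
    continuous (fractional) relaxation, and branches that cannot beat
    the incumbent are pruned."""
    n = len(values)
    # Sort items by value density so the fractional relaxation is greedy.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]

    def bound(k, value, room):
        # Optimistic bound: fill the remaining room greedily,
        # allowing a fraction of the last item (the continuous model).
        for i in range(k, n):
            if w[i] <= room:
                room -= w[i]
                value += v[i]
            else:
                return value + v[i] * room / w[i]
        return value

    best = 0
    stack = [(0, 0, capacity)]  # (next item, value so far, remaining room)
    while stack:
        k, value, room = stack.pop()
        best = max(best, value)        # a partial selection is feasible
        if k == n or bound(k, value, room) <= best:
            continue                   # prune: bound cannot beat incumbent
        stack.append((k + 1, value, room))                    # exclude item k
        if w[k] <= room:
            stack.append((k + 1, value + v[k], room - w[k]))  # include item k
    return best

# Toy instance: the optimum takes the second and third items (100 + 120).
best_value = knapsack_bb([60, 100, 120], [10, 20, 30], 50)
```

Nodes whose relaxation bound cannot beat the best integer solution found so far are discarded, which is the pruning step of branch and bound.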
Cutting methods, which are developed
primarily for integer linear problems, start
with the continuous optimum. By
systematically adding special ‘secondary’
constraints, which essentially represent
necessary conditions for integrality, the
continuous solution space is gradually
modified until its continuous optimum
extreme point satisfies the integer
conditions. The name ‘cutting methods’
stems from the fact that the added
‘secondary’ constraints effectively cut
certain parts of the solution space that do not
contain feasible integer points.
A cutting-plane method does not partition the feasible region into sub-divisions, as branch and bound approaches do, but instead works with a single linear program, which is refined by adding new constraints until an integer solution is found.
Non-linear programming problems come in
many different shapes and forms. Unlike the
Simplex Method for linear programming,
there exists no single algorithm that will
solve all of them. Instead, algorithms have
been developed for various individual
special types of non-linear programming
problems. Two examples of non-linear
programming applied to single-stage network
programming are: 1. the gradient search
method; 2. quadratic programming.
The mathematical programming technique
used in the time-phased optimization
method is Dynamic Programming. Dynamic
programming is a computational technique
best suited to the optimization of sequential
or multi-stage decision making problems.
Dynamic programming converts such multi-
stage decision problems into a series of
single-stage decision problems, each with
one or a few decision variables. Then,
starting with the first stage, each stage is
optimized over possible alternative feasible
decisions within the stage, while taking into
consideration the cumulative effect of the
optimum decisions made in the previous
stages. The ultimate solution of the problem
is then generated from among the available
stage optima.
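The stage-by-stage recursion described above can be sketched as follows. This is an illustrative sketch, not from the paper; the states stand for hypothetical network configurations and each table gives the cost of moving between states across one stage.

```python
def staged_min_cost(stage_costs, start=0):
    """Dynamic programming over a multi-stage decision problem: each
    stage is optimised while carrying forward only the best cost of
    reaching each state (the cumulative effect of earlier decisions)."""
    best = {start: 0}              # best[s] = minimal cost to reach state s
    for costs in stage_costs:
        nxt = {}
        for i, ci in best.items():
            for j, cij in costs.get(i, {}).items():
                # Keep only the cheapest route into each next-stage state;
                # the principle of optimality lets us discard the rest.
                if j not in nxt or ci + cij < nxt[j]:
                    nxt[j] = ci + cij
        best = nxt
    return min(best.values())

# Hypothetical two-stage expansion: stage_costs[t][i][j] is the cost of
# moving from state i at stage t to state j at stage t+1.
plans = [{0: {0: 1, 1: 4}},
         {0: {0: 2}, 1: {0: 1}}]
total = staged_min_cost(plans)
```

The ultimate solution is assembled from the stage optima, exactly as the series of single-stage decisions described above.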
With mathematical optimization methods the weaknesses are that most require a large number of decision variables; most require long computation times; most allow no user interaction; the models are fixed by the program formulation; considerable effort and good mathematical knowledge are usually required for adaptation to specific applications; and most integer programming problems are such that the amount of work needed to produce a guaranteed optimal solution increases exponentially with the problem size. But one of the main advantages of the mathematical programming methods is that an optimal solution is guaranteed.
3 Expert System
An expert system (ES) differs from a
conventional computer program in two
essential ways. An expert system, at any
time, can explain its behavior to the human
expert and receive new pieces of
information from the human expert without
any new programming being required. There
is no doubt that expert systems have a major
role to play in transmission planning. But
creating a knowledge base for an expert
system is difficult if an application requires
a more complicated conflict resolution
mechanism, or when the rules are difficult to
obtain. With the development of expert
system theory and techniques, some new
expert system approaches to the network
programming have been proposed in recent
years [5-9]. The main advantage of the
expert system approach lies in its ability to
simulate the experience of planning experts
in a formal way. However, knowledge
acquisition is always a very difficult task in
applying this method. Moreover,
maintenance of the large knowledge base is
very difficult. Expert systems are not
appropriate for solving combinatorial search
problems, rather they are more useful for
analysis of models and their solutions.
4 Simulated Annealing, Genetic Algorithm and Tabu Search
The iterative improvement methods used for
network programming are principally
simulated annealing, genetic algorithms and
tabu search.
Combinatorial optimization can be viewed as a process analogous to physical annealing: the problem is to find, among a potentially very large number of solutions, a solution with minimal cost. By establishing a correspondence between the cost function and the free energy, and between the solutions and the physical states, one can introduce a solution method
the field of combinatorial optimization
based on a simulation of the physical
annealing process. The simulated annealing
(SA) technique was first introduced by
Kirkpatrick [10]. This idea was based on the
Metropolis Algorithm. Annealing is the
physical process of heating up a solid,
followed by cooling it down until it
crystallizes into a state with a perfect lattice.
During this process, the free energy of the
solid is minimized. Practice shows that the
cooling must be done carefully in order not
to get trapped in locally optimal lattice
structures with crystal imperfections.
Simulated annealing, from which the algorithm gets its name, works as follows:
i. The search is an analogy with the annealing process in metals.
ii. All moves that result in a reduction in cost are accepted.
iii. Moves that result in a cost increase are accepted with a probability that decreases with the increase in cost.
iv. A parameter T, called the temperature, is used to control the acceptance probability.
v. The acceptance probability is given by
Exp(−ΔC / T)   (1)
where ΔC is the cost increase.
vi. Higher values of T cause more such
moves to be accepted.
vii. T is gradually decreased so the cost
increasing moves have less chance of being
accepted.
viii. Ultimately, T is reduced to a very low value so that only moves causing a cost reduction are accepted, and the algorithm converges to a low cost configuration.
In simulated annealing, the cooling scheme plays an important part. If the temperature is reduced too rapidly, the search can become trapped in a local optimum and the probability of finding better solutions falls. On the other hand, the slower the temperature is reduced, the longer the algorithm will take to reach the final solution.
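Steps i-viii, together with a geometric cooling scheme, can be sketched as a minimal SA loop. The initial temperature, cooling factor and step budget below are illustrative values, not taken from the paper.

```python
import math
import random

def simulated_annealing(cost, neighbour, x0, t0=10.0, alpha=0.95,
                        steps=2000):
    """Minimal SA loop: accept every improving move, accept a worsening
    move with probability Exp(-dC/T), and cool T geometrically."""
    x, c = x0, cost(x0)
    best_x, best_c = x, c
    t = t0
    for _ in range(steps):
        y = neighbour(x)
        dc = cost(y) - c                     # dC: cost increase of the move
        if dc <= 0 or random.random() < math.exp(-dc / t):
            x, c = y, c + dc                 # accept the move
            if c < best_c:
                best_x, best_c = x, c
        t *= alpha                           # geometric cooling schedule
    return best_x, best_c

# Toy usage: minimise (x - 3)^2 over the integers by unit moves.
random.seed(1)
xbest, cbest = simulated_annealing(
    cost=lambda x: (x - 3) ** 2,
    neighbour=lambda x: x + random.choice((-1, 1)),
    x0=20)
```

Early in the run, when T is high, cost-increasing moves are frequently accepted; as T decays only improving moves survive, so the loop converges on the minimum.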
Genetic algorithms (GAs) are suitable for traversing large search spaces since they can do this relatively rapidly and because the mutation operator diverts the method away from local optima, which tend to become more common as the search space increases in size. Other key advantages of GAs are their generality and ease of applicability. Empirical analysis of GA performance, and of the effects of GA parameters on that performance, shows that GAs are a feasible, robust and practical engineering tool. They are therefore considered further in this paper for network programming, in response to the weaknesses seen in the mathematical methods conventionally applied to network programming.
A genetic algorithm works by:
i. Emulating the natural process of evolution.
ii. Using this emulation as a means of progressing toward the optimum solution.
iii. Starting with an initial set of random configurations, called the population (each individual in the population is a string of symbols, usually a binary bit string, representing a solution).
iv. In each iteration, called a generation, evaluating the individuals in the current population by a fitness value; fitter individuals have a higher probability of being selected.
v. Generating new individuals, called offspring, by using genetic operators (crossover, mutation and inversion).
vi. Evaluating the offspring next, and forming the new generation by selecting some of the parents and offspring, once again on the basis of their fitness.
A binary chromosome consists of binary
digits. A 1 or 0 in the string may, in a
Boolean scheme, correspond to whether
some condition is true or false, or bits may
be strung together to form binary words that
will translate either directly or indirectly into
continuous valued variables, as shown in
Figures 1 and 2 respectively.
1 0 1 1 1 1 0 1 1 1 0 0 0 0 0
Fig.1: Each bit in a binary chromosome may represent some Boolean condition

1111000 11110011 111111100 1111111000 1111111111111
Fig.2: Binary string representation of parameters
In this paper binary encoding is used. This
representation has several advantages. It is
simple to create and manipulate, many types
of information can be easily encoded and the
genetic operators are easy to apply. In this
paper, two parents are selected, and two
children are produced. For each bit position
on the two children, it is decided randomly
as to which parent contributes its bit value to
which child. Table 1 shows uniform
crossover in action. For each bit position in
the parents, a random binary template
indicates which parent will contribute its
value in that position to the first child. The
second child receives the bit value in that
position from the other parent.
Table 1: Uniform crossover
Parent-1        0 0 0 1 0 1 0 0 0 0
Parent-2        0 0 0 0 1 1 1 1 0 0
Binary Template 0 0 0 1 1 0 1 0 0 1
Child-1         0 0 0 1 0 1 0 0 0 0
Child-2         0 0 0 0 1 1 1 0 0 0
Uniform crossover generally works better than one-point and two-point crossover; empirically, it is shown to be more effective on a variety of function optimisation problems. In this paper one child is formed by taking a mixture of bits from its two parents according to a random bit string, where a '1' in the random bit string indicates that in that position the child inherits the corresponding bit from parent-1, whilst a '0' causes inheritance from parent-2.
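The template-driven scheme just described can be written directly. The function below is a sketch; when no template is supplied, a random one is drawn, as in the description above.

```python
import random

def uniform_crossover(parent1, parent2, template=None):
    """Uniform crossover: a '1' in the binary template makes child-1
    inherit that bit from parent-1 (and child-2 from parent-2); a '0'
    does the opposite."""
    if template is None:
        template = [random.randint(0, 1) for _ in parent1]
    child1 = [a if t else b for a, b, t in zip(parent1, parent2, template)]
    child2 = [b if t else a for a, b, t in zip(parent1, parent2, template)]
    return child1, child2

# With a fixed template the result is deterministic.
c1, c2 = uniform_crossover([1, 1, 1, 1], [0, 0, 0, 0],
                           template=[1, 0, 1, 0])
```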
Tabu search (TS) was developed by Glover
[11]. TS has now become an established
optimization approach that is rapidly
spreading to many new fields. For example,
successful applications of TS have been
reported recently in solving some power
system problems, such as hydro-thermal
scheduling, fault section estimation, alarm
processing and transmission network
planning [12].
Unlike GA and SA, which draw their analogy from nature, tabu search is characterised as follows:
i. It is a search technique concerned with imposing some restrictions to guide the search process.
ii. At each iteration, the neighbourhood of the current solution is explored and the best solution in the neighbourhood is selected as the new current solution.
iii. It differs from other local search techniques in that the procedure does not stop when no improvement is possible: the best solution in the neighbourhood is selected as the new current solution even if it is not better than the current one.
iv. This strategy allows escape from local optima and, consequently, exploration of a larger proportion of the solution space.
v. Another important aspect of TS is the Tabu-List, which stores characteristics of previous moves so that these characteristics can be used to classify certain moves as tabu (to be avoided) for a number of iterations.
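Steps i-v can be sketched as a minimal tabu search. For illustration the tabu list below stores visited solutions themselves rather than move characteristics, which is a simplification of the scheme described above; all parameter values are illustrative.

```python
def tabu_search(cost, neighbours, x0, tabu_len=5, iters=50):
    """Minimal tabu search: always move to the best non-tabu neighbour,
    even if it is worse than the current solution, and keep recently
    visited solutions on a fixed-length tabu list."""
    x = x0
    best_x, best_c = x, cost(x)
    tabu = [x]                          # recently visited solutions
    for _ in range(iters):
        candidates = [y for y in neighbours(x) if y not in tabu]
        if not candidates:
            break                       # every neighbour is tabu
        x = min(candidates, key=cost)   # best neighbour, even if worse
        if cost(x) < best_c:
            best_x, best_c = x, cost(x)
        tabu.append(x)
        if len(tabu) > tabu_len:
            tabu.pop(0)                 # oldest entry leaves the list
    return best_x, best_c

# Toy usage: the cost table has a local optimum at index 2; the tabu
# list stops the search from oscillating back, so it walks through the
# worse region and reaches the global optimum at index 9.
table = [5, 4, 3, 4, 5, 4, 3, 2, 1, 0, 1]
bx, bc = tabu_search(cost=lambda i: table[i],
                     neighbours=lambda i: [max(i - 1, 0), min(i + 1, 10)],
                     x0=0)
```

Because the current move is accepted even when it increases cost, the separately tracked best-so-far solution is what the procedure finally returns.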
Tabu search and simulated annealing have
been applied successfully to a great variety
of problems. Their key advantages are their
generality and ease of applicability. Another
advantage is their ability to escape local
optima. The effectiveness of tabu search
depends heavily on the way that the tabu list
is defined and manipulated. This is a
disadvantage of TS.
5 Conclusions
Genetic algorithm, simulated annealing and tabu search have four attributes as follows:
1. Simple model
2. Optimal solution targeted
3. Ability to escape local optima
4. Multi-solution capability.
Mathematical programming has two
attributes as follows:
1. Optimal solution guaranteed
2. Fixed program formulation.
Heuristics and Expert system have two
attributes as follows:
1. Good solution targeted
2. Application specific.
Table 2 summarises some of the important characteristics of the network planning methods.
Table 2: Increase of some attributes (each row lists the methods in increasing order of the attribute)
Level of mathematical knowledge required:       GA, HSP < SA, TS < ES < MP
Computational cost:                             GA < SA, TS, HSP < ES < MP
Number of decision variables and constraints:   GA < SA, TS, HSP < MP < ES
Difficulty in customization for different
applications:                                   GA < SA, TS, HSP, ES < MP
References
[1] A. Sadegheih, “Global optimisation using evolutionary algorithms, simulated annealing and tabu search”, WSEAS Transactions on Information Science and Applications, Vol. 1, Issue 6, 2004, pp. 1700-1706.
[2] A. Sadegheih, “Scheduling problem using genetic algorithm, simulated annealing and the effects of parameter values on GA performance”, Applied Mathematical Modelling, Vol. 30, No. 2, 2006, pp. 147-154.
[3] A. Sadegheih, “Empirical analysis of optimization techniques”, WSEAS Transactions on Mathematics, Vol. 1, Issues 1-4, 2002, pp. 7-12.
[4] A. Sadegheih, “Simulated annealing in large scale system”, WSEAS Transactions on Circuits and Systems, Vol. 3, No. 1, 2004, pp. 116-122.
[5] L. Dubost and A. Hertz, “Expert systems as
network control support tools”, in Proc. Second
Symp. Expert Systems Application to Power
Systems, Seattle, WA, pp. 28-32, 1989.
[6] R. D. Christie and S. Talukdar, “Expert
systems for on line security assessment: A
preliminary design”, IEEE Trans. Power Syst.,
1988, pp. 654-659.
[7] B. P. Lam, W. E. Kazibwe, N. D. Reppen,
and G. W. Woodzell, “An investigation of expert
system applications to contingency selection and
analysis”, in Proc. Second Symp. Expert
Systems Application to Power Systems, Seattle,
WA, 1989, pp. 165-169.
[8] K. Vu, T-K. Ma, R. Fischl, and C.-C. Liu,
“An expert system for voltage security
assessment”, in Proc. First Symp. Expert
Systems Application to Power Systems,
Stockholm-Helsinki, 1988, pp. 1-6.
[9] S. J. Cheng, O. P. Malik, and G. S. Hope,
“An expert system for voltage and reactive
power control of a power system”, IEEE Trans.
Power Syst., Vol. PWRS-3, 1988, pp. 1449-
1455.
[10] S. Kirkpatrick, C. D. Gelatt and M. P. Vecchi, “Optimization by simulated annealing”, Science, Vol. 220, No. 4598, 1983, pp. 671-680.
[11] F. Glover, “Future paths for integer
programming and links to artificial intelligence”,
Computers and Operations Research, Vol. 13,
1986, pp. 533-549.
[12] A. Sadegheih, “Optimisation using SA and EA”, Far East Journal of Mathematical Sciences, Vol. 10, No. 3, 2003, pp. 247-254.