
Page 1: [IEEE NAFIPS 2009 - 2009 Annual Meeting of the North American Fuzzy Information Processing Society - Cincinnati, OH, USA (2009.06.14-2009.06.17)] NAFIPS 2009 - 2009 Annual Meeting

Generation of Optimal Functions Using Particle Swarm Method over Discrete Intervals

Frederick Shamieh Dept. of Mechanical, Materials and Aerospace Engineering

University of Central Florida Orlando, FL, United States

Abstract – Particle swarm optimization is a computational learning technique designed to find a globally optimal solution on or within a function. The output, usually singular, is characteristically accurate because the system maintains a balance of convergence and sample diversity. This paper introduces a multi-level evaluation approach to particle swarm optimization that generates a solution function: the simultaneous assessment of multiple variables is replaced with sequential assessment of a repeated variable over intervals, the results of which are pieced together to form the framework of an optimized function.

I. Introduction

In Particle Swarm Optimization (PSO), natural social principles are mimicked in an evolutionary algorithm to create a sense of intelligence. The area was pioneered by James Kennedy and Russell Eberhart in 1995 in an attempt to create a new optimization method and improve on existing neural network structures, expediting calculation and alleviating system complexity [1,2].

The search for an optimal answer can be compared to the movement of a point within a pre-defined space. Each variable under evaluation is represented by a dimension of the space containing the single point that represents a solution to a function of the variables. Achieving a maximum or minimum value for this point designates a location that intersects all dimensions at the respective values that satisfy the output. Using PSO, multiple points, or particles, can be distributed around the problem space and forced to travel in paths dictated by the best-fit output. The simultaneous shift in position of all particles is an iterative process that repeats until a predetermined satisfaction criterion is met.

The design of an optimization algorithm is generally targeted toward finding a single optimal solution to a given problem; the problem is often in the form of a function or state, while the solution is commonly expressed as a value or a location within the search domain [3].

Chengying "Cheryl" Xu, Ph.D.*

Dept. of Mechanical, Materials and Aerospace Engineering University of Central Florida Orlando, FL, United States

In this paper, the Particle Swarm Optimization over Discrete Intervals (PSODI) technique is proposed. The theory and justification behind the new method are explained in Section II. Simulations and examples are provided in Section III to validate the proposed method. Lastly, findings and possible applications are presented in the conclusion, Section IV.

II. Proposed Application of PSO

The traditional application of the PSO technique is the definition of variables to optimize a function output, and multiple variables are often solved for simultaneously, i.e., the solution for an optimal P(x,y) is represented by a single point determined in a two-dimensional problem space. Coincident evaluation of more variables than physically constructible three-dimensional space allows can be explored under the assumption that the problem space is ideally Euclidean.

A second-level PSO application can be defined as the definition of variables within a function to optimize the output of a function in which it is embedded, e.g., if P(y) and y(x), then P can be optimized by solving for x using the given function y(x). This notion can be expanded to demonstrate that if a function is composed of functions that share common variables, the definition of the underlying dependencies can be used to optimize the top-level output. The entire procedure can be summarized in the six-step process illustrated in Fig. 1. The process begins with the initialization of possible solutions represented by particles in space. Step two is communication between the particles to determine the best solution found among the samples. Next, depending upon the distance to the best possible solution, the weight, current velocity and projected direction of movement are altered. Once all convergence criteria are updated, the particles move to create the next iteration. The entire system is then checked against design parameters to determine whether the current best solution is accurate enough and the grouping of the particles around that solution is dense enough.

Corresponding Author: Dr. Chengying Xu

978-1-4244-4577-6/09/$25.00 ©2009 IEEE

The 28th North American Fuzzy Information Processing Society Annual Conference (NAFIPS 2009), Cincinnati, Ohio, USA, June 14-17, 2009


If the required stopping condition is met, the process is terminated; if not, the system iterates until the conditions are satisfied. Because this research is focused on the application of the PSO method, the details of each individual step will not be scrutinized.

Figure 1. Simplified PSO Logic

Using the previous example, let us replace the value x with a continuous linear function and select a finite number of points along x at which to evaluate y(x). Let us also maintain the condition that the desired value of P can be converged upon within the problem space as a minimum, maximum or a certain given value. Lastly, the stipulation that y(x) is known shall be removed. The result is a known x, a known P and a function that can be evaluated over known intervals to satisfy P. Each evaluation of y(x) at a given interval of x can be assessed using the PSO method; each isolated value of x is represented by a dimension through which y(x) contains all solutions. If done sequentially, the reconstructed values of y(x) can be consecutively arranged to create the points from which a function can be derived.

The basic PSO algorithm advances particle position using (1) and (2), where i is the particle index, k is the time index, v is the velocity of the particle, x is the position of the particle, p is the particle best solution and G is the global best solution [4,5].

vi(k+1) = vi(k) + α1[pi − xi(k)] + α2[G − xi(k)] (1)

xi(k+1) = xi(k) + vi(k+1) (2)

vi(k+1) = φ(k)vi(k) + α1γ1i[pi − xi(k)] + α2γ2i[G − xi(k)] (3)

The common velocity algorithm then becomes (3), where γ is a randomly generated number on the interval [0,1], φ represents an inertia function and α is an acceleration constant [4,5]. To reduce difficulty in tracking, particles and their movement will reference G through current proximity as opposed to a global coordinate system. Ideally, the basic PSO algorithm will be used across multiple dimensions, each of which attempts to optimize a function that represents all possibilities for the variable in question over a given interval.
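As a concrete illustration, the velocity and position updates of (2) and (3) can be sketched in a few lines of Python; the function name, the constant inertia weight and the parameter values below are illustrative assumptions, not taken from the paper.

```python
import random

def pso_step(x, v, p_best, g_best, phi=0.7, alpha1=1.5, alpha2=1.5):
    """One iteration of (2) and (3) for a swarm of one-dimensional particles.

    x, v       -- current positions and velocities, one entry per particle
    p_best     -- each particle's best position found so far (p_i)
    g_best     -- the swarm's global best position (G)
    phi        -- inertia weight, phi(k) in (3), held constant here
    alpha1, 2  -- acceleration constants
    """
    new_x, new_v = [], []
    for xi, vi, pi in zip(x, v, p_best):
        g1, g2 = random.random(), random.random()   # gamma_1i, gamma_2i on [0, 1]
        vi_next = (phi * vi
                   + alpha1 * g1 * (pi - xi)        # pull toward particle best
                   + alpha2 * g2 * (g_best - xi))   # pull toward global best, (3)
        new_v.append(vi_next)
        new_x.append(xi + vi_next)                  # position update, (2)
    return new_x, new_v
```

Each call advances the whole swarm by one time step k; an outer loop would re-evaluate the objective, update p_best and g_best, and repeat until the stopping condition of Fig. 1 is met.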

Because measurements of distance are unaffected by the quantity of dimensions in Euclidean space, the number of variables considered can be increased to enhance accuracy. The process and its proposed precision augmentation inversely parallel the theory behind a Riemann sum: a Riemann sum constructs intervals fitted to a set function in order to approximate a desired output, while this application of PSO constructs functions fitted to a desired output over defined, set intervals. The use of PSO to construct a function from points solved for over isolated regions is termed Particle Swarm Optimization over Discrete Intervals.

III. Results and Discussion

To demonstrate and assess the effectiveness of the application, the PSODI methodology will be applied to a path-finding problem; the objective is to generate a function from outputs over discrete intervals. Two points in two-dimensional space are first separated by a given distance, and the function between them is defined as y(x). By segmenting x, solving for y(x) over each interval, sequentially adding the y(x) values and evaluating their sum in a function termed L, a function representing the shortest distance can be solved for.

Points are created at coordinates (0,5) and (1,5), and the distance between the points is discretely divided along the x direction into two equal intervals. With the end conditions fixed, the position y(x) at the interval boundary will be evaluated twice, at the end of the first interval and at the beginning of the second, under the assumption that the shortest possible distance may in fact be discontinuous. Two intervals imply two degrees of freedom (DOF) in a single dimension. Working within a set range and using five solution outputs, initial estimates at the interval boundary are shown in Fig. 2. This graphical interpretation of the solution outputs is termed solution space.


Figure 2. Two DOF Solution Space.

Each plot point represents a possible y(x) value at the interval boundary. The y(x) value within the first interval itself is assumed to be the average of y(x)n and y(x)n-1, in this case using the coordinate (0,5). Similarly, the second interval is evaluated using coordinate (1,5) and an interval boundary value. Plotting the average values over interval 1 for the five solution outputs against the corresponding values over interval 2 generates a mesh of nodes, demonstrated in Fig. 3, in what is deemed problem space.
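The averaging step above can be sketched directly; the five candidate boundary values and the variable names are illustrative assumptions, not values from the paper.

```python
# End conditions (0,5) and (1,5); each interval's representative y-value is
# taken as the average of its two endpoint values, as described in the text.
candidates = [1.0, 3.0, 5.0, 7.0, 9.0]          # five solution outputs

avgs1 = [(5.0 + c) / 2.0 for c in candidates]   # interval 1: (0,5) to boundary
avgs2 = [(c + 5.0) / 2.0 for c in candidates]   # interval 2: boundary to (1,5)

# Cross-plotting interval-1 averages against interval-2 averages gives the
# 5 x 5 mesh of nodes that forms the problem space of Fig. 3.
mesh = [(a1, a2) for a1 in avgs1 for a2 in avgs2]
```

Because the boundary is evaluated twice, the two averages vary independently, which is what produces a full mesh rather than a single curve of nodes.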

Figure 3. Two DOF Problem Space.

The PSO technique can now be applied to the problem with the average value of each interval viewed as an independent variable. Convergence of the nodes to an optimal solution in problem space can be seen in Fig. 4. Due to the simplicity of the example, each iteration is computed almost instantaneously and the total simulation time is insignificant. It should be noted that increasing the complexity of the investigated function, the number of nodes or the number of intervals would increase the computation time.

Figure 4. Nodal Convergence.

The implication of the nodal movement is portrayed in the solution space, Fig. 5.

Figure 5. Solution Convergence.

This data implies that the function y(x) passing through coordinates (0,5) and (1,5) with the shortest possible distance also intersects the coordinate (0.5,5). Combining the boundary criteria with the output data confirms that the shortest possible line is given by (4).

y(x) = 5 (4)

An increase in the number of intervals suggests a higher-resolution solution. Every additional interval is viewed as another variable in the PSO method, and thus another dimension in problem space. The previous problem is solved a second time with three intervals, leading to the three-DOF problem space and its convergence on the same solution depicted in Fig. 6.


Figure 6. Three DOF Problem Space.

Presuming the problem space is Euclidean, taking the concept further does not require depicting the problem space graphically; solution space is sufficient for function generation. Figure 7 demonstrates a near-identical problem with end constraints updated from (0,5) and (1,5) to (0,0) and (600,0), solved using six discrete intervals along the x-axis. As PSO is an evolutionary algorithm, the convergence factors were updated such that convergence within a rough realm of accuracy was accomplished after only 8 iterations, and a near-exact solution after 10 iterations.

Figure 7. Convergence after 8 iterations

To demonstrate the versatility of the intended application, one of the points was relocated off the straight horizontal line. With the aid of a regression analysis and the data shown in Fig. 8, the accurate result was reproduced as (5).

y(x) = 5x/6 (5)

Figure 8. Updated Function Creation

Once convergence and accuracy were confirmed, the concept of constraint was addressed. Mathematical systems often require limitations that represent physical impossibilities or boundary conditions. These constraints are mimicked in the path-finding application with the introduction of boundaries and obstructions within the shortest possible distance between points. Returning to the system of (0,0) and (600,0), an inconsistency was constructed in the shape of a rectangle between (250,-400) and (350,250). Logic leads to the assumption that the shortest path between the two points would be located on the positive side of the y-axis, although a possible path on the negative side does exist. The same PSODI algorithm used in the previous example yielded the results displayed in Fig. 9.

Figure 9. Single Obstruction


These points can simply be strung together to create a function that satisfies the system constraints as well as the objective. It should be noted that the width of the obstruction was ignored by the algorithm: although the shortest possible path was determined, the generated function would intersect the obstruction in two places. To rectify this problem, more discrete intervals are required to improve the resolution of the data output. The accuracy of the PSODI method and the same lack of resolution are both demonstrated with the addition of more obstructions in Fig. 10. Inconsistencies were added to the second and fourth intervals in the form of minimums, while a third inconsistency was added to the third interval in the form of a maximum.

Figure 10. Multiple Obstructions

IV. Conclusion and Future Work

The PSODI technique has proved to be an effective method for generating optimized functions based on upper-level criteria. It can be implemented in any case that concerns optimized functions of multiple functions sharing common variables, or of multiple variables in which one may be expressed as a function of another. The concept of PSODI does not define an application, nor is it constricted to a single set of possible purposes. The type of PSO, the efficiency of convergence and the method of regression are all modifiable based upon necessity. The process is relevant to the engineering industry in any optimization context. Mathematically, the upper-level criteria that must be satisfied can include analysis of the generated curve itself. It is in this manner that PSODI, with finite boundary conditions and defined inconsistencies, can be applied to the reduction or maximization of the area under the curve. Proposed applications of this effort include the optimization of work done, energy wasted and cost, among other quantities evaluated over time.

References

[1] Kennedy, J. and Eberhart, R., "Particle Swarm Optimization," Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, 1995, pp. 1942-1948.

[2] Eberhart, R. and Kennedy, J., "A New Optimizer Using Particle Swarm Theory," Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, October 1995.

[3] Passaro, A. and Starita, A., "Particle Swarm Optimization for Multimodal Functions: A Clustering Approach," Journal of Artificial Evolution and Applications, 2008.

[4] Eberhart, R., Simpson, P. and Dobbins, R., Computational Intelligence PC Tools, Academic Press, Inc., 1996, pp. 212-223.

[5] Houck, C., Joines, J. and Kay, M., "A Genetic Algorithm for Function Optimization: A Matlab Implementation," submitted to ACM Transactions on Mathematical Software, 1996.
