Particle Swarm Optimization (PSO) and its applications

Upload: rishijhajsr

Post on 21-Nov-2014


TRANSCRIPT

Page 1: PSO

Particle Swarm Optimization (PSO) … and its applications

Page 2: PSO

The Inventors (1)

Prof. Russell Eberhart
Purdue School of Engg. & Tech.
[email protected]

Page 3: PSO

James Kennedy
[email protected]

Graduated in Psychology.

Not at all a professional researcher.

The Inventors (2)

Page 4: PSO

PSO emulates the swarm behavior of insects, animals herding, birds flocking and fish schooling where these swarms search for food in a collaborative manner.

Each member in the swarm adapts its search patterns by learning from its own experience and other members’ experience.

Particle Swarm Optimization

Page 5: PSO

Comparison with other Evolutionary Computation Techniques

• Unlike genetic algorithms, PSO has no selection operation: all particles are kept as members of the population through the course of the run.

• PSO does not implement survival of the fittest.

• There is no crossover operation in PSO.

• Velocity updating resembles mutation in EP.

• The balance between global and local search is achieved by using an inertia weight factor (w) in the velocity-updating equation.

Page 6: PSO

• Each candidate solution is called a PARTICLE and represents one individual of the population.

• The population is a set of vectors and is called a SWARM.

• The particles change their components and move (fly) in the search space.

• They can evaluate their actual position using the function to be optimized, called the FITNESS FUNCTION.

• Particles also compare themselves to their neighbors and imitate the best of those neighbors.

Particle Swarm Optimization

Page 7: PSO

Search space D-Dimensional

Xi = [xi1, …, xiD]^T : position of the ith particle of the swarm

Vi = [vi1, …, viD]^T : velocity of the ith particle

Pi = [pi1, …, piD]^T : best previous position of the ith particle

Initialization
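The initialization above can be sketched in code. A minimal sketch, assuming a D-dimensional search space bounded by xmin and xmax and a velocity cap vmax (the function name and bounds are illustrative, not from the slides):

```python
import random

def initialize_swarm(num_particles, dim, xmin, xmax, vmax):
    """Create random positions X, velocities V, and best-so-far positions P."""
    X = [[random.uniform(xmin, xmax) for _ in range(dim)]
         for _ in range(num_particles)]
    V = [[random.uniform(-vmax, vmax) for _ in range(dim)]
         for _ in range(num_particles)]
    # Each particle's best previous position starts at its initial position.
    P = [list(x) for x in X]
    return X, V, P

X, V, P = initialize_swarm(num_particles=5, dim=3, xmin=-400, xmax=400, vmax=5)
```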

Page 8: PSO

• Swarm of particles is flying through the parameter space and searching for optimum

• Each particle is characterized by
  – Position vector xi(t)
  – Velocity vector vi(t)

[Figure: particle i with position xi(t) and velocity vi(t) shown on a surface plot of the fitness function]

PSO Contd …

Page 9: PSO

Swarming:

Page 10: PSO

Bingo !

Swarming Contd …

Page 11: PSO

Why Social Insects?

Problem-solving benefits include:

– Flexible

– Robust

– Self-organized

– Having no clear leader

– Can use past memory

– Swarming nature

– Colonial life

Page 12: PSO

• Each particle has
  – Individual knowledge, pbest: its own best-so-far position

[Figure: particle i with xi(t), vi(t), and pbest]

General PSO Algorithm:

Page 13: PSO

• Each particle has
  – Individual knowledge, pbest: its own best-so-far position
  – Social knowledge, gbest: the pbest of its best neighbor

[Figure: particle i with xi(t), vi(t), pbest, and gbest]

General PSO Algorithm Contd …

Page 14: PSO

• Velocity update:

vi(t+1) = w × vi(t)
        + c1 × rand × (pbest_i(t) − xi(t))
        + c2 × rand × (gbest(t) − xi(t))

[Figure: particle i at xi(t) with velocity vi(t), relative to pbest_i(t) and gbest(t)]

General PSO Algorithm Contd …


Page 17: PSO

• Velocity update:

vi(t+1) = w × vi(t)
        + c1 × rand × (pbest_i(t) − xi(t))
        + c2 × rand × (gbest(t) − xi(t))

• Position update:

xi(t+1) = xi(t) + vi(t+1)

[Figure: particle i at xi(t) with velocity vi(t), relative to pbest_i(t) and gbest(t)]

General PSO Algorithm Contd …

Page 18: PSO

• Velocity update:

vi(t+1) = w × vi(t)
        + c1 × rand × (pbest_i(t) − xi(t))
        + c2 × rand × (gbest(t) − xi(t))

• Position update:

xi(t+1) = xi(t) + vi(t+1)

where

w > (1/2)(c1 + c2) − 1 and 0 < w < 1

[Figure: particle i moving from xi(t) to xi(t+1) via vi(t+1), relative to pbest_i(t) and gbest(t)]

General PSO Algorithm Contd …
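The velocity and position updates above can be sketched for a single particle. A minimal sketch; the parameter values are illustrative defaults chosen to satisfy the slide's stability condition, not values from the slides:

```python
import random

def update_particle(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO step for a single particle: velocity update, then position update."""
    new_v, new_x = [], []
    for d in range(len(x)):
        vd = (w * v[d]
              + c1 * random.random() * (pbest[d] - x[d])   # cognitive term
              + c2 * random.random() * (gbest[d] - x[d]))  # social term
        new_v.append(vd)
        new_x.append(x[d] + vd)                            # xi(t+1) = xi(t) + vi(t+1)
    return new_x, new_v

# The slide's condition: w > (1/2)(c1 + c2) - 1 and 0 < w < 1
w, c1, c2 = 0.7, 1.5, 1.5
assert w > 0.5 * (c1 + c2) - 1 and 0 < w < 1
```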

Page 19: PSO

Start

Initialize particles with random position and velocity vectors.

For each particle's position (p), evaluate fitness.

If fitness(p) is better than fitness(pbest), then pbest = p.
    (Loop until all particles are exhausted.)

Set the best of the pbests as gbest.

Update each particle's velocity and position.
    (Loop until max. iterations.)

Stop: giving gbest, the optimal solution.

Flow chart depicting the General PSO Algorithm:
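The flow chart above can be sketched end-to-end. A minimal sketch that minimizes a sphere function; the test function, bounds, and parameter values are illustrative assumptions, not from the slides:

```python
import random

def pso(fitness, dim, bounds, num_particles=20, max_iter=100,
        w=0.7, c1=1.5, c2=1.5, vmax=1.0):
    """Minimize `fitness` following the flow chart: initialize, evaluate,
    update pbest/gbest, update velocities and positions, loop to max_iter."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(num_particles)]
    V = [[random.uniform(-vmax, vmax) for _ in range(dim)] for _ in range(num_particles)]
    P = [list(x) for x in X]                       # pbest positions
    pbest_fit = [fitness(x) for x in X]
    g = min(range(num_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = list(P[g]), pbest_fit[g]    # best of pbests
    for _ in range(max_iter):
        for i in range(num_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (P[i][d] - X[i][d])
                           + c2 * random.random() * (gbest[d] - X[i][d]))
                V[i][d] = max(-vmax, min(vmax, V[i][d]))  # clamp to Vmax
                X[i][d] += V[i][d]
            f = fitness(X[i])
            if f < pbest_fit[i]:                   # pbest update
                pbest_fit[i], P[i] = f, list(X[i])
                if f < gbest_fit:                  # gbest update
                    gbest_fit, gbest = f, list(X[i])
    return gbest, gbest_fit

best, best_fit = pso(lambda x: sum(xi * xi for xi in x), dim=2, bounds=(-5, 5))
```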

Page 20: PSO

Maximum Velocity?

• Velocity must be limited to prevent swarm explosion.

• Vmax: if a particle's velocity exceeds Vmax (or falls below −Vmax), it is set to Vmax (or −Vmax).

• Vmax is the saturation point of the velocity.

Comments on the inertia weight factor: a large inertia weight (w) facilitates a global search, while a small inertia weight facilitates a local search. Linearly decreasing the inertia weight from a relatively large value to a small value over the course of the PSO run gives better performance than fixed inertia weight settings.

Larger w → greater global search ability.
Smaller w → greater local search ability.

Page 21: PSO

PSO with linearly decreasing inertia weight, w

The linearly decreasing inertia weight w is given by

w(k) = w0 − (w0 − w1) × (k / I)

where

w0 = 0.9 (initial inertia weight)
w1 = 0.4 (final inertia weight)
I = max. no. of iterations
k = search (iteration) number

Proposed by Shi and Eberhart.
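The linearly decreasing schedule can be sketched directly from the formula (the function name is illustrative):

```python
def inertia_weight(k, max_iter, w0=0.9, w1=0.4):
    """Shi-Eberhart linearly decreasing inertia: w(k) = w0 - (w0 - w1) * k / I."""
    return w0 - (w0 - w1) * k / max_iter
```

At the start of the run this returns 0.9 (strong global search) and it decays linearly to 0.4 at the final iteration (strong local search).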

Page 22: PSO

PSO with constriction factor

Introduced by Clerc and Kennedy.

The expression for velocity has been modified as

Vi(d) = K × [Vi(d) + c1 × rand() × (pi(d) − xi(d)) + c2 × rand() × (pg(d) − xi(d))]

where

K = 2 / |2 − φ − sqrt(φ² − 4φ)|,  φ = c1 + c2,  φ > 4.

φ is set to 4.1, which gives c1 = c2 = 2.05 and K = 0.729.

As a result, each of the (pi − xi) terms is multiplied by 0.729 × 2.05 = 1.49445.

The constriction factor guarantees convergence and improves the convergence speed.

Page 23: PSO

Repulsive PSO

RPSO is a global optimization algorithm belonging to the class of stochastic evolutionary global optimizers; it is a variant of particle swarm optimization (PSO).

The velocity update is

v(k+1) = ω·v(k) + a·R1·(x̂ − x(k)) + ω·b·R2·(ŷ − x(k)) + ω·c·R3·z

where

R1, R2, R3 : random numbers between 0 and 1
ω : inertia weight between 0.01 and 0.7
x̂ : best position of a particle
ŷ : best position of a randomly chosen other particle from within the swarm
z : a random velocity vector
a, b, c : constants
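The RPSO velocity rule described above can be sketched per dimension. A minimal sketch; the default values of w, a, b, c are illustrative placeholders (the slides give no specific values, and sign conventions for the repulsion constants vary):

```python
import random

def rpso_velocity(v, x, xhat, yhat, z, w=0.5, a=1.0, b=1.0, c=1.0):
    """RPSO update: v' = w*v + a*R1*(xhat - x) + w*b*R2*(yhat - x) + w*c*R3*z.
    xhat: this particle's best position; yhat: best position of a randomly
    chosen other particle; z: a random velocity vector."""
    r1, r2, r3 = random.random(), random.random(), random.random()
    return [w * v[d]
            + a * r1 * (xhat[d] - x[d])
            + w * b * r2 * (yhat[d] - x[d])
            + w * c * r3 * z[d]
            for d in range(len(x))]
```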

Page 24: PSO

Comprehensive Learning PSO

Overcomes premature convergence.

Conventional PSO:
– Learns simultaneously from pbest and gbest.
– Possibility of trapping in local minima.

CLPSO:
– Learns from the gbest of the swarm, the pbest of the particle, and the pbest of all other particles at different dimensions.

– If D is the total number of dimensions of the optimization problem, then m dimensions are randomly chosen to learn from the gbest; out of the remaining D − m dimensions, some learn from a randomly selected particle's pbest and the remaining dimensions learn from its own pbest.

Page 25: PSO

Initialize all particle velocities and positions randomly;
Set the initial positions as pbest (p_i,d) and find gbest (p_g,d);
For n = 1 to the max. number of generations
  For i = 1 to the population size
    For d = 1 to the problem dimensionality
      Update the velocity according to the following equations:
      If a_i(d) == 1
        V_i,d = w·V_i,d + rand·(gbest,d − X_i,d)
      Else if b_i(d) == 1
        V_i,d = w·V_i,d + rand·(pbest_f,d − X_i,d)
      Else
        V_i,d = w·V_i,d + rand·(pbest_i,d − X_i,d)

Page 26: PSO

where a_i(d) and b_i(d) are the deciders of the randomly chosen dimensions for learning from random particles; gbest is the global best value, pbest_i,d is the personal best of the ith particle in the dth dimension, and pbest_f,d is the personal best value of the fth particle in the dth dimension.

      End if
      Limit the velocity
      Update the position according to the equation X_i,d = X_i,d + V_i,d
    End-of-d;
    Compute the fitness of the particle;
    Update historical information regarding pbest_i and gbest if needed;
  End-of-i;
  Terminate if the requirements are met;
End-of-n;
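The per-dimension CLPSO velocity update described above can be sketched as follows. A minimal sketch; the deciders a and b are passed as 0/1 lists, and parameter names are illustrative:

```python
import random

def clpso_velocity(v, x, gbest, pbest_self, pbest_other, a, b, w=0.7):
    """CLPSO update: dimension d learns from gbest if a[d] == 1, from another
    particle's pbest if b[d] == 1, otherwise from this particle's own pbest."""
    new_v = []
    for d in range(len(x)):
        if a[d]:
            target = gbest[d]
        elif b[d]:
            target = pbest_other[d]
        else:
            target = pbest_self[d]
        new_v.append(w * v[d] + random.random() * (target - x[d]))
    return new_v
```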

Page 27: PSO

The best features of PSO and Clonal Selection Principle of Artificial Immune System are combined.

The local and global best positions of particles are not directly used in deciding their new positions. Rather velocity of each particle is updated from the knowledge of the previous velocity and the variable inertia weights.

The location of each particle is then changed in the next search using its updated velocity information. The fitness values of all particles are then evaluated and the overall best location is selected. In the next search, each particle, following the clonal selection principle, occupies the best position achieved so far, and then the particles diversify. As the search process continues, all particles proceed towards the best possible location, which then represents the desired solution.

Clonal PSO

Page 28: PSO

The velocity update equation of CPSO becomes

v_id(k+1) = H(k) × v_id(k)

Position update: x_id(k+1) = x_id(k) + v_id(k+1)

where

H(k) = H0 − (H0 − H1) × (k / I)

H0 = 0.9 (initial inertia weight)
H1 = 0.4 (final inertia weight)
I = max. no. of iterations
k = search number
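The CPSO update on this slide, unlike standard PSO, moves particles using only the decayed previous velocity (the pbest/gbest influence enters through the clonal selection step described earlier). A minimal sketch of one step (the function name is illustrative):

```python
def cpso_step(v, x, k, max_iter, H0=0.9, H1=0.4):
    """Clonal PSO step: v(k+1) = H(k) * v(k), then x(k+1) = x(k) + v(k+1),
    with H(k) decaying linearly from H0 to H1 over the run."""
    H = H0 - (H0 - H1) * k / max_iter
    new_v = [H * vd for vd in v]
    new_x = [xd + vd for xd, vd in zip(x, new_v)]
    return new_x, new_v
```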

Page 29: PSO

Comparison of CPU time used during training operation

Nonlinear system              PSO (in seconds)   GA (in seconds)
Example-1 trained at -30 dB   0.3200             13.5090
Example-1 trained at -10 dB   0.3300             12.4080
Example-2 trained at -30 dB   0.3500             12.7990
Example-2 trained at -10 dB   0.3700             13.8390

Page 30: PSO

Conclusion

The PSO based nonlinear system identification involves :

(1) Low computational complexity.

(2) Faster convergence during the training operation compared to the GA-based approach.

(3) The present method avoids local minima during the training of weights (derivative-free training).