
A Modified Particle Swarm Optimization Algorithm And Its Applications In Discrete Problems

Posted on: 2006-12-11    Degree: Master    Type: Thesis
Country: China    Candidate: R Gao    Full Text: PDF
GTID: 2168360155952940    Subject: Computer application technology
Abstract/Summary:
1 A Modified PSO Adapted to Discrete NP Problems

1.1 Fundamental Idea

At present, the algorithms that solve discrete problems with PSO can be counted on one's fingers. Each of them merely amends one local part of PSO according to the characteristics of a particular problem, so they are not representative: they apply only to those special problems and cannot be extended to other discrete problems. For this reason, in the first chapter I propose a modified PSO in which every step ultimately reduces to numerical computation:

· Keep the basic update formulas of PSO, so that the computation ultimately reduces to numerical operations.
· Define the coding of particles and velocities according to the characteristics of each problem.
· Define the encoding, i.e. the transformation from a particle to a solution of the problem.
· Redefine the operator rules while preserving them as mathematical operators.
· Define the objective function and the fitness function as numerical functions.
· Preserve the operation flow of the original PSO.

The modified PSO is easily extended to other discrete problems and provides a new method for solving discrete problems in the future.

1.2 Fundamental Process of the Modified PSO

Step 1. Initialize the particle swarm: randomly set the initial positions and velocities following their coding definitions.
Step 2. Encode each particle following its encoding definition (i.e. map it to a candidate solution) and calculate its fitness.
Step 3. For each particle, compare its present fitness with that of its historical best position Pi; if better, update Pi.
Step 4. For each particle, compare its present fitness with that of the swarm's historical best position Pg; if better, update Pg, set w = 0 and y = y + 1, and if y = y1 go to Step 12; otherwise set y = 0 and go to Step 7.
Step 5. Adjust positions and velocities following the redefined operator rules.
Step 6. If the stated number of iterations is reached or a satisfying result is achieved, stop; otherwise go to Step 2.
Step 7. On no evolution, increase the no-evolution counter w = w + 1; if w = w1, go to Step 8; if w = w2, go to Step 9; if w = w3, go to Step 10; if w = w4, go to Step 11.
Step 8. Perform a local search according to its own definition, then go to Step 2.
Step 9. Apply the "stretching" function with parameters chosen according to the characteristics of the problem, then go to Step 2.
Step 10. Set the mutation mode and probability and mutate, then go to Step 2.
Step 11. Apply the unit permutation with uncertain-length memory, then go to Step 2.
Step 12. Apply the dynamic parameter adjustment operation, then go to Step 2.

A code sketch of this control flow follows.
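The sketch below is a minimal rendering of Steps 1-12 in Python. It only fixes the bookkeeping of the counters y and w; the hooks encode, fitness, update_velocity_position, local_search, stretch, mutate, inject_memory and adjust_parameters stand for the problem-specific definitions of Sections 1.1 and 1.3 and are illustrative names, not part of the thesis. Resetting y after Step 12 and w after Step 11 is an assumption.

```python
# Minimal sketch of the modified PSO control flow (Steps 1-12).
# All hooks are problem-specific placeholders; only the counter logic is shown.

def modified_pso(swarm, max_iter, y1, w1, w2, w3, w4,
                 encode, fitness, update_velocity_position,
                 local_search, stretch, mutate, inject_memory, adjust_parameters):
    p_best = {i: (p.position[:], float("inf")) for i, p in enumerate(swarm)}
    g_best = (None, float("inf"))
    w = 0   # consecutive iterations without improvement (Step 7)
    y = 0   # consecutive improvements of the swarm best (Step 4)

    for _ in range(max_iter):                          # Step 6: iteration limit
        improved = False
        for i, p in enumerate(swarm):
            f = fitness(encode(p))                     # Step 2: encode and evaluate
            if f < p_best[i][1]:                       # Step 3: personal best Pi
                p_best[i] = (p.position[:], f)
            if f < g_best[1]:                          # Step 4: swarm best Pg
                g_best = (p.position[:], f)
                improved = True

        if improved:
            w, y = 0, y + 1
            if y == y1:
                adjust_parameters(swarm, g_best)       # Step 12
                y = 0                                  # assumption: restart the count
        else:
            y, w = 0, w + 1                            # Step 7
            if w == w1:
                local_search(swarm)                    # Step 8
            elif w == w2:
                stretch(swarm, g_best)                 # Step 9
            elif w == w3:
                mutate(swarm)                          # Step 10
            elif w == w4:
                inject_memory(swarm)                   # Step 11
                w = 0                                  # assumption: restart the escalation

        for i, p in enumerate(swarm):                  # Step 5: redefined operators
            update_velocity_position(p, p_best[i][0], g_best[0])

    return g_best
```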
1.3 Advanced Optimization Means

1.3.1 Local search strategies

· Randomly pick one element of a particle and transform part of its key with some probability.
· Randomly pick n elements of a particle and transform their keys with some probability.
· Transform n particles m times, where m is directly proportional to the iteration count; finally, replace the original solution with the best-keyed solution found.
· Randomly pick two particles of the swarm and exchange their elements at the same position.

1.3.2 Unit permutation with uncertain-length memory

Some of the best keys found are added to a memory store kept for each particle. The memory length and the probability of using the memory are set by the user. If a new fitness is better than the worst fitness among the previously memorized ones, the new key is added to the memory store. If the PSO has not evolved within the stated number of iterations, some particles are randomly re-seeded from the memory store.

1.3.3 Advanced stretching method

The "stretching" function is an effective optimization method for continuous problems. With some modification we successfully apply it to discrete problems (a code sketch is given after Section 3):

G(x) = f(x) + γ1 ‖x − x̄‖ (sign(f(x) − f(x̄)) + 1)                        (1.3.3.1)
H(x) = G(x) + γ2 (sign(f(x) − f(x̄)) + 1) / tanh(μ(G(x) − G(x̄)))          (1.3.3.2)

where x̄ is the detected local minimizer. The transform is performed in two phases and helps the program escape local minima.

1.3.4 Dynamic parameter adjustment

Dynamic adjustment offers parameters better matched to the characteristics of each problem and is more reliable than values chosen from experience. In order not to slow the program down, the adjustment is applied once after a stated number of iterations: the inertia weight ω and the acceleration coefficients c1 and c2 are recomputed from the recent fitness values f(X_id), the personal-best fitness f(P_id) and the swarm-best fitness f(P_gd), scaled by the coefficients η1 and η2.

1.3.5 Mutation

Mutation, an effective optimization method, is imported from GA. After a stated number of iterations is reached, part of some elements of a particle is altered or reversed (for example, only the integer part of a real number is altered), which helps avoid local optima.

2 Solving GTSP with RKPSO

For PSO's application to the GTSP, and to unify the solution of the GTSP and the TSP, a new coding mode based on the improved PSO is brought forward: random-key coding. A particle is defined as an m-dimensional vector (r0, r1, ..., rm−1) with ri in [0, |Vi|). The integer part of ri denotes the representative city of group Vi, so it cannot exceed |Vi|; the position of group Vi in the tour depends on the decimal part of ri (a decoding sketch is given after Section 3). The basic linear-algebra operations are used for the velocity and position updates, but the result is taken modulo the number of cities in each group. The fitness function f(X) evaluates the tour obtained by decoding the particle, and the objective function is min f(X). When the PSO makes no progress within the scheduled number of iterations, local search is used: randomly find a group containing more than one city and try every value of the integer part from 1 to its maximum, keeping the best; or find two elements in a particle and interchange their decimal parts, keeping the best. The improved stretching algorithm and the dynamic parameter tuning are also applied. Finally, the experimental results show that RKPSO can solve the GTSP effectively.

3 Solving JSSP with MRPSO

For PSO's application to the JSSP, and to unify the solution of the various kinds of scheduling problems, another new coding mode based on the improved PSO is brought forward: matrix random-key coding.
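The following is a minimal sketch of the stretching transform of Eqs. (1.3.3.1)-(1.3.3.2), assuming a minimization objective f, a previously detected local minimizer x_bar and a problem-specific distance function dist (Euclidean by default); gamma1, gamma2 and mu are the stretching parameters, and all names are illustrative rather than taken from the thesis.

```python
import math

def stretched_objective(f, x_bar, gamma1=1e4, gamma2=1.0, mu=1e-10, dist=None):
    """Return H(x) built from f(x) according to Eqs. (1.3.3.1)-(1.3.3.2)."""
    if dist is None:                                   # default: Euclidean distance
        dist = lambda a, b: math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    f_bar = f(x_bar)

    def sign(v):
        return (v > 0) - (v < 0)

    def G(x):
        # (1.3.3.1): raise the landscape above f(x_bar); points below it stay unchanged
        return f(x) + gamma1 * dist(x, x_bar) * (sign(f(x) - f_bar) + 1)

    def H(x):
        # (1.3.3.2): second phase, sharpening the lifted region around x_bar
        lifted = sign(f(x) - f_bar) + 1
        if lifted == 0:                                # f(x) < f(x_bar): untouched
            return G(x)
        denom = math.tanh(mu * (G(x) - G(x_bar)))
        return float("inf") if denom == 0.0 else G(x) + gamma2 * lifted / denom

    return H
```

In the discrete setting of the thesis, dist would be replaced by a distance defined on the particle coding.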
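And a minimal sketch of the random-key decoding convention described in Section 2: for a particle (r0, r1, ..., rm−1) with ri in [0, |Vi|), the integer part of ri selects the representative city of group Vi and the fractional part orders the groups. The function and variable names here are illustrative.

```python
import math

def decode_random_keys(keys, groups):
    """Map a random-key vector to a GTSP tour.

    keys   : list of floats r_i with 0 <= r_i < len(groups[i])
    groups : list of city groups V_i, each a list of city ids
    The integer part of r_i picks the representative of group V_i;
    the fractional part determines the visiting order of the groups.
    """
    chosen = []
    for r, group in zip(keys, groups):
        idx = min(int(math.floor(r)), len(group) - 1)   # representative of V_i
        chosen.append((r - math.floor(r), group[idx]))  # (sort key, chosen city)
    chosen.sort(key=lambda pair: pair[0])               # order groups by fraction
    return [city for _, city in chosen]

# Example: three groups; keys pick city 1, city 2, city 5 and visit them as 2, 5, 1
groups = [[0, 1], [2, 3, 4], [5]]
keys = [1.73, 0.21, 0.55]
print(decode_random_keys(keys, groups))  # [2, 5, 1]
```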
Keywords/Search Tags: Optimization