Optimization technology, grounded in mathematics, can solve a wide range of combinatorial optimization problems. Many problems, especially in engineering, scientific research, and economic management, involve a set of parameters to be optimized. Optimization means searching, within given constraints, for the set of parameters that minimizes or maximizes an objective function. According to the nature of the objective function, the constraints, and the range of the variables, optimization problems can be divided into many types. For example, if the objective function and the constraints are both linear, the problem is a linear programming problem; otherwise, it is a nonlinear programming problem. Different methods have been proposed for different kinds of problems, such as Newton's method, the conjugate gradient method, the Polak-Ribiere method, and the method of Lagrange multipliers. These methods can reliably find local extrema in many problems. However, as the scope of the problems people seek to understand and solve has grown, it has become clear that, owing to complexity, tight constraints, nonlinearity, and modeling difficulties, satisfactory analytic solutions are often hard to find. It is therefore necessary to find optimization algorithms with intelligent features that are suited to large-scale parallel computation. Modern evolutionary methods, such as artificial neural networks, genetic algorithms, tabu search, simulated annealing, and the ant colony algorithm, show strong potential for solving large-scale problems: they can approximate good feasible solutions to an optimization problem within a reasonable amount of time.
The genetic algorithm and the ant colony algorithm are known as intelligent optimization algorithms; their basic idea is to construct stochastic optimization algorithms by simulating behaviors found in the natural world. In recent years, another intelligent optimization algorithm, particle swarm optimization (PSO), has attracted increasing attention from researchers. PSO was proposed by the American social psychologist James Kennedy and the electrical engineer Russell Eberhart in 1995; it was inspired by the social behavior of bird flocks and draws on the biological group model of the biologist Frank Heppner. The algorithm treats individuals as particles without mass or volume, gives each particle simple rules of social conduct, and searches for the optimal solution through collaboration among individuals in the population. The algorithm converges quickly, needs few parameters, is easy to implement, and can effectively solve complex optimization problems. It has been widely used in function optimization, neural network training, graphics processing, and pattern recognition, as well as in some engineering fields. Although the PSO algorithm has been under development for more than ten years, it is not yet mature in theory or practice. Like other global optimization algorithms, PSO has shortcomings: it easily falls into local optima, converges slowly in the late stage of evolution, and has limited accuracy. How to accelerate the convergence rate of the particle swarm algorithm and improve its convergence accuracy are the topics on which most researchers have focused. There are two main ways to accelerate convergence: choosing the best parameters, and combining PSO with other optimization algorithms to amend its main framework.
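To make the basic mechanism concrete, the canonical PSO update described above can be sketched as follows. This is a minimal illustration, not the improved algorithm proposed in this paper; all parameter values (inertia weight w, acceleration coefficients c1 and c2, swarm size, bounds) are conventional defaults chosen for the sketch.

```python
import random

def pso(objective, dim, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal canonical PSO minimizing `objective` over a box."""
    lo, hi = bounds
    # Initialize positions randomly inside the box; velocities at zero.
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                    # each particle's best position
    pbest_val = [objective(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best position

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity: inertia + cognitive (pbest) + social (gbest) terms.
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] = min(max(xs[i][d] + vs[i][d], lo), hi)
            val = objective(xs[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = xs[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = xs[i][:], val
    return gbest, gbest_val
```

For example, calling `pso(lambda x: sum(t * t for t in x), dim=2)` minimizes the two-dimensional Sphere function and returns a position near the origin.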
To improve accuracy and avoid the common defect of premature convergence, measures must be taken to maintain the diversity of the population and to introduce mechanisms for escaping local optima. Several improved PSO algorithms have been proposed, such as the fuzzy adaptive PSO algorithm (FAPSO), the hybrid PSO algorithm (HPSO), the discrete binary PSO algorithm, the coordination PSO algorithm, and the immune PSO algorithm. In this paper, the PSO algorithm is summarized and the existing literature is analyzed; then two improved algorithms are presented, one for continuous problems and one for discrete problems. For continuous problems, the improved algorithm uses a random, unconditional mutation strategy that substitutes for the previous velocity, and it conditionally takes into account the effect that the best position of neighboring particles has on a particle's behavior. This efficiently increases the diversity of the particles and improves the performance of the algorithm. Taking function optimization as an example, the effectiveness of the algorithm is tested on several classic functions, such as Sphere, Rosenbrock, and Griewank. Particle swarm optimization has been applied to continuous problems for several years, but applying it to discrete problems is still a relatively new attempt. An improved particle swarm optimization algorithm is therefore proposed to solve the rectangle packing problem, which is a discrete problem. This algorithm modifies the decoding scheme and incorporates the crossover and mutation operators of the genetic algorithm in order to speed up the optimization process. Finally, from the two improved PSO algorithms, several general improvement strategies are identified in this paper; whether the problem is continuous or discrete, these strategies allow it to be better optimized.
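The three classic benchmark functions named above have standard definitions, which can be written down directly; each has a known global minimum of 0, which makes convergence easy to measure.

```python
import math

def sphere(x):
    """Global minimum f(0, ..., 0) = 0; smooth and unimodal."""
    return sum(t * t for t in x)

def rosenbrock(x):
    """Global minimum f(1, ..., 1) = 0; a narrow curved valley."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

def griewank(x):
    """Global minimum f(0, ..., 0) = 0; many regularly spaced local minima."""
    s = sum(t * t for t in x) / 4000.0
    p = math.prod(math.cos(x[i] / math.sqrt(i + 1)) for i in range(len(x)))
    return 1.0 + s - p
```

Sphere tests basic convergence speed, Rosenbrock tests the ability to follow a curved valley, and Griewank's many local minima probe an algorithm's ability to escape premature convergence.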
The improvement strategies are as follows: (1) conditionally taking into account the effect that the best position of neighboring particles has on a particle's behavior, which improves the diversity of the particles; (2) applying mutation to each new generation of particles, where different mutation strategies can be used; (3) using different update strategies in different generations by defining a threshold. In short, this paper presents a comprehensive and in-depth analysis of the particle swarm algorithm and proposes several improvement strategies for discrete and continuous problems. Finally, the whole of the research is summarized, and directions for further research are indicated.
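Two of these strategies, per-generation mutation and a threshold-based switch between update rules, can be sketched in a single update step. This is only an illustrative sketch under assumed details: the function name, the mutation probability `p_mut`, the mid-run threshold, and the specific late-stage rule (dropping the inertia term) are hypothetical choices, not the exact scheme of this paper.

```python
import random

def mutate_or_update(x, v, pbest, gbest, gen, max_gen,
                     w=0.7, c1=1.5, c2=1.5,
                     p_mut=0.1, bounds=(-5.0, 5.0)):
    """One particle's update, combining a random mutation that replaces the
    velocity-driven move with a generation threshold that switches rules."""
    lo, hi = bounds
    threshold = max_gen // 2          # hypothetical switch point
    new_x, new_v = x[:], v[:]
    for d in range(len(x)):
        if random.random() < p_mut:
            # Random mutation substitutes for the velocity-driven move,
            # restoring diversity and helping escape local optima.
            new_x[d] = random.uniform(lo, hi)
            new_v[d] = 0.0
        else:
            r1, r2 = random.random(), random.random()
            if gen < threshold:
                # Early generations: full inertia + cognitive + social update.
                new_v[d] = (w * v[d]
                            + c1 * r1 * (pbest[d] - x[d])
                            + c2 * r2 * (gbest[d] - x[d]))
            else:
                # Late generations: drop the inertia term to refine locally.
                new_v[d] = (c1 * r1 * (pbest[d] - x[d])
                            + c2 * r2 * (gbest[d] - x[d]))
            new_x[d] = min(max(x[d] + new_v[d], lo), hi)
    return new_x, new_v
```

The key design point is that mutation is applied per dimension with a small probability, so most of the swarm continues to exploit while a few coordinates re-explore the search box.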