Evolutionary algorithms have become important tools for solving complex optimization problems because of their adaptability, wide applicability, and global search ability. This dissertation mainly studies single-objective and multi-objective optimization problems. The main contributions of this thesis are summarized as follows:

1. A bi-objective particle swarm optimization for constrained single-objective optimization problems is proposed. The technique treats a constrained optimization problem with any number of constraints as a bi-objective optimization problem. To maintain the diversity of the swarm and escape local optima, the optimum is perturbed with an improved simplex crossover operator when it has not improved for several successive generations, so that better particles can be found. Simulation results indicate that the proposed algorithm is effective.

2. A fuzzy particle swarm optimization is proposed for solving complex constrained single-objective optimization problems. First, a new perturbation operator is designed, and the concepts of the fuzzy personal best value and the fuzzy global best value are defined based on it. The particle updating equations are revised using these two concepts to discourage premature convergence. Second, a new comparison strategy based on an infeasibility threshold value is proposed: the constraints are treated one by one, so that particles with good properties are preserved and infeasible solutions evolve toward feasible ones. Finally, simulation results show that the proposed algorithm is effective, especially for high-dimensional problems.

3. A multi-objective memetic algorithm based on particle swarm optimization is proposed for solving unconstrained multi-objective optimization problems. First, the problem is converted into a constrained single-objective optimization problem.
The rank values of all particles are regarded as constraints, and a measure of the uniformity of the solutions is regarded as the objective function. Second, a new comparison strategy based on the constraint-dominance principle is proposed for the converted problem. Third, a simulated-annealing-based weighted-sum method is used to perform local search. Simulation results show that the proposed algorithm is effective.

4. A fuzzy particle swarm optimization is proposed for solving unconstrained multi-objective optimization problems. First, a new perturbation operator is designed: a perturbed particle leans toward a particle with a smaller rank value, or toward a particle located in a sparse region of the objective space, and the particle updating equations are revised accordingly. Second, the new population is produced by the improved particle swarm optimization and a genetic algorithm. Experimental results show that the proposed algorithm can generate a set of widespread and uniformly distributed solutions.

5. A new model-based multi-objective memetic algorithm is proposed. First, the unconstrained multi-objective optimization problem is converted into a constrained single-objective optimization problem. For the converted problem, a new comparison strategy is proposed: the objective space is divided into regions, and particles located in sparse regions are preferred regardless of their rank values, so that a set of uniformly distributed and widespread nondominated solutions is found. Second, the algorithm combines a genetic algorithm with simulated annealing by introducing the C-metric to improve the global search ability, so that better offspring are generated. Simulation results show that the new algorithm is effective.

6. A hybrid particle swarm optimization is proposed for solving constrained multi-objective optimization problems.
First, to preserve particles with small constraint violations, a threshold value is designed and the particle comparison strategy is revised based on it; in this way, infeasible solutions evolve toward feasible ones. Second, to find a set of diverse and well-distributed Pareto-optimal solutions, a new crowding distance function is designed for bi-objective optimization problems. It assigns larger crowding distance values not only to particles located in sparse regions but also to particles located near the boundary of the Pareto front. Third, a new mutation operator is proposed: the total force is computed first and then used as a mutation direction, and searching along this direction yields better particles. To guarantee the convergence of the algorithm, a second phase of mutation is proposed.

7. An infeasible-elitist-based particle swarm optimization for constrained multi-objective optimization is proposed. First, an infeasible-elitist preservation strategy is proposed: at the early stage of evolution it keeps some infeasible solutions with small rank values regardless of how large their constraint violations are, and at the later stage it keeps some infeasible solutions with small constraint violations and small rank values. In this manner, the true Pareto front is found more easily. Second, a new crowding distance function is designed that assigns larger values not only to particles located in sparse regions but also to particles located near the boundary of the Pareto front, at a lower computational cost. Third, the mutation operator of the hybrid particle swarm optimization is revised: only particles whose constraint violations are below the threshold value are used to compute the total force, which reduces the computational cost of this step.
The comparative study shows that the proposed algorithm can generate widespread and uniformly distributed Pareto fronts and outperforms the compared algorithms.

8. To address the problem that a linearly decreasing inertia weight cannot accurately reflect the search process, two improved particle swarm optimization algorithms are proposed: a dynamical particle swarm optimization and a simple particle swarm optimization. In the first algorithm, the inertia weight changes with the accumulation factor and the velocity factor. In the second algorithm, the inertia weight is set to zero, and the position of a particle whose evolution has stagnated is produced by a smoothing scheme and line search. Simulation results show that both algorithms are effective and outperform the compared one.