The Group Search Optimizer (GSO) is a novel optimization algorithm inspired by animal searching behavior. It has been applied to Artificial Neural Network (ANN) training, disease classification, mechanical design optimization problems, and the computation of optimal power flow. However, research on and application of GSO are still at an early stage, and many problems remain to be improved and solved. To improve the performance of GSO, this paper takes two views and proposes the Interactive Dynamic neighborhood Differential evolutionary GSO (IDGSO). First, we analyze the foraging strategy of GSO and find that it uses a uniformly spaced sampling strategy, which can easily miss the true food source and increases the risk of becoming trapped in a local optimum. From this view, we replace uniform sampling with dynamic step sampling to avoid premature convergence and to improve the algorithm's global optimization ability. We propose a Differential evolutionary GSO (DGSO) with dynamic steps, in which the scrounger's evolution equation is recast as a differential model. We apply single-step methods, namely the Euler method, the improved Euler method, and the fourth-order Runge-Kutta method, to solve the differential equation, and thereby obtain the corresponding differential evolution models. Experimental results show that the improved GSO effectively avoids the premature convergence problem. Second, we analyze the network topology of GSO. It uses the Gbest topology, which leads to rapid exchange of information among members, so it is easily trapped in local optima when dealing with multi-modal optimization problems. Inspired by the Newman and Watts model, we propose an improved group search optimizer with an interactive dynamic neighborhood (IGSO). Simulation results show that the IGSO algorithm accelerates the convergence rate and improves its accuracy.
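The three single-step methods named above can be sketched generically. The following is a minimal illustration of the Euler, improved Euler (Heun), and fourth-order Runge-Kutta update rules for a generic equation dy/dt = f(t, y); the paper's specific scrounger differential model is not reproduced here, and the test equation dy/dt = -y is chosen purely for demonstration.

```python
# Generic single-step solvers for dy/dt = f(t, y). These illustrate the
# update rules mentioned in the text; they are not the paper's actual
# scrounger evolution equation.

def euler_step(f, t, y, h):
    # Euler method: y_{n+1} = y_n + h * f(t_n, y_n)
    return y + h * f(t, y)

def improved_euler_step(f, t, y, h):
    # Improved Euler (Heun's method): average the slopes at both endpoints
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    return y + h * (k1 + k2) / 2.0

def rk4_step(f, t, y, h):
    # Classical fourth-order Runge-Kutta step
    k1 = f(t, y)
    k2 = f(t + h / 2.0, y + h * k1 / 2.0)
    k3 = f(t + h / 2.0, y + h * k2 / 2.0)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

# Example: dy/dt = -y with y(0) = 1, exact solution y(t) = exp(-t).
f = lambda t, y: -y
y_euler = y_heun = y_rk4 = 1.0
h, t = 0.1, 0.0
for _ in range(10):  # integrate from t = 0 to t = 1
    y_euler = euler_step(f, t, y_euler, h)
    y_heun = improved_euler_step(f, t, y_heun, h)
    y_rk4 = rk4_step(f, t, y_rk4, h)
    t += h
```

With the same step size, the higher-order methods track the exact solution more closely, which is the motivation for trying all three when building the differential evolution models.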
The IGSO algorithm outperforms the basic GSO, especially on high-dimensional complex problems. Finally, to improve population diversity and avoid premature convergence, we incorporate both the dynamic step and the dynamic neighborhood into the basic GSO. Experimental results show that the resulting algorithm not only effectively avoids premature convergence, but also significantly improves the global optimization ability and the convergence speed.
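A Newman and Watts style neighborhood, as used for the interactive dynamic neighborhood, can be sketched as a ring lattice plus random shortcut edges. The function name and parameters below (`n` members, `k` nearest neighbors on each side, shortcut probability `p`) are illustrative assumptions, not the paper's exact construction.

```python
import random

def newman_watts_neighborhood(n, k, p, rng=None):
    # Newman-Watts small-world neighborhood: each member i is linked to its
    # k nearest neighbors on each side of a ring, and shortcut edges are
    # added between non-adjacent pairs with probability p. Unlike the
    # Watts-Strogatz model, the original ring edges are kept, so the
    # neighborhood graph always stays connected.
    rng = rng or random.Random()
    neighbors = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            neighbors[i].add((i + j) % n)
            neighbors[i].add((i - j) % n)
    for i in range(n):
        for j in range(i + 1, n):
            if j not in neighbors[i] and rng.random() < p:
                neighbors[i].add(j)
                neighbors[j].add(i)
    return neighbors

# Each member then exchanges information only with its current neighbors
# (e.g., following its best neighbor rather than a single global best);
# regenerating the shortcuts over iterations makes the neighborhood dynamic.
nbrs = newman_watts_neighborhood(n=20, k=2, p=0.1, rng=random.Random(42))
```

Restricting information flow to such local neighborhoods slows the spread of a single dominant solution, which is what helps the algorithm escape local optima on multi-modal problems.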