Optimization problems arise widely in many fields, including economics, finance, management, engineering, and national defense. Optimization methods with better performance not only drive the development of optimization theory but also promote progress in engineering and technology. Many scholars and engineers therefore focus on studying the characteristics of optimization problems and on devising methods for computing their solutions. Owing to the variety and nature of optimization problems, many optimization methods have been proposed; in general, they can be divided into two major categories: traditional optimization methods and intelligent optimization methods. Traditional optimization methods have serious limitations rooted in their computational architecture. Because of these shortcomings, and because the demands placed on optimization methods keep growing, intelligent methods such as the ant colony algorithm, the genetic algorithm, particle swarm optimization, differential evolution, and artificial neural networks have been proposed. These emerging intelligent methods impose weaker requirements on the objective function and adapt well to many kinds of complex optimization problems, so studying them is significant in both theory and practice.

As a bionic intelligent algorithm based on population evolution, the differential evolution (DE) algorithm has few control parameters, a simple procedure, and prominent performance on complex optimization problems, and it has therefore been applied widely in many fields. Since the search efficiency and optimization performance of evolutionary algorithms are influenced by the control parameters and the mutation strategy, this paper studies the differential evolution algorithm further and proposes an improved variant. The improved algorithm employs a new mutation scheme that introduces adaptive weights for two mutation strategies, so that their respective advantages can be exploited dynamically (a minimal sketch of this weighting idea is given after this overview). Tests on benchmark functions show that the new algorithm avoids premature convergence, improves the convergence speed, and has stronger search ability.

As intelligent optimization models with massively parallel processing, distributed storage, and high fault tolerance, artificial neural networks are considered an effective way to solve large-scale linear and nonlinear optimization problems and have been applied to a wide range of them. Over the past ten years, neural networks for solving linear variational inequality (LVI) and related constrained optimization problems have been studied in depth, but most of these models either ignore delay effects or require the matrix M to be positive semidefinite or positive definite for their stability conditions to hold. This paper proposes a new neutral-delay projection neural network for solving LVI problems. By constructing a Lyapunov-Krasovskii functional and using the theory of functional differential equations, sufficient conditions are derived that ensure the exponential stability and global asymptotic stability of the proposed neutral-delay projection neural network. These stability conditions do not require monotonicity of the considered problems, which ensures that the proposed network can also be used to solve nonmonotone LVI problems.
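To make the adaptive-weight mutation mentioned above concrete, the sketch below blends two common DE mutation strategies, DE/rand/1 and DE/best/1, through a weight that is adjusted by a simple success rule. Both the choice of strategies and the weight-update rule are illustrative assumptions, not the exact scheme proposed in this paper.

```python
import numpy as np

def adaptive_de(f, bounds, pop_size=30, F=0.5, CR=0.9, gens=200, seed=0):
    """Sketch of DE whose mutation blends DE/best/1 and DE/rand/1 through
    an adaptive weight w; the blend and the success-based update of w are
    illustrative assumptions, not the scheme of the paper."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.apply_along_axis(f, 1, pop)
    w = 0.5                                          # start with a balanced blend
    for _ in range(gens):
        best = pop[np.argmin(fit)]
        for i in range(pop_size):
            idx = [j for j in range(pop_size) if j != i]
            r1, r2, r3 = pop[rng.choice(idx, 3, replace=False)]
            v_rand = r1 + F * (r2 - r3)              # DE/rand/1: exploration
            v_best = best + F * (r1 - r2)            # DE/best/1: exploitation
            v = np.clip(w * v_best + (1.0 - w) * v_rand, lo, hi)
            mask = rng.random(dim) < CR              # binomial crossover
            mask[rng.integers(dim)] = True
            u = np.where(mask, v, pop[i])
            fu = f(u)
            if fu < fit[i]:                          # greedy selection
                pop[i], fit[i] = u, fu
                w = min(0.9, w + 0.01)               # success: lean toward DE/best/1
            else:
                w = max(0.1, w - 0.01)               # failure: lean toward DE/rand/1
    k = np.argmin(fit)
    return pop[k], fit[k]

# Example: minimise the 5-dimensional sphere function.
x_star, f_star = adaptive_de(lambda x: float(np.sum(x**2)), [(-5.0, 5.0)] * 5)
```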
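For reference, the LVI problem addressed here and the classical delay-free projection neural network that the proposed model extends can be stated as follows; this is the standard formulation from the projection-network literature, and the neutral-delay terms of the proposed model are not reproduced.

```latex
% Standard LVI and the classical (delay-free) projection network it extends.
\[
  \text{LVI: find } x^{*}\in\Omega \ \text{such that}\quad
  (x - x^{*})^{\mathsf T}(M x^{*} + q) \;\ge\; 0, \qquad \forall\, x \in \Omega,
\]
\[
  \dot{x}(t) \;=\; \lambda\Bigl\{ -x(t) + P_{\Omega}\bigl( x(t) - \alpha\,(M x(t) + q) \bigr) \Bigr\},
\]
```

where M is an n-by-n matrix, q a vector in R^n, Ω a closed convex set, P_Ω the projection operator onto Ω, and λ, α > 0 scaling constants; x* solves the LVI exactly when it is an equilibrium point of the network.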
Since the obtained results are expressed in the form of linear matrix inequalities (LMIs), they can be checked easily with standard MATLAB toolboxes. Numerical simulations illustrate the effectiveness of the proposed projection neural network.
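As an illustration of such an LMI feasibility check, the following sketch verifies a simple Lyapunov-type LMI using the open-source cvxpy package rather than the MATLAB toolboxes mentioned above; the matrix M and the LMI itself are placeholders, not the conditions derived in this paper.

```python
import numpy as np
import cvxpy as cp

# Placeholder data (not from the paper): a Hurwitz matrix M.
M = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
n = M.shape[0]

# Decision variable of the LMI: a symmetric matrix P.
P = cp.Variable((n, n), symmetric=True)

# Lyapunov-type LMI feasibility test: P > 0 and M^T P + P M < 0,
# with a small margin eps standing in for the strict inequalities.
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               M.T @ P + P @ M << -eps * np.eye(n)]

problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve(solver=cp.SCS)

print("LMI feasible:", problem.status == cp.OPTIMAL)
```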