In modern warfare, cognitive electronic warfare has become one of the most important combat methods on the battlefield. Within cognitive electronic warfare, jamming strategy optimization determines whether the optimal jamming strategy can be selected to protect friendly targets against enemy search and tracking, and is therefore a key link in the combat loop. Traditional jamming strategy optimization techniques, however, can no longer cope with modern intelligent warfare. Current academic research is largely confined to the optimization step itself: there is no jamming strategy optimization model with complete closed-loop logic, and the training process lacks a feedback basis. To address these problems, this paper studies jamming strategy optimization based on intelligent algorithms, with the following contributions.

First, an improved intelligent optimization algorithm is proposed for radar working mode recognition. When a traditional neural network is used to identify the radar working mode, its weights and thresholds cannot be adjusted adaptively, which reduces recognition accuracy. This paper therefore proposes a firefly-algorithm-improved sparrow search algorithm for optimizing a BP neural network (Firefly Algorithm improved Sparrow Search Algorithm optimized BP neural network, FA-SSA-BP). The algorithm introduces a dual optimization module that correlates radar samples with working mode types, making the selection of weights and thresholds more intelligent and thereby improving recognition accuracy. Simulation experiments and comparisons with classical algorithms verify that the proposed algorithm improves the accuracy of radar working mode recognition.

Second, to address the lack of a feedback basis in jamming strategy optimization training, a game feedback model between radar and jammer based on game theory is proposed. The model consists of a jamming strategy dynamic game model and a jamming signal game model. It accounts for many factors, including the environment, and overcomes the single-index and insufficient-accuracy problems of traditional game models. A reward matrix is constructed from the game results to provide training feedback for jamming strategy optimization.

Finally, to address the tendency of jamming strategy optimization to fall into local optima and converge slowly, an improved Q-learning-based jamming strategy optimization method is proposed. The algorithm introduces two improvements: first, an adaptive factor adjusts the exploration strategy of Q-learning according to the estimated size and dimension of the training samples; second, a one-step lookahead is added to the original algorithm so that obstacles and targets can be detected in advance before an action is selected, allowing faster convergence to the optimal value. Simulation experiments and comparisons with classical reinforcement learning methods verify that the proposed algorithm alleviates the local optimum problem and improves the convergence speed.
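To make the first contribution more concrete, the sketch below shows, in highly simplified form, how a sparrow-search-style population with a firefly attraction step could select the weights and thresholds of a small BP network by directly maximizing classification accuracy. The network size, the toy data, and the update rules are illustrative assumptions and do not reproduce the FA-SSA-BP model described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 3 radar "working modes" separated by pulse-parameter features (illustrative).
X = rng.normal(size=(300, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int) + (X[:, 2] > 0.5).astype(int)  # labels 0..2

def unpack(vec, n_in=4, n_hid=8, n_out=3):
    """Split a flat parameter vector into BP-network weights and thresholds (biases)."""
    i = 0
    W1 = vec[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = vec[i:i + n_hid]; i += n_hid
    W2 = vec[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = vec[i:i + n_out]
    return W1, b1, W2, b2

DIM = 4 * 8 + 8 + 8 * 3 + 3

def fitness(vec):
    """Negative classification error of the network encoded by `vec` (higher is better)."""
    W1, b1, W2, b2 = unpack(vec)
    h = np.tanh(X @ W1 + b1)
    pred = (h @ W2 + b2).argmax(axis=1)
    return -(pred != y).mean()

# Simplified FA-SSA-style loop: "producer" sparrows explore locally, the rest are
# attracted toward the brightest (fittest) individual, a firefly-style move.
pop = rng.normal(scale=0.5, size=(30, DIM))
for it in range(200):
    fit = np.array([fitness(p) for p in pop])
    order = np.argsort(-fit)                  # best first
    best = pop[order[0]].copy()
    for idx in order[:6]:                     # producers: shrinking random search
        pop[idx] += rng.normal(scale=0.1 * (1 - it / 200), size=DIM)
    for idx in order[6:]:                     # scroungers: firefly attraction toward the best
        r2 = np.sum((best - pop[idx]) ** 2)
        beta = 0.9 * np.exp(-0.1 * r2)        # attraction decays with distance
        pop[idx] += beta * (best - pop[idx]) + rng.normal(scale=0.02, size=DIM)
    pop[order[-1]] = best                     # elitism: keep the best solution in the population

print("training accuracy of best network:", 1 + fitness(best))
```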
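The reward matrix produced by the game feedback model can be illustrated as a simple two-player matrix game between jammer strategies and recognized radar working modes. The strategy names and payoff values below are placeholders chosen for illustration, not results from this work.

```python
import numpy as np

# Hypothetical payoff (jamming effectiveness) matrix: rows = jammer strategies,
# columns = radar working modes. All values are illustrative placeholders in [0, 1].
jammer_strategies = ["noise barrage", "range deception", "velocity deception"]
radar_modes = ["search", "track", "guidance"]
R = np.array([
    [0.6, 0.3, 0.2],
    [0.4, 0.7, 0.5],
    [0.3, 0.5, 0.8],
])

# Zero-sum view: the jammer maximizes R, the radar (by switching modes) minimizes it.
# A conservative jammer choice is the maximin (security-level) strategy.
worst_case = R.min(axis=1)                    # payoff if the radar responds optimally
best_row = int(worst_case.argmax())
print("maximin jamming strategy:", jammer_strategies[best_row],
      "with guaranteed payoff", worst_case[best_row])

# The same matrix can serve directly as the reward table for reinforcement learning:
# reward(state = recognized radar mode, action = jamming strategy) = R[action, mode].
def reward(mode_idx: int, strategy_idx: int) -> float:
    return float(R[strategy_idx, mode_idx])

print("reward for range deception against track mode:", reward(1, 1))
```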
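For the improved Q-learning method, the following minimal sketch places the two proposed modifications in a toy grid environment: an exploration rate scaled by an assumed adaptive factor derived from the state-space size, and a one-step lookahead that checks whether a candidate action would hit an obstacle or reach the target before the action is committed. The environment, the form of the adaptive factor, and all hyperparameters are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 5x5 grid: start at (0, 0), target at (4, 4), a few obstacle cells.
N = 5
obstacles = {(1, 1), (2, 3), (3, 1)}
target = (4, 4)
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]            # up, down, left, right

def step(state, a):
    nxt = (min(max(state[0] + a[0], 0), N - 1),
           min(max(state[1] + a[1], 0), N - 1))
    if nxt in obstacles:
        return state, -1.0, False                       # bump into obstacle: stay put, penalty
    if nxt == target:
        return nxt, 10.0, True
    return nxt, -0.1, False

Q = np.zeros((N, N, len(actions)))
alpha, gamma = 0.2, 0.9

# Improvement 1 (assumed form): adaptive factor shrinks with the effective
# size/dimension of the problem, so larger state spaces keep exploring longer.
adaptive_factor = 1.0 / np.sqrt(N * N)

def lookahead_bonus(state, a):
    """Improvement 2 (assumed form): one-step lookahead that penalizes moves into
    obstacles or walls and rewards moves that reach the target, before acting."""
    nxt, r, done = step(state, a)
    return 5.0 if done else (-0.5 if nxt == state else 0.0)

for episode in range(500):
    eps = max(0.05, adaptive_factor * (1.0 - episode / 500))   # decaying, size-scaled epsilon
    s, done = (0, 0), False
    while not done:
        scores = Q[s[0], s[1]] + np.array([lookahead_bonus(s, a) for a in actions])
        a_idx = rng.integers(len(actions)) if rng.random() < eps else int(scores.argmax())
        nxt, r, done = step(s, actions[a_idx])
        # Standard Q-learning update toward the bootstrapped one-step target.
        Q[s[0], s[1], a_idx] += alpha * (r + gamma * Q[nxt[0], nxt[1]].max() - Q[s[0], s[1], a_idx])
        s = nxt

print("action values at the start state:", np.round(Q[0, 0], 2))
```

In this sketch the lookahead bonus only biases action selection and does not enter the Q update, so the underlying Q-learning convergence behavior is preserved while obviously bad moves are avoided earlier.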