
Research On Resource Sharing Algorithm In Complex Dynamic Combat Environment

Posted on: 2023-09-11
Degree: Master
Type: Thesis
Country: China
Candidate: S J Zhao
Full Text: PDF
GTID: 2532306911486234
Subject: Engineering
Abstract/Summary:
In modern warfare, the electromagnetic spectrum, as the carrier of information transmission, is a key resource underpinning the performance of modern information equipment. However, the sharp increase in the number of devices that use spectrum resources for information transmission has created heavy demand for the electromagnetic spectrum, and because the resource is limited, serious conflicts arise in its use. To secure the advantage of electromagnetic spectrum control in future electromagnetic spectrum warfare, efficient resource sharing algorithms must be studied. The main work of this paper is as follows.

To evaluate electromagnetic spectrum resource sharing algorithms more scientifically and reasonably, this paper constructs a mathematical model of the dynamic combat environment. First, the dynamic combat environment is realized by introducing the time-varying characteristics of enemy interference and scene noise. In addition, a three-dimensional objective function combining time, frequency, and energy is proposed. Finally, a balance factor is designed so that different operational mission objectives can be emphasized.
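The abstract does not reproduce the objective function itself, so the following is only a minimal sketch of what a weighted time-frequency-energy objective with a balance factor might look like; the function name, the three score terms, and the weights are illustrative assumptions, not the model from the thesis.

```python
import numpy as np

def sharing_benefit(alloc, interference, w_t=0.4, w_f=0.4, w_e=0.2):
    """Score one resource-sharing scheme in a single time slot (sketch).

    alloc        : (n_devices, 3) array of (slot, channel, power) choices
    interference : (n_slots, n_channels) time-varying enemy interference
                   plus scene noise, as in the dynamic-environment model
    w_t/w_f/w_e  : play the role of the abstract's balance factor, shifting
                   emphasis between mission objectives
    """
    slots = alloc[:, 0].astype(int)
    chans = alloc[:, 1].astype(int)
    power = alloc[:, 2]

    # Time term: penalize devices crowding into the same slot.
    _, slot_counts = np.unique(slots, return_counts=True)
    time_score = -np.sum(slot_counts * (slot_counts - 1))

    # Frequency term: penalize transmitting on heavily jammed channels.
    freq_score = -np.sum(interference[slots, chans])

    # Energy term: reward useful power while penalizing waste.
    energy_score = np.sum(power) - 0.5 * np.sum(power ** 2)

    return w_t * time_score + w_f * freq_score + w_e * energy_score
```

Raising one weight relative to the others emphasizes the corresponding mission objective, which is the effect the balance factor is described as providing.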
To resolve interference between devices under limited resources, this paper proposes a resource sharing algorithm based on the parallel interaction of a genetic algorithm (GA) and discrete particle swarm optimization (DPSO). The algorithm maximizes the benefit of the combat objective by iterating in the solution space: an improved GA and an improved DPSO search in parallel, and on that basis double-scale mutation of the optimal individuals and sharing interaction accelerate global optimization. In the GA part, adaptive operators are combined with local search to extend both global and local search ability; in the DPSO part, adaptive inertia weights and local search improve the particles' ability to escape local optima (see the first sketch below). In the simulation environment of this paper, the algorithm's average benefit in a single time slot exceeds that of the improved GA and the improved DPSO alone, its global optimization ability is improved, and it converges quickly to a good value, giving it an advantage in time complexity and a degree of practical value.

To realize dynamic intelligent decision-making, this paper proposes an intelligent resource sharing algorithm based on a sharing model of limited resources, achieving dynamic sharing of limited resources through the collaborative training of deep reinforcement learning units. The algorithm takes the time-varying interference information as the state space, which makes resource sharing feasible in a dynamic environment, and it expands the dimension of the action space through the cooperative decisions of multiple deep reinforcement learning units, overcoming the difficulty of training a single large network (see the second sketch below). The cooperative action decisions of the trained units form the system's resource sharing scheme. Simulation results show that the network essentially converges, that resource sharing benefits are optimized and improved during dynamic interaction, and that the algorithm has significant advantages in time complexity, giving it research value for resource sharing in dynamic environments.
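As a first sketch, the parallel GA/DPSO interaction described above can be pictured as two populations evolving side by side and periodically exchanging elite individuals mutated at two different scales. Everything below (population sizes, mutation rates, and the simplified bit-flip stand-in for DPSO velocity updates) is an assumption for illustration, not the thesis's improved algorithm.

```python
import random

def hybrid_search(fitness, n_bits, pop_size=30, iters=100):
    """Run a GA population and a DPSO-like population in parallel,
    exchanging elites mutated at two scales (illustrative sketch)."""
    rand_ind = lambda: [random.randint(0, 1) for _ in range(n_bits)]
    ga_pop = [rand_ind() for _ in range(pop_size)]
    pso_pop = [rand_ind() for _ in range(pop_size)]

    def mutate(ind, rate):
        # Flip each bit with probability `rate`; using a small and a
        # large rate gives the "double-scale" mutation of elites.
        return [b ^ (random.random() < rate) for b in ind]

    for _ in range(iters):
        # GA side: truncation selection, one-point crossover, mutation.
        ga_pop.sort(key=fitness, reverse=True)
        ga_pop = ga_pop[: pop_size // 2]
        while len(ga_pop) < pop_size:
            a, b = random.sample(ga_pop[:5], 2)
            cut = random.randrange(1, n_bits)
            ga_pop.append(mutate(a[:cut] + b[cut:], 0.02))

        # DPSO side, heavily simplified: each bit drifts toward the
        # swarm's best-known individual instead of a velocity update.
        gbest = max(pso_pop, key=fitness)
        pso_pop = [[g if random.random() < 0.5 else x
                    for x, g in zip(ind, gbest)] for ind in pso_pop]

        # Sharing interaction: exchange elites across the two searches,
        # mutating them at a small and a large scale.
        ga_best = max(ga_pop, key=fitness)
        pso_best = max(pso_pop, key=fitness)
        ga_pop[-1] = mutate(pso_best, 0.05)   # small-scale mutation
        pso_pop[-1] = mutate(ga_best, 0.30)   # large-scale mutation

    return max(ga_pop + pso_pop, key=fitness)
```

With `fitness` set to a single-slot benefit (for example, a hypothetical wrapper that decodes the bit string into an allocation and calls a function like `sharing_benefit` above), `hybrid_search(fitness, n_bits=32)` would return the best sharing scheme found by either population.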
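As a second sketch, the cooperative multi-unit decision can be pictured as several small Q-networks that all observe the shared time-varying interference state and each choose one device's action; their joint action is the system's sharing scheme. The architecture, the sizes, and the epsilon-greedy policy below are assumptions for illustration, not the trained networks from the thesis.

```python
import torch
import torch.nn as nn

N_CHANNELS, N_DEVICES, N_ACTIONS = 16, 4, 16  # hypothetical sizes

class UnitQNet(nn.Module):
    """One deep reinforcement learning unit: interference state in,
    Q-values over one device's candidate actions out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_CHANNELS, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, state):
        return self.net(state)

units = [UnitQNet() for _ in range(N_DEVICES)]

def joint_decision(interference, eps=0.1):
    """Each unit picks one device's action; together the actions form
    the resource-sharing scheme for the current time slot."""
    state = torch.as_tensor(interference, dtype=torch.float32)
    actions = []
    for q in units:
        if torch.rand(1).item() < eps:      # epsilon-greedy exploration
            actions.append(torch.randint(N_ACTIONS, (1,)).item())
        else:
            with torch.no_grad():
                actions.append(int(q(state).argmax()))
    return actions
```

Splitting the joint action space across several small networks keeps each unit's output layer small, which matches the abstract's point that cooperative decisions avoid the training difficulty of a single network over the full, exponentially larger action space.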
Keywords/Search Tags:Dynamic Combat Environment, Resource Sharing, Genetic Algorithm, Discrete Particle Swarm Optimization, Deep Reinforcement Learning