
Research On UAV Networking Technology Based On Machine Learning

Posted on: 2024-04-10
Degree: Master
Type: Thesis
Country: China
Candidate: Y Zhu
Full Text: PDF
GTID: 2542307079475684
Subject: Electronic information
Abstract/Summary:
As an emerging technology, the Unmanned Aerial Vehicle (UAV) has unprecedented applications in military, civilian, and military-civilian coordination settings. The current trend is a shift from small networks of large, remotely controlled drones toward large swarms of small autonomous drones that cooperate to complete complex tasks. A key challenge is to develop efficient sensing, communication, and control algorithms that meet the requirements of highly dynamic UAV networks with heterogeneous mobility levels. Because a Flying Ad-hoc Network (FANET) is characterized by very fast node movement, frequent topology changes, rapidly changing application requirements, and uneven link quality, multi-hop routing in FANETs remains an open problem. To date, no de facto standard for FANET routing has emerged, and most existing approaches are variants of widely recognized protocols originally designed for Mobile Ad-hoc Networks (MANETs). The research focus of these variant protocols is how to exploit the learning ability of reinforcement learning to select the optimal path based on a more accurate perception of network topology, link status, user behavior, traffic mobility, and so on. On this basis, this thesis applies reinforcement learning to the design of routing protocols suitable for FANETs.

For the UAV cluster communication scenario in a FANET, this thesis combines Q-Learning and Deep Reinforcement Learning (DRL) to optimize the routing algorithm for small UAV clusters. Building on Smart Robust Routing, a Q-Learning-based route selection mechanism is designed that prompts UAV nodes to choose paths with relatively stable link quality and fewer hops. Building on Robust and Scalable Routing, a routing and forwarding mechanism based on the Proximal Policy Optimization (PPO) algorithm is designed, in which a UAV node judges whether a link is stable and then decides whether to conduct network exploration, thereby balancing network overhead against the nodes' exploration capability. Simulations on the NS-3 platform show that, compared with Robust and Scalable Routing and the Optimized Link State Routing (OLSR) protocol, the optimized routing algorithm achieves a higher transmission success rate, lower end-to-end delay, and smaller network overhead.

For the joint communication scenario between a UAV swarm and ground base stations in a FANET, this thesis applies Deep Q-Network (DQN) techniques to relay node selection routing. A geographic-location-based clustering algorithm divides the scene into grid cells so that communication within a cluster is reachable in a single hop. The inter-cluster network uses DQN-based relay node selection routing, which includes an adaptive relay node selection module and a DQN-based node movement selection module. The latter decides, from the current environment state and task urgency, whether a node should move toward the ground base station, so that relay nodes can be selected more effectively and network communication performance is enhanced. Simulations on the NS-3 platform show that, compared with Movement Assisted Delivery (MAD), the proposed routing algorithm achieves a higher transmission success rate, lower end-to-end delay, and a higher cumulative delivery ratio, at the cost of some additional movement overhead.
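To make the Q-Learning route selection idea concrete, the following is a minimal sketch of a tabular Q-Learning next-hop rule whose reward combines link stability and hop count, as the abstract describes. The reward shape, hyperparameter values, and function names are illustrative assumptions, not the thesis's actual implementation.

import random
from collections import defaultdict

ALPHA = 0.1      # learning rate (assumed value)
GAMMA = 0.9      # discount factor (assumed value)
EPSILON = 0.1    # exploration probability (assumed value)

# Q[(current_node, destination)][next_hop] -> estimated value of forwarding via next_hop
Q = defaultdict(lambda: defaultdict(float))

def reward(link_quality, hop_count):
    # Assumed reward form: favour stable links, penalise longer paths.
    return link_quality - 0.1 * hop_count

def choose_next_hop(node, dest, neighbours):
    # Epsilon-greedy choice over the node's current neighbour set.
    if random.random() < EPSILON:
        return random.choice(neighbours)
    return max(neighbours, key=lambda n: Q[(node, dest)][n])

def update(node, dest, next_hop, r, next_hop_neighbours):
    # One-step Q-Learning update after a packet is forwarded to next_hop.
    best_future = max((Q[(next_hop, dest)][n] for n in next_hop_neighbours), default=0.0)
    Q[(node, dest)][next_hop] += ALPHA * (r + GAMMA * best_future - Q[(node, dest)][next_hop])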
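The PPO-based forwarding mechanism lets a node decide whether to explore the network according to link stability. The sketch below shows only the standard clipped-surrogate loss at the core of Proximal Policy Optimization; the tensor names and the implied explore/forward action space are assumptions, and the thesis's state design and training loop are not reproduced here.

import torch

def ppo_clip_loss(new_log_probs, old_log_probs, advantages, clip_eps=0.2):
    # Probability ratio pi_new(a|s) / pi_old(a|s), computed from log-probabilities.
    ratio = torch.exp(new_log_probs - old_log_probs)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # PPO maximises the clipped surrogate, so the loss is its negative mean.
    return -torch.min(unclipped, clipped).mean()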
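For the swarm-to-base-station scenario, the abstract describes geographic grid clustering with single-hop intra-cluster communication and DQN-based inter-cluster relay selection. The sketch below illustrates that general structure under assumed parameters (grid cell size, state dimension, network width); it is not the thesis's model, and the state that would additionally encode task urgency and base-station position for the node movement decision is omitted.

import torch
import torch.nn as nn

GRID_SIZE = 200.0  # assumed cell edge length, chosen so intra-cell links are single-hop

def cluster_id(x, y):
    # Quantise a node's geographic position to its grid cell (cluster).
    return (int(x // GRID_SIZE), int(y // GRID_SIZE))

class RelayDQN(nn.Module):
    # Small Q-network that scores each candidate relay from a state vector (assumed shape).
    def __init__(self, state_dim, num_candidates):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64),
            nn.ReLU(),
            nn.Linear(64, num_candidates),
        )

    def forward(self, state):
        return self.net(state)

def select_relay(dqn, state):
    # Greedy relay choice: the candidate with the highest estimated Q-value.
    with torch.no_grad():
        return int(dqn(state).argmax().item())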
Keywords/Search Tags: reinforcement learning, routing algorithm, drone swarm