Research on Reinforcement Learning Based Routing Approaches in Wireless Ad Hoc Networks

Posted on: 2022-08-22    Degree: Master    Type: Thesis
Country: China    Candidate: S S Jiang    Full Text: PDF
GTID: 2518306338469964    Subject: Information and Communication Engineering
Abstract/Summary:
With the popularization of intelligent mobile terminals and the improvement of network infrastructure, wireless ad hoc networks are being applied ever more widely. Vehicular Ad Hoc Networks (VANETs) and Underwater Sensor Networks (UWSNs) are special applications of wireless ad hoc networks in different scenarios. In recent years, with deepening research on intelligent transportation and the urgent need for marine resource surveys, both VANETs and UWSNs have attracted increasingly broad attention. Efficient routing technology is the basis for effective information transmission between network nodes, and it is also key to whether such networks can be put into practical use. Q-learning, a classic reinforcement learning algorithm, lets the agent (here, the network node) learn relevant knowledge through interaction with its environment and makes node information transmission more intelligent, so it is widely used in the design of routing mechanisms. However, existing Q-learning-based routing mechanisms have shortcomings in the convergence of the Q-value table and in the evaluation of state transition probabilities. Based on the Q-learning algorithm, this paper designs two new routing algorithms, applicable respectively to VANETs with rapidly changing topology and to UWSNs with limited node energy. The innovations of this paper are mainly reflected in the following two aspects:

(1) A new Q-learning based UAV-assisted adaptive geographic routing approach (QAGR) is proposed for VANETs. In the aerial routing part of QAGR, the UAV uses a fuzzy-logic algorithm to compute the globally optimal path, which helps ground vehicles with pending transmission requests filter out deviated or congested neighbors when selecting the next-hop node. In the ground transmission process, QAGR quantizes the maximum transmission distance and the maximum number of adjacent nodes to construct a stable state space. Compared with a state space built directly from neighbor nodes, the Q-value table constructed by QAGR is more stable and has a longer service life. In addition, QAGR accelerates the convergence of the Q-table by building and sharing Q-tables within each region. Simulation results show that, compared with existing routing protocols, QAGR reduces transmission delay while maintaining the delivery rate.

(2) A power-adaptive routing mechanism based on reinforcement learning (QPAR) is proposed for underwater sensor networks. Due to the particularity of the underwater environment, energy consumption is the most important problem faced by UWSNs. QPAR constructs an energy-evaluation model for nodes within transmission range, an adaptive transmission-power control model, and a data-transmission direction-selection model; combined with the Q-learning algorithm, these models help each node choose, according to its own state and its surroundings, the next hop best suited for data transmission and forward data at the optimal power. Simulation results show that, compared with existing schemes, QPAR effectively reduces and balances the energy consumption of network nodes and prolongs the network lifetime.
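The core mechanism shared by both contributions, tabular Q-learning over a quantized state space for next-hop selection, can be sketched as follows. This is a minimal illustrative example, not the thesis's actual implementation: the class name `QLearningRouter`, the bucket counts, and all parameter values (`alpha`, `gamma`, `epsilon`, `d_max`, `n_max`) are assumptions chosen for clarity.

```python
import random

class QLearningRouter:
    """Tabular Q-learning for next-hop selection (illustrative sketch).

    Following the abstract's idea, states are coarse (distance, neighbor-count)
    buckets rather than raw neighbor identities, so the Q-table stays valid
    as the topology changes.
    """

    def __init__(self, actions, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.q = {}              # (state, action) -> Q-value, default 0.0
        self.actions = actions   # candidate next-hop identifiers
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def quantize(self, distance, n_neighbors, d_max=250.0, n_max=10):
        # Map raw link distance and neighbor count into 5 buckets each.
        d_bucket = min(int(distance / d_max * 5), 4)
        n_bucket = min(int(n_neighbors / n_max * 5), 4)
        return (d_bucket, n_bucket)

    def choose(self, state):
        # Epsilon-greedy next-hop selection: explore with prob. epsilon.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # Standard Q-learning update:
        # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(self.q.get((next_state, a), 0.0) for a in self.actions)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)
```

The reward here would encode whatever the routing objective rewards (e.g. progress toward the destination in QAGR, or residual energy and power cost in QPAR); the update rule itself is unchanged across both settings.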
Keywords/Search Tags: wireless ad hoc network, VANET, UWSNs, Q-Learning, routing mechanism