
Research On Autonomous Vehicle Planning Strategy Based On Improved Deep Reinforcement Learning

Posted on: 2024-09-24
Degree: Master
Type: Thesis
Country: China
Candidate: H Zhang
Full Text: PDF
GTID: 2542307088994509
Subject: Master of Mechanical Engineering (Professional Degree)
Abstract/Summary:
With the continuous increase in car ownership, safety and environmental problems caused by vehicles are becoming increasingly serious. Meanwhile, advances in network communication, artificial intelligence, and related fields have made the widespread application of autonomous driving possible. In this context, China proposes to guide the transformation of the automotive industry through the "new four modernizations": intelligence, electrification, networking, and sharing. Intelligence is an important development direction for assisted and autonomous driving.

This thesis divides autonomous driving planning into two parts: global planning and local planning. Global planning refers to planning an optimal path for the vehicle from the starting point to the end point, while local planning refers to specific operations such as accelerating, braking, and steering that ensure driving safety and efficiency. Deep reinforcement learning algorithms represented by the Deep Q-Network (DQN) are characterized by exploratory learning and autonomous decision-making, and can adapt to complex, ever-changing traffic environments. However, DQN suffers from overestimation, which seriously damages the performance of the learned strategy. In response to these issues, the main research content of this thesis is as follows:

(1) Suppress overestimation in the DQN algorithm and propose the Suppress Q Deep Q-Network (SQDQN) algorithm. Since information entropy is a measure of credibility, the proposed SQDQN algorithm introduces information entropy to evaluate the update process of DQN and thereby suppress its overestimation.

(2) Establish a global planning strategy based on SQDQN, and verify it by simulation on the SUMO platform. The global planning process is described as a Markov decision process: a deep reinforcement learning environment is constructed, the states, actions, and rewards in the interaction between the strategy and the environment are defined, and on this basis an SQDQN-based global planning strategy is built. A simulation map environment is built on the SUMO platform, and global planning strategies based on SQDQN, DQN, and the Dijkstra path planning algorithm are compared. The experiments verify the adaptability of the SQDQN-based global planning strategy to changing traffic environments, as well as its suppression of the overestimation in DQN and the resulting improvement in strategy performance.

(3) Establish a local planning strategy based on SQDQN, and verify it by simulation on the CARLA platform. The local planning process is described as a Markov decision process: a deep reinforcement learning environment is constructed, the states, actions, and rewards in the interaction between the strategy and the environment are defined, and on this basis an SQDQN-based local planning strategy is built. A simulated traffic environment is built on the CARLA platform, and local planning strategies based on SQDQN, DQN, and expert decision-making are compared. The SQDQN-based local planning strategy makes intelligent decisions in different traffic environments, demonstrates adaptability to changing traffic conditions, and outperforms the DQN-based local planning strategy.

Therefore, the proposed SQDQN algorithm suppresses the overestimation in the DQN algorithm while retaining intelligent decision-making and autonomous learning.
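The abstract does not give the exact update rule of SQDQN, but the idea of using information entropy to damp the overestimating `max` in the DQN target can be illustrated with a minimal sketch. All details below (the softmax over Q-values, the normalized-entropy weight, the interpolation between `max` and `mean`) are illustrative assumptions, not the thesis's actual formulation:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a vector of Q-values.
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def entropy_weight(q_values):
    """Normalized entropy of the softmax Q distribution, in [0, 1].
    1 means the Q-values are indistinguishable (low credibility of the max);
    0 means one action clearly dominates."""
    p = softmax(q_values)
    h = -np.sum(p * np.log(p + 1e-12))
    return h / np.log(len(q_values))

def sqdqn_target(reward, next_q, gamma=0.99):
    """Hypothetical entropy-suppressed DQN target: shrink the bootstrapped
    max toward the mean when entropy (uncertainty) is high, damping the
    overestimation caused by taking a max over noisy estimates."""
    w = 1.0 - entropy_weight(next_q)          # confidence in the max
    boot = w * next_q.max() + (1.0 - w) * next_q.mean()
    return reward + gamma * boot
```

When all next-state Q-values are equal, the entropy weight is 1 and the target falls back to the mean, whereas a sharply dominant action recovers a target close to the standard DQN `max` target.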
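The global planning strategy above rests on describing route planning as a Markov decision process with explicit states, actions, and rewards. The thesis's concrete definitions are not given in this abstract; the following is one illustrative, Gym-style formulation in which the state is the vehicle's current node on a road graph, actions are the outgoing edges, and the reward is the negative travel time plus an assumed goal bonus:

```python
class RoutePlanningEnv:
    """Illustrative MDP for global route planning (not the thesis's
    actual environment). graph maps each node to a dict of
    {next_node: travel_time} for its outgoing edges."""

    def __init__(self, graph, start, goal):
        self.graph = graph
        self.start, self.goal = start, goal
        self.state = start

    def reset(self):
        # Return the initial state (the start node).
        self.state = self.start
        return self.state

    def step(self, action):
        # Move along the chosen edge; pay its travel time as negative
        # reward, plus an assumed +100 bonus for reaching the goal.
        travel_time = self.graph[self.state][action]
        self.state = action
        done = self.state == self.goal
        reward = -travel_time + (100.0 if done else 0.0)
        return self.state, reward, done
```

With this interface, a DQN- or SQDQN-style agent interacts with the environment through `reset`/`step` alone, so the same learning loop applies to both the global (SUMO) and local (CARLA) planning tasks.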
Keywords/Search Tags: Autonomous driving, Deep reinforcement learning, Planning strategy, Intelligent decision