
Vehicle Running Trajectory And Energy Saving Optimization Based On Double-layer Deep Reinforcement Learning

Posted on: 2024-03-22
Degree: Master
Type: Thesis
Country: China
Candidate: Y Shen
Full Text: PDF
GTID: 2542307103990559
Subject: Transportation

Abstract/Summary:
With the continuous development of the automobile industry, autonomous driving and new energy technologies will gradually become widespread; they are also important technical means for China to reach its "carbon peak" and "carbon neutrality" goals at an early date. At present, autonomous driving is still in its infancy, and reaching L4/L5 (high and full) automation will require further time and technical support. Meanwhile, with the continuous development of new energy technologies, hybrid electric vehicles, battery electric vehicles, and fuel cell vehicles have come into view. Among them, fuel cell vehicles have attracted wide attention from scholars because of characteristics such as zero emissions. However, current fuel cell energy management strategies suffer from shortcomings such as low precision and weak adaptability. Applying autonomous driving technology to new energy vehicles through artificial intelligence algorithms can not only handle the energy management problem efficiently, but also achieve technological breakthroughs at the autonomous driving level. Therefore, this thesis takes a hydrogen fuel cell bus as the research object and uses deep reinforcement learning (DRL) to study vehicle running trajectory and energy-saving optimization. The main work is as follows:

(1) In actual driving, car following, lane changing, and free driving are the three basic driving behaviors. During a lane change, the driver must simultaneously attend to vehicles in both the current lane and the target lane in order to make a safe, comfortable, and efficient maneuver; lane changing is therefore more complicated than following or free driving. To address this, this thesis divides the lane change process into two stages, lane change decision and lane change motion, and, after considering the relationship between the two stages, proposes a two-layer deep reinforcement learning architecture for complete control of the lane change process. The upper layer uses a DQN algorithm to control the lane change decision, and the lower layer uses the DDPG algorithm to control the lane change motion. In addition, after each lane change, the vehicle position information before and after the maneuver is fed back to the DQN for cooperative optimization. The results show that, compared with a traditional rule-based lane change decision combined with quintic-polynomial trajectory planning, the proposed two-layer architecture increases the average speed of the ego vehicle over the road section by 2-5% and reduces the average lateral speed and lateral acceleration during lane changes by 12.5% and 12.2%, respectively. Cooperative optimization further improves the lane-change timing selection of the whole two-layer architecture by 34.64%.

(2) As a key technology of the fuel cell bus, the energy management strategy improves driving efficiency and energy economy by rationally distributing power between the fuel cell and the power battery. In this thesis, the fuel cell bus was modeled in Python, and a deep reinforcement learning energy management strategy for highway traffic scenarios was proposed. A DRL energy management framework was first established to optimize energy consumption and economy while taking the durability of the fuel cell into account. The results show that the proposed method achieves the goals of reducing energy consumption, improving economy, and improving fuel cell durability in highway scenarios, and completes the model construction for the subsequent in-depth research. The strategy effectively reduces hydrogen consumption by 4.21% and power fluctuation by 17.2%, which improves the durability of the fuel cell.

(3) Combined with the rapidly developing Internet of Vehicles technology, machine learning can handle vehicle running and energy management problems simultaneously and rapidly in complex traffic scenarios. This thesis therefore proposes an energy management method for the hydrogen fuel cell bus based on a double-layer deep deterministic policy gradient (DDPG) on an urban expressway containing multiple sets of ramps. The SUMO traffic simulation software is used to build an urban expressway with three sets of ramps with different traffic characteristics. Because DDPG can handle a continuous action space, control accuracy and thus the optimization effect are improved. In the upper DRL layer, the agent performs reasonable speed control of the hydrogen fuel cell bus by processing the driving influence of the vehicle ahead and the vehicles at the on-ramp entrances and exits, keeping its driving gentle and stable and minimizing the energy loss caused by frequent speed changes. In the lower DRL layer, after receiving the speed output from the upper layer, the agent distributes power between the fuel cell and the power battery; while reducing energy consumption, the power fluctuation of the fuel cell is also taken into account to improve its durability. In terms of speed control, acceleration is reduced by 7.9% and the acceleration change rate by 19%, and verification shows that the bus always stays within the optimal safe following distance. In terms of energy management, the equivalent hydrogen consumption of the bus is effectively reduced, reaching 93.25% of the dynamic programming (DP) global optimum.
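To illustrate the two-layer control structure described above, the following is a minimal Python sketch of one control step: an upper-layer policy outputs a speed command from the surrounding traffic gaps, and a lower-layer policy splits the demanded power between the fuel cell and the battery. The hand-written rules, constants, and function names here are illustrative placeholders (not from the thesis); in the actual method, both policies would be trained DDPG actor networks.

```python
import numpy as np

# Hypothetical constants for a simplified fuel cell bus model (assumed, not from the thesis).
P_FC_MAX = 60.0    # fuel cell max power, kW
P_BAT_MAX = 100.0  # battery max power, kW

def upper_policy(gap_front, gap_ramp, v_ego):
    """Stand-in for the upper-layer DDPG speed controller: gently adjust the
    speed command as the front gap or the ramp-merge gap shrinks, keeping
    the driving smooth. A trained actor network would replace this rule."""
    v_target = v_ego + 0.5 * np.tanh(min(gap_front, gap_ramp) / 30.0 - 1.0)
    return float(np.clip(v_target, 0.0, 22.0))  # cap at ~80 km/h (assumed)

def lower_policy(p_demand, soc):
    """Stand-in for the lower-layer DDPG power-split controller: the fuel cell
    covers part of the load and the battery buffers the remainder, easing
    fuel cell power fluctuation. A trained actor would replace this rule."""
    p_fc = np.clip(p_demand * (1.0 - soc), 0.0, P_FC_MAX)  # lean on battery at high SOC
    p_bat = np.clip(p_demand - p_fc, -P_BAT_MAX, P_BAT_MAX)
    return float(p_fc), float(p_bat)

# One control step of the two-layer loop: speed command first, then power split.
v_cmd = upper_policy(gap_front=25.0, gap_ramp=40.0, v_ego=15.0)
p_fc, p_bat = lower_policy(p_demand=45.0, soc=0.6)
print(v_cmd, p_fc, p_bat)
```

The key design point the sketch mirrors is the information flow: the lower layer only sees the power demand implied by the upper layer's speed command, so the two layers can be trained and tuned with separate reward signals (driving smoothness above, hydrogen economy and power fluctuation below).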
Keywords/Search Tags: fuel cell bus, deep reinforcement learning, autonomous driving technology, energy management strategy, Internet of Vehicles technology