
Research On Energy Management Strategy Of Power-split Hybrid Electric Vehicle Based On Deep Reinforcement Learning

Posted on: 2022-01-04
Degree: Master
Type: Thesis
Country: China
Candidate: B Y Chen
Full Text: PDF
GTID: 2492306536979799
Subject: Engineering (vehicle engineering)
Abstract/Summary:
Hybrid electric vehicles (HEVs) have significant advantages in reducing air pollution and alleviating the energy crisis. As a key technology of HEVs, energy management coordinates the energy distribution among multiple power sources so that the system components operate efficiently. Therefore, this thesis studies energy management strategies for a power-split HEV based on deep reinforcement learning. The main contents are as follows:

(1) The working mechanism of the power-split hybrid electric vehicle is analyzed, and models of the engine, motor, battery, and other system components are established. A rule-based energy management strategy is formulated, and the ranges of its three power threshold parameters are optimized with the PSO algorithm. Because rule-based control strategies are widely used in production vehicles and the DP strategy is globally optimal, these two strategies serve as baselines for the subsequent research.

(2) The Q-learning algorithm is applied to the hybrid energy management strategy. The Q-learning parameters, such as the state variables, control variables, and reward function, are set. The optimal economic operating curve of the engine is adopted to reduce the system's control variables, and the implementation process of the Q-learning algorithm is analyzed. Action-space optimization is used to eliminate actions that do not satisfy the system constraints, and the initial values of the Q-table are optimized with the ECMS strategy. Comparative analysis shows that these two optimization methods effectively improve the application efficiency of Q-learning.

(3) Combining Q-learning with a deep network, a hybrid energy management strategy based on DQN is proposed. This strategy realizes end-to-end decision-making control with vehicle speed, acceleration, engine speed, and battery SOC as inputs and the torque of motor 1 as output. The learning process of the DQN algorithm is analyzed in detail, and the deep network structure, loss function, and algorithm hyperparameters are set. Combined with
double DQN, different network parameters are used to select and to evaluate the actions in the target Q value, which solves the overestimation problem of Q-learning and DQN. Prioritized experience replay is used to sample data from the experience pool, giving more sampling weight to samples with high learning value; this method is shown to effectively speed up the convergence of the algorithm.

(4) A database of typical driving cycles is constructed and divided into cycle segments. Through principal component analysis and correlation analysis of the characteristic parameters, four representative characteristic parameters are selected, and K-means clustering is used to group the driving cycle segments into three typical driving cycles. These three typical driving cycles are used to train a driving cycle recognition classifier based on a GRNN network and for the offline optimization of the Q-learning and DQN strategies. Finally, the offline Q-learning and DQN strategies are applied online to similar driving cycles in combination with the GRNN driving cycle recognition classifier, realizing self-adaptive control of the Q-learning and DQN strategies under changing driving cycles.
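The core of the Q-learning strategy described above is a tabular temporal-difference update over discretized vehicle states, with infeasible actions removed from the action space before the greedy maximization. The following is a minimal sketch of that update; the state grid, the constraint rules, and all names (e.g. `feasible_actions`) are illustrative assumptions, not the thesis's actual implementation:

```python
import numpy as np

# Illustrative grid sizes: discretized battery SOC, power demand, and actions.
N_SOC, N_PWR, N_ACT = 11, 11, 5
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1   # learning rate, discount factor, exploration rate

rng = np.random.default_rng(0)
Q = np.zeros((N_SOC, N_PWR, N_ACT))  # Q-table over (SOC index, power-demand index)

def feasible_actions(soc_idx):
    """Action-space optimization: mask actions violating battery limits (toy rules)."""
    mask = np.ones(N_ACT, dtype=bool)
    if soc_idx == 0:                 # battery at lower bound: forbid pure-electric action
        mask[0] = False
    if soc_idx == N_SOC - 1:         # battery at upper bound: forbid max-charging action
        mask[-1] = False
    return np.flatnonzero(mask)

def select_action(state):
    """Epsilon-greedy selection restricted to feasible actions."""
    acts = feasible_actions(state[0])
    if rng.random() < EPS:
        return int(rng.choice(acts))
    return int(acts[np.argmax(Q[state][acts])])

def q_update(state, action, reward, next_state):
    """One Q-learning step; the max is taken over feasible actions only."""
    acts = feasible_actions(next_state[0])
    td_target = reward + GAMMA * Q[next_state][acts].max()
    Q[state][action] += ALPHA * (td_target - Q[state][action])
```

The reward would typically penalize instantaneous fuel consumption and SOC deviation; initializing `Q` from an ECMS-derived cost, as the thesis does, only changes the `np.zeros` initialization above.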
Keywords/Search Tags:Hybrid Electric Vehicles, Energy Management Strategy, Q-learning, DQN