
Research On Multi-factor Anthropomorphic Free Lane Change Decision Making Method

Posted on: 2024-05-23
Degree: Master
Type: Thesis
Country: China
Candidate: Z K Cao
Full Text: PDF
GTID: 2542307064483224
Subject: Vehicle Engineering
Abstract/Summary:
With the rapid development of intelligent vehicle research, the highway autonomous driving system is one of the important systems that can currently be industrialized. Improving the safety and efficiency of free lane changing is a key technology for the rapid application of highway automatic driving, and it is a hot research topic among scholars at home and abroad. At present, free lane change decision-making mostly relies on expert rules or reinforcement learning. Expert rule-based methods have low generalization ability because they consider only a limited set of factors, while reinforcement learning-based methods suffer from low learning efficiency and strong data dependence. To solve these problems, this paper proposes a free lane change decision method that combines reinforcement learning with rules. It studies a lane change decision model based on the DQN algorithm that considers the driver's driving style, together with free lane change safety restriction rules that consider the road adhesion coefficient, realizing a multi-factor, anthropomorphic free lane change decision.

Firstly, the influencing factors of free lane changing were analyzed. The lane change process was analyzed and the lane change scene simplified; the influencing factors, including traffic flow, driving style and weather, were examined from the perspectives of safety and benefit, and the characteristic quantities that can represent these factors were studied.

Secondly, a free lane change decision method based on DQN reinforcement learning and safety restriction rules was studied. On the theoretical basis of the DQN method, the deep Q-network structure was designed. The state space, action space and reward function considering driving style were then designed: the continuous state space mainly includes the relative speeds and distances of the surrounding vehicles and the ego vehicle speed, while the discrete action space includes left lane change, right lane change and lane keeping. In addition, by comparing the lane
number of the intelligent vehicle with the allowed lane numbers, restrictions are placed on the target lane to prevent the vehicle from entering a lane outside the allowed range. This simple rule also keeps the model from learning lane change strategies inefficiently. The designed reward function consists of three parts: an efficiency reward, a safety reward and a comfort reward. By tuning the reward function, the design and construction of the anthropomorphic free lane change DQN model is completed. In addition, a safety rule model considering different weather was designed, mainly accounting for the difference in safe lane change distance, on road surfaces with different adhesion coefficients, between the intelligent vehicle and the vehicle ahead in the target lane and the vehicle ahead in the current lane. Finally, the input and output data processing of the model was completed, and the DQN model was trained in a VTD/MATLAB training and verification environment.

Finally, in the verification stage, different parameters in the reward function were designed to train agents with different lane change habits, focusing on the influence of driving style on free lane changing and on the features with a high contribution rate to driving style classification. The three models were verified in environments with different traffic flow densities. The results show that lane change models with different driving styles exhibit the corresponding lane change characteristics: compared with the neutral and conservative models, the aggressive model has a higher lane change frequency, a faster average speed and a shorter average following distance at the moment of lane change. All three models can also complete lane changes in different traffic density environments. In addition, in view of current lane change models' insufficient consideration of weather, the safe lane change rule considering the road adhesion coefficient was
verified. The results show that, when the road adhesion coefficient is considered, intelligent vehicles with different driving styles reduce their lane change frequency in bad weather and thus reduce accident risk.

The anthropomorphic and multi-factor verification shows that, by changing the training parameters that are strongly related to driving style, decision models with different driving styles can be trained, realizing an anthropomorphic lane change decision design in the intelligent vehicle rather than a single lane change style. By considering multiple factors, the transferability of the model across scenarios is improved and its application scope is expanded. The combination of reinforcement learning and rule restriction enjoys the advantages of both: on the basis of the generalization ability of reinforcement learning, expert knowledge can be used to improve safety and reduce learning complexity, so that the decision model is guaranteed safe by the rules while learning different lane change strategies.
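The decision structure described above, a Q-network over a continuous traffic state with a rule-based mask on the discrete actions, can be sketched as follows. This is a minimal illustration, not the thesis's actual network: the state dimension, layer sizes and weight initialization are assumptions, and the mask simply forbids lane changes that would leave the allowed lane range.

```python
import numpy as np

# Discrete action space from the abstract: left change, keep, right change.
ACTIONS = ["change_left", "keep_lane", "change_right"]

rng = np.random.default_rng(0)

# State: relative speeds/gaps of surrounding vehicles plus ego speed
# (assumed 7-dimensional here purely for illustration).
STATE_DIM, HIDDEN, N_ACTIONS = 7, 32, len(ACTIONS)
W1 = rng.normal(0.0, 0.1, (STATE_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (HIDDEN, N_ACTIONS))
b2 = np.zeros(N_ACTIONS)

def q_values(state):
    """Two-layer MLP approximating Q(s, a) for the three lane actions."""
    h = np.maximum(0.0, state @ W1 + b1)  # ReLU hidden layer
    return h @ W2 + b2

def select_action(state, lane, n_lanes, epsilon=0.1):
    """Epsilon-greedy choice under the rule-based lane mask: a lane change
    that would leave the allowed lane range is forbidden."""
    mask = np.array([lane > 0, True, lane < n_lanes - 1])  # left, keep, right
    if rng.random() < epsilon:
        return int(rng.choice(np.flatnonzero(mask)))  # explore among legal actions
    q = np.where(mask, q_values(state), -np.inf)      # exploit: best legal action
    return int(np.argmax(q))
```

Masking illegal actions before the argmax is what lets the simple rule keep the agent out of disallowed lanes without the network having to learn that constraint from scratch.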
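The weather-dependent safety rule turns on the road adhesion coefficient: a lower coefficient lengthens braking distance, so a larger gap to the vehicle ahead in the target lane is required before a lane change is allowed. A simple kinematic form of such a rule is sketched below; the reaction time, safety margin and the formula itself are illustrative assumptions, not the thesis's exact rule.

```python
G = 9.81  # gravitational acceleration, m/s^2

def safe_lane_change_gap(v_ego, v_lead, mu, t_react=1.0, margin=5.0):
    """Minimum gap (m) to the vehicle ahead required for a lane change,
    assuming both vehicles can brake at mu * g on the current surface.
    v_ego, v_lead in m/s; mu is the road adhesion coefficient."""
    a_max = mu * G                                   # best achievable deceleration
    d_ego = v_ego * t_react + v_ego ** 2 / (2.0 * a_max)   # ego stopping distance
    d_lead = v_lead ** 2 / (2.0 * a_max)                   # lead stopping distance
    return max(margin, d_ego - d_lead + margin)
```

On a wet or icy surface (smaller mu) the required gap grows, so fewer gaps in traffic qualify and the lane change frequency drops, which matches the behavior reported in the verification.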
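The anthropomorphic aspect comes from varying the reward-function parameters by driving style, so that aggressive, neutral and conservative agents optimize differently weighted combinations of the efficiency, safety and comfort terms. The weights and term definitions below are hypothetical placeholders that only illustrate the mechanism.

```python
# Hypothetical per-style weights on the three reward terms from the abstract.
STYLE_WEIGHTS = {
    "aggressive":   {"efficiency": 1.5, "safety": 0.5, "comfort": 0.3},
    "neutral":      {"efficiency": 1.0, "safety": 1.0, "comfort": 1.0},
    "conservative": {"efficiency": 0.5, "safety": 1.5, "comfort": 1.2},
}

def reward(style, speed_ratio, time_gap, jerk):
    """Weighted sum of efficiency, safety and comfort rewards.
    speed_ratio = v / v_desired; time_gap in s to the lead vehicle;
    jerk in m/s^3. Term shapes are illustrative assumptions."""
    w = STYLE_WEIGHTS[style]
    r_eff = w["efficiency"] * speed_ratio             # reward driving near desired speed
    r_safe = w["safety"] * min(time_gap / 2.0, 1.0)   # reward keeping a safe time gap
    r_comf = -w["comfort"] * abs(jerk)                # penalize harsh maneuvers
    return r_eff + r_safe + r_comf
```

Because the aggressive weighting favors the efficiency term, an agent trained on it tolerates shorter gaps and changes lanes more often, reproducing the style differences observed in the verification stage.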
Keywords/Search Tags: Autopilot, Free lane changes, Reinforcement learning, Driving style