
Research On End-to-End Autonomous Driving Based On Driving Scene Transfer

Posted on: 2024-01-29
Degree: Master
Type: Thesis
Country: China
Candidate: W Zhou
Full Text: PDF
GTID: 2542307103499264
Subject: Information and Communication Engineering

Abstract/Summary:
Autonomous vehicles, equipped with intelligent on-board devices that make driving decisions, can effectively alleviate traffic congestion and reduce the probability of accidents caused by driver inattention. End-to-end autonomous driving is one of the core technologies in this field: it derives the vehicle's driving decisions and actions directly from raw sensor input. However, most existing end-to-end methods are limited to scenes similar to their training data and suffer significant performance degradation when applied in other target scenes. Scene transfer maps the scene features of the training-set source domain into the target-domain scene, reducing the domain shift between training and deployment scenarios. Accordingly, targeting the characteristics of driving scenes, this thesis first proposes a scene-transfer model based on spatio-temporal cycle-consistent generative adversarial learning, which improves the quality of images generated by driving-scene transfer and reduces the frequency of flicker artifacts. It then introduces scene transfer into end-to-end autonomous driving, designing a new end-to-end method that improves the cross-domain decision-making performance of end-to-end models. The work comprises the following two parts.

In the first part, to improve image quality in scene transfer of continuous driving-scene sequences, a scene-transfer model based on a spatio-temporal cycle-consistent generative adversarial network is proposed. Starting from the cycle-consistent generative adversarial network, spatial semantic-consistency and temporal-consistency constraints are introduced, and a scene-transfer network for continuous driving scenes is implemented on the basis of these two constraints. The spatial semantic-consistency constraint keeps the image category information unchanged before and after scene transfer, ensuring that sufficient semantic information is retained during transfer. At the same time, a temporal prediction module is introduced to construct a future-frame cycle reconstruction of the current driving scene, realizing the temporal-consistency constraint and improving the continuity of the generated driving-scene sequences. The model parameters are optimized by adversarial training. During transfer, a driving-scene sequence is fed into the generator to obtain the corresponding target-domain sequence. The proposed model is trained and tested on a dataset collected from the CARLA simulator. Experimental results show that, compared with the baseline methods, the proposed model generates driving-scene sequences of higher quality and higher continuity that retain more semantic information.

In the second part, to address the performance degradation of end-to-end autonomous driving models in cross-domain decision-making, an end-to-end autonomous driving model based on scene transfer is proposed. Building on imitation learning, an image feature-extraction module and a speed-fusion module for driving scenes are designed, and the end-to-end model is realized from these two modules. A method combining scene transfer with end-to-end autonomous driving is then designed: the proposed scene-transfer model maps driving scenes across domains to reduce the domain shift, and the target-domain driving scenes generated by the transfer are fed into the end-to-end model for decision-making. Compared with existing end-to-end autonomous driving models, the proposed method reduces the domain shift between the source-domain training data and the target-domain data by introducing scene transfer, effectively improving cross-domain decision-making performance.

In summary, by designing spatial semantic-consistency and temporal-consistency constraints for cycle-consistent generative adversarial networks, this thesis proposes a scene-transfer model based on spatio-temporal cycle-consistent adversarial learning that improves the quality of transferred images, reduces the frequency of flicker artifacts, and retains more scene features for end-to-end feature extraction. In addition, it integrates the scene-transfer method into end-to-end autonomous driving, effectively improving the cross-domain decision-making performance of the driving model and providing a theoretical basis and technical support for training cross-scene autonomous driving methods.
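The three constraints described in the first part can be sketched as a single training objective. The sketch below is a minimal, hypothetical illustration: `G_st`, `G_ts`, `predict_next`, and `segment` are toy stand-ins for the two generators, the temporal prediction module, and a fixed semantic-segmentation head, and the loss weights are assumed, not taken from the thesis.

```python
import numpy as np

def l1(a, b):
    """Mean absolute error between two arrays."""
    return float(np.mean(np.abs(a - b)))

def spatio_temporal_cycle_loss(x_t, x_t1, G_st, G_ts, predict_next, segment,
                               lam_cyc=10.0, lam_sem=1.0, lam_tmp=1.0):
    """Combine three constraints on a source-domain frame pair (x_t, x_t1):
    1) cycle consistency: source -> target -> source reconstructs x_t;
    2) spatial semantic consistency: category maps match before/after transfer;
    3) temporal consistency: the predicted next target-domain frame agrees
       with the transfer of the actual next source-domain frame."""
    y_t = G_st(x_t)                       # transfer frame t to target domain
    cyc = l1(G_ts(y_t), x_t)              # cycle reconstruction of frame t
    sem = l1(segment(y_t), segment(x_t))  # keep category information fixed
    tmp = l1(predict_next(y_t), G_st(x_t1))  # future-frame cycle agreement
    return lam_cyc * cyc + lam_sem * sem + lam_tmp * tmp
```

In practice each callable would be a neural network trained jointly with the adversarial losses; here identity functions suffice to exercise the arithmetic.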
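The cross-domain inference pipeline of the second part can likewise be sketched: each target-domain frame is first mapped back toward the training domain by the scene-transfer generator, then passed to the end-to-end model, which fuses image features with the measured speed before producing a decision. All names (`transfer`, `extract_features`, the linear decision head `W`, `b`) are illustrative assumptions, not the thesis's actual modules.

```python
import numpy as np

def fuse_and_decide(frame, speed, transfer, extract_features, W, b):
    """Return (steer, throttle, brake) for one target-domain frame.
    transfer: scene-transfer generator mapping the frame toward the
              source (training) domain, shrinking the domain shift.
    extract_features: image feature-extraction module (e.g. a CNN backbone).
    W, b: weights of a linear decision head over [features, speed]."""
    src_like = transfer(frame)                # reduce domain shift first
    feats = extract_features(src_like)        # image feature-extraction module
    fused = np.concatenate([feats, [speed]])  # speed-fusion module
    actions = W @ fused + b                   # decision head
    return tuple(actions[:3])
```

The key design choice mirrored here is that scene transfer happens before feature extraction, so the end-to-end model only ever sees inputs resembling its training distribution.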
Keywords/Search Tags: scene transfer, generative adversarial networks, spatio-temporal cycle consistency, cross-domain decision making, autonomous driving