
Dual Sensor Information Fusion For Target Detection And Attitude Estimation In Autonomous Driving

Posted on: 2020-03-17 | Degree: Master | Type: Thesis
Country: China | Candidate: P Cao | Full Text: PDF
GTID: 2392330590973314 | Subject: Electronic and communication engineering
Abstract/Summary:
Autonomous driving is an important direction of current artificial intelligence research, and the public demand for intelligent transportation gives this direction great practical value. Self-driving vehicles are typically equipped with several types of sensors, such as dashboard cameras and LiDAR. Because each sensor has different characteristics, a single sensor can capture only part of the information about the environment; to obtain a more complete picture of the vehicle's surroundings, multi-sensor data must be fused. Target detection is one of the key technologies in autonomous driving research: detecting obstacles around the vehicle accurately and quickly provides effective obstacle-avoidance information that keeps the vehicle driving safely. With the development of computer vision, target detection has gradually evolved from two-dimensional detection to detection in three-dimensional space, and in the autonomous driving domain, high-precision three-dimensional pose estimation of targets can further improve vehicle safety. On this basis, this thesis studies three-dimensional object pose estimation based on dual-sensor information fusion.

First, this thesis studies target detection based on RGB images. RGB images are a common and important source of information on autonomous vehicles, and RGB-based detection techniques are relatively mature. After introducing and analyzing three families of target detection algorithms, this thesis selects a regression-based detection algorithm and improves it. The algorithm is evaluated before and after the improvement on a public dataset and on locally collected data, and the experimental results demonstrate the superiority of the improved algorithm.

Second, this thesis studies target localization based on the fusion of RGB images and LiDAR point clouds. After analyzing the shortcomings of current state-of-the-art RGB and LiDAR fusion algorithms, a frustum-based PointNet target localization algorithm is proposed. Localization experiments on a public dataset show that the proposed algorithm achieves better localization accuracy, and experiments on the locally collected dataset demonstrate its practical value.

Finally, the pose parameters of the target are predicted and visualized. The pose parameters include size, orientation, and position. Methods for predicting the size and orientation parameters are studied, and the target is visualized in three-dimensional space by combining these with the position parameters obtained in the previous chapter. The experiments use the 3D detection rate to measure the prediction accuracy of the pose parameters and present visualizations of target poses in driving scenes.
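The abstract does not specify which regression-based detector is improved. As an illustration of the box-regression idea that such detectors share, the following minimal sketch decodes predicted offsets relative to anchor boxes; the function name and the (cx, cy, w, h) anchor layout are assumptions for this example, not the thesis code.

```python
import numpy as np

def decode_boxes(anchors, deltas):
    """Decode regressed offsets (tx, ty, tw, th) relative to anchor boxes,
    the usual parameterization in regression-based detectors.

    anchors: (N, 4) boxes as (cx, cy, w, h); deltas: (N, 4) predicted offsets.
    Returns (N, 4) decoded boxes as (cx, cy, w, h).
    """
    cx = anchors[:, 0] + deltas[:, 0] * anchors[:, 2]   # shift center by a fraction of the anchor size
    cy = anchors[:, 1] + deltas[:, 1] * anchors[:, 3]
    w = anchors[:, 2] * np.exp(deltas[:, 2])             # scale width/height exponentially
    h = anchors[:, 3] * np.exp(deltas[:, 3])
    return np.stack([cx, cy, w, h], axis=1)
```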
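In frustum-based fusion, a 2D detection box is typically lifted into a 3D viewing frustum and only the LiDAR points falling inside it are kept for localization. Below is a minimal sketch of that selection step, assuming the points have already been transformed into the camera frame and that `P` is the 3x4 projection matrix from calibration; it does not reproduce the details of the thesis algorithm.

```python
import numpy as np

def frustum_points(points_cam, P, box2d):
    """Keep LiDAR points whose image projection falls inside a 2D detection box.

    points_cam: (N, 3) points in the camera coordinate frame (z pointing forward)
    P:          (3, 4) camera projection matrix
    box2d:      (u1, v1, u2, v2) detection box in pixel coordinates
    """
    u1, v1, u2, v2 = box2d
    # Project the points onto the image plane using homogeneous coordinates.
    pts_h = np.hstack([points_cam, np.ones((points_cam.shape[0], 1))])
    proj = pts_h @ P.T                       # (N, 3)
    u = proj[:, 0] / proj[:, 2]
    v = proj[:, 1] / proj[:, 2]
    in_front = points_cam[:, 2] > 0          # discard points behind the camera
    in_box = (u >= u1) & (u <= u2) & (v >= v1) & (v <= v2)
    return points_cam[in_front & in_box]
```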
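For visualizing the pose parameters (size, orientation, position), one common approach is to convert them into the eight corners of a 3D bounding box and draw that box in the scene. The sketch below shows that conversion under the assumption that the orientation is a single heading (yaw) angle about the vertical axis; the parameterization actually used in the thesis may differ.

```python
import numpy as np

def box3d_corners(center, size, yaw):
    """Return the 8 corners of a 3D box from its pose parameters.

    center: (x, y, z) box center, size: (l, w, h), yaw: heading angle about
    the vertical axis.  Output shape is (8, 3).
    """
    l, w, h = size
    # Corner offsets in the object frame, centered at the origin.
    x = np.array([ l,  l, -l, -l,  l,  l, -l, -l]) / 2.0
    y = np.array([ w, -w, -w,  w,  w, -w, -w,  w]) / 2.0
    z = np.array([ h,  h,  h,  h, -h, -h, -h, -h]) / 2.0
    corners = np.vstack([x, y, z])            # (3, 8)
    # Rotate about the vertical (z) axis by the heading angle, then translate.
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return (R @ corners).T + np.asarray(center)
```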
Keywords/Search Tags: autonomous driving, target detection, multi-sensor fusion, attitude parameter prediction