
Research On Environment Perception Method Based On RGB Image And Point Cloud Fusion

Posted on: 2022-07-27 | Degree: Master | Type: Thesis
Country: China | Candidate: G F Xie | Full Text: PDF
GTID: 2492306563978619 | Subject: Mechanical and electrical engineering
Abstract/Summary:
With the broad adoption of artificial intelligence and 5G technology, autonomous driving is developing rapidly. To cope with complex and changeable environments, autonomous vehicles need strong environment perception capabilities. At present, the camera and the lidar are the core sensors for autonomous driving perception, but each has its own limitations: a monocular camera cannot measure distance, while a lidar lacks color information and its resolution is far lower than that of a camera, which makes it difficult for lidar to recognize information such as lane lines and traffic signs. Fusing images with point clouds makes it possible to obtain accurate environmental information with depth. Therefore, to achieve environment perception that meets the requirements of autonomous driving, this paper studies an environment perception method based on the fusion of RGB images and point clouds.

The environment perception task is divided into two parts: traffic-rule sign detection and 3D object detection.

For traffic-rule sign detection, a row-selection-based method and an improved YOLOv3 are used to detect lane lines and traffic signs in the image, respectively. For the row-selection method, a grid line anchor classification algorithm is proposed, which greatly reduces the amount of computation. In addition, a shape loss function based on the continuity of lane lines is designed to handle lane lines that are partially occluded and therefore difficult to detect (a sketch of such a continuity loss is given below). The row-selection method reaches a detection accuracy of 90.7% on the Normal subset of the CULane dataset.

For the improved YOLOv3 traffic sign detector, shallower feature maps in Darknet-53 are used for target localization to address the difficulty of detecting small targets, and a weakened non-maximum suppression mechanism (sketched below) ensures that two adjacent traffic signs can both be output. The improved YOLOv3 reaches a detection accuracy of 98.7% on the CCTSDB dataset. After the traffic-rule signs are detected in the image, the joint calibration of the lidar and the camera is used to project the detections into the point cloud, so that the detection results acquire depth.

For the 3D object detection task, this paper proposes a detection method based on the fusion of image and point cloud. First, the frustum corresponding to each bounding box produced by the 2D detector is used to filter the raw point cloud and reduce the amount of data (see the projection-based filter sketched below). Then, an improved voting network based on the Generalized Hough Transform is proposed to extract multi-scale features and cope with the non-uniform density of point clouds. The multi-scale features are fed into an RPN composed of an FCN, detection heads, and regression heads for classification and regression. Finally, the 2D DIoU loss function is extended to 3D (sketched below) to improve the consistency between the generated boxes and the target boxes, thereby improving the accuracy of classification and regression. Extensive experiments on the KITTI dataset show that, compared with the baseline, the 3D detection accuracy of the proposed algorithm improves by 0.71% to 89.73%, and the BEV (bird's-eye view) detection accuracy improves by 7.28% to 97.51%, demonstrating the effectiveness of the improvements.
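The thesis does not give the exact form of its continuity-based shape loss. The following minimal sketch shows one common way to realize the idea in the row-anchor setting: penalize the change in expected column position between adjacent rows, so a partially occluded lane is still forced onto a smooth curve. The function name and tensor layout are illustrative assumptions, not the thesis's code.

```python
import torch
import torch.nn.functional as F

def continuity_shape_loss(row_logits: torch.Tensor) -> torch.Tensor:
    """Shape-loss sketch for row-anchor lane detection.

    row_logits: (num_rows, num_cols) classification logits; each row
    predicts which grid column the lane passes through. The loss is the
    first-order difference of the expected column index between adjacent
    rows, which encourages a continuous lane curve.
    """
    probs = F.softmax(row_logits, dim=1)                       # (R, C)
    cols = torch.arange(row_logits.size(1),
                        dtype=probs.dtype, device=row_logits.device)
    expected = (probs * cols).sum(dim=1)                       # soft column index per row
    return (expected[1:] - expected[:-1]).abs().mean()         # smoothness penalty
```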
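The "weakened non-maximum suppression" can be read as a soft-NMS-style score decay: overlapping boxes are down-weighted instead of discarded, so two adjacent traffic signs can both survive. A minimal sketch under that assumption (linear decay; the thresholds are placeholders):

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, each (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def soft_nms(boxes, scores, iou_thresh=0.3, score_thresh=0.001):
    """Linear soft-NMS: overlapping boxes are down-weighted, not removed,
    so two adjacent signs can both be kept."""
    boxes, scores = boxes.copy(), scores.copy()
    keep = []
    while scores.size and scores.max() > score_thresh:
        i = scores.argmax()
        keep.append(boxes[i].copy())
        scores[i] = 0.0                                # consume the top box
        overlap = iou(boxes[i], boxes)
        decay = np.where(overlap > iou_thresh, 1.0 - overlap, 1.0)
        scores *= decay                                # weaken, don't suppress
    return np.array(keep)
```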
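The frustum filtering step amounts to projecting every lidar point into the image with the joint-calibration matrices and keeping only the points that land inside a 2D detection box. A sketch assuming KITTI-style intrinsics `K` and a lidar-to-camera extrinsic `T_cam_from_lidar`; the names are placeholders for whatever the calibration pipeline produces:

```python
import numpy as np

def frustum_filter(points, box2d, K, T_cam_from_lidar):
    """Keep lidar points whose image projection falls inside a 2D box.

    points: (N, 3) lidar xyz; box2d: (x1, y1, x2, y2) from the 2D detector;
    K: (3, 3) camera intrinsics; T_cam_from_lidar: (4, 4) extrinsic
    transform obtained from the joint calibration.
    """
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])  # homogeneous coords
    cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]                 # lidar -> camera frame
    front = cam[:, 2] > 0.1                                     # drop points behind the camera
    points, cam = points[front], cam[front]
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                                 # perspective divide
    x1, y1, x2, y2 = box2d
    inside = (uv[:, 0] >= x1) & (uv[:, 0] <= x2) & \
             (uv[:, 1] >= y1) & (uv[:, 1] <= y2)
    return points[inside]
```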
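Extending DIoU from 2D to 3D replaces areas with volumes and measures the squared center distance against the diagonal of the smallest enclosing 3D box. The sketch below assumes axis-aligned boxes for brevity; the thesis presumably handles oriented boxes, which only changes how the intersection volume is computed:

```python
import numpy as np

def diou_3d(box_a, box_b):
    """3D DIoU sketch for axis-aligned boxes (x1, y1, z1, x2, y2, z2).

    DIoU = IoU - ||c_a - c_b||^2 / diag^2, where diag is the diagonal of
    the smallest box enclosing both. The penalty pulls the predicted
    center toward the target even when the boxes do not overlap.
    """
    a, b = np.asarray(box_a, float), np.asarray(box_b, float)
    lo = np.maximum(a[:3], b[:3])
    hi = np.minimum(a[3:], b[3:])
    inter = np.prod(np.clip(hi - lo, 0, None))          # intersection volume
    vol_a = np.prod(a[3:] - a[:3])
    vol_b = np.prod(b[3:] - b[:3])
    iou = inter / (vol_a + vol_b - inter + 1e-9)
    center_a = (a[:3] + a[3:]) / 2
    center_b = (b[:3] + b[3:]) / 2
    enc_lo = np.minimum(a[:3], b[:3])                   # enclosing box corners
    enc_hi = np.maximum(a[3:], b[3:])
    diag2 = np.sum((enc_hi - enc_lo) ** 2) + 1e-9
    return iou - np.sum((center_a - center_b) ** 2) / diag2

# Regression loss form: L = 1 - diou_3d(pred, target).
```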
To verify the traffic-rule sign and 3D object detection algorithms, this paper also designs a ROS-based system built on the fusion of image and point cloud. ROS's convenient message mechanism handles the communication and information integration between the multiple processes. In the end, real-time traffic-rule sign detection and 3D object detection are realized and unified within a single framework.
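The ROS design relies on the standard publish/subscribe message mechanism to connect the detector processes. A minimal rospy sketch; the node name, topic, and the String payload are placeholders for the thesis's actual detection messages:

```python
import rospy
from std_msgs.msg import String  # placeholder payload; a real system would
                                 # define custom detection message types

def on_detection(msg: String) -> None:
    # Callback invoked by ROS whenever a message arrives on the topic.
    rospy.loginfo("fused detection received: %s", msg.data)

if __name__ == "__main__":
    rospy.init_node("fusion_monitor")                        # one process = one node
    pub = rospy.Publisher("/fusion/detections", String, queue_size=10)
    rospy.Subscriber("/fusion/detections", String, on_detection)
    rate = rospy.Rate(10)                                    # 10 Hz publishing loop
    while not rospy.is_shutdown():
        pub.publish(String(data="example 3D box + sign label"))
        rate.sleep()
```

In the thesis's setup, the 2D detector, the 3D detector, and the fusion node would each run as a separate ROS node exchanging such messages, which is what unifies the real-time pipeline into one framework.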
Keywords/Search Tags: environment perception, sensor fusion, detection