Environmental perception is a key stage in autonomous driving. Comprehensive and accurate perception of the surrounding environment is an important guarantee of driving safety for autonomous vehicles. Single-sensor detection algorithms suffer from poor robustness and insufficient perception capability in complex road environments with dense obstacles, and most existing multi-sensor fusion algorithms struggle to accurately detect small objects such as pedestrians. To achieve accurate object detection, this paper studies obstacle detection methods for autonomous driving based on LiDAR and camera multi-sensor fusion. The research contents are as follows:

(1) Calibration is the core of multi-sensor spatial synchronization and cooperative work, and the premise for a perception algorithm to accurately estimate obstacle pose. The perspective imaging principle of the monocular camera and the planar-target joint calibration model for LiDAR and camera are analyzed, and on this theoretical basis an intrinsic calibration experiment and a joint calibration experiment are designed. With the help of software platforms such as ROS and MATLAB, the intrinsic and extrinsic parameter matrices of the sensors are solved, and spatial registration and alignment of the laser point cloud with the image data is completed, verifying the validity of the joint calibration results.

(2) To compensate for the insufficiency of a single sensor in perception and detection tasks and to classify obstacles more accurately, an obstacle detection scheme is proposed that fuses LiDAR-based region-of-interest extraction with vision. Obstacle point cloud detection based on LiDAR is studied, mainly including point cloud filtering, RANSAC ground point cloud segmentation with angle constraints, obstacle point cloud clustering based on a KD-Tree spatial index with adaptive parameters, and pose estimation of the enveloping 3D bounding box. Using the projection transformation matrix between the two sensors, the obtained obstacle point cloud bounding boxes are projected onto the image plane to obtain their corresponding regions of interest (ROIs) in the image; each ROI is appropriately enlarged, and a greedy overlapping-region merging strategy is adopted to obtain the final ROIs. Drawing on CSPNet and SPP, the original YOLOv3 is improved to perform feature extraction and detect obstacles within the ROIs. Compared with vision-only detection, the proposed fusion detection algorithm achieves APs of 95.98%, 82.36%, and 88.33% for cars, pedestrians, and cyclists, respectively.

(3) To achieve accurate obstacle detection in 3D space and improve the detection accuracy of small objects such as pedestrians and cyclists, a 3D object detection network is proposed that fuses images with point cloud feature extraction based on a self-attention mechanism. The Faster R-CNN algorithm is improved to produce 2D obstacle bounding boxes, and the projection transformation matrix between the LiDAR and the camera is used to back-project each image bounding box into a frustum, reducing the computational scale and spatial search range of the point cloud. A Self-Attention PointNet based on the self-attention mechanism is proposed to segment the raw point cloud within the frustum. A bounding-box-regression PointNet and a lightweight T-Net are used to predict the 3D bounding box parameters of the target point cloud, and a regularization term is added to the loss function to improve detection accuracy. Validated on the KITTI dataset, the results show that the detection accuracy for cars and pedestrians is better than that of the original model, and the detection accuracy for cyclists is significantly improved, increasing by 12.25%, 8.08%, and 7.23% across the three difficulty levels, respectively.

(4) To verify the effect of the proposed algorithms in real scenes, real-vehicle tests are carried out on campus by building an intelligent vehicle test platform. The first experiment targets the fusion algorithm proposed in Chapter 3. First, the point cloud filtering, ground segmentation, clustering, and bounding box estimation algorithms are verified, and the results show that the proposed algorithm performs well. Second, three experimental scenarios are designed: pedestrian occlusion, vehicle occlusion, and simultaneous occlusion of pedestrians and vehicles; the proposed fusion algorithm outputs the correct category information, and the detection accuracy reaches 93.74%. The second experiment targets the fusion algorithm proposed in Chapter 4. First, two test scenarios are designed to test the accuracy of frustum cropping and target candidate region extraction, and the results show that the algorithm crops completely and accurately. Second, the 3D object detection effect is verified; the detection accuracy of the proposed algorithm exceeds 85%, showing good performance.
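The spatial registration used throughout items (1)–(3) reduces to transforming each LiDAR point by the extrinsic matrix from the joint calibration and then applying the camera intrinsic matrix. The following is a minimal sketch of that projection; the matrices `K` and `Rt` here are illustrative placeholders, not the calibrated values obtained in the thesis:

```python
import numpy as np

# Placeholder calibration results (assumptions for illustration only):
# K  - camera intrinsic matrix from the intrinsic calibration
# Rt - 3x4 LiDAR-to-camera extrinsic matrix [R|t] from the joint calibration
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
Rt = np.hstack([np.eye(3), np.zeros((3, 1))])  # identity pose, for illustration

def project_lidar_to_image(points_lidar, K, Rt):
    """Project Nx3 LiDAR points to pixel coordinates (u, v).

    Points behind the image plane (z <= 0 in the camera frame) are dropped.
    """
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])  # homogeneous, Nx4
    pts_cam = (Rt @ pts_h.T).T                          # camera frame, Nx3
    pts_cam = pts_cam[pts_cam[:, 2] > 0]                # keep points in front
    uv_h = (K @ pts_cam.T).T                            # pixel coords, homogeneous
    return uv_h[:, :2] / uv_h[:, 2:3]                   # perspective divide

pts = np.array([[0.0, 0.0, 10.0],   # on the optical axis, 10 m ahead
                [1.0, 0.0, 10.0]])  # 1 m to the side
print(project_lidar_to_image(pts, K, Rt))  # → [[320. 240.] [390. 240.]]
```

Projecting all eight corners of an obstacle's 3D bounding box this way and taking their 2D extent yields the image ROI described in item (2); running the same mapping in reverse through a 2D box yields the frustum of item (3).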
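The angle-constrained RANSAC ground segmentation of item (2) can be sketched as below: candidate planes whose normal tilts too far from vertical are rejected, so only near-horizontal, ground-like planes are accepted. The parameter values (distance threshold, allowed tilt, iteration count) are illustrative assumptions, not the settings used in the thesis:

```python
import numpy as np

def ransac_ground(points, n_iters=200, dist_thresh=0.2, max_angle_deg=10.0, seed=0):
    """Fit a ground plane with RANSAC under an angle constraint.

    Returns a boolean mask over `points` (Nx3); True marks ground inliers.
    Planes whose normal deviates from the z axis by more than
    `max_angle_deg` are skipped (the angle constraint).
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    cos_limit = np.cos(np.deg2rad(max_angle_deg))
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-8:
            continue  # degenerate (near-collinear) sample
        normal /= norm
        if abs(normal[2]) < cos_limit:
            continue  # plane too steep to be ground
        d = -normal @ sample[0]
        inliers = np.abs(points @ normal + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

The non-ground points left after removing the mask are what the KD-Tree clustering step then groups into individual obstacles.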