In recent years, autonomous driving technology has gradually become a research focus of the automotive industry both domestically and abroad. An autonomous driving system consists mainly of environment perception, path planning, and decision-making control, and the domain knowledge involved includes vehicle structure, automatic control theory, sensor principles, and computer technology. Environment perception is a key component of the autonomous driving system and a prerequisite for autonomous vehicles to make planning decisions. However, the complex road traffic environments of real scenes pose great challenges for the perception task. Perception algorithms based on a single sensor suffer from low accuracy and poor robustness. Multi-sensor fusion algorithms can combine the advantages of each sensor, achieving relatively high accuracy while providing a degree of redundancy for the perception system. Existing multi-sensor fusion algorithms, however, detect only a limited set of categories and have poor real-time performance, which makes them difficult to deploy in engineering practice. Building on an autonomous vehicle platform, this paper designs a multi-sensor object-level fusion algorithm. The algorithm performs well in both detection accuracy and real-time performance, and provides reliable information about surrounding obstacles for autonomous vehicles. The main research contents are as follows:

(1) Design of the visual object detection algorithm. This paper proposes an improved visual object detection algorithm. To improve detection accuracy, DIoU is adopted as the bounding-box regression loss on top of YOLOv3, improving the localization accuracy of the predicted boxes, and DIoU-NMS post-processing is used to improve detection of occluded targets. In addition, to increase running speed, a lightweight network is used as the backbone feature extractor, which greatly reduces the model's computational cost. Experiments show that, compared with YOLOv3, the detection accuracy of the proposed algorithm improves by 4.12%, reaching 94.20% on the car category. Its running time of 0.0223 s is only 54.13% of that of YOLOv3.

(2) Design of the lidar object detection algorithm. This paper first applies point cloud filtering to remove noise from the raw point cloud, then combines a grid map with a plane model to remove the interference of ground points. Region-based clustering is then performed on the non-ground points to further reduce the amount of point cloud data while retaining target points as far as possible. Finally, a PointNet network is used to separate foreground target points from the background and extract point cloud features, and a single region proposal network is applied to the foreground point cloud to obtain target categories and bounding boxes. The algorithm combines the high target recall and fast processing of clustering-based methods with the accurate classification of point cloud neural networks, ensuring real-time performance alongside high detection accuracy. Experiments show that, compared with Complex-YOLO and VoxelNet, accuracy improves by 22.92% and 3.43% respectively, and the running time of 0.091 s meets the real-time requirements of self-driving vehicles.

(3) Calibration of multiple sensors. This paper first obtains the camera intrinsic matrix using an intrinsic calibration method. It then extracts the calibration-board plane and its normal vector, and solves over multiple pairs of point cloud and image normal vectors to obtain the extrinsic matrix between the lidar and the camera. The extrinsic matrix between the millimeter-wave radar and the lidar is obtained by extracting a corner reflector and computing its position across multiple radar-lidar pairs. Taking the lidar coordinate system as the central coordinate system, hand-eye calibration is used to obtain the coordinate transformations among the three sensors, completing their spatial synchronization. The lidar data timestamp is used as the fusion reference timestamp to achieve time synchronization between sensors. Projecting the millimeter-wave radar and lidar data onto the image and visualizing the result confirms a good calibration effect.

(4) Design of the fusion detection algorithm. In the fusion stage, the raw millimeter-wave radar data are first filtered, and targets are screened according to the driving situation and road conditions. In the target matching stage, the algorithm projects the lidar target point cloud onto the image to obtain a 2D detection bounding box and compares it with the visual targets, using intersection-over-union and inter-target distance as the two metrics of the KM algorithm to obtain the fused point cloud and image targets. These fused targets are then matched and fused again with the millimeter-wave radar data to obtain the fused output of all three sensors. Finally, the confidence of each fused target is revised to improve detection accuracy, and the state information of each target is updated. Real-vehicle experiments show that the algorithm achieves good detection accuracy and real-time performance.
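The DIoU loss and DIoU-NMS used in (1) both rest on the same penalized IoU term: IoU minus the normalized squared distance between box centers. As a minimal sketch (not the thesis code), DIoU for two axis-aligned boxes can be computed as:

```python
def diou(box_a, box_b):
    """Distance-IoU of two boxes given as (x1, y1, x2, y2).

    DIoU = IoU - d^2 / c^2, where d is the distance between box centers
    and c is the diagonal of the smallest enclosing box.
    """
    # Intersection area
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    iou = inter / (area_a + area_b - inter)

    # Squared distance between box centers
    d2 = ((box_a[0] + box_a[2]) / 2 - (box_b[0] + box_b[2]) / 2) ** 2 + \
         ((box_a[1] + box_a[3]) / 2 - (box_b[1] + box_b[3]) / 2) ** 2
    # Squared diagonal of the smallest enclosing box
    c2 = (max(box_a[2], box_b[2]) - min(box_a[0], box_b[0])) ** 2 + \
         (max(box_a[3], box_b[3]) - min(box_a[1], box_b[1])) ** 2
    return iou - d2 / c2


def diou_loss(box_a, box_b):
    """DIoU regression loss: 1 - DIoU."""
    return 1.0 - diou(box_a, box_b)
```

In DIoU-NMS, the suppression test `diou(best, candidate) > threshold` replaces plain IoU, so an overlapping box whose center is far from the kept box (typical of an occluded neighboring target) is suppressed less aggressively.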
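The ground removal step in (2) combines a grid map with a plane model; the plane-model part alone can be illustrated with a simple RANSAC fit. This is a sketch under assumed parameters (iteration count, distance threshold), not the thesis implementation:

```python
import numpy as np


def fit_ground_plane(points, n_iter=100, dist_thresh=0.1, seed=None):
    """Fit a plane n.p + d = 0 to an (N, 3) point cloud by RANSAC.

    Returns (plane, ground_mask): the plane as (nx, ny, nz, d) and a
    boolean mask marking points within dist_thresh of the plane.
    """
    rng = np.random.default_rng(seed)
    best_mask, best_plane = None, None
    for _ in range(n_iter):
        # Sample 3 points and form the plane through them
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:  # degenerate (collinear) sample, skip
            continue
        normal /= norm
        d = -normal @ p0
        # Inliers: points close to the candidate plane
        mask = np.abs(points @ normal + d) < dist_thresh
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_plane = mask, (*normal, d)
    return best_plane, best_mask
```

The non-ground points `points[~ground_mask]` are what the region-based clustering stage would then operate on.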
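Projecting radar and lidar points onto the image, used both for the calibration visualization in (3) and for obtaining 2D boxes from point clouds in (4), is a standard pinhole projection through the extrinsic and intrinsic matrices. A minimal sketch with illustrative (assumed) parameter values:

```python
import numpy as np


def project_to_image(points_lidar, R, t, K):
    """Project (N, 3) lidar-frame points into pixel coordinates.

    R (3x3) and t (3,) are the lidar-to-camera extrinsics from
    calibration; K (3x3) is the camera intrinsic matrix.  Points behind
    the camera (z <= 0) are dropped.
    """
    pts_cam = points_lidar @ R.T + t      # lidar frame -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]  # keep points in front of camera
    uvw = pts_cam @ K.T                   # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]       # perspective divide


# Example with assumed intrinsics and identity extrinsics:
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
uv = project_to_image(np.array([[0.0, 0.0, 10.0]]), R, t, K)
# A point on the optical axis lands at the principal point (640, 360).
```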
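The time synchronization in (3), which takes the lidar timestamp as the fusion reference, can be sketched as nearest-neighbor matching of each lidar frame against buffered camera or radar messages. Function name and the tolerance value are illustrative, not from the thesis:

```python
def sync_to_lidar(lidar_stamp, buffered_msgs, max_offset=0.05):
    """Pick the buffered (stamp, data) message closest in time to the
    lidar frame; return None if nothing is within max_offset seconds."""
    if not buffered_msgs:
        return None
    best = min(buffered_msgs, key=lambda m: abs(m[0] - lidar_stamp))
    return best if abs(best[0] - lidar_stamp) <= max_offset else None
```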
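The target-matching step in (4) scores lidar-camera box pairs with intersection-over-union and inter-target distance, then solves the resulting assignment problem with the KM algorithm. The sketch below captures the same cost structure but brute-forces the assignment over permutations (adequate for a handful of targets); a real implementation would use KM/Hungarian, and the weights and gate value here are assumptions:

```python
from itertools import permutations


def box_iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0


def center_dist(a, b):
    """Euclidean distance between box centers."""
    return (((a[0] + a[2]) / 2 - (b[0] + b[2]) / 2) ** 2 +
            ((a[1] + a[3]) / 2 - (b[1] + b[3]) / 2) ** 2) ** 0.5


def match_targets(lidar_boxes, cam_boxes, w_iou=1.0, w_dist=0.01, gate=0.9):
    """Match projected lidar boxes to camera boxes by minimizing
    w_iou * (1 - IoU) + w_dist * center_distance over all assignments."""
    def cost(i, j):
        return (w_iou * (1 - box_iou(lidar_boxes[i], cam_boxes[j])) +
                w_dist * center_dist(lidar_boxes[i], cam_boxes[j]))

    n = min(len(lidar_boxes), len(cam_boxes))
    best_cost, best_pairs = float("inf"), []
    for perm in permutations(range(len(cam_boxes)), n):
        pairs = list(zip(range(n), perm))
        total = sum(cost(i, j) for i, j in pairs)
        if total < best_cost:
            best_cost, best_pairs = total, pairs
    # Gate out pairs too dissimilar to be the same physical object
    return [(i, j) for i, j in best_pairs if cost(i, j) <= gate]
```

The surviving pairs are the fused point cloud and image targets, which would then be matched once more against the filtered millimeter-wave radar targets.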