At present, environmental perception technology, which uses sensors to rapidly detect the surrounding environment and accurately identify target information, has become one of the key research directions in the field of intelligent driving. Among the available approaches, fusing data from sensors such as lidar and visual cameras is regarded as an effective way to improve the accuracy of target identification. This paper therefore proposes a feature-level sensor fusion method based on the coincidence (overlap) of detection boxes, which combines the precise spatial detection capability of lidar with the realistic image information captured by visual cameras to improve the accuracy of the identified targets. The main research contents and results of this paper are as follows:

(1) Research on lidar target detection. First, because raw laser scans contain a large number of points, many of them invalid, the point cloud is preprocessed with voxel filtering to bring its density into a reasonable range and improve the efficiency of the clustering algorithm. The filtered point cloud is then ground-segmented to remove the interference of a large number of invalid ground points, which improves the accuracy of the subsequent clustering. In addition, after analyzing the factors that affect the Euclidean clustering algorithm, a dynamic-threshold Euclidean clustering optimization method based on point-cloud spacing and density is proposed: the distance threshold varies with the point cloud, which reduces the under-segmentation rate of the Euclidean clustering algorithm.

(2) Research on the fusion of lidar and visual camera. On the one hand, based on the conversion relationships between the coordinate systems of the sensors on the vehicle body, the visual camera and the lidar are jointly calibrated, and the transformation matrix from the lidar coordinate system to the pixel coordinate system is computed, so that the 3D laser point cloud can be projected onto the 2D image. On the other hand, building on the Euclidean clustering optimization above, which improves the accuracy of the output 3D detection boxes, a feature-level fusion method based on the coincidence between the lidar detection boxes and the camera detection boxes is proposed. Using the overlap of the detection boxes as the fusion criterion to determine the target information reduces the influence of calibration and hardware errors on the fusion result. At the same time, the lidar clustering result and the camera target detection result supply the target's depth information and category information, respectively, and are output to the perception system as the detection result of the same target, providing target-perception parameters for the downstream decision-making module.

(3) Experimental comparison of the lidar target detection algorithm and the sensor fusion perception method. Experiments on point cloud preprocessing, lidar target detection, multi-sensor joint calibration, and sensor fusion perception show that, compared with the traditional algorithm, the Euclidean clustering optimization proposed in this paper reduces the under-segmentation rate by 9.23% and increases the number of clusters by about 3.58. Meanwhile, the constructed sensor fusion method effectively combines the perception results of the lidar and the camera image; compared with a traditional fusion method that does not consider calibration and hardware errors, it reduces the missed-recognition rate by about 19.1%.

The clustering optimization algorithm and sensor fusion perception method proposed in this paper are expected to provide a reference for enriching and improving environment perception technology. Note, however, that this paper only conducts experiments in low-speed scenes on closed roads.
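To make the dynamic-threshold idea in (1) concrete, the sketch below implements a naive Euclidean clustering loop whose distance threshold grows with a point's range from the sensor, since distant lidar returns are sparser. The specific threshold rule and the `base`/`scale` values are illustrative assumptions, not the thesis's actual parameters, and the O(n²) search is for clarity only (a real implementation would use a k-d tree):

```python
import math

def dynamic_threshold(distance, base=0.5, scale=0.01):
    # Hypothetical rule: points farther from the sensor are sparser,
    # so the clustering distance threshold grows with range.
    return base + scale * distance

def euclidean_cluster(points):
    # Naive O(n^2) Euclidean clustering with a range-dependent threshold.
    # points: list of (x, y) tuples in the sensor frame.
    clusters = []
    unassigned = list(points)
    while unassigned:
        cluster = [unassigned.pop(0)]   # seed a new cluster
        i = 0
        while i < len(cluster):
            p = cluster[i]
            thr = dynamic_threshold(math.hypot(p[0], p[1]))
            remaining = []
            for q in unassigned:
                if math.dist(p, q) <= thr:
                    cluster.append(q)   # grow the cluster
                else:
                    remaining.append(q)
            unassigned = remaining
            i += 1
        clusters.append(cluster)
    return clusters
```

With a fixed threshold of 0.5 m, two distant points 0.55 m apart would be split into separate clusters (under-segmentation); the range-dependent threshold keeps them together while still separating nearby objects.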
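The projection step in (2) — mapping the 3D laser point cloud onto the 2D image through the jointly calibrated transformation — follows the standard pinhole model. The sketch below assumes a generic 4x4 extrinsic matrix `T_cam_lidar` and 3x3 intrinsic matrix `K`; the actual calibration values come from the joint calibration procedure and are not reproduced here:

```python
import numpy as np

def project_lidar_to_pixel(points_lidar, K, T_cam_lidar):
    # points_lidar: (N, 3) array of points in the lidar frame.
    # T_cam_lidar:  (4, 4) extrinsic matrix (lidar -> camera), from joint calibration.
    # K:            (3, 3) camera intrinsic matrix.
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]   # transform into the camera frame
    in_front = pts_cam[:, 2] > 0                 # discard points behind the camera
    uvw = (K @ pts_cam[in_front].T).T
    return uvw[:, :2] / uvw[:, 2:3]              # perspective divide -> pixel (u, v)
```

For illustration, with an identity extrinsic and an intrinsic of focal length 100 and principal point (50, 50), the point (0, 0, 10) projects to pixel (50, 50).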
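The coincidence-based fusion in (2) can be sketched as matching each projected lidar box to the camera box it overlaps most, then taking depth from the lidar and the category from the camera. The intersection-over-union measure and the `iou_min` threshold below are illustrative assumptions standing in for the thesis's coincidence criterion:

```python
def iou(box_a, box_b):
    # Boxes as (x1, y1, x2, y2) in pixel coordinates; returns intersection over union.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def fuse_detections(lidar_boxes, camera_dets, iou_min=0.3):
    # lidar_boxes: projected cluster boxes with depth, [(box, depth_m), ...]
    # camera_dets: camera boxes with class label,      [(box, label), ...]
    # Pair detections whose boxes coincide sufficiently; the fused target
    # takes its depth from the lidar and its class label from the camera.
    fused = []
    for lbox, depth in lidar_boxes:
        best = max(camera_dets, key=lambda d: iou(lbox, d[0]), default=None)
        if best is not None and iou(lbox, best[0]) >= iou_min:
            fused.append({"box": best[0], "depth": depth, "label": best[1]})
    return fused
```

Using the overlap rather than exact pixel agreement as the matching criterion is what gives the method its tolerance to residual calibration and hardware errors: small projection offsets still leave the boxes largely coincident.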