
Real-Time Dynamic Multi-Target Detection Based On LiDAR And Camera Fusion

Posted on: 2024-01-10    Degree: Master    Type: Thesis
Country: China    Candidate: Z B Xiao    Full Text: PDF
GTID: 2568307157475334    Subject: Mechanical engineering
Abstract/Summary:
Social development has exacerbated the shortage of human resources, and mobile robots equipped with visual sensors are increasingly used to improve production efficiency. At present, because of the shaking motion of legged robots during walking, as well as changes in the speed or acceleration of dynamic targets, such robots suffer from slow dynamic-target detection, insufficient accuracy, and a high missed-detection rate for occluded and small-scale targets. To address these issues, this thesis fuses LiDAR and camera sensors to recognize targets and predict collision time, overcoming the shortcomings of single-sensor target detection. While ensuring real-time performance, the method improves the accuracy of target recognition, localization, and collision prediction. The main research contents are as follows:

(1) For LiDAR target detection, the VoxelNet model is improved by replacing hard voxelization with soft voxelization. The detection pipeline first filters out the ground with a slope-based ground-extraction algorithm, then clusters the point cloud with the random sample consensus (RANSAC) algorithm, and finally fits a rectangular bounding box based on principal component analysis (PCA). Test results show that the improved VoxelNet model effectively alleviates the performance jitter caused by hard voxelization; its detection accuracy is 3.2% higher than that of the original model, but still well below camera-based detection accuracy.

(2) For camera target detection, the camera imaging principle and the transformations between coordinate systems are analyzed to calibrate the camera intrinsics. The YOLOv3 model detects target bounding boxes and confidences, and image feature points are extracted and matched to predict collision time. Experiments show that camera-based bounding-box detection remains reliable, with high detection speed and accuracy, but the localization accuracy is too low for the predicted collision time to be accurate.

(3) For fused LiDAR-camera target recognition, the image data are first passed through YOLO to detect bounding boxes. The joint calibration algorithm based on calibration-board key points is then upgraded to a 3D-3D joint calibration algorithm, yielding a lower-error LiDAR-to-camera transformation matrix. Time synchronization is performed through ROS, and the point cloud is projected onto the image in real time. Based on field-of-view (FOV) partitioning, the DBSCAN clustering algorithm is optimized to cluster the point cloud into more accurate bounding boxes, and the camera-detected and LiDAR-detected bounding boxes are finally fused to predict collision time. Experimental results show that fused detection is more accurate than detection with the LiDAR or camera sensor alone, while the frame rate still meets real-time requirements. Deployed on a robot with a real-time recognition system, the proposed program enables the robot to detect static and dynamic targets in real time, predict collision time, and avoid obstacles. Experimental verification demonstrates the effectiveness and reliability of the proposed algorithm.
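The PCA-based rectangular bounding-box fitting described in (1) can be sketched as follows. The thesis does not give implementation details, so the function name, the restriction to 2-D ground-plane coordinates, and the use of covariance eigenvectors as box axes are illustrative assumptions:

```python
import numpy as np

def pca_bounding_box(points_xy):
    """Fit a 2-D oriented rectangle to one point-cloud cluster via PCA.

    points_xy: (N, 2) array of ground-plane coordinates for the cluster
               (a simplifying assumption; the thesis fits boxes in 3-D).
    Returns (center, half_extents, axes): box center, half-lengths along
    each principal axis, and the 2x2 matrix of principal directions.
    """
    center = points_xy.mean(axis=0)
    centered = points_xy - center
    # Principal axes are the eigenvectors of the covariance matrix.
    cov = np.cov(centered, rowvar=False)
    _, axes = np.linalg.eigh(cov)
    # Project points onto the principal axes and take min/max extents.
    proj = centered @ axes
    half = (proj.max(axis=0) - proj.min(axis=0)) / 2.0
    box_center = center + axes @ ((proj.max(axis=0) + proj.min(axis=0)) / 2.0)
    return box_center, half, axes
```

Fitting the box in the principal-axis frame rather than the world frame is what lets the rectangle align with an arbitrarily rotated target.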
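One common way to predict collision time from matched image feature points, consistent with the description in (2) though the thesis gives no formulas, uses the frame-to-frame growth of pairwise feature-point distances: under a constant-velocity assumption, time-to-collision is dt / (s - 1), where s is the scale ratio between frames. The function name and the use of the median ratio are assumptions for illustration:

```python
def time_to_collision(dist_prev, dist_curr, dt):
    """Estimate time-to-collision from pairwise feature-point distances
    measured in two consecutive frames taken dt seconds apart.

    Image size scales as 1/Z, so s = dist_curr/dist_prev = Z_prev/Z_curr,
    and with constant closing speed TTC = dt / (s - 1).
    """
    ratios = sorted(c / p for p, c in zip(dist_prev, dist_curr) if p > 0)
    s = ratios[len(ratios) // 2]   # median ratio is robust to bad matches
    if s <= 1.0:
        return float('inf')        # target not approaching
    return dt / (s - 1.0)
```

The median over many feature pairs is one standard way to suppress outlier matches; a RANSAC-style vote would be another.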
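The real-time projection of the point cloud onto the image in (3) follows the standard pinhole model. The names `T_cam_lidar` and `K` below are illustrative placeholders for the calibrated LiDAR-to-camera extrinsic matrix and the camera intrinsic matrix:

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project 3-D LiDAR points into pixel coordinates.

    points_lidar: (N, 3) points in the LiDAR frame.
    T_cam_lidar:  (4, 4) extrinsic transform (LiDAR -> camera frame),
                  as recovered by the joint calibration.
    K:            (3, 3) camera intrinsic matrix.
    Returns (uv, mask): pixel coordinates for the valid points and a
    mask of points that lie in front of the camera (positive depth).
    """
    homo = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    cam = (T_cam_lidar @ homo.T).T[:, :3]   # points in the camera frame
    mask = cam[:, 2] > 0                    # discard points behind the camera
    pix = (K @ cam[mask].T).T
    uv = pix[:, :2] / pix[:, 2:3]           # perspective divide
    return uv, mask
```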
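The final fusion of camera-detected and LiDAR-detected bounding boxes in (3) is not specified in detail; a simple intersection-over-union (IoU) association, sketched below, is one plausible scheme. The threshold value and the averaging of matched boxes are assumptions, not the thesis's stated method:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def fuse_detections(camera_boxes, lidar_boxes, iou_thresh=0.5):
    """Pair each camera box with the best-overlapping projected LiDAR box;
    average matched pairs, keep unmatched camera boxes as-is."""
    fused = []
    for cb in camera_boxes:
        best = max(lidar_boxes, key=lambda lb: iou(cb, lb), default=None)
        if best is not None and iou(cb, best) >= iou_thresh:
            fused.append(tuple((c + l) / 2.0 for c, l in zip(cb, best)))
        else:
            fused.append(tuple(cb))
    return fused
```

Keeping unmatched camera boxes reflects the abstract's finding that the camera alone is already a reliable detector, while matched LiDAR boxes refine localization.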
Keywords/Search Tags:Robot, Camera, LiDAR, Multi-sensor fusion, Object detection