
SLAM Technology Of Search And Rescue Robot Based On The Calibration Of Lidar And Camera

Posted on: 2021-01-03  Degree: Master  Type: Thesis
Country: China  Candidate: R Jian  Full Text: PDF
GTID: 2428330614470453  Subject: Biomedical engineering
Abstract/Summary:
Search and rescue robots are widely used in urban search and rescue after earthquakes, coal mine accidents, chemical accidents, and other disasters. They perform Simultaneous Localization and Mapping (SLAM) with their on-board sensors and provide useful information for the rescue work. To date, SLAM technology has mostly been applied to autonomous driving, where the environment is largely smooth and structured, and its performance is limited in complicated scenes such as battlefields and earthquake debris. On the one hand, a single sensor can hardly meet the demands of such unstructured environments; on the other hand, SLAM accuracy is degraded by the dynamic objects in a rescue scene (moving rescue robots, unstable objects, and moving rescuers). We therefore carry out a series of studies on SLAM based on the combination of a camera system and a lidar, making full use of both data sources to eliminate dynamic noise. The goal is to improve the accuracy of SLAM in dynamic environments and the capability of the rescue robot, which has both theoretical significance and practical value.

Firstly, we calibrate the camera system and the lidar system. We propose a method that calibrates five cameras and a 3D lidar against a planar checkerboard pattern. The laser points lying on the checkerboard and the camera observations of the calibration plane provide geometric constraints on the rigid-body transformation between the camera and laser systems, which we solve with the Gauss-Newton method. The camera data are then projected onto the laser scan to obtain colored laser points; the transformation matrix and the colored laser points are the outputs of calibration and data fusion (sketches of the plane constraint and of the projection are given after this summary).

Secondly, we introduce a CNN-based deep learning method to remove the dynamic noise and improve a well-known 3D lidar SLAM algorithm. PSPNet performs semantic segmentation of the camera images, and the robust SURF feature method provides background compensation before the moving-object detection, which is carried out by frame differencing. Combining semantic segmentation with motion detection confirms the true moving objects; these are projected onto the laser points, and the corresponding moving laser points are removed (a sketch of the compensation and differencing step is given below).

Thirdly, we improve the performance of the SLAM algorithm itself. To improve efficiency and real-time performance, corner feature points and planar feature points are selected and registered with point-to-line and point-to-plane constraints. Motion estimation and environment reconstruction then yield a point cloud map and an octomap (a sketch of the feature selection is given below).
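As an illustration of the calibration step, the following sketch (not the thesis code) expresses the plane constraint used to estimate the lidar-to-camera transform: every laser point on the checkerboard must satisfy the board-plane equation in the camera frame, n^T(Rp + t) = d. SciPy's least_squares stands in here for a hand-written Gauss-Newton loop, and all names (board_points_lidar, plane_normals, plane_offsets) are assumed inputs rather than identifiers from the thesis.

# Minimal sketch (not the thesis code) of the lidar-camera calibration constraint:
# laser points lying on the checkerboard must satisfy the board-plane equation
# expressed in the camera frame, n^T (R p + t) = d.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def calibrate(board_points_lidar, plane_normals, plane_offsets):
    """board_points_lidar: list of (Ni, 3) laser points on the board, one array per pose;
    plane_normals / plane_offsets: the board plane (n, d) in the camera frame per pose."""
    def residuals(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        t = x[3:]
        res = []
        for pts, n, d in zip(board_points_lidar, plane_normals, plane_offsets):
            res.append(pts @ R.T @ n + n @ t - d)   # signed distance of each point to the plane
        return np.concatenate(res)

    x0 = np.zeros(6)                                # start at identity rotation, zero translation
    sol = least_squares(residuals, x0)              # nonlinear least-squares refinement
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]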
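Once R and t are available, data fusion reduces to projecting laser points into a calibrated image and sampling pixel colors. The sketch below assumes a standard pinhole model with intrinsic matrix K; the function and variable names are illustrative only.

# Minimal sketch (not the thesis code): colorize lidar points with one calibrated camera,
# assuming the extrinsics (R, t) from the checkerboard calibration and intrinsics K are known.
import numpy as np

def colorize_lidar(points_lidar, image, K, R, t):
    """points_lidar: (N, 3) xyz in the lidar frame; image: (H, W, 3) color array."""
    pts_cam = points_lidar @ R.T + t            # transform points into the camera frame
    in_front = pts_cam[:, 2] > 0                # keep only points in front of the camera
    pts_cam = pts_cam[in_front]

    uv = pts_cam @ K.T                          # pinhole projection: u = K X / Z
    uv = uv[:, :2] / uv[:, 2:3]
    u = uv[:, 0].round().astype(int)
    v = uv[:, 1].round().astype(int)

    h, w = image.shape[:2]
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)   # keep projections inside the image

    colors = image[v[valid], u[valid]]          # sample pixel colors for the valid points
    return points_lidar[in_front][valid], colors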
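For the dynamic-object step, the sketch below shows background compensation followed by frame differencing. The thesis uses SURF, which sits in OpenCV's non-free contrib build, so ORB is swapped in here to keep the example self-contained; the PSPNet segmentation step is omitted and only hinted at in the final comment.

# Minimal sketch (not the thesis code) of background compensation + frame differencing.
import cv2
import numpy as np

def moving_object_mask(prev_gray, curr_gray, diff_thresh=30):
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)

    # Match features and estimate the homography induced by camera ego-motion.
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp the previous frame onto the current one, then difference the frames:
    # what survives the compensation is candidate moving-object motion.
    h, w = curr_gray.shape
    prev_warped = cv2.warpPerspective(prev_gray, H, (w, h))
    diff = cv2.absdiff(curr_gray, prev_warped)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    return mask  # intersect with the semantic segmentation to confirm true moving objects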
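For the feature-based registration, corner and planar feature points must first be selected. The sketch below classifies the points of one scan ring by a local smoothness measure, in the spirit of LOAM-style feature extraction; the thresholds and the neighbourhood size k are illustrative, not values from the thesis.

# Minimal sketch (not the thesis code): split one lidar scan ring into corner and
# planar feature points using a local smoothness (curvature) measure.
import numpy as np

def select_features(scan_line, k=5, corner_thresh=0.5, planar_thresh=0.05):
    """scan_line: (N, 3) consecutive points of one scan ring."""
    n = len(scan_line)
    curvature = np.full(n, np.nan)
    for i in range(k, n - k):
        # Sum of differences between a point and its 2k neighbours: a large value
        # indicates an edge (corner) point, a small value a locally planar patch.
        diff = scan_line[i - k:i + k + 1].sum(axis=0) - (2 * k + 1) * scan_line[i]
        curvature[i] = np.linalg.norm(diff) / np.linalg.norm(scan_line[i])

    corner_idx = np.where(curvature > corner_thresh)[0]
    planar_idx = np.where(curvature < planar_thresh)[0]
    return corner_idx, planar_idx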
Lastly, we conduct experiments to verify the effectiveness of our algorithm. We build a search and rescue robot and use it to further evaluate our method. The robot consists of four main parts: the PX4 open-source hardware, an Industrial Personal Computer (IPC), a lifting pole, and the laser-camera system. On the public KITTI dataset, the rotation errors of our method on sequences 01, 03, 07, and 09 are 0.0120%, 0.0185%, 0.0115%, and 0.0151%, against 0.0135%, 0.0193%, 0.0121%, and 0.0160% for the original method; the translation errors of our method on sequences 03, 07, and 09 are 2.0205%, 0.9063%, and 1.8619%, against 2.1983%, 0.9402%, and 2.0018% for the original method. These results show that our method can effectively reject dynamic outliers and improve accuracy. In our own indoor and outdoor experiments, the drift of our method is 0.5168 m indoors against 1.0699 m for the original method, and 1.66490 m outdoors against 1.8272 m for the original method. In both the indoor and the outdoor experiments, our method gives more accurate results than the original method. Together with many qualitative experiments, these results confirm that our method can detect and eliminate dynamic objects and thereby improve the accuracy of SLAM.
Keywords/Search Tags: 3D laser, calibration of laser and camera, data fusion, dynamic noise, SLAM