
Research On 3D SLAM Based On Lidar/Camera Coupled System

Posted on: 2019-04-26
Degree: Master
Type: Thesis
Country: China
Candidate: S X Li
Full Text: PDF
GTID: 2428330566470994
Subject: Control Science and Engineering
Abstract/Summary:
The smart city is a dominant trend for the future, and precise location information, together with a visually rich digital map, is the most important support for its realization. With the constant development of sensor and computer technology, various solutions to the simultaneous localization and mapping (SLAM) problem based on different cutting-edge sensors have been presented and applied in many research and application fields, such as autonomous navigation of mobile robots, autonomous vehicles, mobile mapping, and battlefield environment reconstruction. LiDAR and cameras have complementary strengths and weaknesses, so their coupled use has become one of the hottest topics among sensor-combination schemes. However, many problems in the existing research still demand prompt solutions, including: (1) the trade-off between real-time performance and estimation precision; (2) coupling schemes that are not tight enough and information fusion schemes that do not make the most of the available information; (3) constructed maps that lack global consistency. Therefore, research on a real-time, tighter, and more consistent SLAM solution using a multi-source data fusion approach has important practical and commercial value.

Focusing on LiDAR/camera tightly coupled simultaneous localization and mapping technology, this thesis presents an approach that achieves data fusion across multiple levels of data processing, including depth data association, map initialization, motion estimation, mapping and pose optimization, insertion and culling of keyframes, and loop detection. The main contributions are summarized below:

1. A detailed study of a LiDAR odometry and mapping solution based on a single 3D LiDAR. The system employs a parallel framework that divides LiDAR odometry and mapping into two threads to preserve real-time capability. To reduce the dimensionality of the point cloud and to solve the problem of point cloud distortion caused by unknown LiDAR motion, efficient feature-point extraction and distortion de-skewing approaches are presented. A multi-channel generalized iterative closest point (GICP) algorithm, which adds point cloud intensity information to the GICP framework, is applied to point cloud registration to estimate the LiDAR trajectory and build a 3D global map. It increases the accuracy and robustness of registration by taking into consideration both the local covariance of each point and its intensity. Finally, the performance of the proposed approach is evaluated, and remaining problems, including sensitivity to fast rotation and the lack of map consistency, are pointed out.

2. A detailed study of depth-information-aided monocular visual odometry and mapping. The system runs depth-fused feature tracking and 3D environment mapping on two parallel threads to achieve real-time performance. In the feature-tracking thread, features are distributed homogeneously over the image using a feature-grid approach, and a depth association method based on a local depth map is presented. A motion estimation method that uses features both with and without depth is applied, which takes full advantage of the image information to perform high-frequency motion estimation. The camera pose is then optimized through a local covisibility graph and verified against the dense geometric information of the point cloud neighbourhood around the feature points, which ensures the veracity of the pose estimate. The optimization back-end employs conventional bundle adjustment, refining the motion estimates by processing a sequence of keyframes in a batch optimization. Finally, the performance of the proposed approach is evaluated, and remaining problems, including the inability to operate in large-scale environments owing to the sensor's limited measuring range and the lack of map consistency, are pointed out.

3. Building on the preceding research on LiDAR odometry and depth-enhanced monocular visual odometry, an integrated LiDAR/camera coupled simultaneous localization and mapping solution is presented. First, the camera intrinsics and the mounting parameters between the camera and the LiDAR must be calibrated, which provides the spatial relationship between the image and the point cloud. The system runs depth-fused feature tracking, 3D environment mapping, and loop detection and correction on three parallel threads to achieve real-time performance. In the tracking thread, the previous LiDAR and visual odometry approaches are applied; the visual odometry result serves as an initial guess for de-skewing the point cloud distortion, which safeguards the accuracy of point cloud registration. The 3D mapping scheme presented in the LiDAR odometry is also utilized in the mapping thread, and the colour information of points is taken into consideration to increase the accuracy of registration between each point cloud scan and the global point cloud map. Loop detection combines a bag of words of visual features with point cloud similarity, and once a loop is detected, a batch optimization is employed to increase the consistency of the global map. Finally, the performance of the proposed approach is evaluated.
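The depth association step described in contributions 2 and 3 — assigning LiDAR depths to monocular image features via the calibrated camera/LiDAR spatial relationship — can be sketched as follows. This is an illustrative sketch, not the thesis' implementation: the function name, the brute-force nearest-neighbour search, and the pixel threshold are assumptions.

```python
import numpy as np

def associate_depth(features_uv, lidar_points, T_cam_lidar, K, max_pixel_dist=3.0):
    """Assign a LiDAR depth to each image feature (hypothetical helper).

    features_uv  : (N, 2) feature pixel coordinates
    lidar_points : (M, 3) points in the LiDAR frame
    T_cam_lidar  : (4, 4) extrinsic transform LiDAR -> camera
    K            : (3, 3) camera intrinsic matrix
    Returns an (N,) array of depths; NaN where no nearby point exists.
    """
    # Transform LiDAR points into the camera frame.
    pts_h = np.hstack([lidar_points, np.ones((len(lidar_points), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0]          # keep points in front of the camera
    depths = np.full(len(features_uv), np.nan)
    if len(pts_cam) == 0:
        return depths
    # Project the points into the image plane.
    proj = (K @ pts_cam.T).T
    uv = proj[:, :2] / proj[:, 2:3]
    # For each feature, take the depth of the nearest projected point
    # if it lies within max_pixel_dist pixels.
    for i, f in enumerate(features_uv):
        d2 = np.sum((uv - f) ** 2, axis=1)
        j = np.argmin(d2)
        if d2[j] <= max_pixel_dist ** 2:
            depths[i] = pts_cam[j, 2]
    return depths
```

In practice a local depth map (as in the thesis) or a k-d tree over the projected points would replace the brute-force search; the principle of projecting the cloud through the calibrated extrinsics stays the same.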
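The two-stage loop detection of contribution 3 — an appearance candidate from bag-of-words similarity, then geometric confirmation against the point cloud — might be sketched like this. The thresholds, the cosine-similarity score, and the mean nearest-neighbour distance check are illustrative assumptions, not the thesis' exact criteria.

```python
import numpy as np

def bow_similarity(h1, h2):
    # Cosine similarity between two bag-of-words histograms.
    n1, n2 = np.linalg.norm(h1), np.linalg.norm(h2)
    if n1 == 0 or n2 == 0:
        return 0.0
    return float(np.dot(h1, h2) / (n1 * n2))

def detect_loop(query_hist, keyframe_hists, query_cloud, keyframe_clouds,
                bow_thresh=0.8, cloud_thresh=0.3):
    """Return the index of a loop-closure keyframe, or None (hypothetical helper).

    Stage 1: pick the keyframe whose BoW histogram is most similar to the query.
    Stage 2: confirm geometrically via the mean nearest-neighbour distance
             between the query cloud and the candidate's cloud.
    """
    best, best_sim = None, bow_thresh
    for i, h in enumerate(keyframe_hists):
        s = bow_similarity(query_hist, h)
        if s > best_sim:
            best, best_sim = i, s
    if best is None:
        return None
    # Geometric check: mean distance from each query point to the candidate cloud.
    cand = keyframe_clouds[best]
    d = np.min(np.linalg.norm(query_cloud[:, None, :] - cand[None, :, :], axis=2), axis=1)
    return best if float(np.mean(d)) < cloud_thresh else None
```

Once `detect_loop` returns a candidate, the batch optimization over the keyframe poses (as described above) would redistribute the accumulated drift around the loop.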
Keywords/Search Tags: Simultaneous localization and mapping, Data fusion, Point cloud, Image feature, Iterative closest point algorithm, Bundle adjustment, Loop closure