
Research On The Localization Method Of Mobile Platform By Integrating Laser And Vision Perception

Posted on: 2021-02-28
Degree: Master
Type: Thesis
Country: China
Candidate: P F Zhang
Full Text: PDF
GTID: 2370330611955125
Subject: Mechanical engineering
Abstract/Summary:
Localization technology is the basis of autonomous navigation for mobile robots, and a robust localization algorithm guarantees a robot's stable operation. Wheeled mobile robots are widely used for their low cost and operating efficiency, and this thesis focuses on wheeled mobile platforms. Laser-based localization is the mainstream approach for wheeled mobile robots; however, because of the limited data size and weak scene-recognition capability of laser scans, it suffers in accuracy, and robots may easily get lost in featureless or degraded environments. Compared with lasers, visual images carry rich texture of the environment, which allows more feature information to be recognized in space and brings more observation constraints to laser-based localization. To improve the localization ability of mobile robots, this thesis investigates localization that combines laser and vision, building on the research progress and state-of-the-art achievements of laser-based and visual localization algorithms. The main contributions of this thesis are as follows.

(1) A mapping algorithm that fuses a 2D grid map with a 3D point cloud is proposed. Wheeled mobile robots have three degrees of freedom in planar motion, while existing visual SLAM algorithms optimize the camera pose in six degrees of freedom. Such over-parameterized optimization causes the camera pose to fluctuate and degrades the quality of the generated 3D point cloud. To solve this problem and construct a map fusing laser and visual information, this thesis presents a 2D grid and 3D point cloud mapping algorithm. The algorithm obtains initial camera poses by interpolating laser poses, generates a 3D point cloud by epipolar search and triangulation between camera frames, and finally optimizes the 3D point cloud and camera poses to construct an aligned map with a unified scale. The algorithm realizes an SE(2)-constrained bundle adjustment method and a camera pose-graph optimization method based on laser pose constraints; the two methods optimize the 3D point cloud and camera poses jointly.

(2) A localization model fusing laser and vision is constructed. A probabilistic localization model is derived from Bayesian estimation, which builds motion and observation models to quantify the pose of the robot in the map and its uncertainty. In the fusion model, the chi-square distribution of the 3D map points' reprojection errors and the Hamming distance between the 3D map points' descriptors and the image feature points' descriptors together represent the visual observation probability, and the product of the visual observation probability and the laser observation probability is taken as the posterior observation probability of the fusion. Based on this model, different forms of Bayesian filters can be applied to different localization problems; a sketch of how such a fused observation probability can be composed is given below.
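The following is a minimal sketch of the fused observation probability described above, assuming a simple chi-square likelihood for reprojection errors and a normalized Hamming-distance similarity for 256-bit binary descriptors; the function names and the averaging over matches are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np
from scipy.stats import chi2


def visual_observation_prob(reproj_errors_sq, desc_hamming, dof=2, max_hamming=256):
    """Illustrative visual observation probability: combines the chi-square
    statistic of the 3D map points' squared reprojection errors with the
    Hamming distance between map-point and image-feature descriptors."""
    # Chi-square survival function: small reprojection errors yield values near 1.
    p_reproj = chi2.sf(np.asarray(reproj_errors_sq, dtype=float), df=dof)
    # Normalized descriptor similarity in [0, 1] (256-bit ORB-style descriptors assumed).
    p_desc = 1.0 - np.asarray(desc_hamming, dtype=float) / max_hamming
    # Per-match probabilities are averaged here purely for illustration.
    return float(np.mean(p_reproj * p_desc))


def fused_observation_prob(p_laser, reproj_errors_sq, desc_hamming):
    """Posterior observation probability of the fusion: product of the laser
    and visual observation probabilities, as described in the abstract."""
    return p_laser * visual_observation_prob(reproj_errors_sq, desc_hamming)


# Example: one pose hypothesis with three matched map points.
p = fused_observation_prob(
    p_laser=0.8,
    reproj_errors_sq=[1.2, 0.4, 2.5],  # squared reprojection errors (pixels^2)
    desc_hamming=[30, 45, 60],         # Hamming distances of binary descriptors
)
print(p)
```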
(3) A global localization algorithm fusing laser and vision is put forward. The initial pose of the robot in the map is unknown and random, and the global localization algorithm must estimate the pose from multiple candidate states. The particle filter, as a nonparametric Bayesian filter, is used to solve this problem. To improve the efficiency and accuracy of laser-based global initial localization, the thesis proposes a global localization algorithm that integrates laser and visual information. The algorithm combines the particle filter, the laser likelihood model, the laser beam model, and the visual observation model, enabling the robot to perform faster and more accurate global localization with only a small amount of prior and motion information.

(4) A local localization algorithm based on an EKF fusing laser and vision is proposed. The particle swarm of the particle filter is discrete and random, so particle-filter-based localization lacks continuity and has large random error. To improve the accuracy of pose tracking, this thesis adopts the extended Kalman filter to compute the robot pose estimate in closed form, and proposes an EKF-based local localization algorithm fusing laser and vision, which is a realization of the Bayesian filter in continuous Gaussian space. The algorithm uses the wheeled-odometry motion model to compute the mean of the EKF prediction step, then computes the Kalman gain from the laser and camera observations of the environment to correct the predicted mean and complete the localization process; a sketch of this predict-correct cycle follows the abstract.

Finally, the MIT STATA dataset and an experimental platform are used to verify and analyze the proposed algorithms. The experimental results validate their effectiveness.
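As an illustration of the EKF prediction and correction steps described in contribution (4), the sketch below uses a simplified planar odometry increment (translation and rotation) and a generic stacked measurement for the combined laser and visual observations; the state parameterization and measurement functions are assumptions for illustration, not the thesis's exact implementation.

```python
import numpy as np


def ekf_predict(mu, Sigma, u, Q):
    """Prediction step with a wheeled-odometry motion model.
    State mu = (x, y, theta); control u = (d_trans, d_rot) is an assumed
    simplified odometry increment."""
    x, y, th = mu
    d_trans, d_rot = u
    mu_pred = np.array([x + d_trans * np.cos(th),
                        y + d_trans * np.sin(th),
                        th + d_rot])
    # Jacobian of the motion model with respect to the state.
    G = np.array([[1.0, 0.0, -d_trans * np.sin(th)],
                  [0.0, 1.0,  d_trans * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    return mu_pred, G @ Sigma @ G.T + Q


def ekf_update(mu_pred, Sigma_pred, z, h, H, R):
    """Correction step: z and R stack the laser and visual observations;
    h(mu) predicts the measurement and H is its Jacobian at mu_pred."""
    y = z - h(mu_pred)                       # innovation
    S = H @ Sigma_pred @ H.T + R             # innovation covariance
    K = Sigma_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    mu = mu_pred + K @ y
    Sigma = (np.eye(len(mu_pred)) - K @ H) @ Sigma_pred
    return mu, Sigma
```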
Keywords/Search Tags:Fusion, SLAM, Particle Filter, Bundle Adjustment, EKF