
Research And Implementation Of Multi-sensor Fusion Based Visual SLAM

Posted on: 2020-06-11
Degree: Master
Type: Thesis
Country: China
Candidate: Q Q Wang
Full Text: PDF
GTID: 2428330647967491
Subject: Traffic and Transportation Engineering
Abstract/Summary:
SLAM (Simultaneous Localization and Mapping) is the key to autonomous navigation for intelligent mobile robots, and the most commonly used approaches are vision-based and laser-based SLAM. An RGB-D camera obtains images and absolute depth information at the same time and is therefore widely used for indoor localization and mapping. Although it captures rich environmental information, its drawbacks, such as sensitivity to sunlight, a narrow field of view, and heavy noise, lead to poor accuracy and robustness in practical applications. In addition, traditional RGB-D SLAM methods do not adequately filter out the redundant field of view, even though in practice this redundant field of view does not affect the robot's actual traversal. Lidar, by contrast, is characterized by high precision and strong anti-interference ability, but its scanning range is limited to a single plane, so it captures environmental information incompletely. This thesis proposes a low-cost, high-precision, and highly reliable SLAM scheme based on a Kinect and a two-dimensional lidar. The main contents are as follows:

(1) The SLAM literature in China and abroad is reviewed, and vision-based, laser-based, and multi-sensor SLAM frameworks are summarized. The problems to be solved in this thesis and the methods for solving them are then stated.

(2) A visual SLAM mobile robot platform using an RPLIDAR A2 lidar and a Kinect as the combined sensors is built on the ROS system. The robot motion model, the Kinect model, and the lidar model are established. A traditional calibration method is then applied to calibrate the Kinect's RGB lens and infrared lens, yielding the Kinect's camera parameters.

(3) Based on the Kinect, a depth image restricted to the range that actually affects the robot's traversable area is obtained, and the 3D point cloud is reconstructed with the PCL point cloud library. ORB feature matching is used to estimate the camera's rotation and translation, which enables point cloud stitching (a minimal sketch of this step follows the abstract). The point cloud is then downsampled, and a two-dimensional occupancy grid is obtained through the OctoMap algorithm. According to the robot's actual obstacle-crossing ability, obstacles in the traversable ground area are filtered out.

(4) For the lidar, the traditional RBPF algorithm suffers from a large number of particles and a heavy computational load, so an improvement is proposed: ICP scan matching on the lidar data is used to improve the proposal distribution, and adaptive resampling is applied (sketched below), which greatly reduces both the number of particles and the amount of computation.

(5) Bayesian inference is introduced to fuse the maps (sketched below), and pose optimization and loop-closure detection based on the bag-of-words model are also implemented in the SLAM system.

(6) Experiments on the proposed and improved algorithms are carried out in a simulation environment and on the mobile robot platform. Map accuracy with multi-sensor versus single-sensor configurations, the real-time performance of the system, and the possibility that the added sensor introduces new noise are analyzed.
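The visual-odometry core of step (3) can be illustrated with a minimal sketch, not taken from the thesis: ORB features are matched between two Kinect frames, matched pixels of the first frame are back-projected using its depth image, and the relative rotation and translation are recovered by PnP with RANSAC. OpenCV's Python bindings, a depth image in metres, and an intrinsic matrix K from the calibration of step (2) are assumed; the thesis's actual matching and stitching pipeline may differ.

```python
# Hypothetical sketch of step (3)'s motion estimation (not the thesis code).
import cv2
import numpy as np

def estimate_motion(rgb1, depth1, rgb2, K):
    g1 = cv2.cvtColor(rgb1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(rgb2, cv2.COLOR_BGR2GRAY)

    orb = cv2.ORB_create(500)
    kp1, des1 = orb.detectAndCompute(g1, None)
    kp2, des2 = orb.detectAndCompute(g2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    obj_pts, img_pts = [], []
    for m in matches[:100]:                    # keep the best matches
        u, v = kp1[m.queryIdx].pt
        z = depth1[int(v), int(u)]
        if z <= 0:                             # no valid depth at this pixel
            continue
        # back-project the frame-1 pixel to a 3D point in the camera frame
        obj_pts.append([(u - cx) * z / fx, (v - cy) * z / fy, z])
        img_pts.append(kp2[m.trainIdx].pt)

    # PnP + RANSAC: rotation (as a Rodrigues vector) and translation
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.float32(obj_pts), np.float32(img_pts), K, None)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec
```

The recovered (R, t) is what allows each new point cloud to be transformed into a common frame before stitching.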
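Step (4)'s adaptive resampling can be summarized with a short sketch, under the conventional assumption (not stated in the abstract) that resampling is triggered only when the effective sample size N_eff = 1 / Σ w_i² drops below half the particle count, so well-spread particle sets are left untouched.

```python
# Minimal sketch of the adaptive-resampling test in an RBPF-style filter.
# The N/2 threshold is a common default, assumed here.
import numpy as np

def needs_resampling(weights, threshold_ratio=0.5):
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                    # normalise importance weights
    n_eff = 1.0 / np.sum(w ** 2)       # effective sample size
    return n_eff < threshold_ratio * len(w)
```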
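For step (5), one standard reading of "Bayesian inference for map fusion" is a cell-wise log-odds update of two occupancy grids; the sketch below shows that interpretation and is an assumption, not the thesis's exact fusion rule. Cell values are occupancy probabilities strictly between 0 and 1.

```python
# Hypothetical cell-wise Bayesian fusion of two occupancy grids, e.g. the
# Kinect-derived grid from step (3) and the lidar grid from step (4).
# Probabilities must lie strictly in (0, 1) to keep the log-odds finite.
import numpy as np

def fuse_grids(p_a, p_b, prior=0.5):
    logit = lambda p: np.log(p / (1.0 - p))
    # independent sensors: add log-odds and remove the double-counted prior
    l = logit(p_a) + logit(p_b) - logit(prior)
    return 1.0 / (1.0 + np.exp(-l))    # convert back to probability
```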
Keywords/Search Tags:Kinect, Lidar, ROS, SLAM