
Research On SLAM Of Mobile Robot Based On Fusion Of RGB-D Camera And IMU Sensor

Posted on: 2021-02-09
Degree: Master
Type: Thesis
Country: China
Candidate: L Dai
Full Text: PDF
GTID: 2428330611466519
Subject: Control Science and Engineering

Abstract/Summary:
For mobile robots, accurate positioning and precise perception of the environment are the basis of normal operation. Simultaneous Localization and Mapping (SLAM) addresses both problems. Visual SLAM uses visual sensors to localize the robot and perceive its surroundings. For indoor robots, RGB-D cameras provide rich visual and depth information for localizing a mobile robot, but tracking may be lost in texture-sparse areas or under fast motion. Fusing data from an IMU sensor resolves these problems and yields better robustness and accuracy. Consequently, this thesis concentrates on the SLAM problem based on the fusion of an RGB-D camera and an IMU sensor.

Firstly, this thesis analyzes the SLAM problem as what it essentially is: a state-estimation problem, which can be solved with a nonlinear optimization algorithm. Secondly, based on the pinhole model of the RGB-D camera, the cause of image distortion is analyzed and the camera is calibrated. After correction, pixels in the image are tracked with a sparse optical-flow method, and a reprojection-error equation is constructed to optimize the camera pose and the map points. Likewise, for the IMU observation model, the error term of the IMU pre-integration equation is derived by exploring the pre-integration algorithm in continuous time. The differences and commonalities between the pose estimates of the two sensors are analyzed, and new optimization vectors are constructed accordingly. Finally, a joint objective function combining the reprojection errors and the IMU pre-integration error terms is given.

Thirdly, the overall framework of the SLAM system integrating visual and inertial measurements is designed. The system is initialized with a loosely coupled method that uses the visual estimate as a prior, and keyframe selection criteria and map-point management methods are formulated. Subsequently, the size of the sliding window is controlled by marginalization, thereby bounding the overall computation of the system and preserving its real-time performance. Marginalized prior errors are added to the joint objective function to further improve the accuracy and robustness of the system. Eventually, a loop-closure optimization algorithm based on the bag-of-words model is designed to correct the drift caused by long-term tracking.

Finally, the hardware and software platforms of the SLAM system based on the RGB-D camera and IMU sensor are introduced. The intrinsic matrix, distortion coefficients, and the Euclidean transformation between the camera and the IMU are obtained through calibration, and the calibration error is less than the system requirements. In addition, the SLAM system proposed in this thesis is compared against an open-source VIO algorithm on a public dataset; it estimates scale information better and achieves higher accuracy. Ultimately, the RGB-D camera and IMU sensor are mounted on the mobile robot to test the positioning performance of the presented algorithm in actual operation, where it can also follow a fixed trajectory.
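The reprojection error at the core of the visual part of the objective can be sketched minimally in Python. This is an illustration only, not the thesis's implementation: the intrinsic values below are hypothetical placeholders (typical RGB-D defaults), whereas the thesis obtains the real intrinsics and distortion coefficients through calibration.

```python
import numpy as np

# Hypothetical pinhole intrinsics; real values come from calibration,
# which the abstract describes but does not list.
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def project(point_cam):
    """Project a 3D point in camera coordinates with the pinhole model."""
    x, y, z = point_cam
    u = FX * x / z + CX
    v = FY * y / z + CY
    return np.array([u, v])

def reprojection_error(point_cam, observed_px):
    """Residual between a tracked pixel and the projected map point."""
    return observed_px - project(point_cam)

# A point one metre straight ahead projects to the principal point,
# so a measurement at the principal point gives a zero residual.
err = reprojection_error(np.array([0.0, 0.0, 1.0]),
                         np.array([319.5, 239.5]))
```

In the full system described above, residuals of this form for all tracked map points are stacked with the IMU pre-integration error terms into one joint objective and minimized over the sliding window by a nonlinear least-squares solver.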
Keywords/Search Tags:RGB-D camera, SLAM, Visual-Inertial fusion, State estimation