
RGB-D Sensor Visual Odometry Based On Sparse Feature Point Cloud Registration

Posted on: 2019-11-19
Degree: Master
Type: Thesis
Country: China
Candidate: H Zhang
Full Text: PDF
GTID: 2428330590468705
Subject: Aeronautical and Astronautical Science and Technology
Abstract/Summary:
Visual odometry aims to estimate the camera pose in an unknown environment from visual information obtained by a camera. It plays an important role in applications such as robotics, Unmanned Aerial Vehicles, 3D reconstruction, and Augmented Reality. Serving as the front end of Simultaneous Localization and Mapping (SLAM), visual odometry usually adopts a frame-to-frame strategy that estimates the pose transformation between two consecutive frames, which results in error drift. To address this drift, SLAM requires back-end methods such as local or global pose optimization, which make the overall system more complex. Visual odometry is therefore trending toward simpler and more self-contained designs. To reduce error drift while ensuring the robustness and real-time performance of visual odometry, we propose three feature-based methods for an RGB-D camera that use a frame-to-model strategy, and we evaluate them on a benchmark dataset.

First, to reduce the error drift caused by the frame-to-frame strategy and to obtain good feature matching, we propose a frame-to-model point cloud registration method for pose estimation. We extract 2D feature points, back-project them into a point cloud, and store the point cloud in a model. To estimate the pose, we match the feature points of the current frame against the model and compute the pose from the known point correspondences. We then update the model with a Kalman filter, which reduces error drift to some degree. Experiments on a benchmark dataset show that the proposed method estimates the camera pose accurately and robustly across different sequences.

Second, to ensure both the accuracy and real-time performance of visual odometry, we propose an improved visual odometry algorithm based on a point cloud uncertainty model that requires no feature matching. Feature points are first extracted to form a point cloud model weighted by the uncertainty model. A-ICP is then performed for pose estimation, and a Kalman filter updates the point cloud model to avoid drift. Experiments on the benchmark dataset indicate that the proposed method performs well on certain sequences, while the overall improvement is limited by outliers and by the initial guess of the A-ICP algorithm.

Third, to address the problems of outliers and the initial guess, we propose an improved semi-probabilistic trimmed-ICP RGB-D visual odometry algorithm with better accuracy and robustness. The algorithm is free of feature matching and uses the Iterative Closest Point (ICP) algorithm to estimate the pose. To reduce the negative effect of noise on pose estimation and enhance accuracy, we combine an overlap ratio estimation scheme with the trimmed ICP algorithm for outlier rejection. In addition, to handle large camera motion and enhance robustness, we propose two criteria for a transformation strategy that switches pose estimation from frame-to-frame mode to frame-to-model mode. Evaluation on the benchmark dataset demonstrates that the proposed algorithms each have their own merits under different conditions and improve upon the earlier visual odometry system.
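As an illustration of the first method's registration step, pose estimation from known 3D point correspondences admits a closed-form rigid alignment. The sketch below uses the standard Kabsch/SVD solution with NumPy; the function name and interface are our own illustrative choices, not the thesis's implementation.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Closed-form rigid alignment (Kabsch/SVD) between matched 3D points.

    src, dst: (N, 3) arrays of corresponding points.
    Returns rotation R (3x3) and translation t (3,) with dst ~ R @ src + t.
    """
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    # Reflection correction keeps the result a proper rotation (det = +1)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```

Because the correspondences come from feature matching against the model, no iterative refinement is needed for this step, which helps the real-time budget.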
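The Kalman-filter model update used by the first two methods can be sketched under the simplifying assumption of an independent per-axis scalar filter for each stored model point; the variance bookkeeping below is an illustrative assumption, not the thesis's exact formulation.

```python
import numpy as np

def kalman_update_point(model_pt, model_var, obs_pt, obs_var):
    """Fuse one observed point into a stored model point (per-axis scalar KF).

    model_pt, obs_pt: (3,) positions; model_var, obs_var: (3,) variances.
    Returns the fused position and its reduced variance.
    """
    gain = model_var / (model_var + obs_var)      # Kalman gain per axis
    fused_pt = model_pt + gain * (obs_pt - model_pt)
    fused_var = (1.0 - gain) * model_var          # uncertainty shrinks after fusion
    return fused_pt, fused_var
```

Repeated observations pull each model point toward the average of its measurements while its variance decreases, which is what damps the drift that pure frame-to-frame chaining accumulates.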
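The third method's outlier rejection can be sketched as one trimmed-ICP iteration: match points to the model, keep only the overlap-ratio fraction of pairs with the smallest residuals, and re-estimate the pose from the kept pairs. The brute-force matching and the helper function are illustrative assumptions (a k-d tree would be used in practice), not the thesis's code.

```python
import numpy as np

def _rigid_align(src, dst):
    """Kabsch/SVD rigid alignment for matched point sets."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, dst_c - R @ src_c

def trimmed_icp_step(src, dst_model, overlap_ratio=0.8):
    """One trimmed-ICP iteration: match, trim, align.

    src: (N, 3) current-frame points; dst_model: (M, 3) model points.
    Only the overlap_ratio fraction of pairs with the smallest residuals
    is used for the transform, rejecting outliers and non-overlap regions.
    """
    # Brute-force nearest-neighbour correspondences
    d2 = ((src[:, None, :] - dst_model[None, :, :]) ** 2).sum(axis=2)
    nn = d2.argmin(axis=1)
    residuals = d2[np.arange(len(src)), nn]
    # Trim: keep only the closest fraction of the pairs
    n_keep = max(3, int(overlap_ratio * len(src)))
    keep = np.argsort(residuals)[:n_keep]
    return _rigid_align(src[keep], dst_model[nn[keep]])
```

Estimating the overlap ratio from the data rather than fixing it is what distinguishes the scheme described above from a plain trimmed ICP with a hand-tuned trim fraction.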
Keywords/Search Tags: RGB-D camera pose estimation, Iterative Closest Point, frame-to-model, structure optimization, visual odometry