
Research on Kinect-Based Visual Odometry

Posted on: 2019-06-11
Degree: Master
Type: Thesis
Country: China
Candidate: Y Zhang
Full Text: PDF
GTID: 2428330545991246
Subject: Engineering
Abstract/Summary:
With the rapid development of artificial intelligence technology, mobile robots are becoming increasingly intelligent. The ability to navigate autonomously is a vital part of robotics: a mobile robot must build a map of an unknown environment while determining its own direction of travel. Visual SLAM technology addresses exactly this need, and it involves two coupled problems: the robot must estimate its own location and build a map of the current environment.

Visual odometry has irreplaceable advantages over traditional odometry. Traditional odometry accumulates errors during operation because of reduced sensor accuracy, inaccurate encoders, and inertial drift. Visual odometry avoids these errors because it requires no prior motion or scene information and relies only on a vision sensor, which makes it well suited to unstructured scenes and unconventional platforms. In addition, visual odometry provides rich scene features and supports tasks such as recognizing obstacles, detecting targets, and segmenting traversable areas, giving full support to real-time navigation of mobile robots. Visual odometry is also called the front end of visual SLAM; it has broad development prospects and great research significance.

Visual odometry estimates the motion of the robot from the image information of consecutive frames acquired by the sensor. Compared with traditional monocular and binocular (stereo) visual odometry, RGB-D sensor-based 3D visual odometry obtains color and depth information directly, without having to compute depth indirectly. This thesis proposes an optimized 3D visual odometry method. The RGB-D sensor used is the second-generation Microsoft Kinect. The original 3D visual odometry scheme is optimized to improve accuracy while retaining reliable robustness and real-time performance. The main research content and innovations of this thesis are as follows:

(1) The depth measurement principle, the imaging model, and the principle of registering the color camera and the depth sensor of the Kinect 2.0 sensor used to obtain color and depth information were studied. The iai_kinect2 toolset was used for calibration and registration, the resulting point clouds were compared and analyzed, and the relationship between the depth error, the distance, and the X and Y coordinate values was studied.

(2) An optimized visual odometry scheme is proposed. For feature matching, an improved ORB feature matching algorithm is proposed: the improved algorithm uses the BRISK algorithm to sample the extracted feature points uniformly, and the scale invariance of the ORB algorithm before and after the improvement is analyzed. For mismatch elimination, matching accuracy is improved by combining a distance-threshold method with the RANSAC algorithm, and the number of correct matches before and after the improvement of the ORB algorithm is compared experimentally. For pose estimation, the ICP algorithm and the PnP algorithm are combined to optimize motion estimation; a simplified sketch of this matching and estimation step is shown after the abstract.

(3) The visual odometry method is applied to visual SLAM, and the accuracy of the globally consistent trajectories generated by visual SLAM with the visual odometry before and after the improvement is evaluated on the TUM public dataset. In addition, visual SLAM using the improved visual odometry method was deployed on a TurtleBot mobile robot to build a real-time SLAM system based on the improved method, which verified the real-time performance and robustness of the improved algorithm.
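The matching and motion-estimation step summarized in item (2) can be illustrated with a minimal OpenCV sketch: ORB features are matched between two RGB frames, filtered with a distance threshold, back-projected to 3D with the registered depth map, and the relative pose is estimated with RANSAC-based PnP. The camera intrinsics, file names, depth scale, and thresholds below are illustrative assumptions rather than values from the thesis, and the BRISK-based sampling and ICP refinement described there are omitted.

# Minimal sketch of one frame-to-frame step of an RGB-D visual odometry
# pipeline of the kind described above: ORB matching, distance-threshold
# filtering, and RANSAC-based PnP motion estimation with OpenCV.
# Intrinsics, file names, and the depth scale are assumed for illustration.
import cv2
import numpy as np

# Hypothetical Kinect 2.0 color-camera intrinsics (fx, fy, cx, cy).
K = np.array([[1050.0,    0.0, 960.0],
              [   0.0, 1050.0, 540.0],
              [   0.0,    0.0,   1.0]])
DEPTH_SCALE = 1000.0  # assume depth stored in millimetres

rgb1 = cv2.imread("frame1_rgb.png", cv2.IMREAD_GRAYSCALE)
rgb2 = cv2.imread("frame2_rgb.png", cv2.IMREAD_GRAYSCALE)
depth1 = cv2.imread("frame1_depth.png", cv2.IMREAD_UNCHANGED)

# 1. Extract and match ORB features between the two color frames.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(rgb1, None)
kp2, des2 = orb.detectAndCompute(rgb2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING).match(des1, des2)

# 2. Threshold filtering: discard matches whose Hamming distance is far
#    above the best match (a common heuristic, assumed here).
min_dist = min(m.distance for m in matches)
good = [m for m in matches if m.distance < max(2.0 * min_dist, 30.0)]

# 3. Back-project frame-1 keypoints to 3D with the registered depth map,
#    keep the corresponding frame-2 pixels as 2D observations.
pts3d, pts2d = [], []
for m in good:
    u, v = map(int, kp1[m.queryIdx].pt)
    z = depth1[v, u] / DEPTH_SCALE
    if z <= 0:
        continue  # no valid depth at this pixel
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    pts3d.append([x, y, z])
    pts2d.append(kp2[m.trainIdx].pt)

# 4. RANSAC PnP: estimate the camera motion and reject remaining outliers.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    np.array(pts3d, dtype=np.float32),
    np.array(pts2d, dtype=np.float32),
    K, None)
R, _ = cv2.Rodrigues(rvec)
print("inliers:", 0 if inliers is None else len(inliers))
print("estimated rotation:\n", R)
print("estimated translation:\n", tvec.ravel())

In a full visual odometry system, the pose returned by the PnP step would typically be refined (for example with ICP on the point clouds, as the thesis combines ICP and PnP) and chained frame by frame to produce the trajectory that is later evaluated against the TUM ground truth.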
Keywords/Search Tags: Kinect, visual odometry, SLAM, ORB algorithm