
Research On Visual Odometry Algorithms Based On The Fusion Of Two-Dimensional Image And Three-Dimensional Information

Posted on: 2017-07-11  Degree: Master  Type: Thesis
Country: China  Candidate: S Xu  Full Text: PDF
GTID: 2348330488976195  Subject: Control engineering
Abstract/Summary:
Visual odometry has been a hot research topic in recent years. It uses a camera mounted on a moving platform (such as a robot, UAV, or vehicle) to capture images of the scene and to compute the camera's motion; the camera is typically a monocular camera, a stereo camera, or a 3D (depth) camera. Visual odometry can provide position and orientation for the precise positioning of autonomous navigation systems, compute the pose of UAV equipment, build three-dimensional maps, and so on. At present, monocular visual odometry without external aids cannot recover the spatial scale of the estimated motion; solving for the scale factor through a process model yields only an ambiguous scale. Stereo visual odometry obtains depth from the fixed baseline between its cameras, but when the objects in the scene are far from the camera it degenerates to the monocular case. Three-dimensional visual odometry obtains the depth of the scene directly and can therefore solve the camera motion at absolute scale, but because the range over which a 3D camera can acquire depth data is limited, it is generally usable only in small scenes and adapts poorly to the environment.

The Kinect camera can acquire a 2D image and a 3D point cloud simultaneously, and the two kinds of information are highly complementary for motion estimation. To address the problems above, this thesis proposes a new visual odometry method based on the fusion of two-dimensional image and three-dimensional information.

First, the two-dimensional image is used to obtain an initial estimate of the three-dimensional motion: the SURF algorithm extracts and matches 2D feature points, and the RANSAC algorithm eliminates false matches, which effectively improves the accuracy and reliability of the feature matching; the camera motion is then estimated from the matched feature points. Second, the ICP (Iterative Closest Point) algorithm is used to solve the absolute-scale motion estimate from the depth information of the 3D camera. Then, building on the absolute-scale estimate, a RANSAC registration method based on relation-coefficient mean-square-deviation detection is proposed to fuse the two estimates of the three-dimensional motion. Finally, an automatic switching algorithm for motion estimation is proposed: according to the characteristics of the scene, it switches automatically between three-dimensional motion estimation based on point-cloud registration within the depth camera's effective range and the fused estimation based on two-dimensional image registration. Illustrative sketches of these steps are given below.

Compared with previous visual odometry methods, the proposed method not only makes up for the lack of absolute scale information in a monocular vision system, but also avoids the limited range over which 3D depth data can be acquired when solving the motion estimation. In small scenes with rich two-dimensional features the algorithm runs visual odometry based on the 3D point cloud, while in large scenes where the depth data are missing it uses the fused three-dimensional motion estimation based on two-dimensional image registration.
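As a rough illustration of the 2D branch, the following Python sketch uses OpenCV to extract and match SURF features and to reject false matches with RANSAC. The abstract does not specify how the scale-ambiguous motion is recovered from the matches; decomposing an essential matrix is an assumed choice here, and the camera intrinsics K, the SURF Hessian threshold, and the ratio-test value are placeholders rather than values from the thesis (SURF requires the opencv-contrib build).

```python
import cv2
import numpy as np

def estimate_motion_2d(img_prev, img_curr, K):
    """Scale-ambiguous camera motion from two grayscale frames (SURF + RANSAC sketch)."""
    # SURF keypoints and descriptors (requires opencv-contrib-python)
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(img_prev, None)
    kp2, des2 = surf.detectAndCompute(img_curr, None)

    # Brute-force matching with Lowe's ratio test as a first filter
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in raw if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # RANSAC inside essential-matrix estimation removes the remaining false matches
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    # Decompose E into rotation R and a unit-norm translation t (scale unknown)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t, mask
```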
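The 3D branch relies on ICP registration of successive Kinect point clouds. The snippet below is only a minimal point-to-point ICP sketch in NumPy/SciPy, not the registration scheme developed in the thesis; the iteration count and convergence tolerance are arbitrary assumptions. It alternates nearest-neighbour association with a closed-form SVD (Kabsch) alignment to recover rotation and translation at absolute scale.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(src, dst, max_iter=30, tol=1e-6):
    """Align src (N,3) to dst (M,3); returns a 4x4 rigid transform at absolute scale."""
    T = np.eye(4)
    cur = src.copy()
    tree = cKDTree(dst)
    prev_err = np.inf
    for _ in range(max_iter):
        # 1. Associate each source point with its nearest destination point
        dist, idx = tree.query(cur)
        matched = dst[idx]
        # 2. Closed-form rigid alignment (Kabsch/SVD) of the matched pairs
        mu_s, mu_d = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        # 3. Apply the increment and accumulate the total transform
        cur = cur @ R.T + t
        step = np.eye(4); step[:3, :3] = R; step[:3, 3] = t
        T = step @ T
        err = dist.mean()
        if abs(prev_err - err) < tol:     # stop when the mean residual settles
            break
        prev_err = err
    return T
```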
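The automatic switching between the two estimation modes is described only qualitatively in the abstract; the fragment below is a hypothetical illustration of one way such a switch could be keyed to depth availability. The coverage threshold, the function name, and the returned labels are invented for the example.

```python
def select_motion_estimator(depth_map, min_depth_coverage=0.3):
    """Choose the estimation branch from the fraction of pixels with valid Kinect depth."""
    valid_fraction = (depth_map > 0).mean()
    if valid_fraction >= min_depth_coverage:
        # Enough depth in range: point-cloud (ICP) registration gives absolute-scale motion
        return "icp_point_cloud"
    # Depth largely missing (large scene): fall back to the fused 2D-image branch
    return "fused_2d_image"
```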
The experimental results show that the method is effective and adapts well to the scene.
Keywords/Search Tags: Visual odometry, 3D camera, 3D motion estimation, ICP algorithm, Relation coefficients