
Research On Localization Algorithm Based On Inertial And Vision Sensors Fusion

Posted on: 2018-03-24    Degree: Master    Type: Thesis
Country: China    Candidate: T Y Zhao    Full Text: PDF
GTID: 2348330536969430    Subject: Mechanical engineering
Abstract/Summary:
Spatial localization technology plays an important role in intelligent robots, unmanned aerial vehicles (UAVs), augmented reality and other fields, and its accuracy directly affects system performance and user experience. With the rapid development of computer technology, spatial localization based on visual odometry has become a research hotspot. However, because of the limitations of vision alone, localization based purely on visual odometry is not accurate enough, so it is important to find a more reliable way of computing the spatial pose. Since a purely visual odometry method is easily affected by the environment, and its accuracy, reliability and robustness are limited, this thesis presents a spatial pose calculation method that fuses inertial and visual sensors, establishes an optimization model, and carries out comparative experiments on the algorithm. The work consists of three parts:

(1) Image feature extraction and matching are among the most fundamental problems in computer vision. The thesis first studied three feature extraction algorithms, SIFT, SURF and ORB, analyzed their advantages and disadvantages experimentally, and selected the ORB algorithm to meet the runtime requirements of visual odometry. Because localization easily fails when the camera moves too fast or too few feature points are detected, a local map was built, and key-frame and relocalization methods were used to enhance the stability and robustness of the visual odometry. Finally, general graph optimization was applied to the visual odometry to improve the localization accuracy.

(2) The pose computed by visual odometry alone is inaccurate and its error range is wide. To address this, an optimization method that fuses an inertial measurement unit (IMU) into the visual odometry was proposed by adding IMU constraints between consecutive images. The IMU measurements between two consecutive frames or keyframes were pre-integrated to obtain measurements with an approximately Gaussian distribution, and these were added to the optimization model as inter-frame constraints alongside the visual reprojection error. Pre-integration also provides the Jacobian of the measurement with respect to the IMU bias, so when the bias is corrected the measurement can be updated directly through the Jacobian instead of repeating the numerical integration, which reduces the computational cost. In addition, gravity was incorporated into the accelerometer and gyroscope calculations to enhance the robustness of the algorithm. The optimization model was then built from the reprojection error and the IMU error, and the optimized pose was solved with the Gauss-Newton method.

(3) The proposed algorithm was evaluated on Ubuntu 14.04 with ROS, and indoor pose estimation was carried out with a Kinect2 camera and an MPU6050 inertial module to demonstrate the practical performance of the algorithm.
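As an illustration of the ORB-based front end described in (1), the following is a minimal sketch of ORB feature extraction and matching using OpenCV; the image file names and matcher settings are assumptions for illustration, not the thesis's actual pipeline.

```python
import cv2

# Load two consecutive frames (file names are placeholders).
img1 = cv2.imread("frame_0.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_1.png", cv2.IMREAD_GRAYSCALE)

# ORB detector: fast binary features, preferred here (as in the thesis)
# over SIFT/SURF for runtime efficiency.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matcher with cross-check to reject ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} putative matches between the two frames")
```

The matched keypoint pairs would then feed the pose estimation and local-map tracking steps of the visual odometry.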
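The pre-integration idea in (2) can be sketched as follows: gyroscope and accelerometer samples between two frames are integrated once, expressed relative to the first frame, so the result can be reused as a relative-motion constraint however the absolute poses change during optimization. This is a simplified NumPy sketch (no noise propagation, gravity handling, or bias Jacobians) with illustrative variable names, not the thesis's implementation.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(w):
    """Rodrigues' formula: rotation vector -> rotation matrix."""
    theta = np.linalg.norm(w)
    if theta < 1e-8:
        return np.eye(3) + skew(w)
    a = w / theta
    A = skew(a)
    return np.eye(3) + np.sin(theta) * A + (1.0 - np.cos(theta)) * (A @ A)

def preintegrate(gyro, accel, dt, gyro_bias, accel_bias):
    """Integrate IMU samples between two frames into a relative
    rotation dR, velocity dv and position dp, expressed in the
    body frame of the first frame."""
    dR = np.eye(3)
    dv = np.zeros(3)
    dp = np.zeros(3)
    for w, a in zip(gyro, accel):
        a_corr = a - accel_bias
        dp += dv * dt + 0.5 * (dR @ a_corr) * dt ** 2
        dv += (dR @ a_corr) * dt
        dR = dR @ so3_exp((w - gyro_bias) * dt)
    return dR, dv, dp
```

In the full method, the bias Jacobians computed alongside these deltas let the optimizer correct the bias without repeating this loop, which is the computational saving mentioned in (2).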
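The visual side of the optimization in (2) is the reprojection error: a 3-D map point is projected with the current pose estimate and compared with its observed pixel. A minimal sketch, assuming a pinhole camera with intrinsics fx, fy, cx, cy (placeholders, not the calibrated Kinect2 parameters):

```python
import numpy as np

def reprojection_error(R, t, point_w, obs_uv, fx, fy, cx, cy):
    """Reprojection residual of one 3-D point for a camera pose (R, t),
    where R, t map world coordinates into the camera frame."""
    p_c = R @ point_w + t             # point in camera frame
    u = fx * p_c[0] / p_c[2] + cx     # pinhole projection
    v = fy * p_c[1] / p_c[2] + cy
    return np.array([u - obs_uv[0], v - obs_uv[1]])
```

Stacking these residuals with the IMU pre-integration residuals gives the joint cost that the thesis minimizes with the Gauss-Newton method.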
Keywords/Search Tags:Feature extraction, general graph optimization, pre-integration, reprojection error, IMU error