
Research on Inertial Navigation-Visual Fusion Positioning Algorithm Based on Tight Coupling

Posted on: 2021-04-10
Degree: Master
Type: Thesis
Country: China
Candidate: H Y Gu
Full Text: PDF
GTID: 2428330614458594
Subject: Integrated circuit engineering
Abstract/Summary:
With the development of positioning technology, a single sensor can no longer meet the needs of high-precision positioning, so multi-sensor fusion has become the current trend. Inertial navigation and vision are complementary in their positioning principles: inertial navigation offers high short-term accuracy but insufficient long-term stability, while visual localization recovers the camera pose from image information with little drift and high accuracy, but is easily affected by lighting and is unsuited to fast motion. Fusing the two has long suffered from divergence of the measurement data, which reduces positioning accuracy and degrades stability. Aiming at these technical bottlenecks, this thesis studies inertial sensor error compensation and fusion algorithm optimization to realize tightly coupled fusion of inertial navigation and visual localization and to improve the positioning accuracy of the fused positioning system.

Firstly, the data divergence of micro-electro-mechanical system (MEMS) inertial sensors in the coupling is studied, and the random errors of the MEMS inertial sensors are analyzed and rapidly calibrated with an improved wavelet noise-reduction method. The data show that this method reduces the various sensor errors by an order of magnitude, so the processed inertial sensors can be used for tight coupling. A camera distortion model is then established, Zhang's calibration method is used to de-distort the binocular camera, and a calibration tool is used to calibrate the extrinsics between the camera and the inertial sensors.

Secondly, under the extended Kalman filter (EKF) framework, the inertial and visual information are fused, and a multi-state constraint model is used to impose multi-camera pose constraints on the feature points, which reduces the amount of computation, improves the observation accuracy, and enhances the system's real-time processing capability.

Finally, the tightly coupled inertial-visual fusion positioning system designed in this thesis is verified experimentally. First, a dataset is used to verify the soundness of the fusion algorithm, with a comparison against VINS under the optimization framework; the results show that, at similar positioning performance, the amount of computation is reduced by 41%. Second, the stability of the inertial sensors used in this thesis is tested; the experiments show that the calibration method adopted here effectively reduces sensor divergence and thereby helps improve the overall positioning accuracy of the tightly coupled system. Third, the proposed tightly coupled system is verified in a real scenario; the data show that the positioning error is less than 0.5%. The experiments show that the positioning accuracy and stability of the system are greatly improved compared with inertial navigation alone or visual localization alone, providing a clear direction for engineering applications.
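To make the wavelet noise-reduction step above concrete, here is a minimal sketch of wavelet-threshold denoising for a 1-D inertial sensor signal, using the PyWavelets library with the standard Donoho-Johnstone soft-threshold rule. The wavelet choice, decomposition level, and simulated gyroscope signal are illustrative assumptions; the thesis's improved noise-reduction method is not reproduced here.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Denoise a 1-D inertial signal by wavelet soft-thresholding."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise level from the finest detail scale (median absolute deviation).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    # Donoho-Johnstone universal threshold.
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]  # trim padding

# Example on a simulated gyroscope reading (rad/s).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 2000)
true_rate = 0.1 * np.sin(0.5 * t)
noisy = true_rate + 0.02 * rng.standard_normal(t.size)
clean = wavelet_denoise(noisy)
print(f"error std before: {np.std(noisy - true_rate):.4f}, "
      f"after: {np.std(clean - true_rate):.4f}")
```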
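The camera calibration step can likewise be sketched with OpenCV's implementation of Zhang's method, which recovers the intrinsic matrix and distortion coefficients from several views of a planar checkerboard and then removes lens distortion. The checkerboard geometry and image file names below are hypothetical; the camera-IMU extrinsic calibration mentioned in the abstract is a separate step, typically done with a dedicated tool, and is not shown.

```python
import glob

import cv2
import numpy as np

# Assumed checkerboard: 9x6 inner corners, 25 mm squares (hypothetical).
PATTERN = (9, 6)
SQUARE = 0.025
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_pts, img_pts, size = [], [], None
for path in sorted(glob.glob("calib_*.png")):  # hypothetical image set
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        continue
    size = img.shape[::-1]  # (width, height)
    found, corners = cv2.findChessboardCorners(img, PATTERN)
    if not found:
        continue
    # Refine detected corners to sub-pixel accuracy.
    corners = cv2.cornerSubPix(
        img, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_pts.append(objp)
    img_pts.append(corners)

# Zhang's method: intrinsics K and distortion coefficients from planar views.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, size, None, None)
print(f"reprojection RMS: {rms:.3f} px")

# De-distort one image with the recovered model.
undistorted = cv2.undistort(img, K, dist)
```

For the binocular case, each camera is calibrated this way and cv2.stereoCalibrate then recovers the relative pose between the two cameras.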
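Finally, the structure of the EKF fusion step can be indicated with a skeleton filter: IMU samples drive the prediction, and visual measurements drive the correction. This is a deliberately simplified sketch with an assumed 9-dimensional state; a real multi-state-constraint filter additionally keeps a sliding window of past camera poses in the state, so one tracked feature constrains several poses at once, which is what reduces computation relative to optimization-based VINS.

```python
import numpy as np

class TightlyCoupledEKF:
    """Skeleton EKF for IMU propagation plus visual correction.

    State x = [position (3), velocity (3), small-angle attitude (3)].
    A real multi-state-constraint filter also stacks a sliding window
    of past camera poses so one feature constrains several poses.
    """

    def __init__(self):
        self.x = np.zeros(9)
        self.P = np.eye(9) * 1e-2

    def propagate(self, accel, gyro, dt, Q):
        """Prediction step driven by IMU samples (crude kinematics)."""
        F = np.eye(9)
        F[0:3, 3:6] = np.eye(3) * dt     # position depends on velocity
        self.x[0:3] += self.x[3:6] * dt  # integrate velocity
        self.x[3:6] += accel * dt        # integrate specific force
        self.x[6:9] += gyro * dt         # small-angle attitude update
        self.P = F @ self.P @ F.T + Q * dt

    def update(self, z, h, H, R):
        """Correction step from a visual measurement.

        z stacks the reprojections of a tracked feature; in the
        multi-state-constraint form the feature position is projected
        out of the residual before this standard EKF update.
        """
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - h)
        self.P = (np.eye(9) - K @ H) @ self.P
```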
Keywords/Search Tags: inertial navigation, visual localization, sensor fusion, extended Kalman filtering, multi-state constraints