
Research On Multi-sensor Fusion Localization Algorithm Of Improved Vision Front End

Posted on: 2022-04-15
Degree: Master
Type: Thesis
Country: China
Candidate: X Y Wang
Full Text: PDF
GTID: 2518306536461774
Subject: Mechanical engineering

Abstract/Summary:
Because it relies on a single data source and is easily disturbed by external factors, a single positioning and navigation sensor cannot accurately and reliably complete localization tasks in complex environments, even with expensive hardware and a well-designed algorithm framework. Research shows that combining several different types of positioning sensors with a suitable fusion algorithm can exploit the advantages of each sensor and thereby improve positioning accuracy. In addition, using multiple independent data sources to observe the same state also improves the stability of robotic and autonomous-driving systems, as well as their capacity for autonomous localization and mapping. In purely visual localization, the camera's strict lighting requirements and the lack of a global position observation lead to poor accuracy in low-texture scenes, where trajectory tracking is easily lost. This thesis therefore combines inertial navigation (INS), integrated navigation, and visual SLAM techniques and, based on the sampling characteristics of each sensor, proposes a multi-sensor fusion localization algorithm that achieves high-precision positioning in low-texture environments. The main work is as follows:

First, different parametric models of 3-D pose are studied, and the influence of each parameterization on pose optimization is analyzed. To address the effect of sensor errors on the accuracy of the fusion algorithm, the intrinsic parameters of the IMU and the camera are calibrated, based on the IMU error model and the pinhole camera model, using the Allan variance method and Zhang's calibration method, respectively. For the problem that few corner points can be extracted in low-texture environments, which degrades pose estimation, the corner extraction and matching rates of the feature-point method 
and the optical-flow method are studied, and a double-matching method is proposed to improve the matching accuracy of the optical flow. In the pose-estimation work, the accuracy of different PnP methods is compared, and a pose-estimation method based on EPnP followed by bundle-adjustment (BA) refinement is proposed, which improves the odometry accuracy of the visual front end.

Next, for the fusion of vision and the INS, the error-propagation equation of IMU pre-integration is derived under the back-end optimization framework of the VINS algorithm, and the sliding-window marginalization algorithm is explained. By combining the improved visual front end with back-end sliding-window optimization, the fusion of IMU and vision is realized, improving the robustness of the localization algorithm under harsh working conditions.

Finally, for global sensor fusion, the state equations of the IMU nominal state and error state are derived, the expression of the noise error term is simplified, and the theoretical derivation and error calibration of global sensors such as GNSS, UWB, and the magnetometer are presented. Based on the error-state Kalman filter, GPS observations are fused with the pose output of the visual-inertial navigation algorithm. Experimental results show that the fusion algorithm improves positioning accuracy even without a loop-closure detection module.
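The Allan-variance calibration of the IMU mentioned above can be illustrated with a minimal sketch. This is not the thesis's implementation; the function name and cluster sizes are illustrative, and only the standard non-overlapping Allan-variance formula is assumed. For white sensor noise, the Allan deviation should fall as the averaging time grows:

```python
import numpy as np

def allan_variance(omega, m):
    """Non-overlapping Allan variance of a rate signal for cluster size m samples.

    omega: 1-D array of gyro (or accelerometer) samples at a fixed rate.
    m: number of samples averaged per cluster.
    """
    n_clusters = len(omega) // m
    # average each non-overlapping cluster of m consecutive samples
    means = omega[:n_clusters * m].reshape(n_clusters, m).mean(axis=1)
    # Allan variance: half the mean squared difference of consecutive cluster means
    return 0.5 * np.mean(np.diff(means) ** 2)

# synthetic white noise: Allan deviation should decrease roughly as 1/sqrt(m)
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, 200_000)
adevs = [np.sqrt(allan_variance(noise, m)) for m in (10, 100, 1000)]
```

In practice the Allan deviation is plotted against averaging time on a log-log scale, and the slopes of its segments identify the noise terms (angle random walk, bias instability) used in the IMU error model.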
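The double-matching idea for the optical-flow front end is commonly realized as a forward-backward consistency check: a track is accepted only if tracking it into the next frame and back returns close to its starting pixel. A minimal sketch with synthetic tracks (the function name and tolerance are illustrative, and real flow would come from a pyramidal Lucas-Kanade tracker):

```python
import numpy as np

def double_match(pts, bwd_pts, tol=1.0):
    """Forward-backward consistency mask for optical-flow tracks.

    pts:     feature positions in frame k, shape (N, 2)
    bwd_pts: positions obtained by tracking forward to frame k+1
             and then backward to frame k, shape (N, 2)
    A track is kept only if the round trip returns within tol pixels.
    """
    err = np.linalg.norm(bwd_pts - pts, axis=1)
    return err < tol

pts = np.array([[10.0, 10.0], [50.0, 20.0], [30.0, 40.0]])
flow = np.array([2.0, 0.0])          # true image motion
fwd = pts + flow                     # forward-tracked positions
bwd = fwd - flow                     # backward-tracked positions
bwd[2] += 5.0                        # corrupt one track to mimic a bad match
mask = double_match(pts, bwd)
```

Only the corrupted track fails the round trip, so outlier matches are rejected before pose estimation.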
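The EPnP-plus-BA step minimizes reprojection error: EPnP gives an initial pose, and BA refines it by reducing the distance between projected 3-D points and their observed pixels. A minimal sketch of the cost being minimized, under a pinhole model with illustrative intrinsics (not the thesis's calibrated values):

```python
import numpy as np

def reproject(K, R, t, X):
    """Project 3-D points X (N, 3) into the image with pose (R, t) and intrinsics K."""
    Xc = (R @ X.T).T + t             # transform points into the camera frame
    uv = (K @ Xc.T).T                # apply the pinhole intrinsics
    return uv[:, :2] / uv[:, 2:3]    # perspective division

def reprojection_rmse(K, R, t, X, obs):
    """Root-mean-square reprojection error, the cost BA drives down."""
    res = reproject(K, R, t, X) - obs
    return np.sqrt(np.mean(np.sum(res ** 2, axis=1)))

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)
X = np.array([[0.0, 0.0, 5.0], [1.0, -1.0, 4.0]])
obs = reproject(K, R, t, X)          # ideal, noise-free observations
rmse = reprojection_rmse(K, R, t, X, obs)
```

With the true pose the residual is zero; BA perturbs (R, t) (and optionally the points) with Gauss-Newton or Levenberg-Marquardt steps until this RMSE stops decreasing.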
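The error-state Kalman filter fusion of GPS with the visual-inertial pose can be sketched in one update step. This is a simplified linear illustration, not the thesis's full formulation: the state is reduced to 2-D position, and the matrices are toy values. The key ESKF pattern is that the filter estimates a small error state, injects it into the nominal state, and resets it to zero:

```python
import numpy as np

def eskf_update(x_nom, P, z_gps, H, R):
    """One error-state KF update with a GPS position measurement.

    x_nom: nominal state (here just position), P: error-state covariance,
    z_gps: GPS measurement, H: measurement Jacobian, R: measurement noise.
    """
    y = z_gps - H @ x_nom            # innovation relative to the nominal state
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    dx = K @ y                       # estimated error state
    x_nom = x_nom + dx               # inject the error into the nominal state (error resets to zero)
    P = (np.eye(P.shape[0]) - K @ H) @ P
    return x_nom, P

# toy 2-D example: an uncertain nominal position is pulled toward the GPS fix
x = np.array([0.0, 0.0])
P = np.eye(2) * 4.0
z = np.array([1.0, 2.0])
x_new, P_new = eskf_update(x, P, z, np.eye(2), np.eye(2))
```

Because the prior covariance (4) dominates the measurement noise (1), the update moves 80% of the way toward the GPS fix and shrinks the covariance accordingly; in the full system the error state also covers velocity, attitude, and IMU biases.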
Keywords/Search Tags: Sensor Fusion, Kalman Filter, Pose Calculation, Monocular Vision, SLAM