
A Study on the Scene Robustness of a Visual-Inertial Integrated Navigation Algorithm

Posted on: 2022-11-26
Degree: Master
Type: Thesis
Country: China
Candidate: C J Wang
Full Text: PDF
GTID: 2518306764998679
Subject: Computer Software and Computer Applications
Abstract/Summary:
As artificial intelligence technology advances by leaps and bounds, new products such as autonomous vehicles, robots and drones continue to emerge. Navigation and positioning technology has long been the core technology constraining their development, and within it Simultaneous Localization and Mapping (SLAM) has gradually become a research focus. Vision sensors are increasingly studied for SLAM, but they generally suffer from low frame rates, an inability to recover metric scale, and poor robustness in open areas with little texture. By contrast, an inertial navigation system can use its built-in sensors to measure its own angular velocity and acceleration, can recover the scale of the environment, and thus complements the vision sensor. In this paper we fuse a vision sensor with an inertial navigation system to build a highly robust localization and navigation algorithm with high accuracy, strong real-time performance and broad scene applicability. The main work is as follows.

1) Image information and inertial data are pre-processed. For the image information, the ORB feature point method is used for feature extraction, a quadtree algorithm is added to homogenize the distribution of the feature points, the optical flow method is used for feature tracking, and the RANSAC algorithm is added to reject mismatched points. For the inertial data, an IMU pre-integration algorithm is used to obtain the position, velocity, quaternion and measurement residuals.

2) The back-end optimization based on data fusion is divided into three modules. In the system initialization module, an SFM algorithm performs visual initialization, and then a loosely coupled approach jointly initializes the processed visual information and IMU information. In the nonlinear optimization module, tightly coupled sliding-window filtering keeps the number of frames being processed at a fixed value; a marginalization algorithm converts the data to be removed from the window into prior residuals, and the marginalization residuals, visual residuals and IMU residuals are then jointly optimized by bundle adjustment (BA). Finally, the loop closure detection module uses a bag-of-words model over extracted features to determine whether a loop has been closed, and then performs pose graph optimization.

3) A series of experiments is designed to show that the visual-inertial integrated positioning and navigation algorithm proposed in this paper is superior to existing algorithms in real-time performance and accuracy, and is suitable for a variety of complex environments. The real-time performance and accuracy of the algorithm are evaluated on the EuRoC dataset and compared against existing mainstream algorithms; the scene applicability of the algorithm is then evaluated through real-world tests. Experimental results show that the proposed algorithm performs well in real-time performance, accuracy and scene applicability, that is, it has high robustness.
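The quadtree homogenization of feature points mentioned in work item 1 can be illustrated with a small sketch. This is not the thesis's implementation: the representation of a keypoint as an (x, y, response) tuple, the function name, and the splitting policy are all assumptions made here for illustration.

```python
# Illustrative sketch: distribute keypoints evenly over the image by
# recursively splitting the region into quadrants (quadtree) and
# keeping the strongest keypoint per leaf cell.

def quadtree_distribute(keypoints, x0, y0, x1, y1, target):
    """Split the region into ~target cells, one keypoint per cell."""
    cells = [(x0, y0, x1, y1, keypoints)]
    while len(cells) < target:
        # Split the cell that currently holds the most keypoints.
        cells.sort(key=lambda c: len(c[4]), reverse=True)
        cx0, cy0, cx1, cy1, pts = cells.pop(0)
        if len(pts) <= 1:
            cells.append((cx0, cy0, cx1, cy1, pts))
            break  # nothing left to split
        mx, my = (cx0 + cx1) / 2.0, (cy0 + cy1) / 2.0
        quads = {(False, False): [], (False, True): [],
                 (True, False): [], (True, True): []}
        for p in pts:  # bin each keypoint into one of four quadrants
            quads[(p[0] >= mx, p[1] >= my)].append(p)
        for (right, down), qpts in quads.items():
            if qpts:
                cells.append((mx if right else cx0, my if down else cy0,
                              cx1 if right else mx, cy1 if down else my,
                              qpts))
    # Keep the highest-response keypoint in each non-empty cell.
    return [max(pts, key=lambda p: p[2]) for *_, pts in cells if pts]
```

The effect is that a dense cluster of detections contributes only a few strong keypoints, while isolated keypoints in weakly textured regions are preserved.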
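The IMU pre-integration step, which accumulates position, velocity and quaternion increments between two image frames, can likewise be sketched. This is a simplified Euler-integration sketch under stated assumptions: it omits the gravity compensation, bias correction and covariance propagation that a real pre-integration (and the residual computation the thesis describes) would carry, and all function names are illustrative.

```python
import math

def quat_mul(q, r):
    """Hamilton product of two quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return [w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2]

def small_angle_quat(theta):
    """Quaternion for a rotation vector theta (axis * angle)."""
    ang = math.sqrt(sum(t * t for t in theta))
    if ang < 1e-12:
        return [1.0, 0.0, 0.0, 0.0]
    s = math.sin(ang / 2.0) / ang
    return [math.cos(ang / 2.0), theta[0]*s, theta[1]*s, theta[2]*s]

def rotate(q, v):
    """Rotate vector v by quaternion q: q * (0, v) * q^-1."""
    qc = [q[0], -q[1], -q[2], -q[3]]
    return quat_mul(quat_mul(q, [0.0] + list(v)), qc)[1:]

def preintegrate(gyro, accel, dt):
    """Accumulate delta rotation, velocity and position between two
    frames from raw IMU samples (Euler integration; gravity, bias
    and noise handling are deliberately omitted)."""
    q = [1.0, 0.0, 0.0, 0.0]   # delta orientation
    dv = [0.0, 0.0, 0.0]       # delta velocity
    dp = [0.0, 0.0, 0.0]       # delta position
    for w, a in zip(gyro, accel):
        a_ref = rotate(q, a)   # acceleration in the reference frame
        for i in range(3):
            # position uses the velocity *before* this sample's update
            dp[i] += dv[i] * dt + 0.5 * a_ref[i] * dt * dt
            dv[i] += a_ref[i] * dt
        q = quat_mul(q, small_angle_quat([wi * dt for wi in w]))
    return q, dv, dp
```

Because these increments are expressed relative to the first frame, they need not be recomputed when the optimizer adjusts that frame's state, which is the point of pre-integrating.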
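The loop closure test in work item 2 can be sketched as a similarity search over the word histograms of past keyframes. The word representation, similarity measure, threshold and function names below are assumptions for illustration, not the thesis's actual bag-of-words implementation.

```python
import math
from collections import Counter

def bow_similarity(words_a, words_b):
    """Cosine similarity between two bag-of-visual-words histograms."""
    ha, hb = Counter(words_a), Counter(words_b)
    dot = sum(ha[w] * hb[w] for w in ha)
    na = math.sqrt(sum(c * c for c in ha.values()))
    nb = math.sqrt(sum(c * c for c in hb.values()))
    return dot / (na * nb) if na and nb else 0.0

def detect_loop(database, query_words, threshold=0.8, skip_recent=2):
    """Return the index of the best-matching past keyframe, or None.
    The most recent frames are skipped so the query does not
    trivially match its immediate neighbours."""
    candidates = database[:-skip_recent] if skip_recent else database
    best, best_score = None, threshold
    for idx, words in enumerate(candidates):
        score = bow_similarity(words, query_words)
        if score >= best_score:
            best, best_score = idx, score
    return best
```

A detected match would then trigger the pose graph optimization that corrects the accumulated drift.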
Keywords/Search Tags:Visual-Inertial Integrated Navigation, Feature Extraction Tracking, Joint Initialization, Robustness, Loop Closure Detection