
Stereo And Inertial Navigation Based Mobile Robot Localization In Indoor Environment

Posted on: 2021-03-09  Degree: Master  Type: Thesis
Country: China  Candidate: W Z Dong  Full Text: PDF
GTID: 2518306308483384  Subject: Control Engineering
Abstract/Summary:
With the development of cities, urban underground complexes play an increasingly important role. In the context of intelligent manufacturing, inspection robots have greatly improved the level of automation in tasks such as fire monitoring and pedestrian-flow statistics. The most fundamental problem in inspection robot control is positioning. This paper therefore focuses on the weaknesses of traditional visual SLAM (Simultaneous Localization and Mapping) positioning, namely poor resistance to dynamic interference and difficulty extracting feature points in weakly textured scenes, and designs a visual-inertial navigation fusion system. The system is tested on public data sets and in a real physical environment, and the experimental results verify that the visual-inertial fusion scheme designed in this paper outperforms single-sensor positioning schemes in the underground complex.

First, the visual positioning module is built. The stereo camera is modeled mathematically and a distortion model is introduced; the front end combines the Harris corner detector with an optical-flow pyramid strategy to improve on the original pure-vision ORB (Oriented FAST and Rotated BRIEF) algorithm; the back end establishes the target optimization function and derives the expressions for the optimization variables; loop detection, inherited from the traditional SLAM framework, further improves the positioning accuracy.

Secondly, the inertial navigation module is built. Various sources of inertial navigation error are analyzed and the inertial model is simplified; the dynamic model of inertial navigation is derived, the discretization effects of the Euler method and the midpoint method are simulated and compared, and the less time-consuming Euler method is selected; non-uniform rational B-spline interpolation converts the discrete inertial data into a twice-differentiable continuous trajectory, which facilitates alignment with the visual information.

Then, the fusion positioning module is built. The raw inertial navigation data in the world coordinate system are transformed into pre-integrated measurements with higher computational efficiency; the residual equations for visual-inertial fusion are theoretically established and derived, and the state variables, visual constraints, and inertial constraints are determined; the Gauss-Newton and Levenberg-Marquardt methods are compared in terms of the loss value and computational efficiency of the fusion objective function, and a robust kernel function is used to enhance the robustness of the algorithm; sliding-window optimization with marginalization effectively controls the growth of the computational load.

Finally, the experimental environment and platform for the visual-inertial navigation system are established: the stereo camera is calibrated, systematic errors are controlled, and simulation confirms that the simplification of the inertial model is reasonable. Absolute position error is compared against ORB-SLAM2 on the dynamic-interference KITTI data set and the weakly textured EuRoC data set: under dynamic interference the positioning accuracy of this paper improves by 70%, and under weak texture it improves by 30%. In real scenes, positioning accuracy before and after fusion is tested; fusion improves accuracy by 77.7%, demonstrating that the fusion positioning scheme is more robust than single-sensor positioning schemes under dynamic interference and in weakly textured environments.
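The stereo camera and distortion models mentioned above follow standard formulations. As a minimal sketch (not the thesis's exact derivation), the widely used radial-tangential (Brown-Conrady) distortion of normalized image coordinates and the rectified-stereo depth relation depth = f · b / d can be written as:

```python
import numpy as np

def distort(x, y, k1, k2, p1, p2):
    """Apply radial-tangential (Brown-Conrady) distortion to
    normalized image coordinates (x, y).
    k1, k2: radial coefficients; p1, p2: tangential coefficients."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

def stereo_depth(f_px, baseline_m, disparity_px):
    """Rectified stereo triangulation: depth = focal_length * baseline / disparity."""
    return f_px * baseline_m / disparity_px
```

With zero distortion coefficients the point is unchanged; e.g. a 700 px focal length, 0.12 m baseline, and 10 px disparity give a depth of 8.4 m. The specific coefficient values used by the thesis come from its calibration step and are not reproduced here.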
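The Euler discretization chosen for the inertial dynamic model can be illustrated with a minimal propagation step. This is a generic sketch of one Euler step of the IMU kinematics (position, velocity, rotation), with biases and noise omitted; the thesis's full model includes the error terms it analyzes:

```python
import numpy as np

def euler_propagate(p, v, R, acc, gyro, g, dt):
    """One Euler step of IMU kinematics in the world frame.
    p, v: position/velocity (3-vectors); R: 3x3 body-to-world rotation;
    acc, gyro: body-frame accelerometer/gyroscope readings;
    g: gravity vector; dt: time step. Biases/noise omitted for brevity."""
    a_world = R @ acc + g                      # rotate acceleration into world frame
    p_next = p + v * dt + 0.5 * a_world * dt**2
    v_next = v + a_world * dt
    # Rotation update via the SO(3) exponential map (Rodrigues' formula)
    theta = gyro * dt
    angle = np.linalg.norm(theta)
    if angle < 1e-12:
        dR = np.eye(3)
    else:
        axis = theta / angle
        K = np.array([[0, -axis[2], axis[1]],
                      [axis[2], 0, -axis[0]],
                      [-axis[1], axis[0], 0]])
        dR = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
    return p_next, v_next, R @ dR
```

The midpoint method instead averages consecutive samples before integrating, which is more accurate per step but costlier, which is the trade-off behind the thesis's choice of the Euler method.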
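The role of the robust kernel in the fusion back end can be shown on a toy problem. The sketch below (an illustrative example, not the thesis's actual objective) uses a Huber kernel inside iteratively reweighted Gauss-Newton to estimate a robust mean, down-weighting the outlier residual the way a robust kernel tempers bad visual matches:

```python
import numpy as np

def huber_weight(r, delta=1.0):
    """Huber kernel weight: 1 inside the threshold, delta/|r| outside,
    so large (outlier) residuals contribute less to the update."""
    a = np.abs(r)
    return np.where(a <= delta, 1.0, delta / a)

def robust_gn_mean(y, delta=1.0, iters=10):
    """Minimize sum_i Huber(y_i - mu) over the scalar mu by
    iteratively reweighted Gauss-Newton steps."""
    mu = float(np.median(y))          # robust initial guess
    for _ in range(iters):
        r = y - mu
        w = huber_weight(r, delta)
        # Weighted GN normal equation for this scalar problem
        mu += np.sum(w * r) / np.sum(w)
    return mu
```

On data like [0, 0.1, -0.1, 10] the robust estimate stays close to the inliers, whereas the plain arithmetic mean (2.475) is dragged toward the outlier; the same mechanism keeps the fusion objective stable under dynamic interference.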
Keywords/Search Tags: Robot positioning, SLAM, Visual-inertial navigation fusion, Nonlinear optimization, Sliding window