
Research On Multi-sensor Information Fusion Positioning And Navigation Technology Of Indoor Robot

Posted on: 2022-05-03    Degree: Master    Type: Thesis
Country: China    Candidate: L Cheng    Full Text: PDF
GTID: 2518306539480824    Subject: Control Engineering
Abstract/Summary:
In recent years, robots have been widely used in people's daily lives and have become increasingly intelligent. However, in indoor environments where the Global Navigation Satellite System (GNSS) is unavailable or degraded, the autonomous positioning and navigation functions of robots face serious challenges. Simultaneous Localization and Mapping (SLAM) based on multi-sensor information fusion has become an important research direction for solving this problem, and it has received extensive attention from both academic researchers and engineering practitioners. This thesis studies multi-sensor information fusion positioning and navigation technology for indoor scenes, and proposes and implements two fusion methods. The first is a visual-inertial fusion positioning algorithm based on tightly coupled optimization, which implements improved ORB feature extraction and nonlinear back-end error correction. The second is an adaptive visual-inertial and laser fusion positioning algorithm: by setting a visual tracking threshold, the positioning mode is switched adaptively in visually degraded environments. The main research content includes the following aspects:

Firstly, according to the needs of robot positioning and navigation, the commonly used navigation coordinate systems and the transformations between them are established, and the measurement principles and error models of the visual camera, lidar, and IMU are analyzed. Combined with the wheel differential-drive model and the robot's rotational dynamics, the kinematic model of the mobile robot is established.

Secondly, in view of influencing factors in indoor scenes such as illumination changes, lack of features, and occlusion by obstacles, a robot relying on data from a single sensor cannot meet the requirements of real-time, accurate positioning. This thesis studies a framework for multi-sensor information fusion and proposes an improved sliding-window nonlinear optimization algorithm that tightly couples visual and IMU data, improving the positioning accuracy of indoor robots. At the same time, an adaptive visual/inertial integrated navigation model fused with lidar positioning is proposed, which further improves the robot's perception and decision-making capabilities in unstructured, complex indoor environments.

Finally, the Gazebo simulation environment is used to build a simulation experiment scene and a robot simulation model. Through a predictive tracking model, simulated and real robots as well as visual drones are tracked, and pure-vision, visual-inertial fusion, and visual-inertial-laser fusion experiments are completed. Combined with analysis of the comparative experimental results, the effectiveness and reliability of the multi-source heterogeneous sensor information fusion system in this thesis are verified.
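The wheel differential-drive kinematic model mentioned above can be sketched with the standard unicycle update, in which the body's forward speed and yaw rate follow from the two wheel speeds and the wheel base. This is a minimal illustration of the general model class, not the thesis's exact formulation; the function name and parameters are illustrative.

```python
import math

def diff_drive_step(x, y, theta, v_l, v_r, wheel_base, dt):
    """Advance a differential-drive robot pose (x, y, theta) by one step.

    v_l, v_r : left/right wheel linear speeds (m/s)
    wheel_base : distance between the two wheels (m)
    dt : integration time step (s)
    """
    v = (v_r + v_l) / 2.0             # forward speed of the body frame
    omega = (v_r - v_l) / wheel_base  # yaw rate from wheel speed difference
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta
```

With equal wheel speeds the robot drives straight; with opposite speeds it spins in place, which is the basic behavior the fusion back-end's motion model relies on.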
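The sliding-window optimization described above keeps only a bounded set of recent keyframe states in the optimizer. A minimal sketch of the window bookkeeping is below; in a real tightly coupled visual-inertial system the oldest state's information would be marginalized into a prior factor rather than simply discarded, and the window size here is an arbitrary placeholder.

```python
def slide_window(window, new_state, max_size=10):
    """Append a new keyframe state to a fixed-size sliding window.

    Returns a new list; when the window exceeds max_size the oldest
    state is dropped (a stand-in for marginalization into a prior).
    """
    window = window + [new_state]
    if len(window) > max_size:
        window = window[1:]  # remove the oldest keyframe state
    return window
```

Bounding the window keeps the per-iteration cost of the nonlinear optimization constant, which is what makes real-time tightly coupled fusion feasible.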
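The adaptive switching of the positioning mode via a visual tracking threshold can be sketched as a simple decision rule: when enough visual features are being tracked, visual-inertial odometry is trusted, otherwise the system falls back to a lidar-dominant mode. The threshold value and mode names below are illustrative assumptions, not values from the thesis.

```python
def select_positioning_mode(num_tracked_features, threshold=30):
    """Pick the fusion mode from the current visual tracking quality.

    threshold is a hypothetical cutoff on the number of successfully
    tracked features; below it, vision is considered degraded.
    """
    if num_tracked_features >= threshold:
        return "visual-inertial"
    return "lidar-inertial"
```

This kind of rule lets the system degrade gracefully under illumination changes or feature-poor scenes, which is the interference scenario the abstract targets.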
Keywords/Search Tags:indoor navigation and positioning, multi-sensor information fusion, nonlinear optimization, adaptive