
Research On Data Fusion Algorithm For Robot Vision/Inertial Combined Positioning

Posted on: 2022-02-22
Degree: Master
Type: Thesis
Country: China
Candidate: T Q Liu
GTID: 2518306347473814
Subject: Control Engineering
Abstract/Summary:
With the development of integrated automation, data communication, and sensor technologies, mobile robots are advancing rapidly, and the requirements on their positioning accuracy keep rising. Although combined Inertial Navigation System (INS)/Global Positioning System (GPS) navigation can provide positioning outdoors, indoor environments suffer from strong wireless interference and base stations that may not work properly, so an INS/GPS combination cannot meet the indoor positioning accuracy required by mobile robots.

This thesis first analyzes the advantages and disadvantages of several common indoor localization technologies. An INS can achieve fully autonomous positioning, but its error accumulates over time; a Vision Navigation System (VNS) has no accumulated error and high positioning accuracy, but is strongly affected by the environment. Since any single positioning method has its own drawbacks, this thesis proposes a vision/inertial combined positioning system for indoor mobile robots, builds an experimental vision/inertial positioning platform, and develops upper-computer data acquisition software, providing a solid basis for validating the combined algorithms in the subsequent experiments and simulations.

To handle the different sampling frequencies of the camera and the IMU in the fusion scheme, a combined Kalman filter (KF) algorithm with two filters is adopted first. When only inertial information is observed, only filter 1 runs, with the INS position as its observation. When inertial and visual information are observed at the same time, the difference between the position output by filter 1 and the position output by the vision system is formed and used as the observation of filter 2, from which the optimal estimate of the position error is computed; the INS position output is then corrected accordingly.

Because the structure of this combined Kalman filter is complex, a multi-frequency Kalman filter (MFKF) data fusion algorithm is studied next. The algorithm consists of a time update and a measurement update: when only inertial information is observed, only the time update is performed; when image information arrives, the time update and the measurement update are performed together.

To further improve data utilization, the MFKF is extended with two measurement equations: when only inertial information is observed, measurement equation 1 is used, whose observation is the INS position; when image information is available, measurement equation 2 is used, whose observation is the position error.
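The abstract does not give the filter matrices, so the following Python sketch only illustrates the multi-frequency structure under assumed models: a planar constant-velocity state, a 100 Hz IMU, and a camera at one tenth of that rate. The thesis's measurement equation 2 observes the INS-vision position error; for brevity this sketch simply switches between two position observations with different noise covariances, so all names and values here are illustrative rather than the author's implementation.

```python
import numpy as np

class MultiFrequencyKF:
    """Minimal sketch of a multi-frequency Kalman filter: the time update runs
    at the IMU rate, while a measurement update is applied with whichever
    measurement equation currently has data. All matrices are illustrative."""

    def __init__(self, F, Q, x0, P0):
        self.F, self.Q = F, Q            # state transition / process noise
        self.x, self.P = x0, P0          # state estimate / covariance

    def predict(self):
        # Time update (runs at every IMU sample).
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z, H, R):
        # Measurement update, shared by both measurement equations.
        y = z - H @ self.x                           # innovation
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)          # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(self.P.shape[0]) - K @ H) @ self.P


# Illustrative planar constant-velocity setup (all values are assumptions).
dt = 0.01                                            # 100 Hz IMU, assumed
F = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])
Q = 1e-4 * np.eye(4)
H = np.hstack([np.eye(2), np.zeros((2, 2))])         # position components observed
R_ins, R_vis = 0.3**2 * np.eye(2), 0.05**2 * np.eye(2)

kf = MultiFrequencyKF(F, Q, x0=np.zeros(4), P0=np.eye(4))
rng = np.random.default_rng(0)
for k in range(1, 501):
    kf.predict()                                     # time update at the IMU rate
    true_pos = np.array([0.2, 0.1]) * k * dt
    if k % 10 == 0:                                  # camera frame (1/10 IMU rate, assumed)
        z = true_pos + rng.normal(0.0, 0.05, 2)      # stands in for measurement eq. 2
        kf.update(z, H, R_vis)
    else:
        z = true_pos + rng.normal(0.0, 0.3, 2)       # stands in for measurement eq. 1 (INS position)
        kf.update(z, H, R_ins)
print(kf.x[:2], "estimated position after 5 s")
```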
On this basis, to address the loss of visual tracking when the robot turns, an indoor vision/inertial positioning algorithm based on a multi-model multi-frequency Kalman filter is proposed. A multi-model integration scheme for an indoor mobile robot with Mecanum wheels is studied, which keeps the robot's position unchanged while it rotates in place, and the MFKF-based data fusion overcomes the long filtering period of the integrated navigation system.

Finally, for a positioning system that satisfies the hardware requirements, a vision/inertial combined positioning system based on multi-state constraints is selected; its measurement model exploits the constraints between sequential images, which improves the positioning accuracy. Semi-physical simulation experiments show that the combined positioning algorithm is more accurate than any single-sensor system, that the multi-model scheme solves the loss of visual tracking when the vision system turns, and that the tightly coupled scheme achieves higher positioning accuracy in practice.
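The multi-state-constraint measurement model is not detailed in the abstract; the sketch below (hypothetical function names, pinhole intrinsics, and a world-to-camera pose convention are assumptions) only shows the core idea: a feature tracked over a window of camera poses is triangulated once, and its stacked reprojection residuals constrain all of those poses at the same time. A full multi-state-constraint filter additionally projects these residuals onto the left null space of the feature Jacobian before the update; that step and the Jacobians are omitted here.

```python
import numpy as np

def pinhole_project(T_cw, p_w, K):
    """Project a world point into a camera with pose T_cw (world -> camera)."""
    p_c = T_cw[:3, :3] @ p_w + T_cw[:3, 3]
    uv = K @ (p_c / p_c[2])
    return uv[:2]

def triangulate(poses, pixels, K):
    """Linear (DLT-style) triangulation of one feature from a pose window."""
    A = []
    for T_cw, (u, v) in zip(poses, pixels):
        P = K @ T_cw[:3, :]                  # 3x4 projection matrix
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]                               # homogeneous least-squares solution
    return X[:3] / X[3]

def multi_state_residual(poses, pixels, K):
    """Stacked reprojection residuals of one feature across the pose window;
    each residual ties the feature to one of the windowed camera states."""
    p_w = triangulate(poses, pixels, K)
    r = [np.asarray(px) - pinhole_project(T, p_w, K) for T, px in zip(poses, pixels)]
    return np.concatenate(r)


# Tiny illustration with two synthetic views (values are assumptions).
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
T0 = np.eye(4)
T1 = np.eye(4); T1[0, 3] = -0.5              # second camera 0.5 m along x
p_true = np.array([1.0, 0.2, 4.0])
pix = [pinhole_project(T, p_true, K) for T in (T0, T1)]
print(multi_state_residual([T0, T1], pix, K))   # ~zero for noise-free data
```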
Keywords/Search Tags: Indoor robot combined positioning, Data fusion, IMU, Camera