
Research On Indoor Location Algorithm Based On Fusion Of Monocular Vision And Inertial Navigation

Posted on: 2021-04-20
Degree: Master
Type: Thesis
Country: China
Candidate: J Yuan
Full Text: PDF
GTID: 2428330611498213
Subject: Control engineering
Abstract/Summary:
In recent years, with the rapid development of artificial intelligence, technologies related to autonomous mobile robots have received widespread attention. Studying positioning methods for mobile robots in unknown environments is of great significance for improving their miniaturization, autonomy, and intelligence. Visual sensors in particular are widely used in mobile robotics because they are inexpensive and information-rich. However, purely vision-based positioning cannot guarantee accuracy when images are blurred, motion is too fast, or visual features are lacking. To address these problems, this thesis studies an indoor positioning algorithm for mobile robots based on the fusion of monocular vision and inertial navigation information. The main content of the thesis comprises the following parts:

First, the research background and current state of research on fusing visual and inertial navigation information are summarized. The basic theory of visual-inertial fusion is introduced, including commonly used camera models, coordinate-system transformations and pose descriptions, and the basic principles of optimization-based SLAM.

Secondly, a method fusing monocular vision and inertial navigation is presented for mobile-robot pose estimation. The algorithm consists of a front end and a back end. The visual front end detects and tracks Harris corners, while the inertial front end uses pre-integration to process the gyroscope and accelerometer data. In the back end, the tightly coupled IMU measurement residuals, visual measurement residuals, and prior information are jointly optimized in a sliding window to estimate the pose of the mobile robot. Experiments verify the effectiveness of the algorithm, but also reveal its shortcoming of large error under large rotations.

Then, to improve the robustness and accuracy of the algorithm in indoor environments, line-feature detection and tracking are added to the visual front end. To keep computation simple and the parameterization compact, three-dimensional space lines are represented with Plücker coordinates together with their orthonormal representation. In the back-end optimization, the pre-integrated IMU error term is combined with the point and line reprojection error terms. Evaluation experiments on public datasets verify the effectiveness and improved accuracy of the improved algorithm.

Finally, to verify the effectiveness and portability of the visual-inertial fusion algorithm, an experiment was conducted in an indoor environment. A monocular visual-inertial camera, a host computer, and a wheeled vehicle were combined into a mobile platform, and visual-inertial positioning experiments were performed in an indoor venue. The results show that the improved algorithm achieves higher accuracy and robustness, and they also demonstrate the algorithm's versatility and portability.
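The line parameterization described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: it assumes the standard convention in which a Plücker line is the pair (n, v) of moment and direction vectors, and the function names are hypothetical.

```python
import numpy as np

def plucker_from_points(p1, p2):
    """Plücker coordinates (n, v) of the 3D line through points p1 and p2:
    v is the line direction, n = p1 x p2 is the moment vector (normal of
    the plane spanned by the line and the origin); n is perpendicular to v."""
    v = p2 - p1
    n = np.cross(p1, p2)
    return n, v

def orthonormal_from_plucker(n, v):
    """Minimal 4-DoF orthonormal representation (U, W) of a Plücker line:
    U in SO(3) encodes the line's orientation, W in SO(2) encodes the
    relative magnitudes of n and v (the line's distance from the origin)."""
    nn, nv = np.linalg.norm(n), np.linalg.norm(v)
    # Because n is perpendicular to v, the three columns below are orthonormal.
    U = np.column_stack([n / nn, v / nv, np.cross(n, v) / (nn * nv)])
    s = np.hypot(nn, nv)
    W = np.array([[nn / s, -nv / s],
                  [nv / s,  nn / s]])
    return U, W
```

Plücker coordinates are homogeneous, so the overall scale s carries no geometric information; dropping it is what makes (U, W) a minimal four-degree-of-freedom parameterization, which is convenient as an optimization variable in the back end.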
Keywords/Search Tags:sensor fusion, visual–inertial odometry, tightly-coupled, point and line features