
A Simultaneous Localization And Mapping Method Based On Fusion Of Vision And Inertial Measurement Unit

Posted on: 2021-03-28
Degree: Master
Type: Thesis
Country: China
Candidate: Y D Li
Full Text: PDF
GTID: 2428330605469241
Subject: Engineering
Abstract/Summary:
In recent years, high technology represented by artificial intelligence has developed rapidly and spread into every corner of daily life. Robot technology in particular has been applied in many fields, gradually freeing people from tedious manual work. In dangerous situations such as fires, traffic accidents, hazardous-material leaks, and geological disasters, robots can assist rescuers in carrying out search-and-rescue tasks scientifically and effectively, helping to protect lives and property. A core enabling technology here is SLAM (Simultaneous Localization and Mapping), which is also widely used in autonomous driving. In SLAM, a robot placed in an unfamiliar environment explores freely; equipped with a variety of sensor modules, it perceives its surroundings and estimates its own position. By processing the sensor data, it builds a map and completes navigation tasks. The most common SLAM approaches are laser-based SLAM, vision-based SLAM, and visual-inertial fusion SLAM, each with its own advantages. This paper studies visual-inertial fusion SLAM, which guarantees system stability to a certain extent and also provides a degree of robustness. Building on the mainstream SLAM pipeline, this paper focuses on two aspects.

1. Front-end preprocessing. For image feature-point extraction and matching, a more accurate matching method improves accuracy over the baseline. First, the geometric relationship between two image frames is analyzed using epipolar geometry, and the 3D coordinates of feature points are computed by triangulation. Then, the PnP algorithm [16] is used to estimate the continuous change of the camera pose. Finally, the front-end data are passed to the back end for filtering or optimization, which corrects the estimated pose trajectory.

2. Back-end fusion. A visual-inertial fusion algorithm based on nonlinear optimization is studied, in which the visual odometry is tightly coupled with the pre-integrated IMU measurements [17]. First, frames with stable feature tracking are selected through a sliding window as valid frames, which bounds the complexity of the optimization. Second, poses and feature points are jointly refined by nonlinear optimization, and loop-closure detection and relocalization are carried out. Third, accurate and stable pose and point-cloud data are obtained. The experimental results show that the visual-inertial fusion system achieves relatively high positioning accuracy and rarely loses track during map construction, demonstrating stability and high robustness.
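The front-end triangulation step described above can be sketched as a standard linear (DLT) triangulation: given two camera projection matrices and a matched feature observed in both frames, the 3D point is recovered as the null vector of a small linear system. This is a minimal illustration with assumed identity intrinsics and a 1 m baseline, not the thesis implementation.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.

    P1, P2 : 3x4 camera projection matrices.
    x1, x2 : (u, v) image coordinates of the matched feature.
    Returns the 3D point in world coordinates.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous point is the right singular vector for the
    # smallest singular value of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point with projection matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two views of the point (1, 2, 10): identity camera, 1 m baseline.
K = np.eye(3)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
Xw = np.array([1.0, 2.0, 10.0])

X_hat = triangulate(P1, P2, project(P1, Xw), project(P2, Xw))
print(np.round(X_hat, 6))  # recovers (1, 2, 10)
```

With noise-free observations the DLT recovers the point exactly; in practice the recovered points would then feed the PnP pose estimation and be refined by the back end.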
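The tightly coupled back-end optimization can be illustrated, in heavily simplified form, as one joint least-squares problem over all states in the sliding window: "visual" absolute measurements and "inertial" pre-integrated relative displacements constrain the same pose variables simultaneously. The 1-D states, measurement values, and weights below are invented for illustration; the real problem is nonlinear over 6-DoF poses and feature points.

```python
import numpy as np

n = 5                                            # poses in the window
z = np.array([0.0, 1.1, 1.9, 3.2, 4.0])          # noisy visual fixes
d = np.array([1.0, 1.0, 1.0, 1.0])               # IMU relative motions

# Stack all residuals r = A p - b and solve jointly (tight coupling:
# both sensor types constrain the same state vector p).
rows, b = [], []
w_vis, w_imu = 1.0, 2.0                          # information weights
for i in range(n):                               # visual: p_i ~ z_i
    row = np.zeros(n); row[i] = w_vis
    rows.append(row); b.append(w_vis * z[i])
for i in range(n - 1):                           # inertial: p_{i+1}-p_i ~ d_i
    row = np.zeros(n); row[i] = -w_imu; row[i + 1] = w_imu
    rows.append(row); b.append(w_imu * d[i])

A, b = np.array(rows), np.array(b)
p = np.linalg.lstsq(A, b, rcond=None)[0]         # fused window estimate
print(np.round(p, 3))
```

Because this toy problem is linear, a single least-squares solve corresponds to one exact Gauss-Newton step; in the full system each new keyframe enters the window, old states are marginalized out, and the nonlinear problem is re-solved iteratively.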
Keywords/Search Tags:SLAM, Computer Vision, IMU, Fusion, Non-linear Optimization, Feature points