
Research on Map Construction and Relocalization Methods Based on Visual-Inertial Odometry

Posted on: 2024-01-16    Degree: Master    Type: Thesis
Country: China    Candidate: Y X Yue    Full Text: PDF
GTID: 2568307157968379    Subject: Mechanical engineering
Abstract/Summary:
Real-time simultaneous localization and mapping (SLAM) is a key technology that enables mobile robots to explore unknown environments autonomously. By fusing data from multiple sensors, it allows a robot to estimate both its own motion state and the surrounding environment. With the rapid development of fields such as unmanned aerial vehicles, legged robots, autonomous driving, and VR/AR, researchers have applied SLAM in these areas and achieved significant breakthroughs. Nevertheless, SLAM technology remains immature, with considerable room for improvement in accuracy and robustness. In particular, visual sensor failures or rapid camera motion can cause vision-based SLAM systems to produce unreliable or even invalid pose estimates, a problem known as "robot kidnapping" that is common in practical applications.

In this thesis, mathematical models of the stereo camera and the inertial measurement unit (IMU) were first analyzed, and joint sensor calibration was performed to obtain the transformation matrix between the camera and the IMU, together with an analysis of the calibration error. FAST corner features and optical-flow tracking were then used to establish visual constraints between image frames, which were combined with IMU preintegration to build a visual-inertial odometry system. The pose estimates output by the visual-inertial odometry were refined by formulating and solving a least-squares pose graph optimization problem, while loop detection was used to reduce the accumulated global pose error. The positioning accuracy of the resulting visual-inertial system was evaluated with the EVO tool, and the experimental results demonstrated high accuracy and robustness.

Secondly, on top of the visual-inertial odometry, a 2.5D elevation grid map, which is better suited to ground mobile robots, was adopted as the representation for autonomous environmental mapping. A local elevation grid map was continuously updated from visual sensor observations and motion estimates, and then segmented and incrementally fused into a globally consistent elevation grid map using the global pose estimates. Experiments on building a global elevation grid map in a real environment showed that the maps constructed by the proposed method have good accuracy and global consistency.

Finally, a kidnapping detection and relocalization module was designed on top of the visual-inertial system to address the robot kidnapping problem. A multi-world coordinate system management method based on the union-find algorithm was proposed to detect, from image features, whether the robot has been kidnapped and to start or stop the visual-inertial system at any time. A pre-trained visual bag-of-words model was used to judge the similarity of features observed before and after kidnapping, and this was combined with global pose graph optimization to recover the robot's global pose after kidnapping. Experimental results on an indoor dataset verified that the module is feasible and effective and improves the robustness of the visual-inertial system against kidnapping.
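As an illustrative aside (the thesis does not publish code), the 2.5D elevation grid map described above can be sketched as a fixed-resolution height grid into which world-frame points, e.g. stereo depth transformed by the estimated pose, are fused cell by cell. The grid size, resolution, and simple running-mean fusion rule below are assumptions for the sketch, not the thesis's actual parameters.

```python
import numpy as np

# Illustrative 2.5D elevation grid: each cell stores one fused height value.
# Resolution, extent, and averaging fusion are assumptions for this sketch.
class ElevationGrid:
    def __init__(self, size_m=20.0, resolution=0.05):
        self.res = resolution
        n = int(size_m / resolution)
        self.height = np.full((n, n), np.nan)      # fused elevation per cell
        self.count = np.zeros((n, n), dtype=int)   # number of observations per cell
        self.offset = size_m / 2.0                 # shift so the map is centered at (0, 0)

    def update(self, points_world):
        """Fuse an iterable of world-frame (x, y, z) points into the grid."""
        for x, y, z in points_world:
            i = int((x + self.offset) / self.res)
            j = int((y + self.offset) / self.res)
            if 0 <= i < self.height.shape[0] and 0 <= j < self.height.shape[1]:
                c = self.count[i, j]
                # Running mean of observed heights; a real system would weight
                # observations by their measurement uncertainty instead.
                self.height[i, j] = z if c == 0 else (self.height[i, j] * c + z) / (c + 1)
                self.count[i, j] += 1
```

Likewise purely illustrative: the multi-world coordinate management used for kidnapping handling can be sketched with a standard union-find structure, where the odometry opens a new "world" frame each time it restarts after a detected kidnapping, and a successful bag-of-words match between worlds merges them. All class and method names below are hypothetical.

```python
# Minimal union-find sketch for managing multiple "world" coordinate frames.
# Names are illustrative; this is not the thesis's implementation.
class WorldUnionFind:
    def __init__(self):
        self.parent = {}  # world_id -> parent world_id

    def add_world(self, world_id):
        # A new world frame is opened when the odometry restarts after kidnapping.
        self.parent.setdefault(world_id, world_id)

    def find(self, world_id):
        # Root lookup with path compression.
        root = world_id
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[world_id] != root:
            self.parent[world_id], world_id = root, self.parent[world_id]
        return root

    def union(self, a, b):
        # Called when a bag-of-words match links two worlds, i.e. the robot
        # recognizes a place it observed before the kidnapping.
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

    def same_world(self, a, b):
        return self.find(a) == self.find(b)


if __name__ == "__main__":
    uf = WorldUnionFind()
    uf.add_world(0)              # initial world frame
    uf.add_world(1)              # world opened after a detected kidnapping
    uf.union(0, 1)               # relocalization match links the two worlds
    print(uf.same_world(0, 1))   # True: poses can now share one global frame
```

Once two worlds are unified, a global pose graph optimization over the merged frame can recover a consistent global pose, which is the role the relocalization step plays in the pipeline described above.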
Keywords/Search Tags: Visual-Inertial System, 2.5D Elevation Grid Map, Robot Kidnapping, Relocalization