
Research and Prototype Realization of Place Recognition Technology in SLAM under Perspective Change

Posted on: 2023-07-15    Degree: Master    Type: Thesis
Country: China    Candidate: X H He    Full Text: PDF
GTID: 2558307169482774    Subject: Engineering
Abstract/Summary:
For intelligent robots, Simultaneous Localization and Mapping (SLAM) is the key to realizing their perception capabilities. In SLAM, relocalization, loop closure, and map fusion can all be realized on the basis of place recognition. These are the cornerstones that allow a SLAM system to correctly understand the surrounding environment and build a globally consistent map, providing a perception guarantee for task execution. Place recognition in SLAM has therefore received extensive attention from scholars worldwide.

Place recognition is essentially a form of data association. It associates observations that belong to the same scene, establishes geometric constraints between those observations, eliminates false perceptions caused by infinite corridors and accumulated errors, and builds a consistent global perception. Mainstream methods currently treat place recognition as feature matching and divide it into two steps. The first step is scene description, which extracts features from observations to describe the scene content. The second step is scene matching, which computes the similarity between observations from their scene descriptions and selects the most similar ones. Under ideal conditions, this approach can correctly identify the same scene and distinguish different scenes. However, when the viewing angle changes, observations of the same scene may be only locally similar, and the distortion of visual information may also change the extracted features. Both effects cause place recognition to fail and reduce its precision and recall. Based on the above, this thesis focuses on place recognition in SLAM under perspective change. The work mainly includes the following three aspects:

1. We propose a scene description based on sparse point cloud segmentation and multi-information fusion. According to the beta angle between spatial points, we segment the sparse point cloud of each observation, separating the shared and differing parts between observations of the same scene. For each segment, we extract visual descriptors and geometric descriptors to represent the visual semantic information and the geometric structure information in the scene. Such a scene description captures the scene content more accurately, which benefits the subsequent scene matching (a segmentation sketch is given after the abstract).

2. We propose a scene matching method based on the co-visibility graph and the segmented parts. Using the co-visibility relationship between keyframes, we detect adjacent keyframes to help eliminate false positive recognition results. For matching between keyframes, we measure the similarity between segments from their visual and geometric descriptors, match the shared parts between observations of the same scene, and discard the differing parts, thereby achieving correct scene content matching and overcoming the interference of local similarity and feature changes. The similarity score between two observations is a weighted combination of the similarities of their matched segments (a scoring sketch is given after the abstract). This makes the same scene easier to identify and different scenes easier to distinguish.

3. We designed targeted experimental scenarios and methods on the public datasets KITTI and EuRoC, and analyzed and verified the feasibility and effectiveness of the proposed approach. In addition, we implemented the method on top of ORB-SLAM3, enabling ORB-SLAM3 to perform loop closure and map fusion in many challenging scenarios and providing a perception guarantee, and packaged it with a graphical library to form a convenient and easy-to-use visual SLAM system.
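The abstract does not define the beta-angle criterion or the descriptors in detail, so the following is only a minimal Python sketch of the idea behind contribution 1: split a sparse point cloud into segments whenever the angle between the viewing directions of consecutive points exceeds a threshold, then describe each segment by fusing a simple geometric summary with pooled visual features. The function names, the threshold, and the centroid/extent/mean-pooling choices are illustrative assumptions, not the thesis' actual formulation.

```python
import numpy as np

def segment_sparse_cloud(points, angle_thresh_deg=10.0):
    """Split an (N, 3) sparse point cloud into segments.

    Hypothetical criterion: points are visited in stored order, and a new
    segment starts when the angle between the viewing directions of two
    consecutive points (measured from the sensor origin) exceeds the
    threshold. The thesis segments by a "beta angle" between spatial points
    whose exact definition is not given in the abstract.
    """
    points = np.asarray(points, dtype=float)
    # Unit viewing direction of every point as seen from the origin.
    dirs = points / np.linalg.norm(points, axis=1, keepdims=True)

    cos_thresh = np.cos(np.deg2rad(angle_thresh_deg))
    segments, current = [], [0]
    for i in range(1, len(points)):
        # Large angular jump between consecutive directions -> new segment.
        if np.dot(dirs[i - 1], dirs[i]) < cos_thresh:
            segments.append(current)
            current = []
        current.append(i)
    segments.append(current)
    return [points[idx] for idx in segments]

def describe_segment(segment_points, visual_features):
    """Toy multi-information descriptor for one segment: a geometric part
    (centroid and spatial extent) concatenated with a pooled visual part
    (e.g. the mean of per-point visual feature vectors)."""
    centroid = segment_points.mean(axis=0)
    extent = segment_points.max(axis=0) - segment_points.min(axis=0)
    visual = visual_features.mean(axis=0)
    return np.concatenate([centroid, extent, visual])
```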
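Likewise, a hedged sketch of the scoring idea in contribution 2: greedily match segments between two observations by descriptor similarity, discard segments whose best match falls below a threshold (the "different parts"), and combine the similarities of the matched pairs into a single observation-level score. The greedy assignment, the cosine metric, the threshold, and the uniform weighting are assumptions; the thesis may weight matched segments differently and additionally filters candidates using the co-visibility graph between keyframes.

```python
import numpy as np

def segment_similarity(desc_a, desc_b):
    """Cosine similarity between two segment descriptors."""
    return float(np.dot(desc_a, desc_b) /
                 (np.linalg.norm(desc_a) * np.linalg.norm(desc_b) + 1e-12))

def observation_similarity(segments_a, segments_b, match_thresh=0.8):
    """Greedy one-to-one matching of segments between two observations.

    Segments whose best similarity is below the threshold are treated as
    the differing parts and discarded; the observation-level score is the
    (uniformly weighted) mean similarity over the matched pairs.
    """
    matched, used_b = [], set()
    for desc_a in segments_a:
        candidates = [(segment_similarity(desc_a, desc_b), j)
                      for j, desc_b in enumerate(segments_b)
                      if j not in used_b]
        if not candidates:
            break
        best_score, best_j = max(candidates)
        if best_score >= match_thresh:   # keep only the shared parts
            matched.append(best_score)
            used_b.add(best_j)
    return sum(matched) / len(matched) if matched else 0.0
```

In such a scheme, calling observation_similarity on the segment descriptors of a query keyframe and a loop-closure candidate yields a single score that can be thresholded to accept or reject the place-recognition hypothesis; discarding low-similarity segments is what makes the score robust to observations that are only locally similar.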
Keywords/Search Tags: SLAM, Place Recognition, Sparse Point Cloud Segmentation, Perspective Change