
Local Tracking And Global Localization Of Mobile Robot Based On Sensor Fusion Of Vision And LiDAR

Posted on: 2021-01-30
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Y G Chen
Full Text: PDF
GTID: 1368330602493450
Subject: Mechanical engineering

Abstract/Summary:
In the field of intelligent mobile robotics and its applications, smart mobile devices are expected to understand the outside world, perform localization and mapping in real time, and carry out intelligent navigation on the resulting map. In recent years, modeling and localization based on multi-sensor fusion has attracted growing attention because of the richness of the fused sensor feedback; its application to mobile robots is commonly referred to as sensor-fusion navigation. Research by many scholars in the mobile robotics community has produced representative achievements in autonomous navigation, and in particular many milestones in the accuracy of robot self-localization. Nevertheless, when intelligent navigation and localization systems are applied to practical problems, existing studies still suffer from a lack of robustness, which limits the improvement of mobile robot navigation in real-world applications. Aiming at the global re-localization and local tracking problems of autonomous navigation, this dissertation studies localization from several aspects and puts forward a series of innovative methods for environment modeling, sensor fusion, and localization supervision. Compared with existing methods, experiments show that the proposed methods achieve higher robustness and global re-localization accuracy and can meet the requirements of different situations. In detail, the contributions of this dissertation to mobile robot autonomous navigation are as follows.

In the aspect of environment modeling and representation, an environment representation suited to vision-LiDAR sensor fusion is proposed. Using the geometric features and dense measurements provided by LiDAR, a perspective-invariant geometric representation of the environment is constructed to achieve high-precision local tracking. In parallel, image information is used for key frame collection and topological localization, yielding a texture representation of the environment that, unlike the geometric features, improves the ability to re-localize globally. The two representations are then fused to satisfy the requirements that multi-sensor global localization and local tracking place on environment representation, supporting the practical pattern of coarse global localization followed by fine local tracking.

On top of the built environment model, key frame matching for vision-based global re-localization within the hybrid sensing framework is studied, and an efficient, high-precision key frame matching and global re-localization algorithm based on key frame clustering is proposed. Unlike existing image matching studies, the proposed algorithm uses K-Means to cluster the global descriptors of the sampled key frames, which reduces the number of matching-score computations and the matching time while improving the quality of the matching results. Moreover, by exploiting the relationships within clusters, key frame clustering reduces the information loss caused by global descriptor extraction, improving both the accuracy of key frame matching and the reliability of global re-localization.
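To make the clustering-based matching idea concrete, the following is a minimal Python sketch, not the dissertation's implementation: it assumes each map key frame is summarized by one unit-normalized global descriptor (random stand-ins here; in the dissertation these would come from a CNN or a handcrafted extractor), clusters the database offline with K-Means, and scores a query only against key frames in the few nearest clusters. All names, dimensions, and parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in database: one 256-D global descriptor per key frame,
# unit-normalized so a dot product is cosine similarity.
rng = np.random.default_rng(0)
db = rng.standard_normal((5000, 256)).astype(np.float32)
db /= np.linalg.norm(db, axis=1, keepdims=True)

# Offline: partition the key-frame descriptors into K clusters so a
# query is first compared against K centroids instead of all frames.
K = 64
km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(db)

def relocalize(query, top_clusters=3):
    """Return (best key-frame index, best score) for a query descriptor.

    Only frames in the clusters whose centroids are most similar to the
    query are scored, cutting the number of similarity computations
    roughly from N to N * top_clusters / K.
    """
    q = query / np.linalg.norm(query)
    centroid_sim = km.cluster_centers_ @ q          # rank clusters
    nearest = np.argsort(centroid_sim)[-top_clusters:]
    candidates = np.flatnonzero(np.isin(km.labels_, nearest))
    scores = db[candidates] @ q                     # cosine similarity
    best = candidates[np.argmax(scores)]
    return int(best), float(scores.max())

# Usage: a noisy copy of frame 42 should match back to it.
idx, score = relocalize(db[42] + 0.05 * rng.standard_normal(256))
```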
Based on the proposed key frame clustering and matching algorithm, the choice of optimal image global descriptor is studied and comparative results are provided. Deep convolutional neural networks are used to extract global descriptors of the images, alongside handcrafted descriptors. Unlike traditional methods such as GIST and V-BOW, a deep convolutional network has a multi-level architecture and is trained on large datasets, so the global descriptor it extracts carries expressive information at multiple levels, from micro to macro. Image matching based on such descriptors is therefore stronger than with traditional descriptors and is invariant to viewpoint and illumination changes. In particular, different kinds of global descriptors are compared, providing evidence for descriptor selection in varying situations.

Finally, in terms of re-localization triggering and self-monitoring of accumulated error, a re-localization trigger algorithm based on the assumption of continuous robot motion is proposed. Under this continuity assumption, continuous checking of the multi-sensor feedback, together with the coherence of consecutive key frames in the clustering algorithm, enables self-detection of high-noise sensor feedback. In addition, the consistency of the multi-sensor feedback allows accumulated error to be cut off and tracking to be re-initialized. Lastly, by combining the continuous-motion hypothesis with the key frame matching score, a confidence curve function for key frame matching is constructed to verify the credibility of the coarse global localization.
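As a hedged illustration of this supervision logic, the sketch below shows one possible trigger-and-verification loop. The class name, thresholds, and the specific confidence formula (an equal-weight blend of matching score and cluster coherence) are assumptions for illustration only, not the dissertation's actual confidence curve function.

```python
import numpy as np
from collections import deque

class RelocalizationSupervisor:
    """Illustrative sketch combining two ideas from the abstract:
    (1) motion continuity: per-step pose increments from the LiDAR
        tracker and from odometry should agree, otherwise tracking
        is eventually declared lost and re-localization is triggered;
    (2) a confidence score for a re-localization candidate that fuses
        the key-frame matching score with the coherence of the
        clusters hit by consecutive queries.
    All thresholds are hypothetical."""

    def __init__(self, disagree_tol=0.3, fail_patience=5, conf_thresh=0.6):
        self.disagree_tol = disagree_tol    # max tolerated |lidar - odom| step (m)
        self.fail_patience = fail_patience  # consecutive violations before trigger
        self.conf_thresh = conf_thresh      # min confidence to accept a relocation
        self.violations = 0
        self.recent_clusters = deque(maxlen=5)

    def update_tracking(self, lidar_step, odom_step):
        """Return True when global re-localization should be triggered."""
        gap = np.linalg.norm(np.asarray(lidar_step) - np.asarray(odom_step))
        self.violations = self.violations + 1 if gap > self.disagree_tol else 0
        return self.violations >= self.fail_patience

    def accept_relocation(self, match_score, cluster_id):
        """Fuse matching score with cluster coherence of recent queries."""
        self.recent_clusters.append(cluster_id)
        coherence = self.recent_clusters.count(cluster_id) / len(self.recent_clusters)
        confidence = 0.5 * match_score + 0.5 * coherence
        return confidence >= self.conf_thresh

# Usage: a large, sustained LiDAR/odometry disagreement trips the trigger.
sup = RelocalizationSupervisor()
triggered = any(sup.update_tracking([0.9, 0.0], [0.4, 0.1]) for _ in range(6))
```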
Keywords/Search Tags:Navigation, Mobile Robot, Image Matching, LiDAR, Sensor Fusion