
Multi-source Fusion SLAM Technology

Posted on: 2024-10-07    Degree: Doctor    Type: Dissertation
Country: China    Candidate: H Y Xie    Full Text: PDF
GTID: 1528307334950449    Subject: Control Science and Engineering
Abstract/Summary:
SLAM (Simultaneous Localization and Mapping), a core technology in mobile robotics, enables a robot to localize itself and incrementally build a map of an unknown environment at the same time, so that it can navigate freely and perform tasks using sensor data and motion information. In recent years, SLAM has found widespread application in industrial and commercial sectors, including healthcare services, autonomous driving, logistics AGVs, and virtual reality. Despite significant progress, practical deployments still face numerous real-world challenges, such as extreme weather, environmental degradation, dynamic interference, sensor noise, and the construction and maintenance of large-scale scene maps. To address these challenges, this dissertation explores the integration of multi-source information to improve the stability and efficiency of localization and mapping algorithms in complex environments. The multi-source information considered includes various feature primitives, scene characteristic information, heterogeneous sensor data, and data collected independently by multiple robots. The main content and research findings are as follows:

(1) For scenarios with low or missing texture, we develop a real-time visual SLAM method that fuses point and line features in a semi-direct, multi-map framework. First, a multi-threaded parallel visual SLAM framework is established, incorporating tracking, local mapping, loop detection, and multi-map data association. Second, motion on non-keyframes is estimated by a pixel-alignment method that fuses points and line segments; point and line features are extracted on keyframes, and the camera trajectory and point-cloud map are optimized with a reprojection error model that combines both feature types. This improves system efficiency while maintaining operational stability. Finally, a visual bag-of-words model is employed for loop closure and sub-map fusion, which reduces accumulated drift in the camera motion and jointly optimizes the global map and trajectory.

(2) To handle illumination changes, weak textures, dynamic occlusions, and smoke, this dissertation integrates multiple feature primitives and scene characteristic information into the visual SLAM framework, aiming for robust, high-precision, real-time 6-DoF (Degrees of Freedom) pose estimation in such complex environments. Against disturbances from dynamic objects, YOLOv4-tiny combined with motion-consistency analysis is used to eliminate dynamic objects, after which a multi-feature fusion strategy based on point and line features tracks the camera motion. For the degraded image gradients of hazy environments, we propose a deep-learning-based method for synthesizing smoke datasets and use image-enhancement preprocessing to improve the robustness of the SLAM algorithm in hazy scenes. Finally, with the aid of YOLOv3's 2D semantic analysis and a graph-cut-based 3D point-cloud segmentation method, we achieve 3D semantic annotation of small-scale indoor scenes.

(3) To meet the accuracy and robustness requirements of multi-robot cooperative localization and mapping in dynamic indoor environments, this dissertation proposes a method that incorporates learning-based features. First, an object-tracking algorithm combining YOLACT and DeepSORT identifies potentially dynamic objects. Then a single-robot SLAM system is constructed that estimates camera poses with a tightly coupled visual-inertial odometry and builds sub-maps. Finally, the sub-maps are reconstructed on the server side: loop-closure detection within the sub-maps drives sub-map stitching and camera-trajectory integration, and pose-graph optimization yields a globally consistent map.

(4) For multi-robot cooperative environment exploration by heterogeneous robots in large-scale dynamic scenarios, this dissertation investigates an environment-exploration strategy and a data-association method for multi-robot systems. Initially, GPS data from a campus environment serve as prior information, and a heuristic algorithm partitions the robots' workspaces and guides navigation. Then a factor-graph framework integrating LiDAR odometry factors, IMU inertial factors, and loop-closure factors is constructed, and single-robot SLAM is realized through factor-graph optimization. Finally, by combining 3D point-cloud descriptors with GPS data, geometrically consistent data association across multiple robots is achieved.

For the core issues and findings above, this dissertation conducts comprehensive experimental evaluation and validation on real-world experiments of various types and scenes as well as on open-source datasets. The results confirm the reliability and advantages of the proposed methods, providing theoretical exploration and experimental verification toward long-term reliable localization and environment-model construction for mobile robots in real-world scenarios.
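The point-line reprojection error underlying contribution (1) can be illustrated with a minimal sketch. All names here are illustrative, not the dissertation's code: a pinhole camera with intrinsics fx, fy, cx, cy is assumed, the point residual is the usual Euclidean reprojection error, and the line residual is taken as the distance of the projected segment endpoints to the observed 2D line in homogeneous form.

```python
import math

def project(fx, fy, cx, cy, X):
    """Pinhole projection of a 3D point X = (x, y, z) in the camera frame."""
    x, y, z = X
    return (fx * x / z + cx, fy * y / z + cy)

def point_residual(obs, proj):
    """Euclidean reprojection error of a point feature."""
    return math.hypot(obs[0] - proj[0], obs[1] - proj[1])

def line_residual(line, p, q):
    """Distance of projected segment endpoints p, q to the observed 2D line
    given in homogeneous form (a, b, c), i.e. a*u + b*v + c = 0."""
    a, b, c = line
    n = math.hypot(a, b)
    return (abs(a * p[0] + b * p[1] + c) + abs(a * q[0] + b * q[1] + c)) / n

def total_cost(point_terms, line_terms):
    """Joint point-line cost, the quantity a point-line bundle adjustment
    would minimize over the camera trajectory and map."""
    return (sum(point_residual(o, p) ** 2 for o, p in point_terms)
            + sum(line_residual(l, p, q) ** 2 for l, p, q in line_terms))
```

In the actual system this cost would be minimized over camera poses and landmarks with a nonlinear least-squares solver; the sketch only shows how point and line terms enter one shared objective.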
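The dynamic-object rejection in contribution (2) can be sketched as a two-stage filter. This is a simplified stand-in, not the dissertation's implementation: the bounding boxes would come from a detector such as YOLOv4-tiny, and the motion-consistency analysis is reduced here to a median-flow deviation check over feature matches.

```python
def in_any_box(pt, boxes):
    """True if keypoint (u, v) lies inside any detected dynamic-object box
    (x_min, y_min, x_max, y_max)."""
    u, v = pt
    return any(x0 <= u <= x1 and y0 <= v <= y1 for x0, y0, x1, y1 in boxes)

def median(vals):
    s = sorted(vals)
    m = len(s) // 2
    return s[m] if len(s) % 2 else 0.5 * (s[m - 1] + s[m])

def filter_static_matches(matches, boxes, thresh=3.0):
    """Keep matches ((u1, v1), (u2, v2)) that lie outside dynamic boxes and
    whose flow agrees with the median flow of all matches."""
    flows = [(p2[0] - p1[0], p2[1] - p1[1]) for p1, p2 in matches]
    mu = median([f[0] for f in flows])
    mv = median([f[1] for f in flows])
    kept = []
    for (p1, p2), (fu, fv) in zip(matches, flows):
        if in_any_box(p2, boxes):
            continue                      # inside a detected dynamic object
        if abs(fu - mu) > thresh or abs(fv - mv) > thresh:
            continue                      # motion inconsistent with the scene
        kept.append((p1, p2))
    return kept
```

Only the surviving matches would then feed the point-line tracking stage, so moving objects neither anchor features nor bias the pose estimate.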
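The server-side sub-map stitching in contribution (3) hinges on the relative transform recovered at a loop closure between two sub-maps. A minimal SE(2) sketch with hypothetical helper names (the dissertation works in full SE(3) with pose-graph optimization; this shows only the frame algebra of the stitching step):

```python
import math

def se2_compose(a, b):
    """Compose two SE(2) poses (x, y, theta)."""
    ax, ay, at = a
    bx, by, bt = b
    c, s = math.cos(at), math.sin(at)
    return (ax + c * bx - s * by, ay + s * bx + c * by, at + bt)

def se2_inverse(a):
    """Inverse of an SE(2) pose: (R, t) -> (R^T, -R^T t)."""
    x, y, t = a
    c, s = math.cos(t), math.sin(t)
    return (-c * x - s * y, s * x - c * y, -t)

def stitch_submap(traj_b, kf_a, kf_b, loop):
    """Express sub-map B's trajectory in sub-map A's frame, given a loop
    closure 'loop' measuring keyframe kf_b (posed in B) from keyframe kf_a
    (posed in A):  T_ab = T_a(kf_a) * loop * T_b(kf_b)^-1."""
    T_ab = se2_compose(se2_compose(kf_a, loop), se2_inverse(kf_b))
    return [se2_compose(T_ab, p) for p in traj_b]
```

After this rigid alignment, a pose-graph optimizer would distribute the residual loop-closure error over both trajectories to obtain the globally consistent map described in the abstract.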
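The GPS-gated data association in contribution (4) can be sketched as follows. The descriptor here is a deliberately crude stand-in, a normalized histogram of point ranges; real systems would use a richer 3D point-cloud descriptor such as Scan Context, and the function names are assumptions for illustration.

```python
import math

def ring_descriptor(points, bins=8, max_range=20.0):
    """Crude rotation-invariant place descriptor: a normalized histogram of
    point ranges measured from the sensor origin."""
    hist = [0.0] * bins
    for x, y, z in points:
        r = math.sqrt(x * x + y * y + z * z)
        if r < max_range:
            hist[min(int(r / max_range * bins), bins - 1)] += 1.0
    n = sum(hist) or 1.0
    return [h / n for h in hist]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def associate(places_a, places_b, gps_gate=30.0, sim_thresh=0.9):
    """Match places across two robots; each place is (gps_xy, descriptor).
    GPS gating prunes candidates before descriptors are compared."""
    pairs = []
    for i, (ga, da) in enumerate(places_a):
        for j, (gb, db) in enumerate(places_b):
            if math.dist(ga, gb) > gps_gate:
                continue              # too far apart to be the same place
            if cosine(da, db) >= sim_thresh:
                pairs.append((i, j))
    return pairs
```

Gating by GPS first keeps the descriptor comparison geometrically consistent and reduces the cross-robot matching from quadratic in all places to only nearby candidate pairs.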
Keywords/Search Tags: simultaneous localization and mapping, visual localization, multi-modal features, multi-sensor fusion, multi-robot systems