
Research On SLAM Based On LiDAR/Visual Fusion (LV-SLAM)

Posted on: 2021-09-05
Degree: Doctor
Type: Dissertation
Country: China
Candidate: S B Chen
Full Text: PDF
GTID: 1480306290984289
Subject: Photogrammetry and Remote Sensing

Abstract/Summary:
Positioning is one of the core technologies of location services, the Internet of Things (IoT), and artificial intelligence (AI), and will play a pivotal role in the coming era of superintelligence. With the arrival of the "new technology revolution", human beings have put forward unprecedented requirements for positioning services: high accuracy, high real-time performance, high robustness, and high availability. Current research on positioning technologies is divided into outdoor localization and indoor localization according to the scene. The main solutions include the Global Navigation Satellite System (GNSS), the Inertial Navigation System (INS), smartphone indoor positioning, visual odometry (VO), LiDAR odometry (LO), and so on. However, these positioning methods based on a single sensor have certain limitations, and there is a growing contradiction between the increasing demand for positioning services and those limitations. Multi-sensor fusion technology can achieve high-precision, high real-time, highly robust, and highly available navigation, localization, and mapping, and is becoming a trend of future research. The advantages of multi-sensor information fusion lie mainly in three aspects: information redundancy, information complementarity, and low cost of information processing. Simultaneous localization and mapping (SLAM) has seen a large number of engineering practices and applications in many fields, such as intelligent robots, autonomous driving, mobile mapping, and AR/VR. The study of multi-sensor fusion SLAM therefore has important research value and great practical significance. The diversity of sensors and the differences in application scenarios lead to a diversity of technical solutions. LiDAR and visual sensors can complement each other, but existing SLAM solutions based on LiDAR/visual fusion still have several problems: (1) the calibration accuracy of the hardware sensors is not high, and the spatial
reference is not uniform; (2) there is a contradiction between positioning accuracy and real-time processing efficiency; (3) information fusion is insufficient and sensor characteristics are under-used. Therefore, this paper focuses on SLAM based on LiDAR/visual fusion (LV-SLAM). The solution makes full use of the advantages of both sensors and concentrates on solving these key problems, realizing accurate, real-time, robust, and universal location services. The main works are as follows:

(1) Based on an introduction to the underlying physics and mathematics, we build the core technology architecture of a complete LiDAR/visual SLAM with "hardware-frontend-midend-backend". First, from the perspective of sensor hardware, Euclidean spatial transformations, sensor models, and sensor time synchronization are introduced, with a focus on a unified spatiotemporal reference. Second, from the perspective of basic mathematical theory, gradient descent and methods for solving nonlinear least-squares problems are described in detail around the topic of nonlinear optimization. Finally, the core technology architecture of a complete SLAM system is constructed, with a hardware end of sensor calibration, a frontend of LiDAR direct odometry, a mid-end of local feature adjustment, and a backend of loop detection and graph optimization.

(2) To address the problem that "the calibration accuracy of the hardware sensor is not high and the spatial reference is not uniform", a high-precision external calibration method for a LiDAR/camera system using infrared images is proposed, with a calibration accuracy reaching the level of a laser footprint. First, the importance of high-precision external calibration of the LiDAR/visual system is explained, the shortcomings of traditional calibration methods are analyzed, the basic concepts and advantages of infrared photography are introduced, and the feasibility of infrared images for calibration is demonstrated. Secondly, the basic concepts and principles of
external calibration of LiDAR/visual sensors are explained, a new external calibration method using infrared images is proposed, a high-precision external calibration model is established, and the implementation process and technical details of the method are introduced in detail. The experiments comprehensively evaluate and analyze the accuracy of the method from three different aspects.

(3) Aiming at "the contradiction between positioning accuracy and real-time processing efficiency", a LiDAR direct odometry based on weighted NDT, together with its local feature adjustment, is proposed. First, building on a detailed introduction to the feature-based and direct methods of visual odometry, the feature-based and direct methods of LiDAR odometry are summarized and introduced, and the direct odometry and its local feature adjustment (DO-LFA) are explained as the LiDAR SLAM frontend (and mid-end). Secondly, a LiDAR direct odometry based on weighted NDT is proposed, adding distance and surface-characteristic weights for each voxel to classic NDT matching. Lie-algebra derivatives and a key-frame selection strategy improve the accuracy of the odometry, resolving the contradiction between positioning accuracy and real-time processing efficiency. Then, the feature extraction, feature connection, and pose adjustment of the LFA are described in detail, and the key steps and complete technical solution of DO-LFA are presented. Finally, the accuracy and efficiency of this solution are evaluated and analyzed through detailed experiments on the KITTI benchmark and WHU-Kylin backpack data.

(4) Aiming at the problem of "inadequate information fusion and under-utilization of sensor characteristics", a loop detection technique based on visual BOW similarity and point cloud rematching is proposed, and a LiDAR/visual SLAM solution based on loop detection and global graph optimization is constructed. First, the significance of loop detection is introduced. Combining the geometry-based
and appearance-based methods, loop detection is performed with visual BOW similarity and point cloud rematching. Then, the theory of the construction and optimization of the global pose graph is explained in detail. Built on DO-LFA, global pose-graph optimization further improves the accuracy of key-frame positioning and the consistency of the global map, realizing a full fusion of the measurement information. Experiments on two different platforms comprehensively analyze and discuss the accuracy and robustness of the algorithm, in terms of positioning-trajectory accuracy, map consistency, and comparison with Cartographer.
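The nonlinear least-squares machinery described in work (1) is the computational core of most SLAM backends. As a rough illustration only (not the dissertation's own implementation), a minimal Gauss-Newton iteration for a generic residual function might look like the following, here fitting the hypothetical model y = a·exp(b·t):

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=20, tol=1e-10):
    """Minimize 0.5 * ||residual(x)||^2 by Gauss-Newton iteration.

    residual: function R^n -> R^m
    jacobian: function R^n -> R^{m x n}
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        # Normal equations: (J^T J) dx = -J^T r
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy example: recover (a, b) = (2.0, 0.5) from exact samples.
t = np.linspace(0.0, 1.0, 10)
y = 2.0 * np.exp(0.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t),
                                 p[0] * t * np.exp(p[1] * t)])
p = gauss_newton(res, jac, [1.0, 0.0])
```

In a real SLAM backend the state x would be poses on a manifold and the update would use the Lie-algebra derivatives the dissertation describes; the sketch above only shows the Euclidean case.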
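Work (3) augments classic NDT matching with per-voxel distance and surface-characteristic weights. The exact weighting scheme is the dissertation's contribution and is not reproduced here; the hypothetical sketch below only illustrates the general shape of a weighted NDT score, where each voxel stores a Gaussian (mean, covariance) and a weight derived from an assumed planarity measure:

```python
import numpy as np

def voxel_stats(points):
    """Mean and covariance of the points falling in one voxel."""
    mu = points.mean(axis=0)
    cov = np.cov(points.T) + 1e-6 * np.eye(3)  # regularize
    return mu, cov

def planarity_weight(cov):
    """Hypothetical surface-characteristic weight in [0, 1]:
    larger for flat, plane-like voxels, based on the eigenvalue
    spread of the voxel covariance."""
    lam = np.sort(np.linalg.eigvalsh(cov))  # ascending
    return (lam[1] - lam[0]) / (lam[2] + 1e-12)

def weighted_ndt_score(point, mu, cov, weight):
    """Weighted NDT likelihood contribution of one scan point."""
    d = point - mu
    return weight * np.exp(-0.5 * d @ np.linalg.solve(cov, d))

# A planar voxel: points scattered on a thin slab around z = 0.
rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 3)) * np.array([1.0, 1.0, 0.01])
mu, cov = voxel_stats(pts)
w = planarity_weight(cov)
score = weighted_ndt_score(np.array([0.0, 0.0, 0.0]), mu, cov, w)
```

Summing such scores over all scan points and maximizing over the pose yields the registration objective that the weighted variant biases toward reliable, planar surface voxels.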
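Work (4) triggers loop detection by comparing visual bag-of-words (BOW) descriptors before verifying candidates with point-cloud rematching. As a minimal, library-free sketch (a stand-in for a real BOW vocabulary such as DBoW2, with made-up histograms and thresholds), loop candidates could be scored by cosine similarity of word histograms:

```python
import numpy as np

def bow_similarity(h1, h2):
    """Cosine similarity between two bag-of-words histograms."""
    h1 = h1 / (np.linalg.norm(h1) + 1e-12)
    h2 = h2 / (np.linalg.norm(h2) + 1e-12)
    return float(h1 @ h2)

def detect_loop(query, keyframes, threshold=0.8, min_gap=10):
    """Indices of past keyframes whose BOW histogram is similar
    enough to the query to be a loop candidate.

    min_gap excludes recent frames, which are trivially similar;
    surviving candidates would then be verified geometrically
    (here, by point cloud rematching).
    """
    n = len(keyframes)
    return [i for i in range(n - min_gap)
            if bow_similarity(query, keyframes[i]) >= threshold]

# Toy vocabulary of 5 visual words; the place seen in frame 0
# is revisited at the end of the trajectory.
frames = [np.array([5, 1, 0, 0, 2], float)]
frames += [np.array([0, 3, 4, 1, 0], float) for _ in range(12)]
query = np.array([4, 1, 0, 0, 2], float)  # looks like frame 0
candidates = detect_loop(query, frames)   # -> [0]
```

Each verified loop closure then adds a relative-pose constraint to the global pose graph, whose optimization distributes the accumulated drift over the trajectory.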
Keywords/Search Tags:Simultaneous localization and mapping, SLAM, Sensor fusion, LiDAR scan matching, Sensor calibration, Loop detection, Global graph optimization