
Research On Key Technologies Of Robot Localization And Mapping In Large-Scale Scenes Based On Multi-Source Sensor Fusion

Posted on: 2024-05-03 | Degree: Master | Type: Thesis
Country: China | Candidate: S Y Huang | Full Text: PDF
GTID: 2568306944970749 | Subject: Communication engineering
Abstract/Summary: PDF Full Text Request
With the progress of localization, navigation, computer vision, machine learning, and related technologies, unmanned driving systems have developed rapidly. An unmanned driving system comprises modules for perception, localization, mapping, path planning, and decision making. SLAM (Simultaneous Localization and Mapping) has become the key technology for localization and mapping in unmanned vehicle systems: it integrates a family of techniques that allow a robot to localize itself in an unfamiliar environment while building a map of that environment. A SLAM system can use different types of sensors, including 2D/3D lidar, cameras, an IMU (Inertial Measurement Unit), and odometry. However, SLAM systems based on a single sensor perform differently across environments and are sensitive to environmental conditions to varying degrees; for example, cameras are easily affected by lighting, and lidar is prone to drift in open, featureless environments. In large-scale scenes, the robot additionally faces large accumulated odometry error, data explosion, and inefficient map storage.

To address these problems, this paper fuses information from three heterogeneous sensors (camera, lidar, and IMU), combines traditional geometric methods with deep learning methods, and achieves stable and reliable localization and mapping in large-scale scenes. The methods are verified on open standard datasets and in a real campus environment. The main work of this paper covers three aspects:

1. To address pose drift caused by the robot's inability to localize itself accurately in degraded scenes, this paper proposes converting 3D lidar data into 2D grayscale images and then performing lidar odometry estimation. First, a degradation-detection module designed in this paper is used to screen
the degraded frames in the lidar data. Second, the L2I (Lidar to Image) model proposed in this paper converts the 3D lidar data into 2D grayscale images. Then, based on the grayscale images, the deep-learning network SuperGlue is used for pose estimation, yielding high-precision lidar odometry in degraded environments. The proposed method is validated on the public KITTI benchmark and compared with LeGO-LOAM. Experimental results show that, across seven typical unmanned-driving trajectory scenarios, the proposed algorithm not only achieves an ATE (Absolute Trajectory Error) improvement of up to 1.18%, but also runs at better than 20 Hz.

2. To address the high failure rate of loop closure detection in large-scale complex environments, this paper proposes a loop closure detection method based on two cooperating threads: a real-time thread and a background thread. The real-time thread runs fast and performs loop detection and pose correction in real time. The background thread performs a second pass of loop closure detection over historical data, reducing the probability of false loop closures from the real-time thread at the cost of real-time performance; the two threads are therefore complementary. For the real-time thread, this paper proposes a new coordinate-system encoding for lidar data that incorporates reflection intensity, solving the translation-invariance problem of the traditional Scan-Context descriptor. The background thread targets false loops in visually similar real-world environments: the deep-learning network MinkLoc++ extracts a global descriptor from fused lidar and image data, the Faiss library is used to screen candidate frames, and loop frames are then confirmed through inter-frame constraints. The proposed method is verified on KITTI and compared with the traditional
Scan-Context. Experimental results show that the proposed method improves the F1-score (the harmonic mean of precision and recall) by 20%.

3. To verify the overall performance of the methods proposed above, a low-speed unmanned vehicle verification system was built by integrating a 16-beam 3D lidar, an RGB-D camera, an IMU, and wheel-speed odometers on a four-wheel-drive chassis. While building the system, this paper solved problems that arise in large-scale scenes such as map loading, map storage footprint, and multi-sensor data communication conflicts. Finally, the system was tested on the campus of Beijing University of Posts and Telecommunications. Experimental results show that, in five campus experiment scenes, the proposed method achieves a 100% loop-closure-detection success rate, whereas LeGO-LOAM achieves only 20%, demonstrating that the system designed in this paper has better localization and mapping performance.
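The lidar-to-image conversion at the heart of contribution 1 can be illustrated with a generic spherical-projection sketch. This is not the thesis's actual L2I model (whose internals are not described here); the image resolution, vertical field of view, and the choice of reflectance intensity as the gray value are all assumptions for illustration.

```python
import numpy as np

def lidar_to_gray_image(points, h=64, w=1024, fov_up=15.0, fov_down=-15.0):
    """Project a 3D lidar scan (N x 4 array: x, y, z, intensity) onto a
    2D grayscale image via spherical projection.

    A minimal sketch of the lidar-to-image idea, assuming a 64 x 1024
    image and a +/-15 degree vertical field of view.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    intensity = points[:, 3]
    r = np.maximum(np.linalg.norm(points[:, :3], axis=1), 1e-6)

    yaw = np.arctan2(y, x)        # horizontal angle, [-pi, pi]
    pitch = np.arcsin(z / r)      # vertical angle

    fov_up_rad = np.radians(fov_up)
    fov = np.radians(fov_up - fov_down)

    # Map angles to pixel coordinates.
    u = np.clip((0.5 * (1.0 - yaw / np.pi) * w).astype(np.int32), 0, w - 1)
    v = np.clip(((fov_up_rad - pitch) / fov * h).astype(np.int32), 0, h - 1)

    img = np.zeros((h, w), dtype=np.float32)
    img[v, u] = intensity         # grayscale value from lidar reflectance
    return img
```

Once the scan is in this 2D form, standard image feature matchers (SuperGlue in the thesis) can be applied between consecutive frames to estimate relative pose.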
Keywords/Search Tags: unmanned driving, SLAM, large-scale, odometry, loop closure detection