
Research On Simultaneous Localization And Mapping Based On Multi-sensor Fusion

Posted on: 2022-10-22
Degree: Master
Type: Thesis
Country: China
Candidate: G S Zhao
Full Text: PDF
GTID: 2518306545490604
Subject: Control Engineering
Abstract/Summary:
With the rapid development of the Internet and artificial intelligence technology, mobile robots have gradually moved toward intelligence. Simultaneous Localization and Mapping (SLAM) is the foundation on which mobile robots achieve this intelligence. However, due to its inherent limitations, a monocular visual SLAM system can no longer meet the accuracy and real-time requirements placed on it. This paper therefore studies multi-sensor fusion technology to improve the accuracy and timeliness of the monocular visual SLAM system. The research covers the following four aspects:

(1) An in-depth study of the motion estimation problem in monocular visual SLAM. The classic monocular visual SLAM framework can be divided into four parts: the visual front end, the back end, loop closing, and mapping. To address the uneven distribution and clustering of features produced by the ORB extraction algorithm in the front-end monocular visual odometry, an improved ORB uniform feature extraction algorithm is proposed (a schematic sketch of grid-based uniform selection follows the abstract). Combined with pyramid LK optical flow tracking, feature optimization, and the random sample consensus (RANSAC) algorithm, it improves the accuracy and efficiency of feature matching.

(2) To address the unfixed trajectory and map scale caused by the missing absolute scale of monocular visual SLAM, as well as the motion blur and target loss caused by violent shaking and rapid camera motion, this paper fuses inertial measurement unit (IMU) information with visual information to recover the absolute scale of the camera, and exploits the high-frequency measurements of the IMU to support rapid camera motion and reduce tracking loss. To handle the uncertainty of the IMU bias, an IMU initialization method based on maximum a posteriori estimation is proposed, which uses loose coupling to estimate the initial values for the fusion of visual and IMU data.

(3) To limit the heavy computation caused by estimating the states of many keyframes simultaneously, a sliding window combined with sparse marginalization is introduced. The visual, IMU, and marginalization prior residual terms are assembled into a single cost function (a schematic form is sketched after the abstract), and state estimation is completed by a tightly coupled nonlinear optimization algorithm based on dynamic weights, improving the processing efficiency of the system. Loop closing is introduced to reduce the accumulated error of one-way error propagation, and pose graph optimization is used to build a globally consistent pose graph and a sparse point cloud map; finally, a complete visual-inertial SLAM system is constructed.

(4) The research is tested and analyzed on the EuRoC dataset under Ubuntu 16.04 with ROS. First, the extraction efficiency and uniformity of the proposed uniform feature extraction algorithm are verified; then, a comparison between coarse-to-fine pyramid optical flow matching and brute-force matching is completed; next, the proposed IMU initialization method is tested both on its own and within a loosely coupled joint initialization, verifying the convergence speed of the initial values and the robustness of the joint initialization; finally, localization and mapping experiments verify the importance of loop closing in different environments, and the system is compared against the dataset ground truth and the existing OKVIS system, showing improved localization accuracy.
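Note: the abstract does not describe the improved ORB uniform extraction algorithm of point (1) in detail. The following Python sketch only illustrates the generic grid-bucketing idea commonly used to spread keypoints evenly across the image (keep at most a few strongest responses per cell); the grid size, per-cell budget, and the select_uniform helper are illustrative assumptions, not the thesis's actual method.

# Minimal sketch: grid-based uniform keypoint selection (illustrative only).
from typing import Dict, List, Tuple

Keypoint = Tuple[float, float, float]  # (x, y, response score)

def select_uniform(keypoints: List[Keypoint],
                   img_w: int, img_h: int,
                   grid_cols: int = 8, grid_rows: int = 6,
                   per_cell: int = 5) -> List[Keypoint]:
    """Partition the image into a grid and keep at most `per_cell` strongest
    keypoints in each cell, so features are not stacked in a few
    texture-rich regions."""
    cell_w = img_w / grid_cols
    cell_h = img_h / grid_rows
    buckets: Dict[Tuple[int, int], List[Keypoint]] = {}
    for kp in keypoints:
        x, y, _ = kp
        cx = min(int(x / cell_w), grid_cols - 1)
        cy = min(int(y / cell_h), grid_rows - 1)
        buckets.setdefault((cx, cy), []).append(kp)

    selected: List[Keypoint] = []
    for cell_kps in buckets.values():
        # Strongest responses first; truncate to the per-cell budget.
        cell_kps.sort(key=lambda kp: kp[2], reverse=True)
        selected.extend(cell_kps[:per_cell])
    return selected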
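Point (3) describes a cost function built from visual, IMU, and marginalization prior residuals, minimized by tightly coupled nonlinear optimization. The thesis's exact formulation, including its dynamic weighting, is not given in the abstract; the standard sliding-window visual-inertial objective that such systems (e.g., VINS-Mono-style) use can be written in LaTeX as:

\min_{\mathcal{X}} \Big\{ \big\| \mathbf{r}_p - \mathbf{H}_p \mathcal{X} \big\|^2
+ \sum_{k \in \mathcal{B}} \big\| \mathbf{r}_{\mathcal{B}}\big(\hat{\mathbf{z}}_{b_k}^{b_{k+1}}, \mathcal{X}\big) \big\|_{\mathbf{P}_{b_k}^{b_{k+1}}}^{2}
+ \sum_{(l,j) \in \mathcal{C}} \rho\Big( \big\| \mathbf{r}_{\mathcal{C}}\big(\hat{\mathbf{z}}_{l}^{c_j}, \mathcal{X}\big) \big\|_{\mathbf{P}_{l}^{c_j}}^{2} \Big) \Big\}

where \mathcal{X} collects the sliding-window states, the first term is the marginalization prior, the second sums IMU preintegration residuals between consecutive keyframes, and the third sums visual reprojection residuals under a robust kernel \rho(\cdot).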
Keywords/Search Tags:Simultaneous Localization and Mapping, multi-sensor fusion, feature extraction, IMU initialization, tightly coupled