
Research On Multi-sensor Information Fusion Based Monocular Visual SLAM Algorithm

Posted on: 2022-06-20    Degree: Doctor    Type: Dissertation
Country: China    Candidate: M X Quan    Full Text: PDF
GTID: 1488306569984239    Subject: Computer application technology
Abstract/Summary:
Monocular visual simultaneous localization and mapping (SLAM) is a technology that uses measurements from an onboard monocular camera to simultaneously localize a robot and build a map of an unknown environment, without prior information. Monocular visual SLAM is widely used on small, low-power mobile robot platforms owing to its low computational cost, but it is scale-ambiguous and lacks robustness. Therefore, fusing measurements from a monocular camera with those of other sensors to achieve accurate and robust localization has become a research hotspot in recent years. On the basis of a thorough consideration of the characteristics of each sensor, this dissertation studies multi-sensor information fusion based monocular visual SLAM algorithms. The main research work of the dissertation can be divided into the following sections:

Firstly, we proposed a monocular visual-inertial SLAM (VISLAM) algorithm based on a complementary framework of the extended Kalman filter (EKF) and graph optimization, which achieves low-computational-cost, high-precision localization for mobile robots moving in 3D space. The algorithm runs EKF-based monocular visual-inertial odometry (VIO) for each frame to provide a low-latency motion estimate. Then, based on the selected keyframes, the algorithm constructs a global map and performs visual-inertial graph optimization and loop optimization in parallel threads to optimize the global map, so that a globally consistent map is built. Finally, we proposed a global-map-assisted EKF feedback mechanism, applied at keyframes to correct the motion estimate of the EKF-based monocular VIO, which improved its localization accuracy. Experimental results demonstrate that, compared with an EKF-based monocular VIO algorithm, the proposed algorithm achieves higher localization accuracy at a similar computational cost.

Secondly, we proposed a monocular VISLAM algorithm based on an inertial-aided visual point feature tracking method, which improved the 
localization accuracy of mobile robots moving in 3D space by increasing the tracking length and accuracy of visual point features. First, we proposed an inertial-aided motion compensation method for visual point features, which improved the robustness of our feature tracker to fast camera motion and increased the tracking length of visual point features, so that monocular VISLAM makes better use of the geometric information of the environment. Then, we proposed a visual point feature alignment method based on multi-reference, multi-level image patches, which improved the localization accuracy of monocular VISLAM by improving the tracking accuracy of visual point features. Experimental results demonstrate that, compared with monocular VISLAM algorithms that use classical visual point feature tracking methods, the proposed algorithm achieves better localization accuracy by providing better feature tracking results.

Thirdly, we proposed a monocular visual SLAM algorithm that fuses wheel odometer and gyroscope information in a graph optimization framework, which improved the localization accuracy and robustness of ground robots. First, we proposed a combined wheel odometer and gyroscope preintegration model on manifold, which effectively avoids the increase in computational cost induced by repeatedly re-integrating wheel odometer and gyroscope measurements. Then, based on the preintegration model, we introduced a combined wheel odometer and gyroscope preintegration error term and tightly integrated it into the visual optimization framework. Next, we proposed a simple map initialization method to rapidly bootstrap the subsequent motion estimation. Finally, we proposed a complete motion estimation mechanism to maximally exploit monocular visual point feature, wheel odometer and gyroscope information, which improved the accuracy and robustness of the system. Experimental results demonstrate that, compared with state-of-the-art monocular VISLAM and visual-odometer SLAM 
algorithms, the proposed algorithm provides more accurate and robust motion estimates for ground robots.

Lastly, we proposed a monocular visual SLAM algorithm that fuses visual point feature, 3D line feature, ground line feature and wheel odometer information in a graph optimization framework, which improved the localization accuracy of ground robots in low-textured scenes and builds a structural map. First, we proposed two parameterization methods, with the corresponding geometric computations, for lines on the ground, which avoid the growth of estimation uncertainty that results from applying an over-parameterized 3D line parameterization to ground lines. Then, we constructed a graph optimization method that tightly integrates monocular visual point feature, 3D line feature, ground line feature and wheel odometer information, employing different parameterizations for 3D lines and for lines on the ground. Finally, we proposed to process ground lines differently from 3D lines in all modules of the system, so that both kinds of lines are exploited optimally during localization and mapping. Experimental results demonstrate that, compared with the corresponding algorithms using visual point features, or visual point and line features, the proposed algorithm achieves better localization accuracy in low-textured scenes and constructs a more structural map.

In conclusion, building on monocular visual SLAM, this dissertation proposed a variety of multi-sensor information fusion schemes for mobile robots moving in 3D space and for ground robots, which effectively address the scale ambiguity and poor robustness of monocular visual SLAM, improve localization accuracy, and construct structural environmental maps. Therefore, this dissertation has great practical significance.
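The abstract itself gives no equations, but the EKF-based VIO cycle it describes rests on the standard predict/update structure. The following minimal NumPy sketch illustrates that generic structure only; the class and variable names are hypothetical and not taken from the dissertation, and a real VIO filter would propagate a full IMU state on manifold rather than a linear model.

```python
import numpy as np

class SimpleEKF:
    """Generic linear EKF cycle: IMU-driven prediction, visual update.
    Illustrative only; names and the linear model are assumptions."""

    def __init__(self, x0, P0):
        self.x = x0  # state estimate
        self.P = P0  # state covariance

    def predict(self, F, Q):
        # Propagate with (linearized) motion model F and process noise Q,
        # as the inertial prediction step of a VIO front end would.
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q

    def update(self, z, H, R):
        # Fuse a measurement z (e.g. a tracked visual feature) with
        # measurement Jacobian H and measurement noise covariance R.
        y = z - H @ self.x                    # innovation
        S = H @ self.P @ H.T + R              # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P
```

The global-map-assisted feedback mechanism described above would, in this picture, overwrite `self.x`/`self.P` at keyframes with the graph-optimized estimate, combining the low latency of filtering with the accuracy of batch optimization.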
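The benefit of the combined wheel odometer and gyroscope preintegration can be illustrated with a planar (SE(2)) toy, a much-simplified stand-in for the on-manifold model in the dissertation: the accumulated relative pose between two keyframes depends only on the raw measurements, so it need not be re-integrated each time the keyframe estimates change during optimization. The function name and interface below are illustrative assumptions.

```python
import math

def preintegrate_planar(gyro_yaw_rates, wheel_speeds, dt):
    """Accumulate the relative pose (dx, dy, dtheta) between two keyframes
    from gyroscope yaw-rate samples and wheel-odometer forward-speed samples
    (Euler integration, fixed step dt). Planar sketch of the combined
    preintegration idea, not the dissertation's on-manifold model."""
    dx = dy = dtheta = 0.0
    for w, v in zip(gyro_yaw_rates, wheel_speeds):
        # Integrate translation in the frame of the first keyframe,
        # rotating the body-frame velocity by the heading so far.
        dx += v * math.cos(dtheta) * dt
        dy += v * math.sin(dtheta) * dt
        dtheta += w * dt
    return dx, dy, dtheta
```

A preintegration error term, as described above, would then compare this measurement-only quantity against the relative pose implied by the two optimized keyframe poses.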
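The point about over-parameterization of ground lines can also be made concrete. A general 3D line needs 4 degrees of freedom, but a line constrained to the ground plane z = 0 needs only 2, e.g. a normal angle and a signed distance; estimating the extra degrees of freedom inflates uncertainty. The sketch below is a generic 2-DoF parameterization chosen for illustration; the dissertation's two actual parameterizations are not specified in this abstract.

```python
import math

def ground_line(theta, d):
    """2-DoF parameterization of a line in the ground plane z = 0:
    {(x, y) : x*cos(theta) + y*sin(theta) = d}. Returns homogeneous
    coefficients (a, b, c) with a*x + b*y + c = 0. Illustrative choice,
    not necessarily the dissertation's parameterization."""
    return (math.cos(theta), math.sin(theta), -d)

def point_line_distance(line, x, y):
    # Perpendicular distance from point (x, y) to the homogeneous line;
    # a residual of this form could serve as a ground-line error term.
    a, b, c = line
    return abs(a * x + b * y + c) / math.hypot(a, b)
```

Treating ground lines with such a minimal parameterization, while keeping a general parameterization for other 3D lines, matches the two-track handling of lines described above.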
Keywords/Search Tags: monocular visual SLAM, sensor fusion, IMU, visual point feature tracking, wheel odometer, visual line feature