
Research On SLAM Method Based On Multi-Camera Visual-Inertial Fusion

Posted on: 2020-01-15    Degree: Doctor    Type: Dissertation
Country: China    Candidate: C F Zhang    Full Text: PDF
GTID: 1368330572478947    Subject: Detection Technology and Automation
Abstract/Summary:
With the progress of society and the rapid development of artificial intelligence technology, autonomous mobile robots have drawn extensive attention. The study of the environment perception technique of Simultaneous Localization and Mapping (SLAM) is of great significance for improving the autonomy and intelligence of mobile robots. In recent years, visual sensors have been widely used in robotics owing to their rich image information and favorable cost-performance ratio. Current visual SLAM methods mostly focus on feature-point-based monocular approaches. However, these methods struggle to perceive environmental information in complex indoor environments with illumination changes, motion blur, and weak texture, and motion degeneration remains a thorny problem, leading to poor accuracy and robustness.

To meet the requirements of high robustness and precision for the environment perception of indoor robots in complex environments, and inspired by multi-camera visual-inertial fusion, this dissertation proposes a framework for a feature-point-based multi-camera visual-inertial SLAM method and studies the associated methods for multi-sensor calibration, point cloud densification, and parallel acceleration. A multi-camera visual-inertial SLAM system is then established, which integrates hardware synchronization, system self-calibration, state estimation, and dense mapping. The main contributions of this dissertation are as follows:

Firstly, in order to describe the environmental image information collected by multiple cameras in the multi-camera visual-inertial SLAM, a multi-view visual-inertial model is established. By adopting the Taylor polynomial projection model to extend the classical collinearity equation, and by introducing both a virtual rigid frame that represents the pose of the multi-camera cluster and the IMU (Inertial Measurement Unit) coordinate frame, a unified observation equation is established that effectively describes the image information captured by several cameras. To obtain the parameters of this model, multi-sensor calibration methods were studied, including an improved feature-descriptor-based intrinsic calibration method for a single camera, a mapping-based extrinsic calibration method for multi-camera systems with non-overlapping fields of view, and an improved online self-calibration method for the camera-IMU extrinsic parameters. The effectiveness of these calibration methods was verified by experiments.

Secondly, in order to effectively improve the accuracy and robustness of SLAM in complex indoor environments, a nonlinear pose optimization method is used to process the tightly coupled multi-camera visual-inertial information, so as to estimate the SLAM state precisely. The monocular, feature-point-based ORB-SLAM is extended to the multi-camera visual-inertial system by realizing multi-camera visual-inertial-fusion-based SLAM initialization and visual odometry, and a dense map is constructed with the TSDF (Truncated Signed Distance Function) method from RGB-D depth information combined with the pose estimates provided by the odometry. Experiments show that the proposed SLAM method is significantly improved in accuracy and robustness and outperforms the state-of-the-art method VINS-Mono.
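For concreteness, a generic form of such a tightly coupled visual-inertial objective is sketched below. The abstract does not give the exact residual definitions, weights, or marginalization strategy used in the thesis, so the notation here (camera index $i$, keyframe index $k$, landmark index $l$, and the specific residual terms) is an illustrative assumption rather than the author's formulation:

$$
\min_{\mathcal{X}}\;
\sum_{k}\left\| r_{\mathcal{I}}\big(z^{\mathrm{imu}}_{k,k+1},\,\mathcal{X}\big)\right\|^{2}_{\Sigma_{\mathcal{I}}}
\;+\;
\sum_{i}\sum_{(k,l)}\rho\!\left(\left\| z_{i,k,l}-\pi_{i}\big(T_{C_i B}\,T_{B_k W}\,p^{W}_{l}\big)\right\|^{2}_{\Sigma_{C}}\right)
$$

Here $\mathcal{X}$ stacks the virtual-rigid-frame (body) poses $T_{B_k W}$, velocities, IMU biases, and landmark positions $p^{W}_{l}$; $\pi_i(\cdot)$ is the projection model of camera $i$ (in this work, the Taylor polynomial model); $T_{C_i B}$ is the camera-to-body extrinsic obtained from calibration; $r_{\mathcal{I}}$ is an IMU motion residual (e.g., a preintegration term); and $\rho(\cdot)$ is a robust kernel.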
Finally, with the purpose of increasing the computational efficiency of the above multi-camera visual-inertial SLAM, a SLAM acceleration algorithm based on CUDA (Compute Unified Device Architecture) is proposed. By exploiting the high computational performance of the GPU (Graphics Processing Unit) together with a cooperative GPU-CPU (Central Processing Unit) processing strategy, the most time-consuming steps of the multi-camera visual-inertial SLAM, feature extraction and feature matching, are accelerated through CUDA parallel computation, and the problem of computational load imbalance is solved with a CPU-based multi-thread pipeline. The validity of the proposed algorithm is verified by experimental analysis.
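As an illustration of the kind of CUDA parallelization described above, the sketch below matches binary ORB descriptors on the GPU by Hamming distance, with one thread per query descriptor. The kernel name, descriptor layout, and launch configuration are assumptions made for this example and are not taken from the thesis:

```cuda
// Minimal illustrative sketch (not the thesis's implementation): brute-force
// GPU matching of 256-bit ORB descriptors via Hamming distance.
#include <cstdint>
#include <cuda_runtime.h>

constexpr int DESC_WORDS = 8;  // 256-bit ORB descriptor = 8 x 32-bit words

// One thread per query descriptor: scan all train descriptors and keep the
// index of the one with the smallest Hamming distance.
__global__ void matchOrbBruteForce(const uint32_t* query, int numQuery,
                                   const uint32_t* train, int numTrain,
                                   int* bestIdx, int* bestDist)
{
    int q = blockIdx.x * blockDim.x + threadIdx.x;
    if (q >= numQuery) return;

    // Cache this thread's query descriptor in registers.
    uint32_t d[DESC_WORDS];
    for (int w = 0; w < DESC_WORDS; ++w)
        d[w] = query[q * DESC_WORDS + w];

    int minDist = 257;  // larger than any possible 256-bit Hamming distance
    int minIdx  = -1;
    for (int t = 0; t < numTrain; ++t) {
        int dist = 0;
        for (int w = 0; w < DESC_WORDS; ++w)
            dist += __popc(d[w] ^ train[t * DESC_WORDS + w]);  // per-word popcount
        if (dist < minDist) { minDist = dist; minIdx = t; }
    }
    bestIdx[q]  = minIdx;
    bestDist[q] = minDist;
}

// Host-side launch (descriptors already uploaded with cudaMemcpy):
//   int threads = 256, blocks = (numQuery + threads - 1) / threads;
//   matchOrbBruteForce<<<blocks, threads>>>(dQuery, numQuery,
//                                           dTrain, numTrain, dIdx, dDist);
```

Feature extraction could be parallelized analogously (for instance, one thread block per image tile), while the CPU-side multi-thread pipeline mentioned in the abstract would keep the GPU fed with frames from the multiple cameras.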
Keywords/Search Tags: indoor complex environment, feature-based visual SLAM, multi-camera visual-inertial fusion, tightly-coupled nonlinear optimization, parallel acceleration