
Robust And Intelligent Multi-source Fusion SLAM Technology

Posted on: 2022-09-15    Degree: Doctor    Type: Dissertation
Country: China    Candidate: X X Zuo    Full Text: PDF
GTID: 1488306332491934    Subject: Control Science and Engineering
Abstract/Summary:
Simultaneous localization and mapping (SLAM) is among the core technologies of mobile robotics and has received extensive attention in both the academic and industrial communities. In recent years the field has flourished and produced many remarkable advances, yet serious challenges remain on the way to robust SLAM applications. On the one hand, complex external conditions, such as bad weather, rapidly changing illumination, fast motion, textureless scenes, and degenerate scene structure, can severely degrade an algorithm's performance or cause it to fail outright. On the other hand, the robot platform itself places special demands on the computational cost, accuracy, consistency, and latency of state estimation. In addition, SLAM should move beyond low-dimensional geometric features and learn to understand and exploit higher-level information more intelligently. To address these issues and challenges, this thesis investigates how to improve the robustness, sensing capability, and accuracy of localization and mapping algorithms by fusing multiple sources of information while preserving real-time performance. The information sources fused in this thesis include multimodal sensors, high-level geometric features, physical information about the environment or the robot platform, and inferences from deep neural networks. The main contents and contributions are as follows:

(1) A real-time visual-inertial odometry (VIO) aided by a prior LiDAR map is proposed, which fuses heterogeneous information from multimodal sensors to achieve accurate pose estimation with low-cost sensors. Building on the standard MSCKF (Multi-State Constraint Kalman Filter) VIO framework, a low-cost stereo camera and an IMU are used online for local pose estimation from sparse visual feature observations and IMU measurements. The method recovers a semi-dense visual point cloud from the stereo images and registers it against a prior LiDAR point-cloud map built with a high-end sensor, thereby obtaining constraints from the map. The registration result serves as a global measurement that updates the filter state, which effectively suppresses odometry drift. The experimental results demonstrate that fusing a prior LiDAR map significantly improves the pose estimation accuracy of the VIO.

(2) To cope with challenges such as rapid illumination change, fast motion, textureless scenes, and structural degeneracy, this thesis fuses multimodal sensors and multiple geometric features in a filtering framework and proposes a tightly coupled, lightweight, and robust LiDAR-IMU-camera odometry for high-precision 6DoF pose estimation. The method calibrates the spatio-temporal extrinsic parameters between the sensors online and estimates both visual point features and LiDAR plane features, which broadens its applicability. To achieve reliable data association for LiDAR points while avoiding sluggish iterative updates, an efficient sliding-window planar-feature tracking algorithm with probabilistic outlier rejection is proposed. A detailed observability analysis of the LiDAR-IMU-camera system is conducted, in particular for the LiDAR-IMU subsystem, and the degenerate motions that render the extrinsic parameters unobservable are identified. The proposed method is shown to provide highly accurate, consistent, and robust state estimates.
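As a concrete illustration of the measurement-update pattern shared by contributions (1) and (2), the minimal sketch below shows how a pose obtained by registering local sensor data against a prior map can enter a Kalman-style filter as a global measurement that corrects odometry drift. This is a toy under stated assumptions, not the thesis implementation: the state is reduced to a 3D position, all function and variable names are hypothetical, and the actual MSCKF maintains a sliding window of cloned IMU poses.

```python
# Toy sketch (not the thesis implementation): a Kalman-style update in which
# a pose from point-cloud registration against a prior LiDAR map acts as a
# global measurement. The state is simplified to a 3D position.
import numpy as np

def map_registration_update(x, P, z_map, R_map):
    """EKF update with a global position measurement from map registration.

    x     : (3,)  position estimate propagated by the odometry
    P     : (3,3) state covariance
    z_map : (3,)  position returned by registering the local cloud to the map
    R_map : (3,3) noise covariance of the registration result
    """
    H = np.eye(3)                     # the map measurement observes the position directly
    y = z_map - H @ x                 # innovation
    S = H @ P @ H.T + R_map           # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x + K @ y                 # corrected state
    P_new = (np.eye(3) - K @ H) @ P   # reduced uncertainty
    return x_new, P_new
```

Because the registration result is expressed in the global map frame, each such update bounds the drift that pure odometry would otherwise accumulate without limit.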
(3) For high-precision pose estimation of ground robots, a pose estimation method that incorporates the motion manifold of the environment is proposed, and parametric representations and estimation methods for the motion manifold are investigated. The motion manifold is a piece of physical information that models the surface on which the ground robot moves. Within an optimization-based sliding-window estimator, a high-precision pose estimator tailored to ground robots is designed, which fuses measurements from wheel encoders and exteroceptive sensors to jointly estimate the navigation states and the motion manifold. In contrast to conventional methods, in which wheel-encoder measurements permit only 3DoF planar motion integration, this thesis proposes a method that performs 6DoF motion integration using wheel-encoder measurements alone. In addition, the motion manifold is periodically reparameterized, which greatly reduces the pose estimation error.
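As a rough illustration of the 6DoF wheel-encoder integration in contribution (3), the sketch below assumes the motion manifold is a local quadratic surface z = f(x, y). The thesis investigates the parameterization itself and estimates it jointly in the sliding window, so the quadratic form, the sign conventions, and all names here are illustrative assumptions only.

```python
# Hypothetical sketch: planar dead reckoning from wheel encoders, lifted to a
# 6DoF pose by a locally quadratic motion manifold
#   z = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2.
import numpy as np

def surface(c, x, y):
    """Height and gradient of the quadratic manifold patch at (x, y)."""
    z = c[0] + c[1]*x + c[2]*y + c[3]*x*x + c[4]*x*y + c[5]*y*y
    dzdx = c[1] + 2.0*c[3]*x + c[4]*y
    dzdy = c[2] + c[4]*x + 2.0*c[5]*y
    return z, dzdx, dzdy

def wheel_step_6dof(x, y, yaw, v, w, dt, c):
    """One integration step from encoder speed v and yaw rate w.

    Returns a full 6DoF pose (x, y, z, roll, pitch, yaw): height comes from
    the surface, roll/pitch from its gradient (signs are conventional).
    """
    x, y = x + v*np.cos(yaw)*dt, y + v*np.sin(yaw)*dt   # planar dead reckoning
    yaw = yaw + w*dt
    z, dzdx, dzdy = surface(c, x, y)
    pitch = -np.arctan(dzdx*np.cos(yaw) + dzdy*np.sin(yaw))  # slope along heading
    roll = np.arctan(-dzdx*np.sin(yaw) + dzdy*np.cos(yaw))   # slope across heading
    return x, y, z, roll, pitch, yaw
```

The lift shows why 6DoF integration becomes possible: the encoders still provide only planar motion, but the manifold supplies the missing height, roll, and pitch.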
(4) To improve the robustness and accuracy of localization and mapping on skid-steering robots, and to reduce the risk of performance degradation caused by mechanical deformation, terrain variation, changes in tire pressure, shifts of the center of mass, wheel slippage, and similar effects, this thesis incorporates the kinematic characteristics of the robot platform itself (a type of physical information) and proposes a joint kinematics-and-pose estimation method for skid-steering robots. A robot kinematic model based on the instantaneous centers of rotation (ICR) is adopted, as sketched below, and the kinematic parameters and the 6DoF poses of the robot are estimated jointly by fusing wheel encoders, a monocular camera, and an optional IMU in an optimization-based sliding-window estimator. For the different sensor and design configurations, an in-depth observability analysis is conducted to provide theoretical support for a sound algorithm design. The experimental results demonstrate that incorporating the kinematic model significantly improves the robustness and accuracy of the algorithm, which benefits long-term robot operation.

(5) To address the absence of depth sensors in some applications, a visual-inertial localization and mapping method that incorporates neural-network inference is proposed. A neural network that infers dense depth is tightly coupled with the VIO for real-time localization and mapping. Because the true scale of dense depth cannot be recovered from a monocular image alone, the method uses a lightweight conditional variational autoencoder (CVAE) that takes the scaled sparse depths of the VIO's visual features together with the monocular image as input and predicts a correctly scaled dense depth map. To allow the dense depth to be refined quickly, the CVAE encodes the high-dimensional dense depth map into a low-dimensional latent variable called the depth code. In the filter-based VIO, only sparse measurements are used to update the depth code and the navigation states simultaneously, which improves the quality of the inferred dense depth. To ensure real-time operation, a finite-difference Jacobian computation for the neural network is proposed, which is an order of magnitude faster than the chain-rule (back-propagation) alternative, and only first-estimates Jacobians (FEJ) are used, without re-linearization. Moreover, the proposed method generalizes well to completely unseen datasets, because the deep neural network is fused with a state estimator that does not suffer from overfitting.
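For contribution (4), the following sketch shows the common three-parameter ICR kinematic model for a skid-steering platform: the lateral ICR offsets y_l and y_r of the two tracks and a longitudinal offset x_v map the track speeds to a planar body twist. These are the standard ICR relations with conventional signs, not necessarily the thesis's exact parameterization, and in the thesis the parameters are estimated online rather than fixed as here.

```python
# Sketch of a three-parameter ICR skid-steer kinematic model (standard form,
# conventional signs; parameter names are illustrative).
def icr_body_twist(v_l, v_r, y_l, y_r, x_v):
    """Map left/right track speeds to the planar body twist (vx, vy, wz).

    y_l, y_r : lateral ICR offsets of the left/right tracks (y_l > 0 > y_r)
    x_v      : longitudinal ICR offset of the vehicle
    """
    wz = (v_r - v_l) / (y_l - y_r)               # yaw rate from track speed difference
    vx = (v_r * y_l - v_l * y_r) / (y_l - y_r)   # longitudinal body speed
    vy = -wz * x_v                               # rotation-induced lateral slip (sign is conventional)
    return vx, vy, wz
```

For an ideal differential-drive robot, y_l = B/2, y_r = -B/2, and x_v = 0 recover the textbook model with track width B; skid steering shows up as |y_l - y_r| growing beyond B as the wheels slip, which is exactly what effects such as tire-pressure and terrain changes perturb.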
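For contribution (5), the sketch below illustrates why the finite-difference Jacobian is cheap: the depth code has only a handful of dimensions, so one extra decoder forward pass per dimension replaces back-propagating through the network for every output pixel. The decoder interface is a placeholder assumption, and the FEJ handling and filter coupling are omitted.

```python
# Sketch of a finite-difference Jacobian of a depth decoder with respect to a
# low-dimensional depth code. `decoder` is a placeholder for any callable that
# maps a code vector (d,) to a flattened dense depth map (m,).
import numpy as np

def fd_jacobian(decoder, code, eps=1e-4):
    """Forward-difference approximation of d(decoder)/d(code), shape (m, d)."""
    base = decoder(code)                     # one nominal forward pass
    J = np.empty((base.size, code.size))
    for i in range(code.size):
        perturbed = code.copy()
        perturbed[i] += eps                  # perturb one code dimension
        J[:, i] = (decoder(perturbed) - base) / eps
    return J
```

With a code of dimension d, this costs d + 1 forward passes in total, independent of the number of depth pixels, which is what makes per-update Jacobians affordable inside a real-time filter.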
The core problems and contributions above are evaluated extensively on real-world datasets and in simulation, and the experimental results confirm the effectiveness and superiority of the proposed theory and algorithms. Some of the proposed algorithm modules have been deployed in commercial products, demonstrating significant practical value. The research in this thesis helps to improve the robustness and intelligence of robot localization and mapping and has positive implications for its practical deployment in complex real-world environments.

Keywords/Search Tags: Simultaneous localization and mapping, State estimation, Robotics, Multi-sensor fusion, Observability analysis, Visual-inertial navigation, LiDAR odometry, Kinematics, Deep neural network