
3D Environment Reconstruction Technology For Unmanned Ground Vehicle In GNSS Denied Environment

Posted on: 2020-01-16
Degree: Master
Type: Thesis
Country: China
Candidate: H Z Xue
Full Text: PDF
GTID: 2518306548492814
Subject: Control Science and Engineering
Abstract/Summary:
The capability of accurate and reliable local three-dimensional (3D) environment modeling is a prerequisite for autonomous driving. There are two main problems with the environment perception system of an unmanned ground vehicle (UGV): (1) it is difficult to estimate an accurate pose for the UGV in a Global Navigation Satellite System (GNSS) denied environment; (2) it is difficult to achieve a complete reconstruction of the environment around the UGV, since the information in a single frame of sensor data is insufficient. To solve these two problems, this paper combines the ideas of heterogeneous information fusion and multi-frame information fusion. First, the heterogeneous information output by the Lidar, the wheel encoder, and the inertial navigation system (INS) is fused to estimate the ego-motion of the UGV in the GNSS-denied environment. Then, based on the accurate pose estimates, the information from multiple frames of point clouds is fused to achieve a complete and accurate reconstruction of the 3D environment around the UGV. The main contents and innovations of this paper are as follows:

Firstly, this paper addresses the problem of spatial calibration between multi-source heterogeneous sensors and proposes a spatial calibration method for the Lidar, the INS, and the vehicle. The method consists of three parts: calibration between the INS and the vehicle, hand-eye calibration between the Lidar and the INS based on inter-frame registration (a standard formulation of this constraint is sketched after the abstract), and real-time motion compensation of the point cloud assisted by the INS. The method represents the data output by the Lidar and the INS in a unified spatial coordinate system and solves the problem of intra-frame point cloud distortion caused by the motion of the vehicle, which ensures the spatio-temporal consistency between the multi-source heterogeneous sensors and lays the foundation for heterogeneous information fusion.

Secondly, an ego-motion estimation system that fuses the heterogeneous information output by the Lidar, the INS, and the wheel encoder is designed. The method first establishes a vehicle motion model of the UGV based on the wheel encoder and INS information. Then, based on the vehicle motion model, an accurate Lidar odometry algorithm is proposed. The pose information output by the vehicle motion model and the Lidar odometry is fused in an extended Kalman filter framework (a minimal fusion sketch follows the abstract). Experiments show that the system can output accurate, high-frequency, and stable ego-motion estimates in the GNSS-denied environment.

Finally, a local 3D environment reconstruction method based on multi-frame information fusion is proposed. The method uses the output of the aforementioned ego-motion estimation system to effectively fuse the information contained in multi-frame point cloud data into a 3D occupancy grid map. Meanwhile, to overcome the negative influence of dynamic targets, a novel ray casting algorithm is used to update the state of the 3D grid in real time (see the occupancy-update sketch after the abstract). Experiments show that the method achieves real-time, accurate, and stable reconstruction of the environment around the UGV in complex dynamic environments.
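Note on the hand-eye calibration step: the abstract does not give the derivation, but hand-eye calibration from inter-frame registration is commonly posed with the constraint below. The symbols A_k, B_k, and X are the usual textbook notation and are an assumption here, not necessarily the thesis's own formulation.

```latex
% One common hand-eye formulation (an assumption, not the thesis's exact derivation).
% For each frame pair k:
%   A_k : Lidar motion obtained from inter-frame point cloud registration
%   B_k : INS motion over the same time interval
%   X   : unknown Lidar-to-INS extrinsic transform (rotation R_X, translation t_X)
A_k X = X B_k
\;\Longrightarrow\;
R_{A_k} R_X = R_X R_{B_k},
\qquad
(R_{A_k} - I)\, t_X = R_X\, t_{B_k} - t_{A_k}
```

Stacking these constraints over many frame pairs, solving for the rotation R_X first and then the translation t_X, is the standard way such a calibration is recovered.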
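For the ego-motion estimation system, the abstract states that the motion-model pose and the Lidar-odometry pose are fused in an extended Kalman filter. The code below is a minimal, hypothetical planar sketch of that kind of fusion step, assuming an [x, y, yaw] state, speed from the wheel encoder, and yaw rate from the INS; the class, method names, and noise values are illustrative, not the author's implementation.

```python
import numpy as np

def wrap_angle(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

class PoseEKF:
    """Minimal planar EKF: state is [x, y, yaw].

    Prediction uses speed from the wheel encoder and yaw rate from the INS;
    the update consumes an absolute pose from Lidar odometry.
    """

    def __init__(self):
        self.x = np.zeros(3)                  # state [x, y, yaw]
        self.P = np.eye(3) * 1e-3             # state covariance

    def predict(self, v, omega, dt, q_v=0.05, q_w=0.01):
        x, y, yaw = self.x
        # Unicycle motion model driven by encoder speed and INS yaw rate.
        self.x = np.array([x + v * dt * np.cos(yaw),
                           y + v * dt * np.sin(yaw),
                           wrap_angle(yaw + omega * dt)])
        # Jacobian of the motion model with respect to the state.
        F = np.array([[1.0, 0.0, -v * dt * np.sin(yaw)],
                      [0.0, 1.0,  v * dt * np.cos(yaw)],
                      [0.0, 0.0,  1.0]])
        # Process noise grows with the integration interval.
        Q = np.diag([q_v * dt, q_v * dt, q_w * dt]) ** 2
        self.P = F @ self.P @ F.T + Q

    def update(self, z_pose, r_xy=0.05, r_yaw=0.01):
        """Fuse an absolute [x, y, yaw] pose from Lidar odometry."""
        H = np.eye(3)                         # the pose is observed directly
        R = np.diag([r_xy, r_xy, r_yaw]) ** 2
        y = z_pose - H @ self.x
        y[2] = wrap_angle(y[2])               # wrap the heading residual
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.x[2] = wrap_angle(self.x[2])
        self.P = (np.eye(3) - K @ H) @ self.P
```

In such a setup the encoder/INS prediction can run at a high rate between lower-frequency Lidar-odometry updates, which is consistent with the high-frequency output claimed in the abstract.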
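For the reconstruction step, the thesis's novel ray casting algorithm is not specified in the abstract; the sketch below only shows the conventional log-odds occupancy update along each Lidar ray, which is the usual baseline such a method builds on. The grid shape, resolution, log-odds increments, and all names are assumed values for illustration.

```python
import numpy as np

class OccupancyGrid3D:
    """Log-odds 3D occupancy grid updated by ray casting.

    Each scan must already be motion-compensated and transformed into the
    world frame using the ego-motion estimate; the grid is anchored at the
    world origin (a simplification), so out-of-range cells are skipped.
    """

    def __init__(self, shape=(200, 200, 40), resolution=0.2,
                 l_hit=0.85, l_miss=-0.4, l_min=-2.0, l_max=3.5):
        self.log_odds = np.zeros(shape, dtype=np.float32)
        self.res = resolution
        self.l_hit, self.l_miss = l_hit, l_miss
        self.l_min, self.l_max = l_min, l_max

    def _to_index(self, p):
        return tuple(int(c) for c in np.floor(p / self.res))

    def _ray_voxels(self, start, end):
        """Voxel indices along start->end (naive sampling, deduplicated)."""
        n = int(np.linalg.norm(end - start) / self.res) + 2
        seen, out = set(), []
        for t in np.linspace(0.0, 1.0, n):
            idx = self._to_index(start + t * (end - start))
            if idx not in seen:
                seen.add(idx)
                out.append(idx)
        return out

    def insert_scan(self, origin, points_world):
        """Fuse one scan; origin and points_world are in the world frame."""
        for p in points_world:                     # points_world: (N, 3) array
            voxels = self._ray_voxels(origin, p)
            for idx in voxels[:-1]:                # cells the ray passes through
                self._update(idx, self.l_miss)
            self._update(voxels[-1], self.l_hit)   # cell containing the return
        return self.log_odds

    def _update(self, idx, delta):
        if all(0 <= i < s for i, s in zip(idx, self.log_odds.shape)):
            self.log_odds[idx] = float(np.clip(self.log_odds[idx] + delta,
                                               self.l_min, self.l_max))
```

Decrementing the cells a ray passes through is what lets voxels previously occupied by a moving object decay back toward free space, which is the usual reason ray casting helps against dynamic targets.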
Keywords/Search Tags: Unmanned Ground Vehicle, 3D Environment Reconstruction, Ego-motion Estimation, Hand-eye Calibration, Multi-frame Information Fusion, Heterogeneous Information Fusion