
Autonomous Localization and Mapping for UAV Based on Vision-LiDAR Fusion

Posted on: 2022-06-29
Degree: Master
Type: Thesis
Country: China
Candidate: G H Xie
Full Text: PDF
GTID: 2532307154976289
Subject: Control Science and Engineering
Abstract/Summary:
In recent years, quadrotor UAVs have been increasingly used in various fields, and positioning and environmental awareness are key technologies for UAVs to accomplish their tasks. Real operating environments are complex and changeable, and UAVs often encounter scenarios in which GPS signals are poor or even completely lost. In such cases, SLAM technology, which relies only on onboard sensors to achieve autonomous positioning and map building, is usually used to localize the UAV. Among SLAM systems, visual SLAM and LiDAR SLAM are the mainstream research directions. However, visual sensors are prone to tracking failure in environments with drastic illumination changes or a lack of texture, and LiDAR likewise suffers from localization failure in environments lacking geometric features. To solve these problems, this paper investigates autonomous localization methods based on vision-LiDAR fusion and validates them on a quadrotor UAV platform. The main research contents are as follows:

(1) Since vision can only directly obtain two-dimensional image information, recovering three-dimensional spatial information requires triangulation or similar means to estimate depth, which introduces large estimation errors. In contrast, LiDAR accurately measures the distances of the surrounding environment, so the 3D point cloud measured by LiDAR can be used to assist vision in depth recovery. The 3D point cloud here can come not only from LiDAR but also from other depth-observing sensors such as depth cameras. We therefore propose a depth-enhanced visual-inertial autonomous localization system that uses 3D point clouds measured by other sensors to assist visual localization, thereby improving its accuracy (a minimal depth-association sketch follows the abstract).

(2) Because vision may fail to localize in illumination-changing or low-texture environments, we design an autonomous localization scheme, building on Chapter 2, that loosely couples vision and LiDAR. The output of the visual odometry described in Chapter 2 serves as the initial value, and the localization result is further refined by LiDAR point-cloud matching. Considering possible visual localization failure, we use the IMU prediction to detect such failure; if vision fails, its output is discarded and the system degenerates to a loosely coupled IMU-LiDAR system. In addition, for scenarios where LiDAR degenerates due to weak geometric features, this paper designs a back-end optimization strategy that uses predicted values to complete the degenerate directions (see the second sketch below).

(3) Tightly coupled algorithms can usually achieve better results than loosely coupled ones. This paper therefore designs an autonomous localization scheme based on a tightly coupled vision-LiDAR method. Visual, LiDAR, and IMU observations are unified into a single factor graph optimization problem, and the joint optimization over multiple sensor constraints avoids the degradation problems of any single sensor. Furthermore, to solve the factor graph optimization efficiently, this paper derives an incremental solution that avoids re-solving the whole problem when new observations are added, which effectively reduces the computation time of pose estimation and improves the real-time performance of the system (see the third sketch below).
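The abstract does not give implementation details for the depth enhancement in (1); the following is a minimal sketch, assuming a pinhole camera model, a known camera-LiDAR extrinsic T_cam_lidar, and an illustrative pixel-distance gate max_px_dist (all names here are hypothetical, not the thesis's code), of how a LiDAR point cloud can be projected into the image to supply depth for tracked visual features:

import numpy as np

def associate_depth(features_uv, points_lidar, T_cam_lidar, K, max_px_dist=3.0):
    # features_uv: (N,2) pixel coordinates of tracked features.
    # points_lidar: (M,3) LiDAR points in the LiDAR frame.
    # T_cam_lidar: (4,4) extrinsic transform, LiDAR frame -> camera frame.
    # K: (3,3) camera intrinsic matrix.
    # Returns (N,) depths; np.nan where no point projects close enough.
    p_cam = (T_cam_lidar[:3, :3] @ points_lidar.T).T + T_cam_lidar[:3, 3]
    p_cam = p_cam[p_cam[:, 2] > 0.1]            # keep points in front of the camera
    uv = (K @ p_cam.T).T                        # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]
    depths = np.full(len(features_uv), np.nan)
    for i, f in enumerate(features_uv):
        d2 = np.sum((uv - f) ** 2, axis=1)      # squared pixel distances
        j = np.argmin(d2)
        if d2[j] < max_px_dist ** 2:
            depths[i] = p_cam[j, 2]             # assign the LiDAR-measured depth
    return depths

A feature that receives a valid depth can then be used as a full 3D observation instead of relying on triangulation alone, which is the accuracy gain the abstract describes.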
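Contribution (2) describes the IMU-based failure check and the degeneracy handling only at a high level. The sketch below, under assumed names and illustrative thresholds (tol, eig_thresh), shows one common realization: flag visual failure when the visual odometry increment disagrees with the IMU-propagated prediction, and remap the solution along poorly constrained directions of the matching problem's approximate Hessian (in the spirit of solution-remapping degeneracy analysis, not necessarily the thesis's exact formulation):

import numpy as np

def vision_failed(dp_visual, dp_imu, tol=0.5):
    # Flag visual odometry failure when its relative translation over a
    # window disagrees with the IMU-propagated prediction by more than tol (m).
    return np.linalg.norm(dp_visual - dp_imu) > tol

def remap_degenerate(x_opt, x_pred, JtJ, eig_thresh=100.0):
    # x_opt: optimized state increment from LiDAR matching.
    # x_pred: predicted increment (e.g. from the IMU or the previous motion).
    # JtJ: Gauss-Newton approximate Hessian of the matching problem.
    w, V = np.linalg.eigh(JtJ)                  # eigenvalues in ascending order
    keep = w > eig_thresh                       # well-constrained directions
    P_keep = V[:, keep] @ V[:, keep].T          # projector onto constrained subspace
    P_degen = np.eye(len(x_opt)) - P_keep       # projector onto degenerate subspace
    # Trust the optimizer where the problem is well constrained,
    # and the prediction along the degenerate directions.
    return P_keep @ x_opt + P_degen @ x_pred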
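The incremental derivation in (3) is not reproduced in the abstract. As a rough illustration of the idea only (relinearization and variable ordering, which real incremental smoothers such as iSAM2 handle, are ignored), one can keep the accumulated normal equations of the factor graph and fold in each new observation's contribution instead of rebuilding the whole problem:

import numpy as np

class IncrementalSolver:
    # Information-form least squares over a fixed-dimension state:
    # Lam accumulates sum(J^T W J), eta accumulates sum(J^T W r).
    def __init__(self, dim):
        self.Lam = np.zeros((dim, dim))   # information matrix
        self.eta = np.zeros(dim)          # information vector

    def add_factor(self, J, r, W):
        # Fold one new factor (Jacobian J, residual r, weight W) into the
        # normal equations; no previously added factor is revisited.
        self.Lam += J.T @ W @ J
        self.eta += J.T @ W @ r

    def solve(self):
        # Solve Lam * dx = eta; small diagonal damping keeps it well posed.
        return np.linalg.solve(self.Lam + 1e-9 * np.eye(len(self.eta)), self.eta)

This conveys why adding a new visual, LiDAR, or IMU factor need not trigger a solve of the entire history, which is the real-time benefit the abstract claims.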
Keywords/Search Tags: Aerial Robot, SLAM, Nonlinear Optimization, Factor Graph Optimization