
Research On Pose Estimation And Mapping Technology Of Mobile Robot Based On Laser And Visual Fusion

Posted on: 2022-02-08
Degree: Master
Type: Thesis
Country: China
Candidate: J L Wang
Full Text: PDF
GTID: 2518306731487474
Subject: Control Science and Engineering
Abstract/Summary:
Localization and mapping are essential capabilities for autonomous unmanned systems performing intelligent tasks. Around these two goals, Simultaneous Localization and Mapping (SLAM) [1] emerged to solve the problem of localizing a mobile robot and perceiving a map of an unknown environment. This dissertation focuses on a real-time localization and mapping system for outdoor scenes based on the fusion of vision and laser. The main contributions are as follows:

(1) Data support. On the one hand, the laser sensor offers a long detection range, perceives the three-dimensional environment with high precision, and is unaffected by lighting conditions, but its measurements are relatively sparse; on the other hand, the visual sensor captures rich texture and color information from the environment but lacks depth information. The strengths and weaknesses of the two sensors are therefore complementary. This dissertation uses a calibration algorithm to register 3D laser depth data with 2D image color information, providing data support for the subsequent outdoor robot localization and map construction.

(2) Localization. First, two odometry (localization) methods, one vision-based and one laser-based, are presented according to the observation characteristics of each sensor. Then, to address the low accuracy and poor robustness of localization based on a single sensor, an efficient tightly coupled laser-visual fusion method is proposed: the observations of both sensors are used to optimize the same set of state vectors, improving the accuracy and robustness of pose estimation beyond what either sensor achieves alone.

(3) Map creation. Using the data support and pose information provided above, a dense colored three-dimensional point cloud map is generated by fusing visual and laser information from adjacent moments. Because accumulated error remains, a loop closure detection method is adopted: keyframe point clouds are stored in a dimensionality-reduced (compressed) form, and the point cloud of the latest frame is matched against the stored compressed data, achieving robust loop closure detection. A global mapping objective function is then constructed, in which the constraints generated by historical keyframes are optimized jointly with the loop closure constraints to obtain globally consistent poses, which are used to build colored three-dimensional maps of large scenes. Experiments on a self-built experimental platform and on public datasets yield high-precision colored point cloud maps.
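To illustrate the laser-visual registration described in contribution (1), the following minimal sketch projects lidar points into a camera image and attaches colors to them. It assumes pinhole intrinsics K and lidar-to-camera extrinsics (R, t) obtained from the calibration step; the function name and exact data layout are placeholders, not the thesis's actual implementation.

```python
import numpy as np

def colorize_lidar_points(points_lidar, image, K, R, t):
    """Project 3D lidar points into a camera image and attach RGB colors.

    points_lidar: (N, 3) lidar points in the lidar frame.
    image:        (H, W, 3) uint8 RGB image.
    K:            (3, 3) camera intrinsic matrix.
    R, t:         lidar-to-camera rotation (3, 3) and translation (3,).
    Returns an (M, 6) array of [x, y, z, r, g, b] for points inside the image.
    """
    # Transform points from the lidar frame into the camera frame.
    points_cam = points_lidar @ R.T + t

    # Keep only points in front of the camera.
    in_front = points_cam[:, 2] > 0.1
    points_cam = points_cam[in_front]
    points_kept = points_lidar[in_front]

    # Pinhole projection into pixel coordinates.
    uv = points_cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)

    # Keep only projections that land inside the image bounds.
    h, w = image.shape[:2]
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)

    colors = image[v[inside], u[inside]].astype(np.float64)
    return np.hstack([points_kept[inside], colors])
```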
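The tightly coupled fusion in contribution (2) optimizes one shared state vector from both sensors' observations. The sketch below is a simplified illustration of that idea, not the thesis's actual formulation: it stacks visual reprojection residuals and point-to-point lidar residuals over a single 6-DoF pose and solves them jointly with a generic least-squares solver. The correspondence arrays (landmarks, pixels, scan_pts, map_pts) are assumed inputs, and the camera and lidar frames are treated as coincident for brevity.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def fused_residuals(x, landmarks, pixels, K, scan_pts, map_pts,
                    w_vis=1.0, w_lidar=1.0):
    """Stacked residuals for one shared pose x = [rotvec(3), t(3)].

    landmarks: (N, 3) world points observed by the camera.
    pixels:    (N, 2) corresponding image observations.
    scan_pts:  (M, 3) lidar points in the sensor frame.
    map_pts:   (M, 3) matched map points in the world frame.
    """
    R = Rotation.from_rotvec(x[:3]).as_matrix()
    t = x[3:]

    # Visual reprojection residuals: world landmarks -> sensor frame -> pixels.
    cam = (landmarks - t) @ R
    proj = cam @ K.T
    proj = proj[:, :2] / proj[:, 2:3]
    r_vis = (proj - pixels).ravel() * w_vis

    # Lidar residuals: transformed scan points should match the map points.
    world = scan_pts @ R.T + t
    r_lidar = (world - map_pts).ravel() * w_lidar

    # Both sensors constrain the same state vector (tight coupling).
    return np.concatenate([r_vis, r_lidar])

# Usage, once correspondences have been established elsewhere:
# sol = least_squares(fused_residuals, np.zeros(6),
#                     args=(landmarks, pixels, K, scan_pts, map_pts))
```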
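For the loop closure detection in contribution (3), the abstract does not specify how the keyframe point clouds are compressed. The sketch below assumes a ScanContext-style reduction of each keyframe cloud to a small 2D descriptor and a nearest-neighbor match against stored descriptors; the bin sizes and threshold are illustrative placeholders.

```python
import numpy as np

def compress_scan(points, n_rings=20, n_sectors=60, max_range=80.0):
    """Reduce a 3D keyframe point cloud to a small (n_rings, n_sectors) descriptor
    that stores the maximum point height in each polar bin."""
    r = np.linalg.norm(points[:, :2], axis=1)
    theta = np.arctan2(points[:, 1], points[:, 0])
    ring = np.clip((r / max_range * n_rings).astype(int), 0, n_rings - 1)
    sector = ((theta + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors

    desc = np.zeros((n_rings, n_sectors))
    np.maximum.at(desc, (ring, sector), points[:, 2])  # keep max height per bin
    return desc

def detect_loop(query_desc, keyframe_descs, threshold=0.25):
    """Return the index of the best-matching stored keyframe, or None."""
    best_idx, best_dist = None, np.inf
    for i, d in enumerate(keyframe_descs):
        dist = np.linalg.norm(query_desc - d) / (np.linalg.norm(d) + 1e-9)
        if dist < best_dist:
            best_idx, best_dist = i, dist
    return best_idx if best_dist < threshold else None
```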
Keywords/Search Tags: sensor fusion, SLAM, visual odometry, laser odometry