
Multi-Sensor Based Mobile Robot Navigation

Posted on: 2020-02-21
Degree: Master
Type: Thesis
Country: China
Candidate: G X Zheng
Full Text: PDF
GTID: 2518306350976089
Subject: Control theory and control engineering
Abstract/Summary:
Autonomous exploration of mobile robots in unknown indoor environments is an important research field in robotics. To explore a complicated indoor environment autonomously, a mobile robot must be capable of environmental perception and modeling, localization, motion planning, and autonomous decision-making, and multi-sensor fusion has become a major development trend for these capabilities. Taking a mobile robot platform as the research background, this thesis studies the key technologies of autonomous exploration and develops a robot platform and software system integrating pose estimation, autonomous exploration, and autonomous obstacle avoidance. Based on the fused multi-sensor information, the system produces exploration target points in real time, plans guidance paths, and avoids obstacles. The main contents of this paper are as follows:

Firstly, an autonomous navigation system based on multi-sensor fusion is designed and implemented, and the hardware selection and debugging for multi-sensor data fusion are completed. The sensors used in this paper are an RGB-D camera, an inertial measurement unit (IMU), and a 2D laser scanner. Three hardware schemes for multi-sensor data fusion were evaluated in turn: an ASUS Xtion RGB-D camera paired with an Xsens MTi-28A53G35 IMU; a Xiaomi camera with an integrated IMU; and an Intel RealSense ZR300. Each scheme was then tested experimentally. Next, a time synchronization scheme for camera and IMU data is designed: the Intel RealSense ZR300 synchronizes the camera and IMU data in both hardware and software, providing reliable input for the multi-sensor pose estimation algorithm. In addition, a distributed autonomous navigation architecture based on ROS is designed. The architecture is divided into a client, mainly responsible for data collection, and a server, mainly responsible for data processing and display. This distributed structure both speeds up the algorithm's computation and reduces the computational load on the robot.

Secondly, a pose estimation method based on multi-sensor fusion is proposed. In the front-end feature tracking part, the pyramid Lucas-Kanade optical flow method is improved with RANSAC: from the feature point pairs obtained by frame-to-frame tracking, eight pairs are repeatedly sampled at random to compute a fundamental matrix, and the matches are then tested against the epipolar constraint of that matrix, keeping as inliers those within a set threshold. This further improves the tracking accuracy of the optical flow. In the back-end optimization part, the RGB-D camera provides a depth prior on the feature points, from which depth residuals are constructed. A tightly coupled formulation then minimizes the IMU measurement residuals, the visual reprojection errors, and the depth residuals together as a least-squares problem, which is solved by Gauss-Newton iteration to obtain the optimal estimate of the system state. Furthermore, a sliding window with marginalization bounds the computational complexity without discarding the constraints from historical information. By exploiting the depth information of the RGB-D camera, the proposed method accelerates the convergence of feature point depths, makes the depth estimates more accurate, and improves the localization accuracy of the system. The effectiveness of the proposed pose estimation algorithm is verified by comparing it against state-of-the-art visual-inertial fusion algorithms in real-world experiments.
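As an illustration of the front-end described above, the following sketch shows RANSAC-based outlier rejection on pyramid Lucas-Kanade tracks using OpenCV. The window size, pyramid levels, and threshold values are illustrative assumptions, not the thesis's actual parameters:

```python
import cv2
import numpy as np

def track_features(prev_gray, curr_gray, prev_pts):
    """Track points between frames and keep only epipolar-consistent matches.

    prev_pts: float32 array of shape (N, 1, 2), e.g. from cv2.goodFeaturesToTrack.
    """
    # Pyramid Lucas-Kanade optical flow between consecutive frames.
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3)
    tracked = status.ravel() == 1
    p0, p1 = prev_pts[tracked], curr_pts[tracked]
    if len(p0) < 8:
        return p0, p1  # too few pairs for the 8-point model

    # RANSAC repeatedly samples 8 point pairs to fit a fundamental matrix,
    # then keeps as inliers the matches whose epipolar error is below the
    # pixel threshold (here 1.0 px at 99% confidence).
    _, inlier_mask = cv2.findFundamentalMat(p0, p1, cv2.FM_RANSAC, 1.0, 0.99)
    inliers = inlier_mask.ravel() == 1
    return p0[inliers], p1[inliers]
```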
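The back-end's tightly coupled optimization can be sketched generically: stack the IMU, reprojection, and depth residuals into one vector and iterate Gauss-Newton on the normal equations. In the minimal numpy sketch below, residual_fn and jacobian_fn are placeholders for the stacked residual terms; a real back-end would add robust weighting and the sliding-window marginalization described above:

```python
import numpy as np

def gauss_newton(residual_fn, jacobian_fn, x0, iters=10, tol=1e-8):
    """Minimize 0.5 * ||r(x)||^2 for a stacked residual vector r(x)."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        r = residual_fn(x)   # stacked IMU / reprojection / depth residuals
        J = jacobian_fn(x)   # Jacobian of r with respect to the state x
        # Normal equations of the linearized problem: (J^T J) dx = -J^T r
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x += dx
        if np.linalg.norm(dx) < tol:
            break
    return x
```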
Thirdly, a frontier exploration method combined with breadth-first search is proposed. Existing frontier exploration methods cannot be guaranteed to work well in real environments, mainly because the computed target points may lie in unknown, occupied, or hard-to-reach grid cells; in complicated environments these methods can actually reduce the robot's exploration efficiency. The frontier search method proposed in this paper combines breadth-first search to quickly obtain an optimized frontier center (sketched in the example below), and its effectiveness is verified by experiments.

Finally, an improved autonomous obstacle avoidance method and a multi-target-point navigation system are proposed. A robot equipped only with a 2D lidar can detect obstacles at a single height, so during autonomous navigation it is likely to collide with obstacles above or below that plane. To solve this problem, this paper fuses the 3D sensing data of the RGB-D camera with the data of the single-line lidar, so that the mobile robot can explore complex indoor environments safely. In addition, multi-target-point navigation of the mobile robot in a known environment is designed and implemented, together with the multi-target-point navigation interface and the related ROS nodes. The effectiveness of the improved autonomous obstacle avoidance method and the performance of the multi-target navigation system are verified experimentally.
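A minimal sketch of the frontier idea: breadth-first search from the robot's cell through free space guarantees that every frontier cell it finds is reachable, which addresses the unreachable-target problem noted above. The grid value conventions and the single centroid (rather than per-cluster frontier centers) are simplifications for illustration, not the thesis's exact method:

```python
from collections import deque
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 100, -1   # common ROS occupancy-grid values

def reachable_frontier_center(grid, start):
    """BFS through free space from the robot's cell (row, col).

    A frontier cell is a free cell adjacent to an unknown cell; because the
    BFS only expands through free cells, every frontier found is reachable.
    Returns the centroid of the reachable frontier cells, or None if done.
    """
    h, w = grid.shape
    visited = np.zeros((h, w), dtype=bool)
    visited[start] = True
    frontier = set()
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < h and 0 <= nc < w):
                continue
            if grid[nr, nc] == UNKNOWN:
                frontier.add((r, c))             # free cell touching unknown
            elif grid[nr, nc] == FREE and not visited[nr, nc]:
                visited[nr, nc] = True
                queue.append((nr, nc))
    if not frontier:
        return None                              # exploration finished
    return tuple(np.mean(list(frontier), axis=0))
```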
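For the lidar/RGB-D obstacle avoidance fusion, one common scheme (assumed here for illustration; not necessarily the thesis's exact pipeline) collapses the depth image into a planar "virtual scan" and merges it with the lidar scan by taking the per-beam minimum. The intrinsics fx and cx and the vertical pixel band are illustrative placeholders:

```python
import numpy as np

def depth_to_virtual_scan(depth, fx, cx, band=(80, 400)):
    """Collapse a depth image (meters, H x W) into a planar virtual scan.

    For each image column, take the nearest valid depth inside a vertical
    band, so obstacles above or below the lidar plane still appear.
    """
    strip = depth[band[0]:band[1], :]
    strip = np.where(strip > 0, strip, np.inf)   # ignore invalid pixels
    z = strip.min(axis=0)                        # nearest depth per column
    cols = np.arange(depth.shape[1])
    angles = np.arctan2(cols - cx, fx)           # bearing from pinhole model
    ranges = z / np.cos(angles)                  # axial depth -> beam range
    return angles, ranges

def fuse_scans(lidar_ranges, virtual_ranges):
    """Per-beam minimum of the two scans (assumes one shared angular grid)."""
    return np.minimum(lidar_ranges, virtual_ranges)
```

Taking the per-beam minimum means an obstacle seen by either sensor blocks the corresponding direction, which is the conservative behavior needed for safe navigation.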
Keywords/Search Tags: Autonomous exploration, Multi-sensor fusion, Depth image, Tight coupling, Autonomous obstacle avoidance, Multi-target navigation