
Visual Inertial SLAM Algorithm Research Based On Embedded Parallel Processing

Posted on: 2019-11-11    Degree: Master    Type: Thesis
Country: China    Candidate: J Y Zhang    Full Text: PDF
GTID: 2428330566498283    Subject: Mechanical engineering
Abstract/Summary:
SLAM (simultaneous localization and mapping) refers to estimating a robot's pose while simultaneously reconstructing a map of its environment. With the rapid development of robotics, computer vision, and related fields, higher demands have been placed on environmental perception technologies. As a key component of environmental perception, SLAM has long been a research hotspot in robotics. The most mature technology in the field is LiDAR-based SLAM, which uses the principle of laser ranging to obtain point clouds, estimate position and pose, and build maps. With growing demands on image-feature accuracy and on the real-time performance and accuracy of algorithms in many fields, research on vision-based SLAM (VSLAM) has become a focus both in China and abroad. This thesis proposes a VI-SLAM algorithm based on monocular vision and inertial pre-integration, and uses parallel processing to run the VI-SLAM algorithm in real time on a ported embedded device.

To address the problems of monocular vision, namely unreliable feature points, frame loss, and scale drift, a visual and IMU data-fusion method is proposed. The monocular front end tracks features with the optical flow method; the feature points are simple corner detections, and after matching, the eight-point method is used to estimate relative motion. The IMU side uses a pre-integration algorithm to integrate the angular velocity and acceleration output by the gyroscope and accelerometer, avoiding the accumulated error of repeated integration in the world coordinate frame. The back end uses nonlinear optimization to obtain the optimal pose estimate, and relocalization and loop-closure detection are added to the global map to optimize the mobile robot's pose estimates globally.

To meet the real-time requirements of embedded computing, GPU multi-threaded parallel processing is used to estimate the depth of image feature points in each frame. For feature points in keyframes, multiple threads sample different depths to obtain virtual planes at several depth values; after back-projection, cost blocks integrating all depths are obtained, and depth is estimated by optimizing a global energy function. Local depth images are fused using TSDF to produce a global dense map that can be used directly for trajectory planning. The NVIDIA CUDA parallel computing framework is used to assign computation-heavy tasks such as per-frame depth estimation to the GPU, and the VI-SLAM algorithm is ported to run on the embedded NVIDIA TX1.

To verify the performance of the embedded parallel-processing VI-SLAM algorithm, relevant experiments were designed. In the embedded-computing-capability experiment, ORB-SLAM runs smoothly on the NVIDIA TX1: extracting 500 ORB feature points from an image takes about 13 ms, well under the 33 ms budget that ORB-SLAM requires to meet real-time constraints. In the pose-estimation experiments, compared with the OKVIS and MSCKF algorithms, the root-mean-square error (RMSE) stays below 0.2 m on simple-scene datasets and within 0.2~0.4 m in complex scenes, generally outperforming the other two fusion algorithms. In the parallel-processing depth-recovery experiment, VI-SLAM produces abundant feature point clouds and can build a dense point-cloud map, a depth image, and a depth-optimized mesh map.
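The eight-point method mentioned above can be sketched as follows. This is a minimal illustrative version in Python/NumPy, not the thesis's implementation: it solves the linear epipolar constraint by SVD and enforces the rank-2 property, but omits the Hartley coordinate normalization and RANSAC outlier rejection that a practical front end would use.

```python
import numpy as np

def eight_point(p1, p2):
    """Estimate the fundamental matrix F (up to scale) from N >= 8
    matched points (Nx2 arrays), using the linear eight-point method.
    Each match contributes one row of the system a_i . f = 0, derived
    from the epipolar constraint x2^T F x1 = 0."""
    x1, y1 = p1[:, 0], p1[:, 1]
    x2, y2 = p2[:, 0], p2[:, 1]
    A = np.column_stack([x2 * x1, x2 * y1, x2,
                         y2 * x1, y2 * y1, y2,
                         x1, y1, np.ones(len(p1))])
    # The solution is the right singular vector of the smallest
    # singular value (least-squares null space of A).
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Project onto rank 2, as required of a fundamental matrix.
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0
    return U @ np.diag(S) @ Vt
```

With normalized image coordinates (identity intrinsics), the same routine estimates the essential matrix, from which relative rotation and translation can be recovered.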
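The IMU pre-integration idea, accumulating relative rotation, velocity, and position between keyframes directly from raw gyroscope and accelerometer samples, can be sketched as below. This is a simplified Euler-integration sketch for illustration only: bias estimation, noise propagation, and the gravity term (which is reintroduced when the pre-integrated quantities are composed with the keyframe states) are all omitted.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix such that skew(w) @ v == cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def expm_so3(phi):
    """Rodrigues formula: exponential map from so(3) to SO(3)."""
    theta = np.linalg.norm(phi)
    if theta < 1e-10:
        return np.eye(3) + skew(phi)
    K = skew(phi / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * K @ K

def preintegrate(gyro, accel, dt):
    """Accumulate the relative rotation dR, velocity dv, and position dp
    between two keyframes from IMU samples (gyro in rad/s, accel in
    m/s^2), expressed in the first keyframe's body frame."""
    dR = np.eye(3)
    dv = np.zeros(3)
    dp = np.zeros(3)
    for w, a in zip(gyro, accel):
        # Use the state at the start of the interval, then advance it.
        dp = dp + dv * dt + 0.5 * (dR @ a) * dt ** 2
        dv = dv + (dR @ a) * dt
        dR = dR @ expm_so3(w * dt)
    return dR, dv, dp
```

Because the increments are expressed relative to the first keyframe rather than the world frame, they need not be recomputed from scratch when the optimizer updates the keyframe states, which is the point the abstract makes about avoiding accumulated world-frame integration error.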
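The multi-depth cost-block construction can be illustrated in its simplest special case: a fronto-parallel sweep in which each depth hypothesis corresponds to a horizontal disparity, so warping reduces to a column shift. This toy sketch (not the thesis's CUDA implementation) builds a per-pixel cost volume over all hypotheses and picks the winner-take-all minimum; the thesis instead optimizes a global energy function over the volume and parallelizes the per-hypothesis work across GPU threads.

```python
import numpy as np

def sweep_cost_volume(ref, src, disparities):
    """Build an (H, W, D) cost volume: slice i holds the absolute
    photometric error of shifting `src` by disparities[i] columns,
    the fronto-parallel special case of a full plane sweep."""
    H, W = ref.shape
    cost = np.full((H, W, len(disparities)), np.inf)
    for i, d in enumerate(disparities):
        if d == 0:
            cost[:, :, i] = np.abs(ref - src)
        else:
            cost[:, d:, i] = np.abs(ref[:, d:] - src[:, :-d])
    return cost

def winner_take_all(cost, disparities):
    """Select, per pixel, the hypothesis with minimum cost."""
    return np.array(disparities)[np.argmin(cost, axis=2)]
```

In the general case each hypothesis is a virtual plane at a sampled depth, the warp is a homography through that plane, and the per-hypothesis loop is exactly the part assigned to parallel GPU threads.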
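The TSDF fusion step blends each local depth image into a global voxel grid with a truncated, weighted running average. The following is a minimal sketch of that update rule under simplifying assumptions: the per-voxel signed distances `sdf_obs` are taken as already computed for the current frame (the real pipeline derives them by projecting voxels into the depth image), and carving and space allocation are omitted.

```python
import numpy as np

def tsdf_update(tsdf, weights, sdf_obs, trunc=0.1, max_w=50.0):
    """Fuse one frame of signed-distance observations into the voxel
    grid. `sdf_obs` holds per-voxel signed distances for this frame,
    with NaN marking unobserved voxels; distances are truncated to
    [-trunc, trunc] and averaged with the accumulated weights."""
    valid = ~np.isnan(sdf_obs)
    d = np.clip(sdf_obs[valid], -trunc, trunc)
    w_old = weights[valid]
    tsdf[valid] = (tsdf[valid] * w_old + d) / (w_old + 1.0)
    weights[valid] = np.minimum(w_old + 1.0, max_w)
    return tsdf, weights
```

Capping the weight at `max_w` keeps the map responsive to new observations; the zero crossing of the fused field gives the surface used for the dense map and trajectory planning.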
Keywords/Search Tags: monocular vision, inertial measurement unit, pre-integration, tight fusion, embedded