Visual Simultaneous Localization and Mapping (VSLAM) estimates a sensor's pose and builds a map of the environment in real time using images as input. With the rapid development of mobile robots, VR/AR, and autonomous driving, this technology has become a research hotspot. At present, indoor service robots, sweeping robots, and AGVs mostly rely on single-line LiDAR for autonomous obstacle avoidance and navigation, with localization and mapping performed by single-line laser SLAM. Although this approach is mature, it can only construct a two-dimensional occupancy grid map of the laser scanning plane and cannot capture richer obstacle information, which requires the robot to have a low, flat structure or the environment to be sufficiently simple. With the development and widespread adoption of RGB-D cameras, their ability to provide image and depth information directly has drawn increasing attention. VSLAM based on RGB-D cameras can produce dense point-cloud maps, accomplish simultaneous localization and mapping effectively in indoor environments, and help robots move more precisely. This paper studies RGB-D SLAM in indoor environments: it uses the texture and depth information of images to implement visual odometry (VO), exploits the structural and textural features of indoor scenes to obtain more accurate poses, and builds a complete visual SLAM system that solves the SLAM problem in real time and effectively. The main research content is divided into three parts:

(1) A visual odometry method based on point-line fusion is proposed. The LSD algorithm is used to extract line features, the LBD method computes line descriptors, and line features are incorporated into pose estimation. A keyframe selection strategy that integrates LK optical-flow tracking is proposed, adding the optical-flow tracking quality of point features to the keyframe decision. For ordinary frames between keyframes, the LK optical-flow method is used for feature tracking and pose estimation, which reduces the computation spent on point and line feature extraction and descriptor calculation and improves the efficiency of pose estimation (see the line-extraction and LK-tracking sketches below).

(2) A nonlinear optimization method for point-line fusion is proposed. Line features are added to the nonlinear optimization, and a graph-optimization model combining point and line features is constructed. Based on the 3D-2D matching relationship, the distance from the line-segment endpoints to the projected line is used to construct the line-feature reprojection error, and its Jacobian matrix is derived; the nonlinear optimization of fused point-line features is then carried out on the graph-optimization model with this Jacobian. The method is applied to pose estimation between keyframes, local BA, and global BA, providing more constraints for the visual SLAM system and improving the accuracy of pose estimation (see the reprojection-error sketch below).

(3) A complete visual SLAM scheme comprising visual odometry, nonlinear optimization, loop closure detection, and mapping is designed and implemented. The proposed point-line visual SLAM scheme based on RGB-D cameras is tested on the TUM RGB-D dataset, and comparison with ORB-SLAM2 verifies that the proposed method effectively improves the pose-estimation accuracy, robustness, and runtime efficiency of visual SLAM (see the ATE evaluation sketch below).
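
To make the line-feature front end of part (1) concrete, the following is a minimal sketch assuming OpenCV's built-in LSD implementation (`cv2.createLineSegmentDetector`, available again from OpenCV 4.5.1 onward). The file name, length threshold, and the comment about LBD matching are illustrative assumptions, not the thesis's actual code.

```python
# Sketch of the line-feature front end described in part (1), assuming OpenCV >= 4.5.1
# (where the LSD implementation is available). The LBD descriptor step is only
# indicated in a comment; the thesis's own implementation is not shown here.
import cv2
import numpy as np

def extract_line_segments(gray, min_length=30.0):
    """Detect line segments with LSD and discard very short ones."""
    lsd = cv2.createLineSegmentDetector(cv2.LSD_REFINE_STD)
    lines, _, _, _ = lsd.detect(gray)           # lines: (N, 1, 4) as x1, y1, x2, y2
    if lines is None:
        return np.empty((0, 4), dtype=np.float32)
    lines = lines.reshape(-1, 4)
    lengths = np.hypot(lines[:, 2] - lines[:, 0], lines[:, 3] - lines[:, 1])
    return lines[lengths >= min_length]

gray = cv2.imread("rgb_frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file name
segments = extract_line_segments(gray)
# Next step (not shown): compute LBD descriptors for these segments, e.g. with the
# line_descriptor module from opencv-contrib, and match them across frames.
```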
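
The keyframe strategy of part (1) can be sketched as follows: point features from the last keyframe are tracked into the current frame with pyramidal LK optical flow, and the fraction of surviving tracks feeds the keyframe decision. The 0.6 ratio, feature counts, and file names are placeholders, not the tuned thresholds from the thesis.

```python
# Sketch of the LK-tracking-based keyframe test from part (1). Thresholds and
# file names below are illustrative placeholders.
import cv2
import numpy as np

def track_and_judge(prev_gray, curr_gray, prev_pts, min_track_ratio=0.6):
    """Track points with pyramidal LK; report whether a new keyframe is needed."""
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3)
    good = status.reshape(-1).astype(bool)
    track_ratio = good.sum() / max(len(prev_pts), 1)
    need_keyframe = track_ratio < min_track_ratio   # too few survivors -> new keyframe
    return curr_pts[good], prev_pts[good], need_keyframe

# Usage: detect corners on the last keyframe once, then track them frame to frame.
prev_gray = cv2.imread("keyframe_gray.png", cv2.IMREAD_GRAYSCALE)   # hypothetical
curr_gray = cv2.imread("frame_gray.png", cv2.IMREAD_GRAYSCALE)      # hypothetical
prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                   qualityLevel=0.01, minDistance=10)
curr, prev, need_kf = track_and_judge(prev_gray, curr_gray, prev_pts)
```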
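
For part (2), the line-feature reprojection error can be sketched numerically as the signed distances of the observed segment's two endpoints to the projected 3D line, matching the endpoint-to-projected-line formulation stated above; the Jacobian derivation is not reproduced here. The intrinsics, pose, and geometry below are illustrative values only.

```python
# Sketch of the endpoint-to-projected-line reprojection error used in part (2).
# K, R, t, the 3D endpoints, and the observed 2D segment are all illustrative.
import numpy as np

def line_reprojection_error(K, R, t, Pw_start, Pw_end, obs_start, obs_end):
    """Signed distances of the observed segment endpoints to the projected 3D line."""
    def project(Pw):
        Pc = R @ Pw + t                     # world -> camera frame
        uv = K @ Pc
        return uv / uv[2]                   # homogeneous pixel coordinates (u, v, 1)
    ps, pe = project(Pw_start), project(Pw_end)
    l = np.cross(ps, pe)                    # projected line: l = ps x pe
    l = l / np.linalg.norm(l[:2])           # normalize so a^2 + b^2 = 1 (true distances)
    xs = np.array([obs_start[0], obs_start[1], 1.0])
    xe = np.array([obs_end[0], obs_end[1], 1.0])
    return np.array([l @ xs, l @ xe])       # 2-vector of point-to-line distances

K = np.array([[525.0, 0, 319.5], [0, 525.0, 239.5], [0, 0, 1.0]])   # TUM-like intrinsics
R, t = np.eye(3), np.zeros(3)
err = line_reprojection_error(K, R, t,
                              np.array([0.5, 0.1, 2.0]), np.array([0.5, -0.3, 2.2]),
                              (451.0, 265.0), (438.0, 170.0))
```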
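
For the evaluation in part (3), accuracy on the TUM RGB-D benchmark is commonly reported as the RMSE of the absolute trajectory error (ATE) after rigidly aligning the estimated trajectory to ground truth (Horn/Umeyama alignment); TUM also provides an official evaluation script. The sketch below assumes timestamp-associated Nx3 position arrays and uses random placeholder data.

```python
# Sketch of the ATE RMSE metric as commonly reported on the TUM RGB-D benchmark
# for part (3). est and gt are placeholder Nx3 position arrays assumed to be
# already associated by timestamp.
import numpy as np

def ate_rmse(est, gt):
    """Rigidly align est to gt (Umeyama/Horn, no scale) and return the RMSE."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    E, G = est - mu_e, gt - mu_g
    U, _, Vt = np.linalg.svd(E.T @ G)       # cross-covariance of centered trajectories
    S = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:           # reflection correction
        S[2, 2] = -1
    R = Vt.T @ S @ U.T                      # rotation aligning est onto gt
    t = mu_g - R @ mu_e
    aligned = (R @ est.T).T + t
    return np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1)))

est = np.random.rand(100, 3)                # placeholder estimated positions
gt = est + 0.01 * np.random.randn(100, 3)   # placeholder ground-truth positions
print("ATE RMSE [m]:", ate_rmse(est, gt))
```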