In recent years, Simultaneous Localization and Mapping (SLAM) has played an important role in intelligent applications such as autonomous driving and aerial vehicles. It is used to solve the problems of self-localization and map construction in unknown environments. Visual SLAM and lidar SLAM have attracted increasing attention from researchers, but monocular visual SLAM still suffers from problems in localization and mapping, and lidar SLAM from problems in real-time localization. This thesis focuses on the following problems: 1) in visual SLAM, the feature-point method based on a monocular camera is prone to low localization accuracy and localization failure in monotonous environments; at the same time, the feature-point method can only construct a sparse point cloud map, which loses three-dimensional structural detail; 2) in lidar SLAM, how to exploit structural features for localization and achieve real-time performance with limited computing resources.

The main work is as follows. (1) We propose the RAFT-SLAM algorithm, which uses the deep-learning RAFT optical flow method to improve the feature-point matching strategy of the ORB-SLAM2 system. The RAFT optical flow method is integrated into the tracking thread to compute the optical flow of feature points in non-keyframes, so that the extracted feature points can be matched more accurately. (2) We propose a method for constructing a semi-dense point cloud map from a monocular camera based on the RAFT optical flow method. Keyframes are selected and suitable reference frames are chosen dynamically, and matching image feature blocks are extracted from the keyframes. The keyframes and reference frames are sent to the RAFT optical flow thread to compute the pixel displacement of the matching image feature blocks, and a depth filter produces the semi-dense point cloud corresponding to the matching image blocks; the global semi-dense point cloud map is obtained by stitching together the matched image feature blocks of all keyframes. (3) We propose a fast localization algorithm based on 2D lidar. The method extracts line-segment features from the point cloud, selects predicted line segments, computes the average angle difference and displacement between the predicted line segments and their matched counterparts, and iteratively updates the current pose. The method has good accuracy and low time complexity, and is suitable for embedded devices with limited computing power on robot platforms.

For the visual SLAM problem, this study verifies the feasibility of the proposed algorithms by repeatedly testing the improved camera pose estimation algorithm RAFT-SLAM and the semi-dense map construction algorithm on the official TUM dataset. Experimental results show that the proposed algorithm estimates the camera pose more accurately than ORB-SLAM2 and can effectively construct a semi-dense point cloud map. For the lidar SLAM problem, repeated tests of the lidar localization algorithm in indoor scenes show that the algorithm localizes stably in real time with high robustness, while greatly reducing computation time compared with mainstream methods.
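As a rough illustration of contribution (1), the sketch below shows how dense optical flow predicted by a network such as RAFT can replace descriptor matching when propagating keyframe feature points into a non-keyframe. The function name `match_by_flow`, the (u, v) keypoint layout, and the H×W×2 flow array are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def match_by_flow(keypoints, flow, next_shape):
    """Propagate keyframe keypoints into the next frame using a dense flow
    field (H x W x 2, as produced by an optical-flow network), and drop
    points whose predicted location falls outside the image.

    keypoints:  (N, 2) array of (u, v) pixel coordinates in the keyframe
    flow:       (H, W, 2) flow field mapping keyframe pixels to the next frame
    next_shape: (H, W) of the next frame
    """
    kp = np.asarray(keypoints, dtype=int)
    pred = kp + flow[kp[:, 1], kp[:, 0]]          # sample flow at (v, u)
    h, w = next_shape
    valid = (pred[:, 0] >= 0) & (pred[:, 0] < w) & \
            (pred[:, 1] >= 0) & (pred[:, 1] < h)
    return pred[valid], valid
```

The surviving points can then serve as 2D-2D correspondences in the pose-estimation step of the tracking thread.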
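For contribution (2), the following minimal sketch shows, under assumed conventions, the two numerical steps such a mapping pipeline relies on: triangulating a depth from a flow-matched pixel pair given the relative pose between a keyframe and a reference frame, and fusing that measurement into a per-pixel Gaussian depth filter. The intrinsics, pose, and pixel values in the example are toy numbers, not values from the thesis.

```python
import numpy as np

def pixel_to_bearing(p, K):
    """Back-project a pixel (u, v) to a bearing vector on the z = 1 plane."""
    return np.linalg.inv(K) @ np.array([p[0], p[1], 1.0])

def triangulate_depth(f_key, f_ref, R, t):
    """Least-squares depth along the keyframe bearing f_key, given the
    matched bearing f_ref in the reference frame and the relative pose
    (R, t) that maps keyframe coordinates into the reference frame."""
    A = np.stack([R @ f_key, -f_ref], axis=1)     # 3x2 system in two depths
    depths, *_ = np.linalg.lstsq(A, -t, rcond=None)
    return depths[0]                              # depth in the keyframe

def fuse_depth(mu, var, mu_obs, var_obs):
    """One Gaussian-fusion step of a simple depth filter: merge the current
    estimate (mu, var) with a new triangulated observation (mu_obs, var_obs)."""
    var_new = var * var_obs / (var + var_obs)
    mu_new = (var_obs * mu + var * mu_obs) / (var + var_obs)
    return mu_new, var_new

# Toy example: a keyframe pixel matched to a reference-frame pixel via the
# predicted optical flow (p_ref = p_key + flow at p_key).
K = np.array([[525.0, 0.0, 319.5],
              [0.0, 525.0, 239.5],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([-0.1, 0.0, 0.0])      # small lateral baseline
p_key, p_ref = (320.0, 240.0), (310.0, 240.0)
d = triangulate_depth(pixel_to_bearing(p_key, K), pixel_to_bearing(p_ref, K), R, t)
mu, var = fuse_depth(d, 1.0, d, 0.5)              # toy filter update
```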
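For contribution (3), this is a minimal sketch of the kind of per-iteration update the abstract describes for the 2D lidar method: average the angle difference and the displacement between predicted and matched line segments, apply the correction to the pose, and repeat. The endpoint-pair segment representation, the centroid-based re-alignment, and the fixed iteration count are simplifying assumptions for illustration only.

```python
import numpy as np

def segment_angle(seg):
    """Orientation of a segment given by its two endpoints, shape (2, 2)."""
    return np.arctan2(seg[1, 1] - seg[0, 1], seg[1, 0] - seg[0, 0])

def refine_pose(pred_segments, matched_segments, pose, iterations=5):
    """Iteratively refine a 2D pose (x, y, theta) from segment matches.

    pred_segments:    segments predicted at the current pose, shape (N, 2, 2)
    matched_segments: corresponding segments from the scan, shape (N, 2, 2)
    """
    pose = np.asarray(pose, dtype=float).copy()
    pred = np.asarray(pred_segments, dtype=float).copy()
    obs = np.asarray(matched_segments, dtype=float)

    for _ in range(iterations):
        # Angle difference per matched pair, wrapped into (-pi, pi].
        d_ang = np.array([
            np.arctan2(np.sin(segment_angle(o) - segment_angle(p)),
                       np.cos(segment_angle(o) - segment_angle(p)))
            for p, o in zip(pred, obs)
        ])
        # Displacement between segment midpoints.
        d_xy = obs.mean(axis=1) - pred.mean(axis=1)

        # Average correction over all matched segments.
        d_theta = d_ang.mean()
        dx, dy = d_xy.mean(axis=0)
        pose += np.array([dx, dy, d_theta])

        # Re-align the predicted segments: rotate about their centroid and
        # translate, so the next iteration works from the updated estimate.
        c, s = np.cos(d_theta), np.sin(d_theta)
        R = np.array([[c, -s], [s, c]])
        centroid = pred.reshape(-1, 2).mean(axis=0)
        pred = (pred - centroid) @ R.T + centroid + np.array([dx, dy])

    return pose
```

Because each iteration touches only a handful of matched segments, the per-update cost stays low, which is consistent with the goal of running on embedded devices with limited computing power.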