3D reconstruction technology uses digital means to convert physical entities in the real world, such as objects and scenes, into computer data, allowing these objects and scenes to be studied and analyzed in greater depth with a computer. Among the 3D reconstruction algorithms applied to robots, simultaneous localization and mapping (SLAM) has been widely used in recent years. However, traditional visual SLAM methods are built on the assumption of a static scene and typically consider only the geometric characteristics of the scene, making them difficult to adapt to complex dynamic scenes. This thesis focuses on visual SLAM technology in 3D reconstruction and addresses the shortcomings of traditional SLAM algorithms in dynamic scenes by introducing semantic information from image segmentation networks into the SLAM system. The main research work and contributions of this thesis are as follows.

1. To address the poor localization and mapping accuracy and the tracking loss of traditional SLAM algorithms in dynamic scenes, this thesis improves the classical visual SLAM algorithm ORB-SLAM2 by introducing Mask R-CNN, a deep learning semantic segmentation network, as a semantic information extraction module. The semantic segmentation results are used to distinguish dynamic from static targets in the image, and the motion status of dynamic targets is determined using epipolar geometry constraints (a sketch of this check is given after the abstract). Feature points on moving objects are removed and excluded from subsequent computations to improve system stability and localization accuracy. The improved algorithm is tested on dynamic scene image sequences from the TUM dataset. The results demonstrate that it significantly outperforms the original ORB-SLAM2 algorithm in highly dynamic scenes, although the loss of feature points in low dynamic scenes leads to a certain degree of performance degradation.

2. To address the loss of feature points caused by removing dynamic objects from the scene, this thesis introduces Line Segment Detector (LSD) features into the SLAM system to compensate for the lost point features and improve localization and mapping accuracy. First, the LSD line feature extraction algorithm is improved to better suit the SLAM pipeline. A line feature-based reprojection error (illustrated in the second sketch below) is used to estimate the camera pose and in subsequent optimization, and line features are used for system initialization when feature points are insufficient. The improved algorithm is tested on dynamic scene image sequences from the TUM dataset. The results show that it maintains good localization and mapping accuracy in highly dynamic scenes and achieves a certain degree of performance improvement over the point-only algorithm in low dynamic scenes.

3. A mobile unmanned vehicle platform is designed and built in this thesis. The proposed algorithm is deployed on the mobile platform and tested in a variety of real-world scenarios. The results demonstrate that the system delivers good localization performance and engineering practicality in both typical and challenging indoor scenes.
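
The epipolar geometry check mentioned in contribution 1 can be illustrated with the minimal sketch below. It is not taken from the thesis: the fundamental matrix F is assumed to be given, and the pixel threshold and per-object averaging are illustrative assumptions. The idea is that a static point should lie close to the epipolar line induced by its match in the previous frame, so points on a segmented object that deviate from their epipolar lines suggest the object is moving.

```python
import numpy as np

def epipolar_distance(p1, p2, F):
    """Distance (in pixels) from the matched point p2 in the current frame to the
    epipolar line induced by p1 in the previous frame under fundamental matrix F."""
    x1 = np.array([p1[0], p1[1], 1.0])
    x2 = np.array([p2[0], p2[1], 1.0])
    line = F @ x1                          # epipolar line l' = F * x1 in the current frame
    a, b, _ = line
    return abs(x2 @ line) / np.sqrt(a * a + b * b)

def object_is_dynamic(matches, F, threshold=1.0):
    """Illustrative rule (threshold is an assumed value, not from the thesis):
    flag a segmented object as moving if its matched feature points deviate
    from their epipolar lines by more than `threshold` pixels on average."""
    distances = [epipolar_distance(p1, p2, F) for p1, p2 in matches]
    return np.mean(distances) > threshold
```

In such a scheme, feature points belonging to objects flagged as dynamic would simply be discarded before pose estimation, which matches the removal step described above.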
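
The line feature-based reprojection error in contribution 2 can be sketched as follows, assuming the common point-to-line formulation: the 3D endpoints of a map line are projected into the image, and the error is their distance to the observed LSD segment. The function names, the undistorted pinhole projection, and the use of raw endpoints are assumptions for illustration, not the thesis's exact formulation.

```python
import numpy as np

def line_reprojection_error(P_start, P_end, obs_start, obs_end, K, R, t):
    """Point-to-line reprojection error for one line feature.
    P_start, P_end : 3D endpoints of the map line in the world frame.
    obs_start, obs_end : observed 2D endpoints of the matched LSD segment (pixels).
    K : 3x3 camera intrinsics; R, t : world-to-camera rotation and translation."""
    # Normalized 2D line through the observed segment: l = s x e, scaled so that
    # the dot product with a homogeneous pixel gives its signed distance in pixels.
    s = np.array([obs_start[0], obs_start[1], 1.0])
    e = np.array([obs_end[0], obs_end[1], 1.0])
    l = np.cross(s, e)
    l /= np.linalg.norm(l[:2])

    def project(P):
        p = K @ (R @ np.asarray(P) + t)    # pinhole projection, no distortion
        return p / p[2]

    # Residual: distances of both projected 3D endpoints to the observed 2D line.
    return np.array([l @ project(P_start), l @ project(P_end)])
```

A pose optimizer can then minimize these residuals jointly with the point reprojection errors, which is how line features can compensate when too few point features survive the dynamic-object removal.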