With the rapid development of machine vision, Simultaneous Localization and Mapping (SLAM) technology based on vision sensors is gradually being applied to autonomous driving, service robots, and industrial robots. However, traditional visual SLAM systems are designed for static environments and are susceptible to interference from dynamic objects, which introduces significant errors into the system's initial pose estimate and reduces the overall robustness of localization and mapping. Building on the ORB-SLAM2 algorithm and combining it with the YOLOv5 algorithm, this study investigates an object-detection-based visual SLAM system for indoor dynamic scenes, addressing the poor robustness of traditional visual SLAM systems in such environments. The main research content and contributions are as follows.

To improve the localization accuracy of ORB-SLAM2 in indoor scenes, the number of feature points extracted by the ORB-SLAM2 visual odometry front end was stabilized, and the problem of overly uniform feature point distribution was addressed. On the one hand, a FAST corner extraction algorithm with a variable threshold is proposed to stabilize the number of extracted feature points under changing ambient illumination; on the other hand, an improved quadtree algorithm is proposed to speed up feature point extraction, retain more image information, and avoid feature point dispersion. Comparative experiments between the improved ORB-SLAM2 algorithm and the original algorithm were performed on five indoor static sequences from the TUM dataset. The results show that the improved algorithm outperforms the original in localization accuracy.

MobileNetV3 was chosen to make YOLOv5 lightweight so that the object detection algorithm can be deployed on mobile devices, improving portability while preserving accuracy. Furthermore, the lightweight algorithm was trained
using a self-built indoor dataset together with part of the COCO dataset, improving its accuracy in recognizing indoor objects.

The YORB-SLAM algorithm is proposed to improve the robustness of ORB-SLAM2 in indoor dynamic scenes by combining the improved ORB-SLAM2 with the lightweight, self-trained YOLOv5 model. A dedicated thread is added for removing dynamic feature points, rules are developed for removing feature points on dynamic objects, and an IPC socket inter-process communication mechanism is used to achieve real-time communication between the visual SLAM system and the object detection algorithm. Comparison experiments among YORB-SLAM, ORB-SLAM2, DS-SLAM, and DynaSLAM were performed on five indoor dynamic sequences of the TUM dataset. The results show that YORB-SLAM outperforms ORB-SLAM2 in localization accuracy, and that YORB-SLAM has a real-time advantage over DS-SLAM and DynaSLAM while the difference in accuracy is not significant.

Finally, the effectiveness of YORB-SLAM in real indoor scenes was tested using a fixed camera and a mobile robot platform equipped with a camera. The results demonstrate the feasibility of YORB-SLAM in real-world scenarios, alleviating the poor robustness of visual SLAM localization and mapping in indoor dynamic environments and providing a technical reference for future research on visual SLAM systems in indoor dynamic environments.
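The variable-threshold FAST extraction described above keys the corner threshold to the frame's own illumination statistics, so dim or flat frames still yield enough corners. A minimal sketch of one plausible scheme is given below; the scale factor `k` and the clamp range are illustrative assumptions, not the thesis's actual parameters.

```python
import numpy as np

def adaptive_fast_threshold(gray, k=0.5, t_min=7, t_max=30):
    """Pick a FAST corner threshold from the image's own contrast.

    High-contrast frames tolerate a larger threshold; dim or flat
    frames need a smaller one so enough corners survive.
    Note: k, t_min, t_max are assumed values for illustration.
    """
    contrast = float(np.std(gray))      # global contrast proxy
    t = int(round(k * contrast))
    return max(t_min, min(t_max, t))    # clamp to a sane range
```

A flat frame falls back to the floor threshold, keeping weak corners; a noisy high-contrast frame is clamped to the ceiling, suppressing spurious ones.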
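ORB-SLAM2 spreads keypoints across the image with a quadtree, and the improved quadtree mentioned above refines that policy. The pure-Python sketch below shows only the baseline idea — repeatedly split the busiest cell and keep the strongest keypoint per cell — under the assumption of distinct keypoint positions; the function name and tie-breaking are illustrative, not the thesis's implementation.

```python
def quadtree_distribute(kps, bounds, target):
    """Spread keypoints over the image with a quadtree.

    kps:    non-empty list of (x, y, response), distinct positions assumed
    bounds: (x0, y0, x1, y1) image region
    target: desired number of cells / retained keypoints
    """
    cells = [(bounds, kps)]
    while len(cells) < target:
        # Split the cell currently holding the most keypoints.
        cells.sort(key=lambda c: len(c[1]), reverse=True)
        (x0, y0, x1, y1), pts = cells.pop(0)
        if len(pts) <= 1:                 # nothing left to split
            cells.append(((x0, y0, x1, y1), pts))
            break
        cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        for a, b, c, d in ((x0, y0, cx, cy), (cx, y0, x1, cy),
                           (x0, cy, cx, y1), (cx, cy, x1, y1)):
            inside = [p for p in pts if a <= p[0] < c and b <= p[1] < d]
            if inside:                    # drop empty quadrants
                cells.append(((a, b, c, d), inside))
    # Keep the single strongest keypoint of each occupied cell.
    return [max(pts, key=lambda p: p[2]) for _, pts in cells]
```

Keeping one keypoint per cell is what enforces spatial uniformity; the improvement claimed in the abstract adjusts this stage to retain more image information without scattering the points.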
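The rules for removing dynamic feature points are not spelled out in the abstract. One natural rule — a guess at the general idea, not the thesis's actual criterion — is to discard every feature point that falls inside a detector bounding box whose class is considered movable:

```python
# Classes treated as dynamic -- an assumed set, not the thesis's list.
DYNAMIC_CLASSES = {"person", "cat", "dog"}

def remove_dynamic_points(keypoints, detections):
    """Drop keypoints that land inside a dynamic object's box.

    keypoints:  list of (x, y) feature positions
    detections: list of (label, x0, y0, x1, y1) from the detector
    """
    boxes = [d[1:] for d in detections if d[0] in DYNAMIC_CLASSES]

    def is_static(x, y):
        return not any(x0 <= x <= x1 and y0 <= y <= y1
                       for x0, y0, x1, y1 in boxes)

    return [(x, y) for x, y in keypoints if is_static(x, y)]
```

Boxes of static classes (furniture, monitors) are deliberately ignored, so their feature points still contribute to pose estimation.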
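The IPC socket mechanism mentioned above passes each frame's detections from the detector process to the SLAM process. A minimal sketch, assuming newline-delimited JSON as the wire format (the actual protocol used in the thesis is not specified):

```python
import json
import socket

def send_detections(sock, detections):
    """Ship one frame's boxes to the SLAM process over a local socket.

    Newline-delimited JSON keeps message framing trivial; this wire
    format is an illustrative assumption, not the thesis's protocol.
    """
    sock.sendall((json.dumps(detections) + "\n").encode("utf-8"))

def recv_detections(reader):
    """Read back one frame's boxes from a file-like socket wrapper."""
    return json.loads(reader.readline())
```

For example, a connected pair created with `socket.socketpair()` lets the two ends exchange `[["person", 0, 0, 50, 50]]` and recover the same list on the other side.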