
Research on Semantic Map Construction of Indoor Environments Based on Visual SLAM

Posted on: 2022-10-26
Degree: Master
Type: Thesis
Country: China
Candidate: H Qu
Full Text: PDF
GTID: 2518306545495224
Subject: Pattern Recognition and Intelligent Systems
Abstract/Summary:
In recent years, as mobile robots have come to play an important role in many fields, increasingly complex tasks have placed higher demands on robot intelligence. A core capability of an intelligent mobile robot is the exploration and perception of unknown environments. Visual SLAM can accurately estimate the robot's trajectory and position, construct an accurate map of the environment, and give the mobile robot a representation of the environment's geometric and topological information. However, because visual SLAM acquires no semantic information about the external environment, it limits the robot's semantic perception of its surroundings. Building on visual SLAM research, this project combines a visual SLAM algorithm with an image semantic segmentation model based on a deep convolutional neural network to construct a three-dimensional dense map containing semantic information, enhancing the mobile robot's perception of semantics in the environment and making it possible for the robot to complete more complex tasks. The main work of this thesis covers the following three aspects:

(1) According to the experimental environment of this thesis, an RGB-D camera is used as the sensor. The ElasticFusion and ORB-SLAM2 algorithms are each implemented to construct a 3D dense map based on the surfel (surface-element) model, so that the mobile robot can represent the geometric information of the external environment. Experimental results show that the surfel-based 3D dense map outperforms the point-cloud model in large-scale scenes.

(2) To meet the semantic perception needs of mobile robots, the DeepLab V3+ semantic segmentation model is adopted so that the robot can acquire semantic information about external scenes. Because the DeepLab V3+ model contains a large number of parameters and is difficult to run in real time, a lightweight
DeepLab V3+ semantic segmentation model is designed to balance segmentation accuracy and speed, ensuring that the segmentation network, when combined with the visual SLAM system, meets the real-time requirements of 3D semantic map construction. Experimental results on the PASCAL VOC 2012 dataset show that the improved DeepLab V3+ maintains segmentation accuracy while greatly improving segmentation speed compared with the original model.

(3) A semantic map construction method is designed on the basis of visual SLAM and the image semantic segmentation model. First, to address the low processing efficiency of joint bilateral filtering (JBF) on the input depth images, a threshold-based joint bilateral filtering algorithm is proposed to speed up depth-image processing. Then, based on a Bayesian update method, the ElasticFusion algorithm and the improved DeepLab V3+ model are fused: the semantic information obtained by segmenting the 2D RGB images with the lightweight DeepLab V3+ network is dynamically integrated into the 3D dense map, completing the construction of a semantically annotated 3D map. In the resulting 3D dense map, each object carries semantic information such as its object category. Experimental results on the NYU V2 dataset show that the proposed method achieves accurate and fast 3D dense semantic map construction.
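The threshold-based joint bilateral filter in step (3) can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the kernel radius, the sigma values, and the specific threshold test (skipping the full bilateral weighting when the guide-image patch is nearly uniform, and falling back to a cheap box average) are all assumptions made here for the sketch.

```python
import numpy as np

def joint_bilateral_filter_depth(depth, guide, radius=2,
                                 sigma_s=2.0, sigma_r=10.0,
                                 flat_thresh=8.0):
    """Smooth a depth map guided by the (grayscale) RGB image.

    Threshold idea (assumption): where the local guide-image variation
    is below `flat_thresh`, the region is treated as homogeneous and a
    box average replaces the full bilateral weighting, cutting cost.
    """
    h, w = depth.shape
    out = depth.astype(np.float64).copy()
    # precomputed spatial Gaussian kernel
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx**2 + yy**2) / (2.0 * sigma_s**2))
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            g = guide[y-radius:y+radius+1, x-radius:x+radius+1].astype(np.float64)
            d = depth[y-radius:y+radius+1, x-radius:x+radius+1].astype(np.float64)
            if g.max() - g.min() < flat_thresh:
                out[y, x] = d.mean()          # flat region: cheap box filter
                continue
            # range weights from the guide image, as in standard JBF
            range_w = np.exp(-((g - guide[y, x])**2) / (2.0 * sigma_r**2))
            wgt = spatial * range_w
            out[y, x] = (wgt * d).sum() / wgt.sum()
    return out
```

With a constant guide image every pixel takes the fast path, which is exactly the saving the threshold is meant to provide on texture-poor indoor surfaces such as walls and floors.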
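The Bayesian update used to fuse per-frame segmentation results into the map can likewise be sketched for a single surfel. The class count, the uniform prior, and treating the network's per-pixel softmax output as the frame likelihood are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

def bayes_update(prior, likelihood):
    """Recursive Bayesian label update for one surfel.

    prior:      current class-probability vector P(l | z_1..t-1)
    likelihood: segmentation softmax for the pixel this surfel
                projects to in frame t, treated as P(z_t | l)
    """
    post = prior * likelihood      # elementwise Bayes numerator
    return post / post.sum()       # renormalize to a distribution

# usage: fuse two noisy frame predictions for a 3-class surfel
p = np.full(3, 1.0 / 3.0)          # uniform prior before any observation
for frame_pred in ([0.7, 0.2, 0.1], [0.6, 0.3, 0.1]):
    p = bayes_update(p, np.array(frame_pred))
# repeated agreeing observations sharpen the dominant class (index 0)
```

Because the update is multiplicative and renormalized, the per-surfel distribution converges toward the class that the segmentation network predicts consistently across frames, which is what lets a single mislabeled frame be outvoted in the final map.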
Keywords/Search Tags:Visual SLAM, ElasticFusion, Deeplab V3+, Semantic mapping