
Research On Semantic Mapping Based On Visual SLAM

Posted on: 2019-09-02
Degree: Master
Type: Thesis
Country: China
Candidate: S Y Chang
Full Text: PDF
GTID: 2428330566996880
Subject: Control engineering
Abstract/Summary:
The autonomous localization and semantic perception of mobile robots in unknown, complex environments are current research frontiers in robotics and computer vision, and both underpin advanced tasks such as autonomous exploration, behavioral decision making, and human-robot interaction. The main topic of this thesis is therefore semantic mapping based on visual SLAM: using an RGB-D camera, the mobile robot estimates its own motion and pose while perceiving semantic information about the environment in order to build a semantic map. The thesis designs a new way of constructing a semantic map in which the robot, while estimating its own motion and position, perceives the environment in an object-oriented way and builds the semantic map accordingly. The research methods of the semantic mapping system designed in this thesis are as follows.

First, a visual SLAM algorithm with an RGB-D camera as the sensor is designed to estimate and optimize the robot's motion and pose. The algorithm extracts and matches features between two adjacent image frames, estimates the camera pose from the geometric relationship of the matched feature-point pairs, and refines both poses and feature points with bundle adjustment. Throughout this process, the algorithm detects closed loops in the camera's trajectory from the degree of similarity between images, and uses these loop constraints to optimize the global pose and eliminate accumulated error.

Second, a convolutional-neural-network object detection algorithm is designed to perceive the semantic information of objects in the environment. The detection network is based on the YOLOv3 model under the Darknet framework. The parameters of the YOLOv3 model are trained on data samples from the laboratory combined with open data sets, and its detection performance is tested.

Third, a 3D semantic mapping algorithm is designed. RGB-D segmentation of the detected objects is performed, object models are established through data association and a model-updating algorithm, and the environmental geometric information and object semantic information are mapped into 3D space using the camera pose. A colored octree map structure is used to build and store the 3D semantic map.

Finally, the overall semantic mapping system is implemented on Ubuntu, and its overall performance is tested and verified after each major module is tested individually. Running in a laboratory environment, the system builds a readable and accurate 3D semantic map while simultaneously performing localization, pose estimation, and semantic perception, which verifies the feasibility and accuracy of the semantic mapping algorithm.
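The loop-closure step above decides whether the camera has revisited a place by scoring the similarity between the current image and earlier keyframes. The abstract does not name a descriptor; the sketch below assumes a coarse grayscale intensity histogram and cosine similarity purely for illustration (a real system would typically use a bag-of-visual-words over local features such as ORB):

```python
import math

def histogram(image, bins=16):
    """Coarse grayscale intensity histogram as a crude frame descriptor.
    `image` is a flat list of pixel intensities in [0, 255]."""
    h = [0.0] * bins
    for px in image:
        h[min(px * bins // 256, bins - 1)] += 1.0
    total = sum(h) or 1.0
    return [v / total for v in h]

def cosine(a, b):
    """Cosine similarity between two descriptors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def detect_loop(keyframe_descs, current_desc, threshold=0.95, skip_recent=2):
    """Return the index of the most similar old keyframe, or None.
    The most recent keyframes are skipped so that adjacent frames,
    which are always similar, do not trigger false loop closures."""
    candidates = keyframe_descs[:-skip_recent] if skip_recent else keyframe_descs
    best_i, best_s = None, threshold
    for i, d in enumerate(candidates):
        s = cosine(d, current_desc)
        if s >= best_s:
            best_i, best_s = i, s
    return best_i
```

Once `detect_loop` returns a match, the relative pose between the two frames becomes an extra constraint in the global pose optimization, which is what eliminates the accumulated drift.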
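The YOLOv3 training itself cannot be reproduced without the thesis's data and weights, but the post-processing that turns raw network outputs into a final list of detections can be sketched. The following is a minimal confidence-filtering plus per-class non-maximum-suppression pass of the kind YOLO-style detectors apply; thresholds are illustrative placeholders, not values from the thesis:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(detections, conf_thresh=0.5, iou_thresh=0.45):
    """detections: list of (box, confidence, label) tuples.
    Drop low-confidence boxes, then keep the highest-confidence box of each
    cluster and suppress same-class boxes that overlap it too much."""
    dets = [d for d in detections if d[1] >= conf_thresh]
    dets.sort(key=lambda d: d[1], reverse=True)
    kept = []
    for box, conf, label in dets:
        if all(k[2] != label or iou(box, k[0]) < iou_thresh for k in kept):
            kept.append((box, conf, label))
    return kept
```

The surviving boxes, with their class labels, are the object-level semantic observations that the mapping stage consumes.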
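Mapping geometric information into 3D space, as described in the third step, rests on back-projecting depth pixels through the pinhole camera model and transforming them with the estimated camera pose. A minimal sketch follows; the intrinsic values used in the comments are typical placeholder numbers for a consumer RGB-D camera, not calibration results from the thesis:

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection: pixel (u, v) with depth in meters ->
    3D point in the camera frame. (fx, fy) are focal lengths in pixels,
    (cx, cy) is the principal point."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def transform(point, rotation, translation):
    """Apply a camera pose (3x3 rotation as nested lists, 3-vector
    translation) to move a camera-frame point into the world frame."""
    x, y, z = point
    return tuple(
        rotation[i][0] * x + rotation[i][1] * y + rotation[i][2] * z + translation[i]
        for i in range(3)
    )
```

Every segmented object pixel, once back-projected and transformed, becomes a colored world-frame point ready for insertion into the map.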
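The data-association and model-updating step is described only at a high level. One simple way to realize it, shown here as an assumption rather than the thesis's actual algorithm, is to match each detection to an existing object model of the same class by 3D centroid distance, and to refine the matched model's centroid with a running average:

```python
class ObjectModel:
    """A persistent object hypothesis: class label plus averaged 3D centroid."""

    def __init__(self, label, centroid):
        self.label = label
        self.centroid = centroid
        self.count = 1

    def update(self, centroid):
        """Running-average update of the object's 3D centroid."""
        self.count += 1
        self.centroid = tuple(
            c + (n - c) / self.count for c, n in zip(self.centroid, centroid)
        )

def associate(models, label, centroid, max_dist=0.5):
    """Match a new detection to an existing model of the same class within
    max_dist meters; otherwise create a new model. Returns the model used."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    candidates = [
        m for m in models
        if m.label == label and dist(m.centroid, centroid) <= max_dist
    ]
    if candidates:
        best = min(candidates, key=lambda m: dist(m.centroid, centroid))
        best.update(centroid)
        return best
    model = ObjectModel(label, centroid)
    models.append(model)
    return model
```

Repeated observations of the same physical object thus reinforce one model instead of spawning duplicates, which is what keeps the semantic map object-oriented.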
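The thesis stores the final map in a colored octree structure (in the style of OctoMap). As a simplified stand-in, the sketch below keeps a flat dictionary of voxels keyed by quantized coordinates, with a per-voxel hit count and running-average color; a real octree adds the hierarchical subdivision that makes large maps memory-efficient, which is omitted here:

```python
import math

class ColorVoxelMap:
    """Flat voxel grid keyed by quantized world coordinates. Each voxel
    stores [hit_count, r, g, b] with a running-average color."""

    def __init__(self, resolution=0.05):
        self.resolution = resolution  # voxel edge length in meters
        self.voxels = {}

    def key(self, point):
        """Quantize a 3D point to its voxel index."""
        return tuple(math.floor(c / self.resolution) for c in point)

    def insert(self, point, color):
        """Register one colored point; the voxel's color converges to
        the mean of all colors inserted into it."""
        k = self.key(point)
        if k not in self.voxels:
            self.voxels[k] = [0, 0.0, 0.0, 0.0]
        v = self.voxels[k]
        v[0] += 1
        for i in range(3):
            v[i + 1] += (color[i] - v[i + 1]) / v[0]

    def occupied(self):
        """Number of occupied voxels."""
        return len(self.voxels)
```

Inserting every back-projected, pose-transformed point this way yields the 3D semantic map: geometry from the voxel occupancy, semantics from the colors or labels attached to each voxel.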
Keywords/Search Tags: visual SLAM, object detection, RGB-D segmentation, semantic map