
Research on Visual SLAM Technology Based on Deep Learning in Dynamic Scenes

Posted on: 2024-04-20
Degree: Master
Type: Thesis
Country: China
Candidate: Y T Yin
Full Text: PDF
GTID: 2568306914461814
Subject: Communication Engineering (including broadband network, mobile communication, etc.) (Professional Degree)
Abstract/Summary:
In recent years, intelligent devices have spread into nearly every industry, and intelligent robots are increasingly deployed to carry out complex tasks. As the key technology that lets a robot localize itself and move autonomously in an unknown environment, Simultaneous Localization and Mapping (SLAM) has developed rapidly. Among its variants, visual SLAM, which uses a camera as the primary sensor, has become a research hotspot in both industry and academia thanks to its low cost, easy installation, and wide applicability. However, traditional visual SLAM rests on a static-environment assumption, so in practice the system is easily disturbed by moving objects: pose estimation drifts significantly, and the constructed environment map is contaminated with ghosting artifacts, which greatly degrades the performance and robustness of the system.

To remedy these defects of traditional visual SLAM in dynamic scenes, this thesis combines deep learning with visual SLAM, using semantic information to help the system better understand its surroundings. The following research is carried out:

1. A dynamic point detection module that combines deep learning with geometric methods is proposed. Three geometric detection schemes, namely the epipolar constraint, dense optical flow, and multi-view geometry, compensate for the over-reliance on object prior information in the instance-segmentation-based elimination algorithm. The module is integrated into the ORB-SLAM2 framework to build a visual SLAM system suited to dynamic scenes. Compared with the original ORB-SLAM2, the proposed module improves accuracy by up to 90% on highly dynamic scene sequences, and it achieves higher localization accuracy than other open-source dynamic visual SLAM frameworks such as DS-SLAM and DynaSLAM.

2. To mitigate the loss of runtime efficiency caused by introducing an instance segmentation network, this thesis adopts a Mask R-CNN pre-trained model with a lightweight backbone network. The efficiency gain, however, inevitably costs accuracy: the network's output masks become unstable, exhibiting holes, missing regions, and similar defects. To address this, a dynamic mask inpainting algorithm based on the optical flow method is proposed. The segmentation mask is propagated through the optical flow field between image frames, refined by morphological filtering, and finally fused across multiple frames to form the instance segmentation result for the current frame. This in turn improves the downstream mask-based dynamic point elimination and, overall, the localization accuracy of the system.

3. The semantic information produced by the instance segmentation network is fused into the mapping module, yielding a static map construction scheme based on dynamic object removal. During point cloud mapping, the prior information provided by the dynamic object masks is used to filter out point clouds belonging to moving objects, eliminating the map ghosting that traditional visual SLAM suffers in dynamic scenes and enabling the construction of a globally consistent static point cloud map. Finally, the dense point cloud is used to generate, online, a static octree map and a two-dimensional grid map that supports navigation and obstacle avoidance.
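The epipolar-constraint check in item 1 can be sketched as follows: a matched feature on a static point should lie on (or very near) the epipolar line induced by the fundamental matrix, so a large point-to-line distance flags the match as dynamic. This is a minimal NumPy illustration, not the thesis implementation; the function names and the 1-pixel threshold are assumptions.

```python
import numpy as np

def epipolar_distance(F, pts1, pts2):
    """Distance (in pixels) of each point in pts2 to the epipolar line
    induced by its match in pts1. F is the 3x3 fundamental matrix;
    pts1 and pts2 are (N, 2) arrays of matched pixel coordinates."""
    ones = np.ones((pts1.shape[0], 1))
    x1 = np.hstack([pts1, ones])            # homogeneous points in frame 1
    x2 = np.hstack([pts2, ones])            # homogeneous points in frame 2
    lines = (F @ x1.T).T                    # epipolar lines l = F x1, (N, 3)
    num = np.abs(np.sum(lines * x2, axis=1))
    den = np.sqrt(lines[:, 0] ** 2 + lines[:, 1] ** 2)
    return num / den

def flag_dynamic(F, pts1, pts2, thresh_px=1.0):
    """Boolean array: True where a match violates the epipolar constraint,
    i.e. the point likely belongs to a moving object."""
    return epipolar_distance(F, pts1, pts2) > thresh_px
```

In a full system the fundamental matrix would itself be estimated robustly (e.g. with RANSAC) from the tracked features, and the threshold tuned to the image resolution.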
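The mask inpainting step in item 2, propagating a segmentation mask along the optical flow field and then cleaning it with morphological filtering, can be illustrated with a NumPy sketch. The names are hypothetical and the flow field is taken as given; a real system would obtain it from a dense optical flow estimator and use proper boundary handling.

```python
import numpy as np

def propagate_mask(mask, flow):
    """Forward-warp a binary mask into the next frame along a dense
    flow field. mask: (H, W) bool; flow: (H, W, 2) with per-pixel (dx, dy)."""
    h, w = mask.shape
    out = np.zeros_like(mask)
    ys, xs = np.nonzero(mask)
    nx = np.clip(np.round(xs + flow[ys, xs, 0]).astype(int), 0, w - 1)
    ny = np.clip(np.round(ys + flow[ys, xs, 1]).astype(int), 0, h - 1)
    out[ny, nx] = True
    return out

def close_mask(mask, iterations=1):
    """Morphological closing (dilate, then erode) with a cross-shaped
    neighbourhood, filling small holes left by forward-warping."""
    m = mask.copy()
    for _ in range(iterations):   # dilation
        m = m | np.roll(m, 1, 0) | np.roll(m, -1, 0) \
              | np.roll(m, 1, 1) | np.roll(m, -1, 1)
    for _ in range(iterations):   # erosion
        m = m & np.roll(m, 1, 0) & np.roll(m, -1, 0) \
              & np.roll(m, 1, 1) & np.roll(m, -1, 1)
    return m
```

Fusing the warped masks from several previous frames (e.g. by voting) then yields the inpainted segmentation result for the current frame, as described above.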
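The dynamic-object filtering in item 3 amounts to back-projecting only the unmasked depth pixels through the pinhole camera model when accumulating the point cloud. A minimal sketch under assumed names and the usual (fx, fy, cx, cy) intrinsics convention:

```python
import numpy as np

def backproject_static(depth, dyn_mask, fx, fy, cx, cy):
    """Back-project depth pixels into camera-frame 3D points, skipping
    pixels covered by the dynamic-object mask (and invalid zero depth).
    depth: (H, W) metric depth; dyn_mask: (H, W) bool, True = dynamic."""
    h, w = depth.shape
    vs, us = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    keep = (~dyn_mask) & (depth > 0)        # static pixels with valid depth
    z = depth[keep]
    x = (us[keep] - cx) * z / fx            # pinhole back-projection
    y = (vs[keep] - cy) * z / fy
    return np.stack([x, y, z], axis=1)      # (N, 3) static point cloud
```

Accumulating these per-keyframe static clouds under the estimated poses gives the globally consistent map, from which the octree and 2D grid maps are derived.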
Keywords/Search Tags: dynamic scene, visual SLAM, instance segmentation, optical flow method, point cloud construction