
Monocular Vision SLAM Based On CNN Feature Point Extraction

Posted on: 2020-11-21  Degree: Master  Type: Thesis
Country: China  Candidate: W J Zheng  Full Text: PDF
GTID: 2518306518470284  Subject: Control Engineering
Abstract/Summary:
In recent years, vision-based SLAM (Simultaneous Localization and Mapping) has attracted wide attention from scholars at home and abroad. Although SLAM is only one of many tasks performed by a robot, it provides the robot's own pose (position and orientation) estimate to upper-layer applications such as motion control, navigation, and entertainment. To address the instability of feature extraction and tracking caused by illumination and viewpoint changes in traditional visual SLAM algorithms, this thesis proposes a monocular SLAM algorithm that uses a convolutional neural network (CNN) to extract features and combines it with traditional methods. The main work is summarized as follows.

An improved algorithm combining a CNN with a traditional SLAM system is proposed. The improved SLAM system can still extract and track features reliably when the illumination changes. First, a filter-augmented SOJKA feature extraction algorithm (the feature extraction algorithm proposed by Sojka E.) is used to label a basic graphics data set, and the network trained on this simple data set is then extended into a network that can extract features from complex images. The trained network model is used for feature extraction and feature tracking in the monocular SLAM system. Using the PnP (Perspective-n-Point) or epipolar geometry algorithm, the initial pose of the robot is computed from the feature points extracted by the CNN.

An improved bundle adjustment algorithm is proposed, which refines the initial pose estimate of the robot and is adapted to a SLAM system that uses a CNN as the front end. First, a key frame selection strategy is proposed, and the pose is optimized over these key frames. Then, the optimization process of the bundle adjustment algorithm is improved so that some variables are not optimized repeatedly. Finally, during solving, the robot poses obtained in the previous step are taken as initial values, and the optimal solution is obtained with the conjugate gradient method, exploiting the sparsity of the variables in visual SLAM.

Through robot motion experiments under a motion capture system, it is verified that the proposed SLAM algorithm retains good positioning accuracy under illumination changes. First, image acquisition and the motion capture system are set up: the motion capture system records the real motion trajectory of the robot, while the robot's trajectory is also computed from the image information. The real trajectory given by the motion capture system, the trajectory computed by ORB-SLAM (Oriented FAST and Rotated BRIEF SLAM), and the trajectory computed by the improved algorithm are then aligned and compared. Finally, two comparison experiments are designed. The first collects images under different lighting conditions and viewing angles, and compares feature tracking between the traditional feature extraction algorithm and the improved one. The second uses ORB-SLAM and the improved SLAM system to compute the robot's motion trajectory from the image data, compares both against the ground truth, and uses the relative pose error to evaluate localization accuracy. The results show that the CNN-based monocular SLAM algorithm is feasible, achieves good feature tracking under illumination and viewing-angle changes, and improves localization accuracy.
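The key frame selection strategy is described only at a high level in the abstract. A common heuristic in monocular SLAM is to promote a frame to a key frame once the camera has translated or rotated beyond a threshold since the last key frame; the sketch below illustrates that idea for planar (x, y, theta) robot poses. The function name and thresholds are illustrative assumptions, not the thesis's actual criteria.

```python
import math

def should_insert_keyframe(last_kf, pose,
                           trans_thresh=0.2,             # metres (assumed)
                           rot_thresh=math.radians(10)): # radians (assumed)
    """Decide whether `pose` differs enough from the last key frame
    to become a new key frame (hypothetical threshold heuristic)."""
    dx = pose[0] - last_kf[0]
    dy = pose[1] - last_kf[1]
    dtheta = abs(pose[2] - last_kf[2])
    return math.hypot(dx, dy) > trans_thresh or dtheta > rot_thresh

# Small motion: keep tracking on the old key frame; large motion: new key frame.
print(should_insert_keyframe((0.0, 0.0, 0.0), (0.05, 0.0, 0.0)))  # False
print(should_insert_keyframe((0.0, 0.0, 0.0), (0.30, 0.0, 0.0)))  # True
```

Optimizing only over such key frames keeps the bundle adjustment problem small while still covering the trajectory.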
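The bundle adjustment step above solves a large linear system at each iteration, and the conjugate gradient method needs only matrix-vector products, which is why the sparsity of the visual-SLAM variables pays off directly. Below is a minimal, generic conjugate gradient sketch over a symmetric positive definite matrix stored as a dict of nonzero entries; the 2x2 system is a toy stand-in for a real bundle adjustment Hessian, not data from the thesis.

```python
def cg_solve(nonzeros, b, iters=100, tol=1e-12):
    """Conjugate gradient for A x = b, where the symmetric positive
    definite A is given as {(i, j): value} holding only nonzeros."""
    n = len(b)

    def matvec(v):
        # Sparse product: cost scales with the number of nonzeros.
        out = [0.0] * n
        for (i, j), a in nonzeros.items():
            out[i] += a * v[j]
        return out

    x = [0.0] * n
    r = list(b)            # residual b - A x for x = 0
    p = list(r)
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Toy SPD system: [[4, 1], [1, 3]] x = [1, 2]  ->  x = [1/11, 7/11]
A = {(0, 0): 4.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0}
x = cg_solve(A, [1.0, 2.0])
print(x)
```

In the SLAM setting, A would be the normal-equation matrix built from the key frame poses, and the pose values from the previous tracking step serve as the starting point rather than the zero vector used here.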
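The relative pose error used in the evaluation compares relative motions between successive poses rather than absolute positions, so a constant offset between the two trajectories does not count as error. A minimal sketch for planar (x, y, theta) poses, assuming the usual RMSE-over-translation form of the metric (the thesis may use a different variant):

```python
import math

def compose(a, b):
    """Compose two planar poses a . b, each given as (x, y, theta)."""
    ax, ay, ath = a
    bx, by, bth = b
    c, s = math.cos(ath), math.sin(ath)
    return (ax + c * bx - s * by, ay + s * bx + c * by, ath + bth)

def inverse(a):
    """Inverse of a planar pose."""
    x, y, th = a
    c, s = math.cos(th), math.sin(th)
    return (-(c * x + s * y), s * x - c * y, -th)

def rpe_rmse(gt, est):
    """RMSE of the translational relative pose error between
    consecutive poses of two equally long trajectories."""
    sq = []
    for i in range(len(gt) - 1):
        dg = compose(inverse(gt[i]), gt[i + 1])    # true relative motion
        de = compose(inverse(est[i]), est[i + 1])  # estimated relative motion
        ex, ey, _ = compose(inverse(dg), de)       # residual motion
        sq.append(ex * ex + ey * ey)
    return math.sqrt(sum(sq) / len(sq))

gt  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
est = [(5.0, 5.0, 0.0), (6.0, 5.0, 0.0), (7.0, 5.0, 0.0)]  # shifted copy
print(rpe_rmse(gt, est))  # 0.0: RPE ignores the constant offset
```

Here the ground-truth trajectory plays the role of the motion capture output, and the estimated one the role of the trajectory computed from the images.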
Keywords/Search Tags: Robot, Feature Point, Key Frame Selection, CNN, Bundle Adjustment, Matrix Sparsity