
Visual SLAM Positioning Technology Based On Deep Learning

Posted on: 2024-01-21
Degree: Master
Type: Thesis
Country: China
Candidate: T Q Ding
Full Text: PDF
GTID: 2542307064985569
Subject: Software engineering

Abstract/Summary:
At the present stage, as the crossover and integration between the traditional automotive industry and computer science deepen, achieving fast and safe autonomous driving has become a hot topic in artificial intelligence. Its broad market and demanding technical requirements have greatly attracted the interest of scientists and engineers in the field. The key to autonomous driving technology lies in how a vehicle can autonomously construct a model of its surroundings and accurately obtain its own position within that model. Simultaneous Localization and Mapping (SLAM) has therefore received widespread attention, and SLAM positioning technology has since developed into a variety of sensor-based directions.

Among them, visual SLAM uses a camera as the main sensor, which ensures relatively stable positioning even in areas with weak satellite signals or relatively harsh environments. The camera also offers relatively low cost, rich environmental information, and easy installation, making visual SLAM an important hot spot in SLAM-related research. A key problem of traditional visual SLAM is its reliance on camera calibration, which is complicated and tedious, and the quality of feature-point detection in the front-end visual odometry (VO) directly affects the final localization accuracy of the whole system. In recent years, deep learning-based methods have provided more accurate implementations of visual odometry. However, the feature representations of existing deep learning-based visual odometry are still defective, and problems remain in capturing key features. To address these problems, this thesis investigates deep learning-based visual SLAM localization techniques. The main work is as follows:

First, this thesis designs a deep learning visual odometry algorithm driven by a self-attention mechanism based on motion consistency. The algorithm first adopts the optical flow method to make full use of the optical flow information between consecutive images, and combines two network structures, a convolutional neural network (CNN) and a recurrent neural network (RNN), to form the feature extraction module. Second, the algorithm innovatively incorporates a motion-consistency constraint module driven by a self-attention mechanism, which filters and purifies the camera's motion features by enforcing motion consistency. Finally, the algorithm predicts the camera's pose and motion with high accuracy through a pose fitting network.

Second, this thesis designs a deep visual SLAM system built on the visual odometry above. On top of the front-end visual odometry, the system introduces a loop closure detection module incorporating deep learning and a back-end optimization module applying pose graph optimization to globally optimize the camera's motion and pose.

Extensive experiments demonstrate the effectiveness of the proposed algorithm and system in terms of generalization capability, localization accuracy, system robustness, and computational cost.
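The core idea of the self-attention motion-consistency module can be illustrated in miniature. The sketch below is not the thesis's actual network: it uses identity Q/K/V projections instead of learned weight matrices and plain NumPy instead of a deep learning framework, purely to show how scaled dot-product self-attention re-weights per-frame motion features so that mutually consistent frames reinforce each other. The function name and shapes are assumptions for illustration.

```python
import numpy as np

def self_attention(feats, d_k=None):
    """Scaled dot-product self-attention over per-frame motion features.

    feats: (T, D) array, one D-dimensional motion feature per frame.
    Returns consistency-weighted features of the same shape.
    """
    d_k = d_k or feats.shape[1]
    # Identity projections keep the sketch dependency-free; a real model
    # would use learned Q/K/V weight matrices.
    q, k, v = feats, feats, feats
    scores = q @ k.T / np.sqrt(d_k)               # (T, T) pairwise similarities
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True) # softmax over frames
    return weights @ v                            # re-weighted motion features

# Frames whose motion agrees attend strongly to each other, while an
# outlier frame (e.g. dominated by a moving object) is down-weighted.
T, D = 6, 8
feats = np.random.default_rng(0).normal(size=(T, D))
out = self_attention(feats)
print(out.shape)  # (6, 8)
```

In the thesis's setting, such a layer sits between the CNN/RNN feature extractor and the pose fitting network, acting as the filtering step described above.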
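The back-end's pose graph optimization can likewise be sketched in its simplest form. The toy below is an assumption-laden illustration, not the thesis's implementation: it optimizes 1-D poses (real systems work on SE(3)) by solving the linear least-squares normal equations for a chain of drifting odometry edges plus one loop-closure edge, showing how a single loop closure pulls the whole trajectory back toward the truth.

```python
import numpy as np

def optimize_pose_graph(n, edges):
    """Least-squares optimization of a 1-D pose graph.

    edges: (i, j, z, w) tuples meaning "pose j minus pose i was measured
    as z, with confidence weight w". Pose 0 is anchored at the origin.
    Returns the n optimized poses.
    """
    # Build the normal equations H x = b for the quadratic cost
    # sum_k w_k * (x_j - x_i - z_k)^2.
    H = np.zeros((n, n))
    b = np.zeros(n)
    for i, j, z, w in edges:
        H[i, i] += w; H[j, j] += w
        H[i, j] -= w; H[j, i] -= w
        b[j] += w * z
        b[i] -= w * z
    H[0, 0] += 1e9  # strong prior anchoring pose 0 (removes gauge freedom)
    return np.linalg.solve(H, b)

# Five poses: odometry drifts (+1.1 per step instead of the true +1.0),
# but one loop-closure edge reports the true offset between poses 0 and 4.
edges = [(i, i + 1, 1.1, 1.0) for i in range(4)]  # drifting odometry chain
edges.append((0, 4, 4.0, 10.0))                    # loop closure, high weight
poses = optimize_pose_graph(5, edges)
print(poses)  # trajectory pulled back toward the true 0, 1, 2, 3, 4
```

Integrating odometry alone would leave pose 4 at 4.4; with the loop-closure constraint the optimizer redistributes the drift across all edges, which is exactly the global correction the back-end module provides to the front-end VO estimates.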
Keywords/Search Tags: Simultaneous Localization and Mapping, Deep Learning, Visual Odometry, Self-attention Mechanisms