
A Study Of Vision-based Mobile Robot SLAM

Posted on: 2018-08-02
Degree: Master
Type: Thesis
Country: China
Candidate: J Y Zhang
Full Text: PDF
GTID: 2348330542987160
Subject: Pattern Recognition and Intelligent Systems

Abstract/Summary:
Simultaneous Localization and Mapping (SLAM) is a fundamental module for the autonomous control of mobile robots. It addresses two coupled problems in robotics: localizing the robot and building a perception model of its environment. The two functions are not independent: the map is built from the robot's own pose estimates using the environment perception module, and the built map in turn enables more accurate localization of the robot. Numerical optimization is used to fuse the sensor measurements, integrating pose estimation with map construction and providing the data support needed for interaction between the robot system and its environment.

The sensors currently used for SLAM include laser radar, ultrasonic sensors, cameras and other environmental sensors. Visual sensors have attracted particular attention because of their low cost and the rich environmental information they provide. However, traditional visual SLAM based on image feature points can localize the robot only against a sparse landmark map of the environment; because such a map is severely lacking in information, it fails to exploit the real advantages of visual SLAM.

This paper first builds a mobile experimental platform based on a monocular vision system, comprising the mechanical structure, a low-level controller, motor drivers, an attitude sensor and a monocular camera. Because the platform is driven by Mecanum wheels, it can move forward, sideways, obliquely, rotate in place, and execute combinations of these motions, which simplifies the later coordinate transformations for mapping and allows a new omnidirectional motion model to be established. It should be noted that the attitude sensor serves only as a monitoring device; its data are not used in the visual algorithm, and the platform carries out the SLAM process through the monocular vision system alone.

To address the problems above, this paper proposes an iteratively optimized depth estimation algorithm for the monocular camera based on triangulation. To preserve as much environmental detail as possible, the algorithm performs inter-frame matching for all pixels with distinct texture and gradient values. To analyze localization accuracy, the paper focuses on the influence of gray-level error and calibration error on depth measurement.

Based on graph optimization, the paper then presents a localization and mapping algorithm and proves, at the theoretical level, the feasibility and validity of using it to estimate the robot's pose and build the map. A loop-closure detection mechanism is added to recognize previously visited scenes, and the accumulated error of the pose estimation process is corrected accordingly, improving the consistency of the map and of the trajectory.

Finally, experiments on the mobile robot platform verify the feasibility of the proposed dense-mapping SLAM method.
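To make the triangulation step concrete, the following is a minimal illustrative sketch, not the thesis implementation: it assumes the camera intrinsics K and the relative pose (R, t) between two frames are already known, takes a single matched pixel pair, initializes the depth by linear two-ray triangulation, and then refines it with a few Gauss-Newton iterations on the reprojection error, in the spirit of the iterative optimization described above. The function name and interface are hypothetical.

```python
import numpy as np

def triangulate_depth(K, R, t, u1, u2, iters=10):
    """Depth of the matched point along the bearing ray of pixel u1 (frame 1).

    K      : 3x3 camera intrinsics
    R, t   : rotation (3x3) and translation (3,) taking frame-1 points to frame 2
    u1, u2 : matched pixel coordinates (2,) in frame 1 and frame 2
    """
    K_inv = np.linalg.inv(K)
    f1 = K_inv @ np.array([u1[0], u1[1], 1.0])   # bearing ray in frame 1
    f2 = K_inv @ np.array([u2[0], u2[1], 1.0])   # bearing ray in frame 2
    Rf1 = R @ f1

    # Linear initialization: the 3-D point lies on both rays, so
    # d1 * (R f1) + t = d2 * f2  ->  solve [R f1, -f2] [d1, d2]^T = -t.
    A = np.stack([Rf1, -f2], axis=1)
    d1, _ = np.linalg.lstsq(A, -t, rcond=None)[0]

    # Gauss-Newton refinement of d1: minimize the reprojection error of the
    # candidate point in frame 2 (this is where gray-level and calibration
    # errors enter the depth measurement in practice).
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    for _ in range(iters):
        X2 = d1 * Rf1 + t                               # point in frame-2 coordinates
        x, y, z = X2
        r = np.array([fx * x / z + cx - u2[0],
                      fy * y / z + cy - u2[1]])         # reprojection residual
        J_proj = np.array([[fx / z, 0.0, -fx * x / z**2],
                           [0.0, fy / z, -fy * y / z**2]])
        J = J_proj @ Rf1                                # d(residual) / d(depth)
        d1 -= float(J @ r) / float(J @ J)               # 1-D Gauss-Newton step
    return d1
```

In a full semi-dense pipeline such a per-pixel depth estimate would be computed for every high-gradient pixel and filtered across many frames, with the gray-level and calibration errors mentioned above determining the measurement uncertainty.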
Keywords/Search Tags: Autonomous Mobile Robot, SLAM, Mobile Platform, Monocular Camera