
Research On Localization And Mapping For Mobile Robot Based On Deep Learning And Visual Image

Posted on: 2017-09-12
Degree: Master
Type: Thesis
Country: China
Candidate: L Y Xu
Full Text: PDF
GTID: 2348330566956660
Subject: Control engineering
Abstract/Summary:
In indoor environments, simultaneous localization and mapping (SLAM) of a mobile robot is an important research direction in the field of intelligent robotics. Traditionally, the robot obtains environment information with sonar and laser sensors, extracts features from the data, corrects its position by continually incorporating new observations, and finally realizes SLAM from this feature information. However, the amount of observation data from such sensors is usually small, and its characterization of the environment is not intuitive. Using a visual sensor to collect environment information greatly improves the observability of the information and the results, and benefits the accuracy and effectiveness of both environment description and positioning. Meanwhile, deep learning performs well in target feature extraction and recognition from visual images. In this paper, feature extraction and SLAM based on the robot's monocular vision images are investigated, and the main research contents are as follows:

(1) Because visual image data gathered from the environment contain considerable noise and the detected edges are often discontinuous, a method combining the Canny operator with the Hough transform is presented. After preprocessing the acquired image, edge information is extracted with the Canny operator and straight edges are located with the Hough transform; the edges are then connected and superimposed. Compared with the Roberts, Prewitt, and Sobel methods, this approach suppresses most of the noise, improves the completeness and accuracy of edge detection, and effectively improves the preprocessing of monocular vision images (a sketch is given after the abstract).

(2) Since traditional feature extraction algorithms cannot effectively extract the overall characteristics of the target in images collected by the visual sensor, an improved convolutional neural network algorithm is proposed. The characteristics of the whole image are extracted by combining unsupervised auto-encoder initialization with convolution, and target detection is completed through learning and training. The results show that overall image feature extraction is achieved effectively and that the network initialization process is improved (see the sketch below).

(3) Considering that the scale of visual images varies and that drift occurs in feature matching, after improving the network's extraction of the overall characteristics of visual images, a method based on SIFT feature point detection and joint feature matching is presented. A template is established for extracting the feature points of the image, a similarity matrix is constructed and the overall similarity is evaluated, and the feature point information is matched. The matched feature information is then observed and updated iteratively by an extended Kalman filter (EKF), and SLAM is realized. The results show that the algorithm achieves higher accuracy in feature extraction and matching and ensures good SLAM performance (sketches of SIFT matching and the EKF step follow).

(4) A wide range of image data is collected from the laboratory environment with a Pioneer3-DX mobile robot and a Basler monocular vision sensor. Experiments are carried out with the methods of this paper, and the coordinate transformation from the local map to the global map is built (see the sketch below). The experimental results verify the practicability of the proposed method.
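A minimal sketch of the edge-detection step in contribution (1), assuming OpenCV (cv2) and NumPy in Python; the file name, smoothing kernel, and Canny/Hough thresholds are illustrative placeholders rather than values from the thesis:

import cv2
import numpy as np

# Read one monocular grayscale frame (placeholder file name).
img = cv2.imread("indoor_scene.png", cv2.IMREAD_GRAYSCALE)

# Preprocessing: Gaussian smoothing to suppress image noise.
blurred = cv2.GaussianBlur(img, (5, 5), 1.4)

# Canny operator: hysteresis thresholds chosen empirically.
edges = cv2.Canny(blurred, 50, 150)

# Probabilistic Hough transform: locate straight edge segments (x1, y1, x2, y2).
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=30, maxLineGap=10)

# Connect and superimpose the detected straight edges on the original image.
overlay = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(overlay, (x1, y1), (x2, y2), (0, 0, 255), 1)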
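Contribution (2) combines unsupervised auto-encoder initialization with a convolutional network. The rough sketch below, written with PyTorch, shows the general idea only; the layer sizes, input resolution, and two-class head are assumptions and do not reflect the thesis's actual architecture:

import torch
import torch.nn as nn

# Convolutional encoder/decoder pair for unsupervised pre-training.
encoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
)
decoder = nn.Sequential(
    nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
    nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
)
autoencoder = nn.Sequential(encoder, decoder)

# Unsupervised stage: train the auto-encoder to reconstruct its input.
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
x = torch.rand(8, 1, 64, 64)  # placeholder batch of 64x64 grayscale patches
optimizer.zero_grad()
loss = criterion(autoencoder(x), x)
loss.backward()
optimizer.step()

# Supervised stage: reuse the pre-trained encoder weights to initialize a
# detector/classifier head, then fine-tune with labeled target data.
classifier = nn.Sequential(encoder, nn.Flatten(), nn.Linear(32 * 16 * 16, 2))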
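For the SIFT feature detection and matching in contribution (3), a standard OpenCV sketch is shown below; the ratio test used here to reject ambiguous (drifting) matches is a common heuristic and stands in for the thesis's joint similarity-matrix criterion:

import cv2

sift = cv2.SIFT_create()

# Two consecutive monocular frames (placeholder file names).
img1 = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute 128-dimensional SIFT descriptors.
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with the L2 norm; keep the two nearest neighbours.
bf = cv2.BFMatcher(cv2.NORM_L2)
matches = bf.knnMatch(des1, des2, k=2)

# Ratio test: keep matches whose best distance is clearly below the second
# best, which suppresses ambiguous correspondences.
good = [m for m, n in matches if m.distance < 0.75 * n.distance]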
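The matched features then drive an EKF update of the robot and landmark state. The generic predict/update step below is only a schematic of how such an update works; the motion model f, measurement model h, their Jacobians F and H, and the noise covariances Q and R are placeholders for the thesis's specific models:

import numpy as np

def ekf_step(mu, Sigma, u, z, f, F, h, H, Q, R):
    # Prediction with the motion model and control input u.
    mu_pred = f(mu, u)
    F_k = F(mu, u)
    Sigma_pred = F_k @ Sigma @ F_k.T + Q

    # Update with the matched feature observation z.
    H_k = H(mu_pred)
    S = H_k @ Sigma_pred @ H_k.T + R           # innovation covariance
    K = Sigma_pred @ H_k.T @ np.linalg.inv(S)  # Kalman gain
    mu_new = mu_pred + K @ (z - h(mu_pred))
    Sigma_new = (np.eye(len(mu)) - K @ H_k) @ Sigma_pred
    return mu_new, Sigma_new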
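Finally, contribution (4) builds the coordinate transformation from the local map to the global map. A minimal 2-D rigid-transform sketch is given below, assuming the robot pose (x, y, theta) is known in the global frame; the pose and landmark values in the example are arbitrary:

import numpy as np

def local_to_global(points_local, x, y, theta):
    # Map Nx2 points from the robot's local map frame to the global frame
    # by rotating through theta and translating by (x, y).
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s],
                  [s,  c]])
    return points_local @ R.T + np.array([x, y])

# Example: a landmark seen 2 m ahead and 1 m to the left of the robot.
landmark_global = local_to_global(np.array([[2.0, 1.0]]), x=0.5, y=1.2, theta=np.pi / 4)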
Keywords/Search Tags: robot vision, feature extraction, deep network, joint feature matching, SLAM