
Study On Simultaneous Localization And Mapping For Mobile Robots Based On Vision

Posted on: 2019-03-25
Degree: Master
Type: Thesis
Country: China
Candidate: W Chen
Full Text: PDF
GTID: 2428330566497548
Subject: Mechanical and electrical engineering
Abstract/Summary:
SLAM is a key problem in the autonomous navigation of intelligent robots, and research on it helps advance industrial intelligence. Vision sensors are widely used by researchers because of their low cost and the large amount of information, including semantic information, that they provide. However, when monocular visual SLAM, as represented by ORB-SLAM, is applied in practical dynamic scenarios, it still suffers from shortcomings such as poor robustness, lack of scale information, and low accuracy. To address these problems, this thesis fuses odometer information with image information to resolve the scale problem, and proposes using multi-object tracking to improve the robustness of the system in dynamic environments and to build a semantic map.

To address the lack of scale information in monocular visual SLAM, a multi-sensor fusion SLAM system combining an odometer and a monocular camera is proposed. After analyzing the motion model of a mobile robot driven by a steering wheel and the ORB-SLAM system framework, the whole system is divided into a front-end and a back-end. In the front-end, after RANSAC eliminates incorrect feature-point matches, the odometer data is used to track the pose of the camera. Each matched feature-point pair is then triangulated using the pose transformation matrix between the current frame and the reference keyframe, and its position in the world coordinate system is estimated. When the current frame meets certain conditions, it is selected as the new reference keyframe. In the back-end, in addition to the bundle adjustment (minimization of reprojection error) constructed from the camera poses and their associated map-point locations, constraints derived from the odometer measurements between keyframes are added. When a loop closure is detected, the cumulative error caused by the odometer is eliminated in the global optimization. The proposed SLAM system was tested in a real factory scenario, and the results show that it solves the scale problem and achieves an accuracy of 10 cm.

To improve the robustness of the monocular visual SLAM system in complex dynamic environments, a scheme that integrates multi-object tracking into the SLAM framework is proposed. YOLOv2, a convolutional neural network, is used to detect potentially moving objects in the image and obtain their positions. On the one hand, the detection results are sent to the SLAM front-end, and only feature points that do not lie on these objects are used to track the camera's motion; on the other hand, the detections serve as measurements for the multi-object tracking module. In this module, the locations of the objects in the world coordinate system are estimated from the map points on the objects, and the camera pose together with the Hungarian algorithm is used to associate the detection results in two consecutive frames. For detection failures caused by occasional occlusion or sudden changes in illumination, the objects' map points are projected onto the image plane and the detection results are corrected by feature matching. For successfully associated objects, the motion state is distinguished by the epipolar constraint, and static objects are preserved and built into a semantic map. Finally, simulation tests are carried out on the highly dynamic Robot Car dataset and on the KITTI dataset with ground truth to verify the robustness and accuracy of monocular SLAM with multi-object tracking.
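The front-end triangulation step described above can be sketched with the standard linear (DLT) method: given the projection matrices of the reference keyframe and the current frame (obtained from the odometer-tracked pose) and a matched pixel pair, the 3-D point is the null-space direction of a small linear system. This is a minimal illustration, not the thesis's actual implementation; the camera setup below is synthetic.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2 : 3x4 projection matrices (K @ [R | t]) of the reference
             keyframe and the current frame.
    x1, x2 : matched pixel coordinates (u, v) in each image.
    Returns the estimated 3-D point in world coordinates.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous point is the right singular vector of A with the
    # smallest singular value (the null-space direction of A).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two synthetic cameras (K = I): one at the origin, one shifted 1 unit
# along x, both observing the world point (0, 0, 5).
K = np.eye(3)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.0, 0.0, 5.0])
x1 = (P1 @ np.append(X_true, 1.0))[:2] / X_true[2]
x2 = (P2 @ np.append(X_true, 1.0))[:2] / X_true[2]
print(triangulate(P1, P2, x1, x2))
```

In the full system, the reprojection errors of points triangulated this way form the bundle adjustment cost, with the odometer measurements between keyframes added as extra constraints.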
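The frame-to-frame association step in the multi-object tracking module can be illustrated with the Hungarian algorithm (as implemented by SciPy's `linear_sum_assignment`). This is a simplified sketch under assumed inputs: here the cost is the Euclidean distance between detection-box centres, whereas the thesis additionally uses the camera pose; the function name and the distance gate are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(prev_boxes, curr_boxes, max_dist=50.0):
    """Associate object detections between two consecutive frames.

    prev_boxes, curr_boxes : arrays of detection-box centres, shapes
        (N, 2) and (M, 2), e.g. YOLO outputs in pixel coordinates.
    Returns (prev_idx, curr_idx) pairs whose centre distance is below
    max_dist; detections left unmatched by the gate are dropped.
    """
    # Pairwise Euclidean distances form the assignment cost matrix.
    cost = np.linalg.norm(prev_boxes[:, None, :] - curr_boxes[None, :, :],
                          axis=2)
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
    return [(int(r), int(c)) for r, c in zip(rows, cols)
            if cost[r, c] < max_dist]

prev = np.array([[100.0, 100.0], [300.0, 120.0]])
curr = np.array([[305.0, 118.0], [98.0, 104.0], [600.0, 400.0]])
print(associate(prev, curr))  # [(0, 1), (1, 0)]
```

The spurious detection at (600, 400) is rejected by the distance gate, which is how occasional false positives can be kept out of the tracks.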
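The epipolar test used to distinguish static from moving objects can also be sketched briefly: for a static point, the feature in the second frame must lie on the epipolar line induced by the camera motion, so a large point-to-line distance flags a dynamic point. The function name and the pixel threshold below are illustrative assumptions, not the thesis's exact procedure.

```python
import numpy as np

def is_static(x1, x2, F, thresh=1.0):
    """Classify a tracked feature as static via the epipolar constraint.

    x1, x2 : homogeneous pixel coordinates (u, v, 1) in two frames.
    F      : fundamental matrix between the frames (from camera motion).
    A static point satisfies x2^T F x1 = 0, i.e. x2 lies on the
    epipolar line F @ x1; returns True when the point-to-line distance
    is below thresh pixels.
    """
    line = F @ x1                                  # a*u + b*v + c = 0
    dist = abs(x2 @ line) / np.hypot(line[0], line[1])
    return bool(dist < thresh)

# With K = I and the camera translating along x, the essential matrix
# is the skew-symmetric matrix of t = (1, 0, 0), and epipolar lines are
# image rows: a static point keeps its v coordinate.
E = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])
print(is_static(np.array([10.0, 20.0, 1.0]),
                np.array([30.0, 20.0, 1.0]), E))  # True  (static)
print(is_static(np.array([10.0, 20.0, 1.0]),
                np.array([30.0, 25.0, 1.0]), E))  # False (moving)
```

Objects classified as static this way are the ones retained as landmarks in the semantic map.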
Keywords/Search Tags: SLAM, multi-sensor fusion, multi-object tracking, monocular vision, dynamic environment