
Research On The Algorithm Of Monocular Vision Based Simultaneous Localization And Mapping

Posted on: 2011-06-28
Degree: Doctor
Type: Dissertation
Country: China
Candidate: X J Meng
Full Text: PDF
GTID: 1118330332484609
Subject: Electronic information technology and instrumentation
Abstract/Summary:
With the rapid development of computer science, artificial intelligence and sensor technology, robotics research is moving toward autonomous, intelligent systems. Self-localization and mapping are essential preconditions for truly autonomous and intelligent robots. In an unknown environment, a robot has to rely on its sensors to explore the surroundings while simultaneously localizing itself with the acquired information; the approach to self-localization and mapping is therefore tightly coupled with the capabilities of the sensors. Compared with sonar or laser range finders, cameras offer compactness, low power consumption and low cost, so more and more researchers have turned to visual sensors for the localization and mapping problem. This thesis mainly deals with simultaneous localization and mapping (SLAM) based on monocular vision.

In Chapter 1, the significance of the research is discussed first. The current state of research in computer vision and mobile robotics is then reviewed, followed by a comprehensive description of vision-based simultaneous localization and mapping algorithms. Finally, the research contents and the structure of the thesis are described.

In Chapter 2, the main focus is the feature initialization problem in monocular vision based simultaneous localization and mapping. A new feature parameterization using homogeneous coordinates is proposed, and the linearity of the measurement equation is then analyzed with an uncertainty propagation model for the depth estimation.
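The homogeneous-coordinate parameterization described above can be illustrated with a minimal sketch. This is not the thesis's algorithm, only a common form of the idea: a feature observed at pixel (u, v) is back-projected along its viewing ray and stored with an inverse-depth-like fourth component, which keeps distant (near-infinite) features numerically well behaved. The pinhole intrinsics fx, fy, cx, cy are assumed values for illustration.

```python
import numpy as np

def init_feature_homogeneous(u, v, fx, fy, cx, cy, inv_depth):
    """Back-project a pixel into a homogeneous feature vector (x, y, z, w).

    The first three components give the viewing ray in camera coordinates;
    w acts as an inverse-depth parameter, so w -> 0 represents a feature
    at (near) infinite depth without numerical blow-up.
    """
    x = (u - cx) / fx  # normalized ray direction, horizontal
    y = (v - cy) / fy  # normalized ray direction, vertical
    return np.array([x, y, 1.0, inv_depth])

def to_euclidean(h):
    """Recover the Euclidean 3D point (valid only when w != 0)."""
    return h[:3] / h[3]

# A feature at the principal point with inverse depth 0.5 lies on the
# optical axis at depth 2 in camera coordinates.
h = init_feature_homogeneous(320, 240, 500.0, 500.0, 320.0, 240.0, 0.5)
p = to_euclidean(h)
```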
Finally, the feature initialization algorithm is presented.

In Chapter 3, a new feature selection algorithm is proposed based on an in-depth analysis of the relationship between feature tracking and the uncertainty of robot localization: the image motion can be estimated from the output of the feature tracking stage of the SLAM algorithm, and the tracking time of each feature can then be predicted with a forward iteration algorithm. Since the feature distribution is not known beforehand, a delayed method is given for feature extraction.

In Chapter 4, an inertial-aided monocular vision based simultaneous localization and mapping algorithm is presented. With a monocular system alone, the scale of the map is ambiguous; by fusing the information from the inertial system and the monocular vision system in an extended Kalman filter, the scale ambiguity can be greatly reduced. An improved SIFT (Scale-Invariant Feature Transform) algorithm based on prior information is proposed for the monocular vision processing: by predicting the scale spaces and the image coordinates, an exhaustive search over the whole scale-space images can be avoided.

In Chapter 5, a new metric map merging algorithm is proposed for vision-based simultaneous localization and mapping applications in which the robots' relative positions are unknown: first, the 3D map is projected onto a 2D grid map; then two algorithms are proposed for grid map merging, from which the rotation between the maps can be estimated; lastly, the 3D map is rotated and the Iterative Closest Point (ICP) algorithm is used to merge the 3D maps.

In the last chapter, the thesis is concluded and prospects for future research are presented.
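The scale-resolution idea of Chapter 4 can be sketched in its simplest possible form. This is a hypothetical one-dimensional reduction, not the thesis's filter: each time the inertial system reports a metric displacement and the vision front end reports the same motion in (scale-ambiguous) map units, their ratio is a noisy measurement of the unknown map scale s, which a scalar Kalman update refines.

```python
# Hypothetical sketch: a scalar Kalman filter for the monocular map scale s.
# d_metric is an assumed inertial-odometry distance; d_map is the same motion
# measured in map units by the vision system, so z = d_metric / d_map
# observes s directly (measurement model H = 1).

def kalman_scale_update(s, P, d_metric, d_map, R=0.01):
    """One Kalman measurement update of the scale estimate s with variance P."""
    z = d_metric / d_map      # observed scale from one motion segment
    K = P / (P + R)           # Kalman gain for the scalar case
    s_new = s + K * (z - s)   # correct the estimate toward the observation
    P_new = (1.0 - K) * P     # shrink the uncertainty
    return s_new, P_new

s, P = 1.0, 1.0  # vague prior: scale unknown, large variance
for d_metric, d_map in [(2.0, 1.0), (1.9, 1.0), (2.1, 1.0)]:
    s, P = kalman_scale_update(s, P, d_metric, d_map)
```

After the three (synthetic) motion segments, the estimate converges near the true scale of 2 and the variance shrinks accordingly; a full inertial-aided EKF-SLAM system carries the scale implicitly inside a joint state, but the mechanism is the same.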
Keywords/Search Tags: monocular vision, simultaneous localization and mapping, extended Kalman filter, feature initialization, feature selection, feature matching, map merging