
Mobile Robot Self-localization By Using Vision And Laser

Posted on: 2009-05-27
Degree: Doctor
Type: Dissertation
Country: China
Candidate: K Wang
Full Text: PDF
GTID: 1118360272470744
Subject: Control theory and control engineering
Abstract/Summary:
Autonomous mobile robots are a central research focus in robotics and automation, and self-localization is one of the foremost problems in intelligent navigation and environment exploration. As a complex task, self-localization must account for sensor characteristics, environmental features, and the implementation of the localization algorithms themselves. This dissertation presents a systematic study of mobile robot self-localization. We first survey the state of the art, covering the leading methods, key technical issues, and future development trends. The central contributions are twofold: first, localization methods are developed based on a multi-sensor fusion strategy; second, the robot's cognitive process is interpreted from a statistical pattern recognition viewpoint. Moreover, the proposed methods are verified through extensive experiments on a real robot platform.

For RoboCup Middle-Size League soccer, a localization system is developed in which the robot estimates its pose recursively with a maximum a posteriori (MAP) estimator that fuses information from odometry and a unidirectional camera. We build a 3D environmental map of the soccer field together with nonlinear sensor models, and maintain that the uncertainty arising from robot motion and inaccurate sensor measurements should be embedded and tracked throughout the system. The uncertainty framework is described from a probabilistic geometry viewpoint, and the unscented transform is used to propagate uncertainty through the given nonlinear functions. Considering the limited processing power of the robot, image features are extracted only in the vicinity of the corresponding projected map features, and data associations are evaluated by statistical distance.
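As a minimal numerical sketch (not the dissertation's implementation) of the two tools named above, the following shows unscented-transform propagation of a Gaussian pose uncertainty through a nonlinear function, plus a Mahalanobis-distance gate of the kind used to accept or reject feature-to-map associations; parameter names and the 99% chi-square threshold are illustrative assumptions:

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear function f
    using the unscented transform (sigma-point approximation)."""
    n = mean.size
    lam = alpha ** 2 * (n + kappa) - n
    # Sigma points: the mean plus symmetric deviations along the
    # columns of the scaled Cholesky factor of the covariance.
    S = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + S.T, mean - S.T])   # (2n+1, n)
    # Weights for reconstructing the transformed mean and covariance.
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha ** 2 + beta)
    # Push each sigma point through the nonlinearity.
    y = np.array([f(s) for s in sigma])
    y_mean = wm @ y
    dy = y - y_mean
    y_cov = (wc[:, None] * dy).T @ dy
    return y_mean, y_cov

def mahalanobis_gate(innovation, S, gate=9.21):
    """Accept an association if the squared Mahalanobis distance of the
    measurement innovation lies inside a chi-square gate
    (9.21 is roughly the 99% bound for 2-DOF measurements)."""
    d2 = innovation @ np.linalg.inv(S) @ innovation
    return d2, d2 <= gate
```

For a linear function the unscented transform is exact, which makes a convenient sanity check when tuning `alpha`, `beta`, and `kappa`.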
We conduct a series of systematic comparisons to demonstrate the reliability and accuracy of this self-localization system.

For large-scale corridor environments, a novel metric-topological 3D map based on omnidirectional vision is proposed for robot self-localization. The local metric map defines geometrical elements hierarchically according to their environmental feature level, and topological links in the global map connect adjacent local maps. A nonlinear omnidirectional camera model is designed to project the probabilistic map elements with uncertainty manipulation. For the self-localization task, a human-machine interaction system is developed using hierarchical logic; it provides a fusion center that applies a feedback hierarchical fusion method to fuse the local estimates generated from multiple observations.

Without loss of generality, an indoor environment admits at least two kinds of description: structural and semi-structural. The former is consistent with the proposed metric-topological 3D map, while the latter, typical of office environments, cannot simply be modeled as a map. We therefore propose a hybrid localization system based on a sensor-switching strategy between a unidirectional camera and a laser range finder. In this system, a scene analyzer identifies the environmental features and decides when to use the camera or the laser, so that the corresponding map-based and scan-matching methods are invoked in camera mode and laser mode, respectively. Experimental results are reported for each configuration.

Regression analysis between high-dimensional features is receiving attention in environmental learning for mobile robots. In this dissertation, we propose a framework based on the general regression neural network (GRNN) for approximating the functional relationship between high-dimensional map features and the robot's state.
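The GRNN mentioned above admits a compact sketch: it is a Gaussian-kernel-weighted average of stored training targets, with a single smoothing bandwidth. The following is a generic Specht-style GRNN, not the dissertation's code, and the parameter name `sigma` is an assumption:

```python
import numpy as np

class GRNN:
    """General regression neural network: the prediction at a query point
    is a Gaussian-kernel-weighted average of the stored training targets.
    sigma is the smoothing bandwidth (the only free parameter)."""
    def __init__(self, sigma=1.0):
        self.sigma = sigma

    def fit(self, X, Y):
        # GRNN "training" is simply memorizing the sample pairs.
        self.X = np.asarray(X, dtype=float)
        self.Y = np.asarray(Y, dtype=float)
        return self

    def predict(self, x):
        # Squared distances from the query to every stored pattern.
        d2 = np.sum((self.X - np.asarray(x, dtype=float)) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * self.sigma ** 2))
        return (w @ self.Y) / np.sum(w)
```

In the setting above, the inputs would be reduced map features extracted from panoramic images and the targets the corresponding robot poses, with the bandwidth tuned on held-out snapshots.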
We first adopt batch-PCA and incremental-PCA to preprocess the images taken by the omnidirectional vision system. This step extracts map features optimally, removing correlated components while keeping the reconstruction error minimal. The robot states and the corresponding features of the training panoramic snapshots are then used to train the network.
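The batch-PCA preprocessing step can be sketched as follows; this is a minimal SVD-based illustration under the usual row-per-image convention, and the incremental-PCA variant, which updates the subspace one image at a time, is omitted:

```python
import numpy as np

def pca_fit(X, k):
    """Batch PCA: return the sample mean and the top-k principal axes of
    the data matrix X (one flattened panoramic image per row)."""
    mean = X.mean(axis=0)
    # Right singular vectors of the centered data are the principal axes,
    # ordered by decreasing singular value.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def pca_transform(X, mean, axes):
    """Project data onto the k retained axes (the reduced map features)."""
    return (X - mean) @ axes.T

def pca_reconstruct(Z, mean, axes):
    """Map reduced features back to image space; among all rank-k linear
    projections this minimizes the reconstruction error."""
    return Z @ axes + mean
```

When the images truly lie near a k-dimensional subspace, reconstruction from the k retained features recovers them almost exactly, which is the minimum-reconstruction-error property noted above.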
Keywords/Search Tags: Map-based Self-localization, Sensor Modeling, Scan Matching, Multi-sensor Fusion, Neural Networks