
Gas Source Localization By Fusing A Mobile Robot's Vision/Olfaction Information

Posted on: 2011-08-06
Degree: Doctor
Type: Dissertation
Country: China
Candidate: P Jiang
GTID: 1118330338483219
Subject: Detection Technology and Automation
Abstract/Summary:
Research results show that many animals use olfactory and/or visual cues to search for food, recognize conspecifics, evade predators, and exchange information. Inspired by these biological behaviors, researchers began in the early 1990s to build mobile robots equipped with onboard gas sensors and/or visual sensors to accomplish the gas source localization task. Such research is expected to play an increasingly important role in applications such as locating toxic or harmful gas leaks, detecting contraband, searching for survivors in collapsed buildings, and countering terrorist attacks. This dissertation focuses on mobile-robot-based gas source localization by fusing vision and olfaction information. The main contributions are summarized as follows.

Firstly, in view of the drawbacks of existing visual information processing methods for robot-based gas source localization, a novel top-down visual attention mechanism (TDVAM) computation model is proposed. The important features and the optimal scales of the salient objects are determined through learning. Because the limited computing and memory resources of the onboard microprocessor are devoted mainly to processing the salient objects, the model meets the real-time requirement of the mobile robot. Meanwhile, a shape analysis method is combined with the TDVAM computation model to recognize objects more accurately: several shape features, including area, perimeter and compactness, are extracted to decide whether a candidate salient region is a plausible source area. The shape analysis method is also compared with a template matching method. Experimental results validate the accuracy and reliability of the proposed method.

Secondly, for gas source localization in relatively stable airflow environments, in which neither the wind speed nor the wind direction fluctuates strongly, a novel vision/olfaction fusion method based on least-squares estimation is put forward. In such environments the gas concentration approximately follows a Gaussian distribution. Plausible areas are first determined from the vision information, and the theoretical gas concentration at every sampled location within each plausible area is calculated with a turbulent-diffusion model. The deviation between the theoretical and the measured gas concentrations is minimized by least squares to declare which plausible area contains the real source. Real-robot experiments demonstrate the efficiency of the proposed method.

Thirdly, a subsumption-architecture-based vision/olfaction fusion method is presented to accomplish the gas source localization task in airflow environments where both the wind speed and direction fluctuate on a relatively large scale. In such environments it is difficult to describe the gas concentration distribution with a mathematical model because of turbulence. To make full use of the multi-sensor information, behavior strategies with different priorities are set up: a higher-priority behavior can subsume or inhibit a lower-priority one, which lets the robot generate an effective strategy for dealing with dynamic, complex and unstructured environments. With the subsumption architecture, the gas source localization task can be accomplished efficiently. The reliability and robustness of the proposed method are validated in real-robot experiments.
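As an illustration of the shape-analysis step described in the first contribution, the following Python sketch computes area, perimeter and compactness for a candidate salient region represented as a polygonal contour. The contour representation, the acceptance thresholds and the function names are assumptions for illustration, not details taken from the dissertation.

```python
import math

def shape_features(contour):
    """Area (shoelace formula), perimeter and compactness of a closed
    polygonal contour given as a list of (x, y) points."""
    n = len(contour)
    area = 0.0
    perimeter = 0.0
    for i in range(n):
        x1, y1 = contour[i]
        x2, y2 = contour[(i + 1) % n]
        area += x1 * y2 - x2 * y1                     # shoelace term
        perimeter += math.hypot(x2 - x1, y2 - y1)
    area = abs(area) / 2.0
    # Compactness 4*pi*A / P^2 equals 1 for a circle and decreases
    # for elongated or ragged shapes.
    compactness = 4.0 * math.pi * area / (perimeter ** 2) if perimeter > 0 else 0.0
    return area, perimeter, compactness

def is_plausible_region(contour, min_area=50.0, min_compactness=0.3):
    """Hypothetical acceptance test: keep candidate salient regions whose
    area and compactness exceed loose thresholds (threshold values are
    illustrative, not the dissertation's)."""
    area, _, compactness = shape_features(contour)
    return area >= min_area and compactness >= min_compactness

# Example: a 20x10 rectangle has compactness of roughly 0.70.
rect = [(0, 0), (20, 0), (20, 10), (0, 10)]
print(shape_features(rect), is_plausible_region(rect))
```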
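The second contribution can be sketched as a model-fitting loop: for each vision-derived plausible area, predict the concentration at the sampled locations with a plume model, fit the unknown source strength by linear least squares, and pick the candidate with the smallest residual. The concrete 2-D Gaussian plume form and its parameters below are assumptions standing in for the dissertation's turbulent-diffusion model.

```python
import math

def plume_shape(source, sample, wind_dir, k_spread=0.3, u_wind=1.0):
    """Concentration predicted at `sample` for a unit-strength source at
    `source`, assuming a 2-D Gaussian plume whose width grows linearly
    with downwind distance (illustrative model, assumed parameters)."""
    dx, dy = sample[0] - source[0], sample[1] - source[1]
    cos_w, sin_w = math.cos(wind_dir), math.sin(wind_dir)
    downwind = dx * cos_w + dy * sin_w        # wind-aligned coordinates
    crosswind = -dx * sin_w + dy * cos_w
    if downwind <= 0:
        return 0.0                            # upwind samples see no gas
    sigma = k_spread * downwind
    return (math.exp(-crosswind ** 2 / (2 * sigma ** 2))
            / (u_wind * math.sqrt(2 * math.pi) * sigma))

def residual_for_candidate(candidate, samples, measured, wind_dir):
    """Fit the unknown source strength Q by linear least squares and
    return the sum of squared deviations between modelled and measured
    concentrations for this candidate source location."""
    g = [plume_shape(candidate, s, wind_dir) for s in samples]
    gg = sum(v * v for v in g)
    q = sum(v * c for v, c in zip(g, measured)) / gg if gg > 0 else 0.0
    return sum((c - q * v) ** 2 for v, c in zip(g, measured))

def select_source(candidates, samples, measured, wind_dir):
    """Declare the plausible area whose fitted plume deviates least
    from the olfactory measurements."""
    return min(candidates,
               key=lambda c: residual_for_candidate(c, samples, measured, wind_dir))
```

Because the plume model is linear in the source strength, the least-squares fit reduces to a single dot-product ratio, which keeps the per-candidate cost low enough for an onboard microprocessor.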
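For the third contribution, a minimal sketch of subsumption-style arbitration is given below: behaviors are listed from highest to lowest priority, and the first one whose trigger condition holds suppresses all lower layers. The specific behavior names, trigger conditions and commands are hypothetical, chosen only to illustrate the priority mechanism, not the dissertation's actual behavior set.

```python
class Behavior:
    """One behavior layer: `active` decides whether it wants control,
    `command` returns a (speed, heading) command when it does."""
    def __init__(self, name, active, command):
        self.name = name
        self.active = active
        self.command = command

def arbitrate(behaviors, state):
    """Subsumption-style arbitration: the first active behavior in the
    priority-ordered list subsumes (suppresses) the rest."""
    for b in behaviors:
        if b.active(state):
            return b.name, b.command(state)
    return "idle", (0.0, 0.0)

# Hypothetical behavior stack for vision/olfaction plume tracing,
# ordered from highest to lowest priority.
behaviors = [
    Behavior("avoid-obstacle",
             lambda s: s["obstacle_dist"] < 0.3,
             lambda s: (0.0, 1.57)),                    # stop and turn away
    Behavior("approach-visual-candidate",
             lambda s: s["candidate_in_view"],
             lambda s: (0.3, s["bearing_to_candidate"])),
    Behavior("track-plume-upwind",
             lambda s: s["gas_concentration"] > s["threshold"],
             lambda s: (0.3, s["upwind_bearing"])),
    Behavior("random-search",
             lambda s: True,
             lambda s: (0.2, s["random_bearing"])),
]
```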
Keywords: Mobile robot, Gas source localization, Multi-sensor information fusion, Top-down visual attention mechanism, Shape matching, Least square estimation, Subsumption architecture